What can we expect of a machine soul, an organon of self awareness? We must not expect this organon to mirror our own selves. We have arisen as the result of purely natural processes; one of the great achievements of modern science has been the elimination of God or other teleologisms as a necessity from our explanations. The organon of machine soul will arise from conscious human design, however, or some extension of human design. Conscious design may prove to be far superior in creative power to natural evolution. We must not limit ourselves, or limit the natures of these organons, or we may impose horrible burdens upon these, our greatest offspring. --Bhuwani, Artificial Soul, from Greg Bear's novel Queen of Angels

I recently finished reading Slant by Greg Bear, the sequel to his thought-provoking novel Queen of Angels. Although Slant was about a lot of things, the part that interested me the most was the depiction of two artificial entities vying for control of the country. One of these entities was named Jill, and in some ways she was no more than a very advanced computer. Her "body" was a box filled with circuitry and she "saw" with electronic sensors. Yet the author treated her like any other character in the novel. Portions of the story are told from her viewpoint, and she is described in human terms. Jill feels happy and Jill feels sad. She pities certain individuals, loves others and hates still others. She can feel scared or curious. She is self-aware, and she is one of the most sympathetic characters in the book. Greg Bear even saw fit to give her a gender!

Slant dredged up a whole slew of questions for me that I'm sure have become clichéd by now but which I still find interesting (and which have never really been resolved). Can machines think? Can they feel emotions? Can they have a soul? These are utterly loaded questions, of course, and the answers depend entirely on what you mean by "soul" and "think" and "feel" and "machine". I am going to answer these questions for the record.

Do I think it is possible for a machine to think and feel? Absolutely. At last count there were over 5.5 billion examples of thinking and feeling machines on this planet alone, and that number is increasing every day. Yes, human beings are machines, in the broadest sense of the word - they are biochemical and bioelectrical devices that need fuel to run and that can be damaged if not properly cared for. They have a certain nature and act a certain way and operate within certain parameters. Admittedly, they are not man-made machines and they are pretty complicated to boot and, of course, we've only begun to understand them, but that doesn't change the thrust of the argument.

I think the problem many people have with calling human beings "machines" is that the word has certain connotations that are hard to avoid. When many people think of "machine" the first thing they think of is their car or computer or, even worse, their CD player, and they automatically conclude that calling a human a "machine" means likening it to one of these objects. The term is much broader than that. In the context of this discussion, I take "machine" to be a synonym for "anything that produces work". So, yes, a CD player is a machine, and so is a human or a star, but, more importantly, these are all machines of radically different natures, that work in radically different ways. So let me be blunt: I am not saying that the human brain is simply an advanced computer. The kind of machine that we call a "computer" is a radically different device from the machine we call the "human brain". But both are machines.

It helps us, perhaps, to remember that human beings have always had their place in the natural universe. By the very nature of our existence, we are all natural entities, existing within the framework of natural law (by "natural law" I mean the fundamental laws governing the nature of the universe) and understandable within the context of science (defined here, for my purposes, to be the study and cataloguing of natural law). It is inconceivable that the situation could be otherwise - unless you throw off the shackles of logical discourse entirely and claim that we are all divine entities - and I am not prepared to do that! To claim that something is divine (or "unknowable" or "beyond logic" or "by its very nature, utterly outside the realm of human understanding") is to claim that it exists outside the jurisdiction of the most fundamental laws of reality. You all know the fundamental laws I'm talking about, at least implicitly - the really basic ones like the law of identity (a thing is what it is: a star is a star) and its corollary, the law of non-contradiction (a thing can't be a star and a dog at the same time). The problem with claiming that humans are beyond logic is that it suddenly renders rational discussion unnecessary and sucks all the power out of statements like "But that doesn't make any sense!". Why would anybody try to argue anything if all you had to do to dismiss an argument was say "But that's only logic" and that would be the end of that?

You can see where this is leading. If human beings are natural entities, then all the things we usually associate with them, like rational, conscious thought, a sense of humor, love, hate, pity and so on, are nothing more than natural phenomena. I am not saying these things don't exist or that they are "convenient illusions", as some materialists would have us believe. I am not trying to trivialize the human condition, and I fully realize that love, hate, and pity are real and that people truly feel these things. I am saying that these things exist, like us, within the framework of natural law and hence can be understood and explained by scientific means. That being the case, there's no conceptual reason why it should be impossible to emulate such phenomena (rational thought, emotions, self-awareness) in a device, a man-made machine that mimics their operation in nature. In other words, there is no conceptual reason why it should be impossible to build a machine that works like the human brain, complete with the ability for rational thought and emotions to boot. Real, honest-to-goodness, thinking and feeling machines exist in nature already (approximately 5.5 billion of them). Why should it be philosophically impossible for us to build another one? There could be technological reasons why it may be impossible for the time being, but that is a different matter; at one point, it was technically impossible to build an airplane.

If we were ever to build such a "human-like" device, I am of the firm opinion that it would be just as much of a "person" as anyone else on the planet, at least when it comes down to mental functioning. Even if we didn't succeed in recreating a body, a sufficiently "brain-like" device would, presumably, experience the same mental states that we go through every day. There would be nothing "artificial" about such an intelligence. It would be as real as you or me. Why shouldn't such a machine feel love or hate? Why shouldn't we love or hate it in return? Why shouldn't we accord it the same rights and privileges that we accord other people? Why can't it develop a personality, or a sense of humor, or write poetry?

Why can't the metal box have a soul? We supposedly do; why can't it?

I'm very bad at gauging public opinion on any issue, so I don't know what kind of flak, if any, I'm going to receive for the above statements. They all seem perfectly obvious to me, but I suspect some people will take offense at the notion that human beings don't have any particularly special status in the universe and that - horror of horrors! - we too are subject to the laws of reality. Some still insist that there is just something different (read: divine) about the human species. Every time someone dredges up issues of religion and divinity in discussions such as these, I know how Darwin must have felt when confronted by hardcore Creationists. I have not read the Origin of Species, and I can't pretend to know whether all of Darwin's specific conclusions were correct. The last I heard, he may have been wrong on certain points. This is all, of course, completely irrelevant, because Darwin's greatest contribution was not any specific theory, but his general demystification of human biology. He put the human body in its place. He put it under the control of science. I am waiting for someone to do the same thing to the human brain. To this end, I proudly consider myself a Mental Darwinist.

I've said a lot about how human beings are machines with a certain nature and as such should be "copyable" with other devices, but what exactly is this nature? What is the nature of the human brain, and exactly how would one go about emulating it? To be blunt, I haven't a clue. Just because the human brain has a certain nature doesn't mean that anyone knows what it is (yet). But this is a technological limitation, not a philosophical one. I can put down some ideas, but since I haven't really done any research on the topic, nothing I write should be regarded as practical advice.

My personal opinion is that the human brain is a fundamentally different beast from what academics call the "Turing machine". A Turing machine is a programmable device that can perform certain kinds of mathematical calculations. All computers that were ever in practical use, from the ENIAC to the Cray to the Pentium III, are in essence Turing machines (strictly speaking, finite approximations of one, since real memory is bounded). These devices are strictly deterministic and can never go beyond their original programming. Since it seems to me that humans can go beyond their programming, I don't believe that, no matter how fast computers get, we could ever fashion one into a human-like brain. At best, one could develop a program that could superficially imitate some static human behavior - probably good enough to consistently fool any person if he or she was placed behind a screen - but that's a far cry from saying that the computer can "feel" the way we feel and "think" the way we think. This is why I have my doubts about people who insist on philosophizing about machine intelligence in terms of "programming", "instructions" and "algorithms". I think it's a dead end.
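For the curious, the determinism I'm describing can be made concrete. A Turing machine is fully specified by a finite transition table, and given the same tape it will always produce the same output; there is nowhere for novelty to come from except the table itself. Here is a minimal toy simulator (my own illustrative sketch, not taken from any particular source - the example machine simply adds one to a binary number):

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (new_state, symbol_to_write, head_move).
# Because the table is a fixed lookup, the machine is strictly
# deterministic: the same input tape always yields the same output.

def run(tape, transitions, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: increment a binary number by one. The head starts
# at the leftmost digit, walks right to the end of the number, then
# moves left, carrying until it can write a 1.
INCREMENT = {
    ("start", "0"): ("right", "0", "R"),
    ("start", "1"): ("right", "1", "R"),
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run("1011", INCREMENT))  # -> 1100 (binary 11 + 1 = 12)
```

Run it a million times on the same tape and you get the same answer a million times, which is exactly the property I find hard to square with human mental life.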

I could, of course, be wrong on this. Alan Turing believed that to qualify as "intelligent", a machine would only have to fool people into thinking that was the case. There is something to be said for this "external" approach. After all, you don't have to approach the subject of steam engine design armed with a knowledge of mesons and quarks. An approximate empirical knowledge of steam will suffice. I still have my doubts that this will result in a truly human-like machine, with comparable mental states, but who am I to argue with Alan Turing?

Another approach would be to try to figure out how human consciousness arises from the physical processes inside the brain, and then try to emulate those processes in another machine and see what comes out of it. I have no opinion on this. It seems incredibly difficult, but what do I know? Yet another idea would be to try to understand how the human psyche works without necessarily referencing the physical processes that make it up. Presumably our understanding of it would then be in terms of metaphors; we could then build those metaphors into another machine and see how it works.

I don't know. I won't pretend to be a pioneer on the subject of artificial intelligence, but I firmly believe that a genuine machine consciousness is, at least, possible. I look forward to the time when my next door neighbor might just be the artificial entity created last week.