I never completed my childhood project to build a mechanical friend, but I never really gave up on it, either. As I grew up, I learned that my efforts followed a long tradition. Aristotle speculated on constructing a device that would “understand the will of its master.” “Barring that,” he wrote, “there will always be slavery.” In the 1500s, several philosophers and magicians were rumored to have built mechanical men, including Friar Bacon’s talking brass head and the Rabbi of Prague’s humanlike automaton. In the 17th century, the philosopher Gottfried Wilhelm von Leibniz actually described in some detail how a thinking machine could work, but, mindful of the church, he was careful to point out that it would not have a soul. He phrased his speculations carefully, asking whether “God could at least join the faculty of thought to a machine which was made ready for it.”

The issue that Leibniz so delicately avoids is at the heart of our fascination and our fear: if a machine can think, then what does that say about us? We humans have always favored theories that seem to put us comfortably apart from and above the rest of the natural world. Historically, we liked the idea that our Earth was at the center of the universe. The problem with tying our sense of self-importance to such theories is that we feel diminished when they turn out to be wrong. Galileo’s telescope seemed to make us less special. So did Darwin’s version of our family tree.

In the past, our ability to reason seemed to make us unique. We could deal with Darwin’s message by assuring ourselves that our noble rationality separated us from the beasts. Today we feel threatened, not by the beasts but by the machines. Already, computers can keep track of details better than we can. They can prove theorems that we cannot. They beat us at chess. They are not yet as clever as we are, but they are getting smarter, and getting smarter faster than we are. Our monopoly on rationality is in peril, and it is time to move our sense of self-worth to a higher perch.

So, if reason is no longer a uniquely human attribute, how do we separate ourselves from machines? These days, many philosophers are placing their hopes on consciousness instead. According to this latest version of human-superiority theory, machines may become intelligent, but they will never feel. If this is so, we have a convenient distinction to justify our intuition that we are a cut above machines. My own guess is that this is another case of wishful thinking. There is no scientific reason to suspect that consciousness is unique to humans. Animals are probably conscious, at least a little. Intelligent machines will probably be conscious, too.

A few decades ago, the prospect of a computer’s beating humans at chess would have seemed shocking. Today we take it as a matter of course. We get used to things. Chess is still a fun game. We still enjoy each other’s company. It turns out that being the best at chess wasn’t such a definitive element of being human after all. Life goes on. Fortunately for the human ego, it is unlikely that machines will suddenly become as intelligent as people in every way. Intelligence is complicated and multifaceted; it is not a single magic principle. Machine intelligence will emerge gradually, improving decade by decade, giving us time to get accustomed to it.

Although I believe that we will someday build thinking machines, I am much less confident that we will ever really understand the process of thought. This may sound like a contradiction. How can we build a mechanism that does something that we do not understand? Actually, we do it all the time. Anyone who has written a complicated computer program realizes that there is a difference between understanding the parts and understanding the consequences of how they interact. Even a state-of-the-art computer like Deep Blue can surprise its designers. Computers a few decades from now will be thousands of times faster and more complicated. Their thoughts will be correspondingly more difficult to predict and understand.

Some people find this prospect of inscrutable machines disturbing, but I assume we will get used to it. After all, how many people today really understand the workings of the telephone system, the Internet or even a single personal computer? We understand how to get along with them, and that is enough. Humans are long accustomed to living with other organisms without knowing how they work. Since the Garden of Eden, we have been surrounded by mysteries. We humans will no doubt continue to worry, to hope, to love and to wonder, surrounded by the garden of our own machines.