Published in UFO Magazine Apr/May 02

The creation and application of artificial intelligence will impact the 21st century in much the way the internal combustion engine impacted the last one. Always trying to get a sense of the future, Zohara Hieronimus featured a series of interviews on her nationally syndicated radio program, “Future Talk”, to address this subject and how it will affect the quality of human life. Among the world-renowned scientists and educators who have discussed artificial intelligence on “Future Talk” are Hans Moravec, Freeman Dyson, Ray Kurzweil, and James Martin. They are unanimous in the opinion that the computerization of tasks, the networking of systems, and autonomously reproducing systems are all being created; the disagreements lie in the degree to which any of these changes could pose a threat to our own evolution. All agree that, to some degree, an automated evolution is imminent.


Ray Kurzweil, author of The Age of Spiritual Machines: When Computers Exceed Human Intelligence, believes that “the technological evolution itself is a continuation of the biological evolution that gave rise to the technology-creating species in the first place. Evolution involves both creation and destruction. What we perceive as good and evil is a spiritual progress—it’s a spiritual process. If we look at what evolution has done, it creates more and more intelligence, more and more beautiful entities, and it moves us closer to the ideals that we associate with the spiritual life and with God. And we call God infinite intelligence, infinite beauty.”

Freeman Dyson, Professor Emeritus of physics at the Institute for Advanced Study in Princeton, and author of The Sun, The Genome, and the Internet: Tools of Scientific Revolutions, sees the progress simply as a natural unfolding from the last century of science. “The first half of the century was the blossoming of physics, and the second half was the blossoming of biology. Clearly the big question now is what about the next 50 years? Which will be the leading subject in the next 50 years? My bet is neurology, the science of nerves and brains, and astronomy. That is clearly ripe for a golden age.”

Dyson pointed out that much of scientific development is driven not by careful consideration of some well-planned outcome, but by immediate responses to immediately perceived problems, such as security threats, environmental challenges or industrial needs. Rather than our inventions resulting from vision, long-term consideration, or even an adequate testing period to evaluate their impact on society, the environment, or our consciousness, our impulse for research and development is often the product of the fear of the moment.

But without this attention to our impact on the future, we could easily find ourselves in “a brave new world” in which artificial intelligence can redesign and reinvent itself without the control of human engineers. Indeed, James Martin, author of After the Internet: Alien Intelligence, believes artificial (or alien) intelligence will become so complex that humans will not even be able to understand the computers’ processes. “Ultimately the question becomes: Will we control the machines, or will they control us?”


Placing robots on an evolutionary scale like that of life on earth, Hans Moravec, author of Robot: Mere Machine to Transcendent Mind, says that most robots are currently insect-like, “which means they can’t really be used in most places. We’re just now at the threshold of having machines that can freely navigate.” First generation robots will be all-purpose machines “that have perception enough, for instance, to vacuum a room or work on an assembly line, but if something goes wrong, they can’t figure out what to do.” By 2020, he predicts, we will have second generation robots with what is called “accommodation learning”, making them very “mouse-like” by contrast. “Third generation robots, however, will be able to watch something and create a model in their minds and imitate it.” Like a monkey. As for fourth generation robots, which Moravec predicts by 2040, “they will be able to reason like humans.”

James Martin disagrees, believing there is a massive flaw in the reasoning surrounding this topic. “Almost everything you read about artificial intelligence gives the impression that computers are going to be intelligent like humans are. And what I see evolving is something absolutely different from that. Artificial intelligence will not be like human intelligence, it will be completely different. In many cases we will not be able to understand it, so that is the reason I use the term ‘alien intelligence’, to mean an intelligence fundamentally different from ours.”

A Pulitzer Prize nominee, textbook writer and technology consultant, Martin noted that “most engineering is about finding a goal. It’s being done for a particular purpose. But quite often there are surprises along the way, and I think that in the computer industry the surprises are going to come at a pretty fast and furious rate. Part of the reason for that is that we’ve now reached a time where we can set the computers up so that they don’t just follow programs, but they can read, they can learn, they can evolve on their own. And when they do that, surprising things are going to happen.”


All four scientists were certain that, with the speed of computation, computers would become self-automating, self-duplicating and creatively intelligent about enhancing their own performance, thanks to their ability to distinguish patterns. Each expert saw the surprises that an automated evolution would pose as inevitable: we would not know the machines’ properties until they were manifest, learning about them only through after-the-fact observation. We would play “catch-up” with our own creations.

Hans Moravec, a founder of the world’s largest robotics program at Carnegie Mellon University, described one of those surprises as an intelligent “outlaw robot”, or one that would disobey its programming (what is sometimes called computer DNA). “The wild,” Moravec said with a degree of playfulness. “Wildlife is what it [would be] like. Those machines will compete among each other for resources, for space and materials and energy. But they will be intelligent. In fact, they’ll be way more intelligent than we are.”

Worried about robots and semi-bionic entities interfering with our own evolution? The experts are divided on whether you should be. Kurzweil suggested that the technology itself won’t pose a threat, but the humans who control it could. Martin believes, on the other hand, that robotic power itself could pose a threat. “There’s no question,” he said. “Everybody knows the really giant computers are millions of times faster than the human brain, and they’re going to get much faster. So it could be billions of times faster, and hundreds of billions of times faster. And that means that if they are doing their own thing, often doing something which we don’t have to program, then they’re going to be way ahead of us, and we’re not going to be able to understand, necessarily, the logic processes which are inherent in what the computers are saying we should do.”


What might it mean to create self-creating and replicating machines that process information in ways our brains are incapable of? We all know that human memory distorts things and that our brains are millions of times slower than supercomputers. The courses of science, the economy and human intelligence have begun to engage the machines of a post-industrial age, or as some suggest, a post-human age. By 2020, Kurzweil says, “we’re going to see such a radical change in the way we relate to one another, the way we integrate with machinery, even the kinds of neural implants available for the new bionic human.” Though we know we can do all of these things, wouldn’t it be wiser to sometimes pause and ask, “Should we?” Kurzweil, for one, can reply to that question without hesitation. “We have to realize that it’s hard to stop the sweep of progress. We’re not going to instantly create the world of 2020 or 2030 overnight. It comes through thousands of little steps, and each step makes sense as it’s happening, because each step is a small increment of progress, and there’s a tremendous economic imperative to keep progress going.” Given the speed at which the world is becoming dependent on these vast computer networks and automatically integrated systems, however, wouldn’t that also make the world more vulnerable to electronic destabilization?

Designers often point to the Internet as an example of a relatively safe and stable technology because it is decentralized and no one has been able to take the entire Internet down for even a minute. When portions of it go down, information simply routes around them. Kurzweil pointed out the trend “of moving from real meetings and real communications to virtual meetings. From real mail to email. The recent events are going to accelerate that because of the inherent safety of the virtual world…. You can go down the road even 20 years, and much of learning will be virtual, at a distance.” Kurzweil’s image of the bionic human is more like Hollywood than most, however, and James Martin is more concerned that artificial intelligence will be alien to our thought processes in many respects.

Yet if Kurzweil’s vision of humans merging their biological intelligence with non-biological intelligence becomes a reality, what would be the impact on society’s values, ethics, law and order? “We’re going to be actually augmenting our biological intelligence with an intimate connection to machine intelligence,” says Kurzweil. He cites the predicted use of nanobots, microscopic robots the size of blood cells, inserted into the capillaries of our brains, where they will be able to communicate “wirelessly, non-invasively, with our biological neurons…. We could have billions of these nanobots in our brain communicating with each other on a wireless network, communicating with our biological neurons. We could all essentially be ‘on the web’ or online all the time. By augmenting our real intelligence with artificial intelligence, we could go instantly into virtual reality environments that could be a full immersion, incorporating all of the senses.” In other words, we would become even more dependent on machines and artificial intelligence—even for pleasure.


Moravec goes even further than Kurzweil’s semi-utopian virtual world. His bottom line is a future where “humans no longer matter.” Needless to say, this view is provocative and disturbing from a theological and spiritual perspective. Whether we live in a jungle tribe or city, most of us think that we matter and that “human beingness” is important. Moravec contends that this perspective will gradually change as we begin to accept these new technologies. “I mean, I’ve done it in my mind, but the intelligent machines don’t really exist yet, so at this point it’s strictly fantasy in my mind. But as they really do begin to exist, I think people will begin to relate to them as to other people. In fact, many of the robots will be nicer than the average person.”

It is the nature of any evolutionary process to grow exponentially, because, as Moravec stated, “one stage of evolution builds on the products of the last stage. So we use the more powerful results of the latest generation of technology to create the next generation. That’s why it accelerates.” Moravec believes this exponential process could eventually eliminate the need for humans.

Ray Kurzweil’s theories cause him to see this subject quite differently. “It’s not going to be a simple matter of getting machines on the left side of the room and the humans on the right. It won’t be a clear distinction between the two, because we’re going to have many ways of mixing up the two worlds of intelligence. One thing, we’re going to be extending our own minds by this intimate integration of our actual biological neurons with non-biological entities. Nanobots [will be] floating in the capillaries of our brains and they’re going to be working together, so if you talk to someone, a biological human, in the year 2040, you’re actually interacting with someone who’s part machine.”

A brave new world, or a weird artificial one? What is clear, and what the scientists in the lab are quick to admit, is that the machines we are creating now, and those that will build themselves later, will ultimately have a great deal of influence over our future.

Robot: Mere Machine to Transcendent Mind, by Hans Moravec, Oxford University Press, 1999

After the Internet: Alien Intelligence, by James Martin, Capital Press, 2000

The Sun, The Genome, and the Internet: Tools of Scientific Revolutions, by Freeman Dyson, Oxford University Press, 1999

The Age of Spiritual Machines: When Computers Exceed Human Intelligence, by Ray Kurzweil, Viking Penguin, 1999