Artificial intelligence is, some suggest, the next frontier for humanity. Creating neural networks that perform functions, and that may one day perform better than our own biological systems, seems to be the great excitement in academic circles, where ‘tinkering’ with mathematical, electrical and robotic models is encouraged. Freeman Dyson, Professor Emeritus of Physics at the Institute for Advanced Study in Princeton and author of The Sun, the Genome and the Internet, said of his own life as a mathematician working in particle physics, among other interests, that it was “scribbling, it has no real usefulness at all, other than to others who like to scribble and talk in numbers.” He called it his “livelihood,” while his real concern, as I noted during our discussion, was alleviating suffering. His work in numbers, he insisted, did not do that. According to others, numbers in fact can.
Bart Kosko, a well-known electrical engineer, philosopher, mathematician and professor at the University of Southern California, credited with advancing fuzzy logic, makes an effort in his book Heaven in a Chip: Fuzzy Visions of Society and Science in the Digital Age to account, in systems, for the vast spectrum of life that is neither black nor white but every variation in between. Fuzzy logic, at least in my opinion, is not really fuzzy at all. It is a precise calculation of the least number of elements needed to create a self-sustaining reactive process. If your electronic toy frog has to jump over puddles, it has to be able to determine how big a jump to make, and that depends on the size of the puddle it has to clear. Fuzzy systems are the result of patterns among elements that can react to an ever-changing environment. That is what humans do; so do animals and other living organisms. Kosko suggests that in the future we will inhabit “a world, as mere receptors of artificial agents, hosts of computer chips that can replace our own brains, even out perform them, make our memory perfect” and at all times accessible, “heaven in a chip, even.” We disagreed on the desirability of such developments, Kosko explaining why he thought technologically induced immortality would be beneficial.
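The frog-and-puddle idea can be made concrete with a minimal sketch of a fuzzy controller: puddle width partially belongs to overlapping categories (small, medium, large), and the jump is a weighted blend of the rules those categories trigger, rather than a single rigid threshold. The sets, rule outputs and numbers below are my own illustrative assumptions, not Kosko's model.

```python
def triangle(x, left, peak, right):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def jump_distance(puddle_cm):
    """Blend three rules by membership weight (a centroid-style defuzzification)."""
    rules = [
        (triangle(puddle_cm, -1, 0, 30), 10.0),   # small puddle  -> 10 cm hop
        (triangle(puddle_cm, 10, 30, 50), 40.0),  # medium puddle -> 40 cm jump
        (triangle(puddle_cm, 30, 60, 90), 70.0),  # large puddle  -> 70 cm leap
    ]
    total = sum(weight for weight, _ in rules)
    if total == 0:
        return 70.0  # wider than every set: assume the longest jump
    return sum(weight * out for weight, out in rules) / total
```

A 20 cm puddle is partly “small” and partly “medium,” so the frog gets a jump somewhere between 10 cm and 40 cm; the output varies smoothly as the puddle grows, which is the reactive, in-between behavior the paragraph describes.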
Hans Moravec, robotics engineer and Director of Carnegie Mellon’s Robotics Institute, author of Robot: Mere Machine to Transcendent Mind, shares the sentiment that our machines will do things for us that we cannot do for ourselves: explore other planets, beat our artificial hearts when our own fail, give us sight when our own eyes cannot see, “ultimately run companies, enabling us to become the leisurely species.” When I asked him what would in all likelihood be the first commercial application of robots, he said with a chuckle, “robots that vacuum the house.” As we already see in Israel, where electronic sensors make it possible to give plants only the water they need, and when they need it, increasing yields by as much as 40% and reducing water use by 60%, efficient technology put to life-saving applications is something to get excited about, robotic or otherwise.
Yet in responding to the current artificial intelligence (AI) community’s projection that our future will be dominated by intelligent machines, which will outperform our own abilities and somehow or other be integrated with us, creating a new bionic human, my concern is not the threat machines pose to our existence, but rather the threat we pose in creating machines that can diminish human participation in life and possibly atrophy free will. The degrading of our spiritual nature today may be setting the stage for such a future, one that, in my opinion, would lead not to our immortality but more likely to our potential enslavement.
After reading numerous books currently considered reflective of the future of science, from works on genetics and cloning to robotics and artificial intelligence, I find the prevailing expression decidedly anti-soul: not necessarily in the religious terms of atheism or agnosticism, but in a generalized assumption that human beings are essentially bodies, and that if there is an experience of Deity, or soul, we cannot explain how it happens, though we can approximate it and then experience it as a controlled opportunity, as though it were nothing more than physical mechanics, chemical and biological events. We don’t have to wait for a sense of serenity; we can manufacture it by programming our brain to experience it. In essence, even consciousness can be manufactured, and immortality, the suggestion goes, can be engineered, gained by technological invention. I do not share this world view, which makes the body, and not the soul, the central vehicle of human life, and the brain the repository of the sum of human experience and consciousness.
War games, which even children play on video terminals, express, I believe, the essence of the computer age thus far. While computers can connect diverse spectrums of life, much as atoms are connected in all matter, they also widen the separation between being responsible for our actions in the world and acting on the world from a distance. Just as we expect robots to lead the way into outer space, and already utilize space for wars, there is, more and more, a drive to make us robotic in order to explore our own divine inner space. It is a denial of the sacred, the ‘holy’ replaced instead with an edifice of equations that only approximate, but do not generate, a single fleck of life from nothing. The totemistic culture growing up around our intelligent machines should concern us for what it says about our values as people, as a culture, as a civilization: humans losing touch with our own immaterial natures and, tragically, undervaluing them.
While millions worldwide starve needlessly, famine being far more a political tool than an environmental necessity, we obsess over making better machines, not over being better humans. If we hold ourselves to such exceptionally low standards of performance on earth, as a community interested in life, and our machines reflect, at least at ground zero, the framework we design into them, what if our machines do end up reflecting us, and then exceed our own organizational talents? Will they orchestrate the manner in which to starve us to death, and then return to their laboratories to engineer a better human, one that requires less upkeep? Might this become a fundamental robot rule: less is best, when it comes to humans? If we think so little of human life ourselves, why wouldn’t our intelligent machines? If we cannot love each other, how will systems designed to reflect us not act like us?