The rapidly increasing speed and sophistication of computer hardware and software (in recent years the amount of computer power available per dollar seems to have doubled every year, and in earlier years, every two years) has led many people to worry about the possibility of computers or robots awaking from their mechanical sleep and rising up to pose some kind of threat to the human race, even to the extent of taking over the world and either exterminating us or keeping us as pets. Not only do I not believe this, I consider it a dangerous argument. It is dangerous in two ways. The first is that, like all the best urban rumours and horror stories, it feeds upon our prejudices and fears to produce a story that many people want to believe. The second is that these kinds of plausible misconceptions can affect public policy towards research and development. In the history of AI that has already happened once, and may have happened twice.
The first time was in the early 1970s. In the UK the Government commissioned a special report from the Science Research Council (the infamous Lighthill Report), which damned AI and recommended withdrawal of research funding. The same kind of official doubts which the Lighthill Report made explicit in the UK lay less explicitly behind a similar slowdown in research funding in the US.
The second time may have been in the last stages of defining the Alvey Initiative, the UK's answer to the threat of the Japanese Fifth Generation Project. The Initiative recommended putting a lot of money into AI research, which it renamed Knowledge Based Systems so as not to confuse MPs who might have remembered the Lighthill Report of little more than a decade earlier, which had told them what rubbish AI was. It was rumoured in some of the UK national press of the time that Margaret Thatcher watched Professor Fredkin being interviewed on a late-night TV science programme. Fredkin explained that superintelligent machines were destined to surpass the human race in intelligence quite soon, and that if we were lucky they would find human beings interesting enough to keep us around as pets. The rumour is that, on seeing this, Margaret Thatcher decided that the ``artificial intelligentsia'' to whom she was just proposing to give lots of research funds under the Alvey Initiative were seriously deranged. Her answer was to double the amount of industrial support required by a research project in order to be eligible for Alvey funding, hoping thereby to counterbalance their deranged flights of fancy with industrial common sense. I have so far been unable to substantiate this rumour. Fredkin did say that on British TV at the right time, and there were last-minute increases in the amount of industrial support required for Alvey eligibility, but I have not found documentary support for the link.
"Robot Terror", a letter from New Scientist of 30/1/99 from Nicholas Albery of the Institute of Social Inventions in which he seeks support for their petition:
"In view of the likelihood that early in the next millennium computers and robots will be developed with a capacity and complexity greater than that of the human brain, and with the potential to act malevolently towards humans, we, the undersigned, call on politicians and scientific associations to establish an international commission to monitor and control the development of artificial intelligence systems."[More details can be found of this in the list below.]
Apart from being a dangerous belief, I think this Robot-Takeover scenario is misleading in a scientifically important way, because it elides a very important issue at the heart of robotics and artificial intelligence, one which has serious implications for research and system architectures, and which for the moment I shall gesture vaguely at with the terms autonomy, agency, and creaturehood.
While preparing my arguments against this view, I am collecting notes and references on the topic here, in the hope that they will be of assistance to others considering the problem, and that readers may point out further useful material to me.
``If machines are going to be more intelligent, then what kind of treatment can we expect from them? We should probably be treated the same way we treat less intelligent life.''
"In view of the likelihood that early in the next millennium computers and robots will be developed with a capacity and complexity greater than that of the human brain, and with the potential to act malevolently towards humans, we, the undersigned, call on politicians and scientific associations to establish an international commission to monitor and control the development of artificial intelligence systems."From the Institute's Web Site/ can be found Robots could enslave human race which is based largely on Warwick's ideas, and which has a set of polling buttons you can press to indicate what you think of the idea of setting up an international body to monitor and control the development of AI.
De Garis considers the problem of robot (or Ultra Intelligent Machine) takeover in detail, and suspects that those who view it positively as humanity's great destiny (e.g. Moravec) will end up fighting a war with those (e.g. Warwick) who consider it a dreadful fate which we must avoid.
Quotations from the Introduction:-
`I believe that 21st century global politics will be dominated by the issue of "species dominance".
`I believe that 21st century technologies will allow the creation of "artilects" (artificial intellects, artificial intelligences, ultra intelligent machines), with intellectual capacities zillions of times greater than those of human beings.
`I believe that this technological possibility will force the issue of whether artilects should be built or not.
`I believe that humanity will split into two major ideological camps, one in favor of building artilects (the "Cosmists") and those opposed (the "Terrans").
`I believe that the ideological disagreements between these two groups on this issue will be so strong, that a major "artilect" war, killing billions of people, will be almost inevitable before the end of the 21st century.'
Not everyone, however, believes these prophecies of doom. Joseph Weizenbaum of MIT was long ago the author of the famous Eliza program, which simulated the rather simple conversational responses of a Rogerian psychotherapist ("I got lost on my way here." "That is interesting that you got lost on your way here. Did your mother ever lose you as a child?" etc.). One day he found his secretary, who knew very well the whole thing was a bag of plausible tricks with the intelligence of a flea, consulting Dr Eliza about the problems she was having with her husband. He was so appalled by the gullibility of people in the face of apparently knowledgeable behaviour by computer systems that he decided the human race simply lacked the intellectual maturity to be allowed to pursue research into the seductive realms of AI. His book Computer Power and Human Reason argues the case. I had always thought JW had gone rather over the top in his alarmist reaction here. After all, his secretary may simply have found the ``bag of tricks'' to be a useful way of provoking her to think about the issues, much as many people will use horoscopes and Tarot cards without actually believing there is some Cosmic Intelligence behind them, believing simply that the chance fall of cards is one way of provoking a useful kind of brainstorming on the issues. However, faced with the extraordinary claims which otherwise intelligent and well-educated people, some of them even working in the field, are prepared to make about the imminent future of robotics, I have changed my mind. I have decided to call this gullible tendency to over-interpret what lies behind the apparently cute or knowledgeable behaviour of artefacts the Weizenbaum Illusion, in honour of JW's prescience here.
Note that we have been constructed by evolution to imagine that anything that displays mind-like behaviour has a mind. The Weizenbaum Illusion is not just silliness; it is a compelling intuition which has been wired into our brains. This idea was demonstrated very neatly in the science fiction story of the indestructible robot. A physicist bets a roboticist that he can't build an indestructible robot. Come the day, the physicist is shown a small furry thing running around on a table, given a hammer, and invited to destroy it. The physicist raises the hammer. The furry thing turns over on its back and squeaks piteously. The physicist finds himself unable to destroy it.
There follow some links to the views of perspicacious people who have managed to resist the seductive Weizenbaum Illusion.