Why Robots Won't Rule


Chris Malcolm, old enough to remember when one used chads to patch binary programs, is a lecturer in the School of Artificial Intelligence of the Division of Informatics of Edinburgh University. His research interests include robot control architectures and the philosophy of robotics.

Graph courtesy of Hans Moravec.

There is a currently popular argument that within a few to several decades robots (or some other kind of intelligent machine) will have become so much more intelligent than us that they will take over the world. This argument is seriously put forward by knowledgeable scientists working in appropriate disciplines. They take different attitudes to this future. For example, Professor Moravec (a roboticist from Carnegie Mellon University, US) thinks this will be good, because we will be handing the torch of future civilisation over to our ``children''. Professor Warwick (a roboticist from Reading University, UK) thinks they may snatch the world from us before we are willing to hand it over. Professor de Garis (head of the Artificial Brain Project of Starlab, Belgium) thinks there will be a war between those who are on the side of the robots and those who are against them. Ray Kurzweil (developer of some of the world's most advanced speech synthesisers and recognisers) thinks that we can participate in this takeover by superintelligences by having microscopic nanocomputers link themselves into our brains, so that we become superintelligent ourselves. What they all agree on is that it is inevitable that some kind of machine will soon become vastly more intelligent than us.

Like all the best conjuring tricks, the argument depends on distracting you with astonishing facts while some assumptions sneak past.

The astonishing facts are a generalisation of Moore's Law. In 1965 Gordon Moore, then of Fairchild (he later co-founded Intel), predicted that the number of transistors packable into a silicon chip would double every year. The doubling period turned out to be closer to 18 months. It affects both computer processors and their associated on-board memory, i.e., the two components most responsible for what we think of as ``computer power''. In other words, we can expect computer power to double every 18 months. But for how long?

Computer scientists predict that the silicon chip technology on which current computers are based has another ten to twenty years left before it hits fundamental physical limits beyond which no further progress in miniaturisation will be possible. What then? In fact, as Moravec has shown, Moore's Law can be projected backwards to before the dawn of ``silicon chips'', right back to the early pre-computer clockwork calculators and tabulators. Moravec also normalises the data to ``processing power per $1000 (1997)'' to produce a ``bang per buck'' version of Moore's Law. When this data is plotted it can be seen that Moore's Law has leapt seamlessly from technology to technology, always finding a new one before the old one ran out of steam. This suggests that Moore's Law is a specific instance of some deeper law concerning information processing technologies in general. So, if the trend persists, we can expect Moore's Law to keep going, leaping technologies again, and again, and again.

A robot of the 1950s (Grey Walter's Machina Speculatrix)

Forever? It turns out that we needn't worry about forever, because something very interesting indeed happens in the next few decades. Within a few decades $1000 (1997) will buy a computer with the processing power of the human brain, according to our current best estimates of what that is. Such is the magic of this kind of exponential growth of computer power (doubling every 18 months) that it doesn't matter if we have underestimated the power of the human brain by a factor of 100: we would only have to wait another ten years for these $1000 computers to be 100 times more powerful. Would you prefer to wait until the computers were as powerful as the summed brain power of the entire planetary human population of six billion people? You would just have to wait another 50 years.
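If you would like to check that arithmetic, the waiting times follow directly from the 18-month doubling period. The little calculation below is a minimal sketch in Python (the shortfall factors are the ones quoted above; the 18-month doubling period is Moore's Law as stated earlier), converting a shortfall factor into years of waiting:

    import math

    DOUBLING_PERIOD_YEARS = 1.5  # Moore's Law doubling period, as above

    def years_to_close(shortfall_factor):
        """Years of exponential growth needed to close a given factor."""
        doublings = math.log2(shortfall_factor)
        return doublings * DOUBLING_PERIOD_YEARS

    print(years_to_close(100))  # underestimated the brain by 100x: ~10 years
    print(years_to_close(6e9))  # six billion human brains: ~49 years

Ten years to make up a hundredfold error, and roughly fifty to match six billion brains, precisely because each doubling costs only eighteen months however large the target.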

In short, we have somehow managed to get ourselves onto a technological escalator which will produce cheap computers of superhuman processing power within a few to several decades. This is the astonishing fact: computers are soon likely to outstrip the processing power of the human brain.

The first assumption which sneaked past is that this increase in computer processing power will automatically mean an increase in the intelligence of whatever is using these computers for brains. As Moravec has shown, this is what has happened so far in a number of areas in robotics and AI. For example, the difference between the very first autonomous robot ever made, Grey Walter's Machina Speculatrix of the 1950s, and one of today's most complex robots, Honda's P3, is very impressive indeed. In order for machine intelligence to keep step with computer power, however, we need a much stronger argument than that it often happens. We need to be able to say that it always will, that this is a general rule to which there are no exceptions.

A robot of the 2000s (Honda's P3)

Unfortunately there are exceptions. Artificial Intelligence (AI) has achieved many successes with the kind of canned ``intelligence'' exemplified by Expert Systems, which capture the expertise of a human expert, often a consultant diagnostician such as a medical specialist. These systems are, however, notoriously fragile, falling apart quite idiotically when moved slightly beyond their area of expertise. In other words, they lack the general underpinning of ``common sense'' which we have. They are also incapable of the widely general, insightful, and rapid learning which characterises human students. For example, if you can teach someone how to play chess, you can also teach them how to play Mah Jong. But it is not possible to adapt a chess-playing computer program to play Mah Jong; you have to start all over again from scratch.

These two areas, common sense and machine learning, are generally recognised in AI to be extremely difficult research areas whose surface we are only just beginning to scratch. They are also generally recognised to be crucial to the development of human-scale intelligence. The progress of AI in these areas at the moment consists largely of finding out how very much more complex they are than we first supposed.

The single most important lesson which AI has learned in its 50 years of research is a generalisation of Hofstadter's Law of software development: the problem is much more difficult than you think, even when you take this into account. In other words, the optimists do not have a good track record.

That is one reason why machine intelligence will not automatically keep pace with the growth of computational power.

Even if it did, however, that is still not enough to permit the ``robots will take over'' scenario, because the second assumption which sneaked past is that something which displays some of the attributes of creaturehood must possess all the attributes of creaturehood, i.e., must in effect be a real creature although built by artificial means. We are strongly disposed by evolution and by habit to suppose that anything displaying some aspects of animate behaviour is animate. It's a usefully cautious assumption in a dangerous world. The point about creatures is that millions of years of evolution have equipped them with a fierce determination to survive. This involves such things as attacking other creatures who threaten their dinner or territory.

Intelligence is no more enough to make a real creature than are fur and beady eyes. No matter how much intelligence is added to your word processor, it is not going to sulk and refuse to edit any more letters if you don't improve your spelling. And no matter how much intelligence you add to your washing machine, robot butler, or whatever, it is not going to become anything more than a smarter contraption. Our problem is that while we have got used to the idea that teddy bears are not real, even though we may be in the habit of talking to them at length, we are not used to contraptions being intelligent enough to talk back, and are willing to credit them with possession of the full orchestra of creaturehood on hearing a few flute-like notes.

MIT's Kismet ``emotional'' robot face

This is like what happened with Vaucanson's famous mechanical duck, the duck which aroused such controversy that it still features today in the saying ``if it walks like a duck, and quacks like a duck, then it is a duck''. In 1738 Vaucanson exhibited his marvellous mechanical duck to an astonished Paris. It had multiply jointed realistic wings, could move its head around and mimic the swallowing neck movements of a duck, ``eat'' grain, splash water, etc. The Parisians were used to ingenious clockwork automata which played whistles, wrote with pen on paper, and so on, but what astounded them about this duck, and convinced them that it was a real step towards artificial life, was that it had guts made of rubber hose and actually shat evil-smelling duck turds soon after eating. Unlike all the other ingenious clockwork automata of the time, this one seemed to approach that miraculously self-sustaining feature of life: getting its energy (or seeming to) from grain instead of from clockwork springs. Although it was fastened to a large plinth full of the gears and pulleys that made it work, the press of the day, being just as gullible as today's concerning these matters, soon had it capable of walking and swimming and really nourishing itself on grain. And of course, everyone started asking ``If this is what can be done now, what on earth will be possible in another 50 years? If it walks like a duck, and quacks like a duck, will it really be a duck?''

Joseph Weizenbaum was the author of one of the early attempts at passing the Turing Test for Artificial Intelligence, the well-known ``Eliza'' conversational program (some of its incarnations known as ``Doctor'' because it emulated the sympathetic enquiries of a psychotherapist). In the 1970s, having seen so much human gullibility and anthropomorphisation directed at Eliza (some people responded to ``her'' as though to a real person, although ``she'' was no more than a bag of barely plausible text manipulation tricks), he concluded that the human race was simply not intellectually mature enough to meddle with such a seductive science as artificial intelligence. We would simply make dreadful fools of ourselves by anthropomorphising and over-interpreting everything. For example, it is very difficult, when faced with an apparently emotionally responsive creature such as MIT's ``Kismet'' robot head, not to imagine it has real feelings behind those large eyes which follow you about, blink, and frown.
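To see how little machinery lay behind Eliza's apparent sympathy, consider the sketch below. It is not Weizenbaum's code (the original was written in his SLIP list-processing language), merely a minimal Python illustration of the keyword-and-template trick involved; the particular patterns and replies are invented for the purpose:

    import random
    import re

    # Reflect the speaker's pronouns back at them, Eliza-style.
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECT.get(word.lower(), word)
                        for word in fragment.split())

    # A few keyword rules with response templates (invented examples).
    RULES = [
        (r"\bi am (.*)", ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
        (r"\bi feel (.*)", ["Do you often feel {0}?"]),
    ]
    DEFAULT = ["Please go on.", "Tell me more.", "I see."]

    def respond(utterance):
        """Echo the first matching keyword rule, else stall politely."""
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(DEFAULT)

    print(respond("I am unhappy about my work"))
    # e.g. "Why do you say you are unhappy about your work?"

A dozen such rules and a stock of evasive defaults are enough to sustain the illusion of a sympathetic listener, which was precisely Weizenbaum's point.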

I'm afraid that Messrs de Garis, Kurzweil, Moravec, Warwick, etc., have proved Weizenbaum all too prescient. I presume of course that the fact that publishers and TV programme makers want to hear about robots taking over, and don't want to hear about robots not taking over, has nothing to do with it.

It takes more than quacking and shitting to make a duck.

[A much more detailed discussion of the points raised in this article can be found at http://www.dai.ed.ac.uk/homes/cam/Robots_Wont_Rule2.shtml.]


Chris Malcolm 3 August 2001
http://www.dai.ed.ac.uk/homes/cam/.


References

Hugo de Garis and Don Mooradian, The Artilect War, first draft, available on-line at http://foobar.starlab.net/~degaris/artilectwar.html. Although not yet conventionally published, it has aroused a lot of discussion.

Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Penguin USA, 2000.
http://www.kurzweiltech.com/ray.htm.

Hans Moravec, Robot: Mere Machine to Transcendent Mind, Oxford University Press, 1998.
http://www.frc.ri.cmu.edu/~hpm/.

Kevin Warwick, In the Mind of the Machine, Arrow Books, 1998.
http://www.cyber.rdg.ac.uk/K.Warwick/.

Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman & Co., 1976.