Moore's Law
[Graph courtesy of Intel.]
In 1965 Gordon Moore, then of Fairchild (later to co-found Intel),
predicted that the number of transistors that could be packed into a
chip would double every year. This was a little optimistic: the
doubling time has since turned out to be closer to 18 months, a
revision Moore accepts. It
affects both computer processors and their associated on-board memory,
i.e., the two components most responsible for what we think of as
``computer power''.
Amdahl's Constant
In computer architecture there is a ratio between processor speed and
memory which provides optimum performance in terms of ``bang per
buck''. If processor speed is expressed in instructions per second,
and memory is expressed in bytes, then Amdahl (founder of the Amdahl
Corporation, a computer manufacturer) suggested that the optimum
ratio is one byte of memory per instruction per second, i.e., a 20
MIPS (million instructions per second) processor is best allied with
20 Megabytes of memory. Roughly
speaking, on average this does seem to have been the case. The exact
number is not important. What is important is there is some simple
association between processor speed and the amount of memory that the
processor can conveniently exploit. Since memory is produced by the
same technology as processor chips, Moore's Law applies to
computer power in general, in terms of the entire package of processor
and memory.
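As a rough illustration, Amdahl's rule of thumb amounts to a one-line calculation. The following minimal Python sketch (the function name and the rendering are purely illustrative, not from the original) simply restates the 20 MIPS / 20 Megabyte example:

```python
def balanced_memory_megabytes(mips: float) -> float:
    """Amdahl's rule of thumb: one byte of memory per instruction per second."""
    instructions_per_second = mips * 1_000_000
    bytes_of_memory = instructions_per_second * 1.0   # one byte each
    return bytes_of_memory / 1_000_000                 # report in megabytes

# The example from the text: a 20 MIPS processor is balanced by ~20 Megabytes.
print(balanced_memory_megabytes(20))   # -> 20.0
```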
Moravec's Extension
[Graph courtesy of Hans Moravec.]
Computer technology analysts predict that the current technology on
which computers are based has another ten to twenty years left before
it hits fundamental physical limits beyond which no further progress
in miniaturisation is possible. What then? In fact, as Moravec has
shown, Moore's Law can be projected backwards to before the dawn of
``silicon chips'', right back through the earlier computer and
calculator technologies of clockwork, relays, electronic valves
(``tubes'' in the US), and discrete transistors. Moravec also normalises the data
to ``processing power per $1000 (1997)'' to produce a ``bang per
buck'' version of Moore's Law. When the data is plotted it can be
seen that Moore's Law has leapt seamlessly from technology to
technology, always finding a new one before the old one ran out of
steam. So, if this trend persists, we can expect it to leap again, and
again, and again.
Forever? It turns out that we needn't worry about forever, because
something very interesting indeed happens much sooner. Within a few
decades $1000 (1997) will be able to buy a computer with the
processing power of the human brain, according to our current best
estimates of what that is. Such is the magic of this
kind of exponential growth of computer power (doubling every 18
months) that it doesn't matter if we have underestimated the power of
the human brain by a factor of 10, 100, or even 1,000. In the extreme
case of an underestimate of 1,000 we simply have to wait another
fifteen years for the requisite computer power to arrive. And we only
have to wait another ten years for these $1,000 computers to be 100
times more powerful than a human brain.
Would you prefer to wait until the computers were as powerful as the
summed brain power of the entire planetary human population of six
billion people? You need only wait about another 50 years.
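The arithmetic behind these waiting times is just repeated doubling. Here is a minimal sketch, assuming the 18-month doubling period used above (the function and the shortfall factors are illustrative, not from the original):

```python
import math

DOUBLING_PERIOD_YEARS = 1.5   # the 18-month doubling assumed above

def years_to_close_gap(shortfall_factor: float) -> float:
    """Years of doubling needed to make up a given factor of underestimate."""
    return math.log2(shortfall_factor) * DOUBLING_PERIOD_YEARS

print(round(years_to_close_gap(1_000)))           # ~15 years for a 1000x underestimate
print(round(years_to_close_gap(100)))             # ~10 years to be 100x one brain
print(round(years_to_close_gap(6_000_000_000)))   # ~49 years for six billion brains
```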
In short, we have somehow managed to get ourselves onto a
technological escalator which will produce cheap computers of
superhuman processing power within a few to several decades.
Will Moore's Second Law Stop It?
Unless of course Moore's Second Law stops it. Moore's Second Law
concerns the cost of the fabrication plant needed for each new density
of silicon chip. Once more this is an exponential law, but this time
it is the cost of the plant which grows: linearly with respect to the
degree of miniaturisation, i.e., exponentially with respect to time. So
as Moore's First Law is making computers cheaper, Moore's Second Law is
increasing the cost of tooling up for the production run, which pushes
the cost of computers back up. At the moment the First Law is
winning, but soon the Second Law will catch up, and the price per
transistor will bottom out. There will then be no economic incentive
to make transistors smaller, so the progress of Moore's First Law will
stop. According to Ross [Ross 1995] this will happen between 2003 and
2005, given that particular technology of course. Perhaps by then we
will have switched to optical or nanocomputing.
The Flawed Assumptions
These arguments about what and when might stop the progress of Moore's
First Law become increasingly speculative and technical. I don't wish
to consider them here, because what is really interesting is the
conclusion some people, such as Professors Moravec and Warwick, draw
from this sustained progress: that when computers have exceeded the
human brain in computing power, they (in the form of robots or
whatever) will be more intelligent than us, and will take over the
planet. Moravec thinks this will be a good thing,
because we will be handing the torch of future civilisation over to
our ``children''[Moravec 1998]. Warwick thinks this may not be so
good, because they may snatch it from us before we are willing to hand
it over, and we won't be able to stop them, because they'll be too
clever and too technologically advanced [Warwick 1998]. De Garis
thinks there will be a terrible world civil war between those people
who are on the side of the robots and those who are against them [de
Garis 2001]. Kurzweil thinks that we human beings can start to
participate in this increase in intelligence by having microscopic
nanocomputers link themselves into our own brains [Kurzweil 2000].
What all these predictions have in common is two assumptions, often
felt to be so obvious that they are barely even stated. The first is
that this increase in computer processing power will automatically
mean an increase in the intelligence of whatever is using these
computers for brains. The second is that superintelligent machines
will automatically want to take over the world.
I think both of these assumptions fail to hold. The first fails to
hold because intelligence is more, much more than just raw
processing power, although high intelligence requires a large amount
of processing power. It's like supposing that just because you've
fitted a rocket engine to the roof rack of your motor car you will now
be able to get to work faster.
The second assumption is due to what I call the Weizenbaum
Illusion, our human tendency to over-interpret things which behave
in some ways as though they might have minds. It's a modern version of
the same illusion which tempted early man to think that thunder meant
the Gods were angry. It leaps over what I call the
contraption/creature chasm, i.e., it imagines that a
superintelligent machine will not just be a useful contraption, it
will be a creature, with all that that entails about hostility towards
something which threatens its dinner. However, just as no amount of
extra linguistic ``intelligence'' in your word processor will cause it
to try to electrocute you because of your poor grammar, so no amount
of extra intelligence in your domestic robot will cause it to demand
that you do the washing up for it, because it has more important
things to do, and you are its inferior and therefore obliged to serve
it. Why not? Because it will not be a creature, with all that
creaturehood implies of survival, competitiveness, territoriality,
etc.
The Bad Effects of Silly Ideas
Some people argue that these predictions of the coming supremacy of
robots are just too silly, and that even bothering to refute such
silly ideas is a waste of time. They are just entertaining science
fiction horror stories which happen to have caught the attention of
the popular media. However, some of those who are putting forward
these allegedly silly ideas are very clever knowledgeable men who have
spent a great deal of time in related research, such as Moravec, and
have spent time dealing seriously with the obvious objections to their
ideas. Despite the reluctance many academics and scientists feel
about engaging in debates which have moved into the public arena, I
think these arguments deserve to be taken seriously for two
reasons. Firstly, they raise some important issues relevant to
artificial life and artificial intelligence. Secondly, these ideas can
influence public
policy and Government research funding strategies. Even in the brief
history of artificial intelligence research, only about fifty years
old, it is possible that this has happened twice already.
The First Time, the Lighthill Report
The first time happened in the early 1970s. In the UK the Government
commissioned a special report from the Science & Engineering Research
Council (the infamous Lighthill Report) which damned AI and
recommended withdrawal of research funding. The same kind of official
doubts which the Lighthill Report made explicit in the UK lay less
explicitly behind a milder slowdown in research funding in the
US. This is sometimes referred to generically as the ``first AI
winter''. The basic problem was an annoyed backlash by the research
funding agencies in response to the failure of the optimistic
predictions of the early AI pioneers. For example, in 1958 Newell and
Simon predicted that computers would by 1970 be capable of composing
classical music, discovering important new mathematical theorems,
playing chess at grandmaster level, and understanding and translating
spoken language [Newell & Simon 1958].
Was the Second Time the Alvey Initiative?
The second time may have happened in the mid 1980s, in the last stages
of defining the Alvey Initiative, the UK's answer to the threat of the
Japanese Fifth Generation Project. The Alvey Report, on which the
Initiative was based, recommended putting a lot of money into AI
research, which they renamed Knowledge Based Systems so as not to
confuse Members of Parliament who might have remembered the Lighthill
Report of little more than a decade earlier which had told them what
rubbish AI was. It was rumoured in some of the UK national press of
the time that Margaret Thatcher watched Professor Fredkin being
interviewed on a late night TV science programme. Fredkin explained
that superintelligent machines were destined to surpass the human race
in intelligence quite soon, and that if we were lucky they might find
human beings interesting enough to keep us around as pets. The rumour
is that Margaret Thatcher decided that the ``artificial
intelligentsia'' to whom she was just proposing to give lots of
research funds under the Alvey Initiative were seriously deranged. Her
answer
was to increase the amount of industrial support required by a
research project in order to be eligible for Alvey funding, hoping
thereby to counterbalance their deranged flights of fancy with
industrial common sense. I have so far been unable to substantiate
this rumour. Fredkin did say that on British TV at the right
time, and there were last minute unexpected increases in the
amount of industrial support required for Alvey eligibility.
Will the Third Time be the Albery Committee?
In a letter in New Scientist of 30/1/99 Nicholas Albery of the
Institute of Social Inventions sought support for their petition:
``In view of the likelihood that early in the next millennium computers
and robots will be developed with a capacity and complexity greater
than that of the human brain, and with the potential to act
malevolently towards humans, we, the undersigned, call on politicians
and scientific associations to establish an international commission
to monitor and control the development of artificial intelligence
systems.''
This petition was a direct result of a brainstorming session at the
Institute of Social Inventions in April 1998 introduced by Professor
Kevin Warwick on the basis of his 1997 book The March of the
Machines (an earlier edition of In the Mind of the Machine)
in which he predicted robots (or superintelligent
machines of some kind) forcibly taking over from the human race within
the next 50 years.
The Revolt of the Machines
This fear of machines revolting and enslaving us is not new. In Samuel
Butler's 1901 satirical novel Erewhon, his hero, who has
travelled to a remote and technologically strange civilisation,
writes of it:
``I also questioned them about the museum of old machines, and the cause
of the apparent retrogression in all arts, sciences, and inventions. I
learnt that about four hundred years previously, the state of
mechanical knowledge was far beyond our own, and was advancing with
prodigious rapidity, until one of the most learned professors of
hypothetics wrote an extraordinary book (from which I propose to give
extracts later on), proving that the machines were ultimately destined
to supplant the race of man, and to become instinct with a vitality as
different from, and superior to, that of animals, as animal to
vegetable life.''
The argument of Butler's Professor of Hypothetics [Butler 1901] is
basically the same as those advanced by the modern advocates of the
view that robots (or other superintelligent machines) will forcibly
take over.
The first step in the argument is that computers will soon be more
powerful than human brains. This is debatable on grounds of technology
and economics. I don't propose to deal with that here because I think
the more serious and interesting flaws in the argument come later.
The next step in the argument claims that we, or the robots
themselves, will be able to organise all that raw computer power in
such a way as to provide the robots with a human level of
intelligence. The final stage in the argument claims that things
which are so much cleverer than us will develop their own purposes,
and in pursuit of those purposes will sweep us out of the way or
exploit us just as easily as we pushed wolves aside and exploited
sheep.
Will intelligence develop along with processing power?
It is true that there are many examples of increases in machine (or
robot) capability or ``intelligence'' keeping step with the
development of computing power. I argue that this is for a special
reason which doesn't hold in general. That reason is that we had
earlier developed lots of promising techniques which the computers of
the time were simply too slow to run.
Vision consumes processing power
For example, in the 1970s we already understood the general lines of
how to decode the images from stereo cameras to provide a description
of the 3D scene in front of the cameras. The problem was that each
image consisted of about a million pixels, each of which had first to
be compared to its neighbours before, after a lot of deduction of
edges and surfaces and volumes, a detailed description of the 3D
geometry of the scene could be produced. None of the individual
calculations was problematic; the problem was simply the millions of
times the calculations had to be done. This required
either millions of simple computational units running in orchestrated
parallel, or one computer running fast enough to do all the work in,
say, a second. It is only very recently that enough computer power to
do as much calculation as this in under a second has started to come
within our reach, the result being visible in many computer games and
virtual reality work, as well as the recently greatly improved speeds
of robots depending on vision to navigate in the world.
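To see why a one-second budget was out of reach for so long, here is a minimal back-of-envelope sketch; the per-pixel operation count and the machine speeds are illustrative assumptions, not measured figures:

```python
# Back-of-envelope workload for the stereo-vision pipeline described above.
PIXELS_PER_IMAGE = 1_000_000
IMAGES = 2                 # a stereo pair
OPS_PER_PIXEL = 100        # assumed cost of comparing each pixel with its neighbours

ops_per_frame = PIXELS_PER_IMAGE * IMAGES * OPS_PER_PIXEL   # 200 million operations

# Seconds per stereo frame at various processor speeds (MIPS):
for mips in (1, 10, 1000):
    print(mips, "MIPS ->", ops_per_frame / (mips * 1_000_000), "seconds per frame")
# A 1 MIPS machine of the 1970s needs minutes per frame; only at hundreds or
# thousands of MIPS does the work fit into the one-second budget mentioned above.
```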
Given the high resolution of the human visual system, and the large
amount of our brain power devoted to processing it, machine vision is
likely to go on soaking up increases in computer power and returning
usefully increased performance for some time to come.
Chess consumes processing power
There are similar examples in abstract thought of the ``intelligent''
kind, such as chess. Let's consider first the very simple game of
noughts and crosses. It is not hard for experienced human players to
learn how to see right through to the end of this simple game from the
start, and once they've done that, they have become perfect
players. If we exploit the rapid computation and perfect memory of
computers, and ignore symmetries and other tricks with which to prune
the search space, then looking ahead through every possible game of
noughts and crosses involves examining at most 362,880 (9 factorial)
sequences of moves. It's easily within the reach of the computational
speed and memory of desktop or even palmtop computers today to play
noughts and crosses by simple exhaustive search and thereby be perfect
players. I'm ignoring
the fact that there are much easier ways of being a perfect player,
because I'm showing the effects that simply increasing the brute force
of computers can have on the performance of very simple methods which
can be elaborated manifold to produce ``intelligent'' behaviour.
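As a concrete illustration of the brute-force search being described, here is a minimal exhaustive (minimax) noughts and crosses player. It is an illustrative sketch, not code from the original article:

```python
# Exhaustive search of noughts and crosses: the game is small enough to solve completely.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position for 'X' by searching every possible continuation."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                     # drawn position
    scores = [minimax(board[:m] + player + board[m+1:],
                      'O' if player == 'X' else 'X') for m in moves]
    return max(scores) if player == 'X' else min(scores)

# From the empty board, perfect play by both sides gives a draw (score 0).
print(minimax(' ' * 9, 'X'))   # -> 0
```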
Chess is more complicated. If we assume that the average game involves
an average of 40 moves for each player with an average of 33 choices
per move, then we get about 10^120 (a 1 followed by 120
zeroes) lines of play to examine, if we wish to look forward from the
start to the very end. Since by current estimates 10^120
exceeds the number of atoms in the universe it's clear that we have
hit a problem. The required computer would be too big to fit into our
universe. That is what scientists refer to as a ``fundamental
limitation''. It's not going to be possible for computers to play
chess by looking ahead to the end in this simple exhaustive way. They
will simply have to do what people do, which is to look ahead as far
as they can in the time they have available. Every time we get a more
powerful computer the same simple chess program will be able to look
further ahead. So all we have to do is to wait for computer power to
develop far enough for a simple exhaustive look ahead search to defeat
the world chess champion. That hasn't happened yet. Deep Blue, which
defeated Kasparov in 1997, used a lot of clever tricks to optimise the
search. In effect Kasparov was defeated as two prongs of research and
development closed their pincers on him. One prong was the doubling of
computer power every 18 months. The second prong was chess programmers
devising ever cleverer tricks to prune the search.
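The combinatorics quoted above are easy to reproduce. A minimal sketch, using the branching factor of 33 and the 80 half-moves assumed in the text:

```python
import math

BRANCHING = 33    # average number of choices per move assumed above
HALF_MOVES = 80   # 40 moves by each player

log10_lines = HALF_MOVES * math.log10(BRANCHING)
print(f"about 10^{log10_lines:.0f} possible lines of play")   # roughly the 10^120 quoted above

# For comparison, exhaustive noughts and crosses:
print(math.factorial(9))   # -> 362880 move sequences
```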
If the hardware had developed more slowly the programmers would have
come up with a few more tricks and beaten Kasparov with a slower
computer and better software. If the hardware had developed more
quickly they wouldn't have had time to develop so many clever tricks,
but the extra raw computer power would have made up the difference,
and Kasparov would have been defeated by a simpler program running in
a more powerful computer.
Computer chess started in the 1950s, and by 1970 it was discovered
that simply increasing the depth of search correlated fairly linearly
with the chess rating scores used to rank human chess players. At that
point Kasparov's doom was known to be sealed. Even if no advances in
chess programming occurred, the relentless advance of computer power,
already well settled on its exponential rails in 1970, would soon
produce a machine powerful enough to beat him.
Just as was the case in computer vision, the structure of the problem,
and our early understanding of it, meant that we would be able to keep
exploiting advances in computer power to produce more and more useful
performance to limits well beyond today's computer power. And if it
should happen that the continuing development of computer power causes
vision processing times to drop under say 50 milliseconds, rendering
further improvement unnecessary, we will be able to soak up plenty
more power simply by increasing the resolution of the cameras. And if
chess playing computers soon become so far ahead of human chess
players that we can no longer usefully exploit more computer power, we
can simply switch to a game like Go, whose combinatorial explosion
makes chess look like noughts and crosses.
Articulated motion consumes processing power
I have given one example in the area of sensory processing, vision,
and one in the area of abstract thinking, chess. Let me conclude this
part of the argument by giving one example from the area of motion
control.
Consider the problem of moving an arm of about the reach, strength,
speed, and dexterity of the human arm, without a hand, just the seven
joints of the arm. As a consequence of the physics of the problem it
turns out that we need to run the basic joint control algorithms in
something like 10 milliseconds to get that sort of performance. In the
1980s it wasn't possible to supply that kind of computer power in an
economical package in one computer, so all the industrial robot arms
of that time used a number of computers, usually dedicating a small
computer to each joint, and having an extra one controlling the joint
controllers. Today we can easily control an arm of that
sophistication with the power of a single modern PC. But if we add
another arm, then two legs, then balance control, then complex
20-jointed hands, then vision, we soon end up exhausting the power of
a modern PC just in trying to reach basic primate sensorimotor
control.
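To make the arithmetic of that 10 millisecond control budget concrete, here is a minimal sketch of one control cycle; the proportional-derivative control law, the gains, and the target values are illustrative assumptions rather than anything from the article:

```python
import numpy as np

N_JOINTS = 7
CYCLE_SECONDS = 0.010      # the roughly 10 ms control cycle mentioned above
KP, KD = 50.0, 5.0         # illustrative proportional and derivative gains

def pd_update(q, dq, q_target):
    """One proportional-derivative torque update for all joints at once."""
    return KP * (q_target - q) - KD * dq

q = np.zeros(N_JOINTS)                 # current joint angles
dq = np.zeros(N_JOINTS)                # current joint velocities
q_target = np.full(N_JOINTS, 0.5)      # desired joint angles
torques = pd_update(q, dq, q_target)

# At a 10 ms cycle the controller must produce 100 such updates a second,
# for every joint, alongside sensing, kinematics, and trajectory planning.
print(torques, N_JOINTS / CYCLE_SECONDS, "joint updates per second")
```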
In short, there are special areas where the repeated multiplication of
simple processes keeps giving better performance. The basic reactive
sensorimotor control of a sophisticated humanoid robot is one of those
areas and is still well outside the scope of our current PCs. Thus we
will be able to continue to soak up the increasing power of computers
for quite some time in simply providing more sophisticated
sensorimotor control. There are also certain restricted areas of
abstract thought, such as chess, where we will also be able to soak up
increasing power in producing better performance.
The interesting question is whether all aspects of intelligence
will behave in the same way. Are there perhaps some areas of
intelligence where we don't have much idea how to use the computer
power we already have, and providing more isn't going to help to
advance things? If there are, then these areas will advance not with
the breathtaking speed with which computer technology is developing,
but will be limited by the rather more pedestrian speed of human
scientific research into difficult questions.
The deceptive improvement of robot technology in the last 50 years
A robot of the 1950s (Grey Walter's Machina Speculatrix)
During the late 1940s and early 1950s W. Grey Walter, the
biologist and cyberneticist, developed his famous Tortoise, Machina
Speculatrix, a small simple three-wheeled autonomous mobile robot
whose "brain" consisted of a valve, a relay, and a capacitor, but
which was capable of apparently complex lifelike behaviour, including
feeding (charging) when "hungry", pursuing lights, and flocking
behaviour in groups. He added a few more valves to produce a creature
capable of learning a conditioned reflex. He saw this as the beginning
of a new kind of biology, synthetic biology, which investigated the
principles of animal design by trying to make simple artificial
creatures. Other cyberneticists of the time, such as Ashby, took this
up, but they were severely hampered by the technology of the time --
valves and relays -- which restricted them to creatures with at most a
few dozen very simplified "neurons", simpler even than the idealised
"neurons" of today's computational neural nets.
A robot of the 2000s (Honda's P3)
Fifty years later there are a number of impressively humanoid-seeming
robots around, perhaps the most impressive being Honda's P3. The
technology of motors etc. has advanced a bit since Grey Walter's time,
but it would have been possible fifty years ago to put together
something which was mechanically similar, if a bit cruder. The reason
nobody did was that the computer technology necessary to control so many
motors and sensors in a co-ordinated fashion was not available
then. In other words, what at first sight might appear to be a very
rapid development of robotic technology is in fact a rapid development
of computer technology which has permitted the control of much more
complex devices.
The problem of learning
The most important area of Artificial Intelligence in which we are
still limited by our understanding rather than computer power is
learning. Learning is an essential component of intelligence. Indeed,
we often call unintelligent people ``slow learners''. Cognitive
psychologists and animal ethologists have been studying learning for a
long time, and have come up with rough classification schemes for
categorising the different kinds of learning. In Artificial
Intelligence many different kinds of learning have been implemented.
The first obvious point is that it is not easy to marry the
categorisations of the psychologists with those from AI. In other
words, those who are looking primarily at what the learning does are
modularising the phenomena in quite different ways from those who are
looking primarily at the internal mechanisms. The second point is that
many of the kinds of learning in AI are very slow compared with their
biological counterparts, not in terms of computing speed, but
in terms of how much experience is needed for the learning to take
place. In other words, they are very ``slow learners'' indeed.
These are all good clues that we don't really understand learning very
well yet, and the improved computer power of tomorrow is not going to
help. We need more research and more ideas.
The speed of research
How long does this kind of research take? A good example can be
provided in the area of learning itself. In the 1940s McCulloch and
Pitts proposed connectionism, or neural networks, as a feasible
architecture modelled after the massive simple neuronal parallelism of
the brain [McCulloch & Pitts 1943]. In the 1950s Rosenblatt devised a
kind of simple single layer network and a learning algorithm for it,
which he called a perceptron [Rosenblatt 1958]. In the 1960s a
number of authors showed the fundamental limitations of this kind of
learning [Minsky & Papert 1988]. It was clear that a way in which
learning could take place in multi-layer networks was
required. Although such a method (backpropagation) was developed
independently in the 1970s by several researchers, it took until the
1980s before its significance was appreciated and it began to be
widely used. It works well, but is a very slow learner. Ways of
improving the speed of backpropagation learning are currently a very
active research area. In other words, in this one particular
subsection of learning research we have made significant progress in,
say, fifty years, but are still far from approaching biological speeds
of learning.
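For readers unfamiliar with the technique, here is a minimal sketch of backpropagation learning in a two-layer network (a toy network learning XOR; the network size, learning rate, and epoch count are illustrative assumptions). Its need for thousands of presentations of just four examples illustrates the ``slow learner'' point:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR: beyond a single-layer perceptron

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)      # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)      # hidden -> output layer
lr = 1.0

for epoch in range(10_000):                        # thousands of passes over four examples
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)            # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should end up close to [0, 1, 1, 0] (seed-dependent)
```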
The modern counterparts of the Professor of Hypothetics like to
finesse awkward problems of research and understanding like this by
imagining that once the machines become an order of magnitude or two
superior to us in intelligence, they will also inevitably take over
scientific research from us, and solve all our baffling problems,
including those of how to make them even smarter, at mind-boggling
speeds. This simply begs the question, however, because the
problem is learning, and you can't solve it by supposing
the robots have become good enough at learning to discover its
solution!
The problem of generality
Another problem which needs to be solved in the development of the
superintelligent robot is generality and integration. Suppose a
specific human being is a good chess player, speaks three languages,
is a medical doctor, and plays tennis, golf, and billiards. All of
these are specialised areas in which we have good expectations of
being able to produce good artificial competence in a robot, but we
couldn't simply put all these together into a robot, add a general
purpose conversationalist, a fair quantity of common sense, and have
the equivalent intelligence of this human being. The point is that
human beings are capable of learning all these things, and many
more. The way we devise a robot tennis player or medical diagnostic
system is by trying to copy the knowledge and inference methods of a
human expert. We haven't the slightest idea how to devise a system
which would be capable of learning all these kinds of things,
and also capable of inventing completely new games and negotiating
their rules.
It can be considered that robotics research began seriously in the
1960s. In the early 1980s a number of robot research laboratories
concluded that we now knew enough about most of the component
subproblems of controlling assembly robots in the factory, and it was
time to aim for an integration of all these component parts into an
overall architecture [Lozano-Perez & Brooks 1985]. As it became clear
that these attempts were doomed to become increasingly mired in a bog
of computational intractability, some researchers, most notably
Rodney Brooks (MIT), decided that these programmes were based on a
fundamentally flawed architecture, and that a new research programme
was required to discover and elaborate the principles of a new, more
appropriate architecture [Brooks 1986]. The loose coalition of
roboticists inspired by and contributing to Brooks's lead soon
developed to the point of constituting a new paradigm (in the Kuhnian
sense of a new approach based on a new philosophy and a new
methodology [Kuhn 1970]) in robotics research [Malcolm et al
1989]. That research programme is now developing a robot modelled
after a human torso, known as Cog [Brooks 1997], a sufficiently
interesting robot that the philosopher Daniel Dennett has devoted a
paper to the question of whether (and in what sense) Cog might one day
become conscious [Dennett 1994].
The moral of this story is that it took 25 years of robotics research
to discover that the aspects of robotic behaviour being developed were
not going to fit together to provide further, more general
competence. This was for most a completely unexpected result, which put
most of the world's robot manufacturing companies, who were waiting
hopefully for the promised robot revolution in manufacturing, out of
business. We are now 15 years into the new robotics paradigm, and
still taking the measure of the magnitude of the problem which in the
early 1980s had simply seemed a question of polishing up the
technology and software a bit.
This is one more illustration of the most important lesson which
Artificial Intelligence has learned in its 50 years of research: the
problem is more complex and difficult and fundamental than you think,
even when you take this into account. (This is a generalisation of
Hofstadter's Law of software development [Hofstadter 1979]).
How does the Ghost get into the Machine?
There is some confusion over whether it will be necessary for an
intelligent robot to be conscious in the way that we think we are in
order for the robot to reach our levels of intelligence, and in order
for it to pose a threat to us by taking its own independent autonomous
decisions, by having its own ``free will''. There are questions here
to which we do not yet know the answers. For example, even if it is
necessary for us to be conscious in order to be able to do all that we
can do, we do not know if that is the only way of reaching that
level of performance. If there is another non-conscious way, then a
robot need not be conscious in our sense to be more intelligent, and
to be a threat to us. On the other hand, it might be the case that
consciousness of some kind is an inevitable concomitant of getting all
the rest of the machine right, so we won't have to take any special
extra steps to provide the machine with consciousness.
Further than this, it might be a mistake to think of consciousness in
terms of a single person, brain, or machine. It may be that
consciousness arises out of linguistic social interaction, is a kind
of ``centre of narrative gravity'' that we acquire through telling
each other why we do what we do, and why others do what they do, and
in so doing we learn the trick of looking at ourselves as though from
the outside, and looking at others as though from the inside.
It may even be the case that the whole notion of consciousness as we
currently think we understand it is a ghastly mistake, an artefact of
the Cartesian Error of dividing mind from body and creature from
world, whereas it might be that mind emerges from the interactive
and historical matrix of mind/body/world. If this is the case it is no
wonder we are baffled when we look at the machinery of the brain and
find something important strangely absent. As Leibniz put it three
hundred years ago:
It must be confessed, however, that perception, and that which
depends upon it, are inexplicable by mechanical causes, that is to
say, by figures and motions. Supposing that there were a machine
whose structure produced thought, sensation, and perception, we
could conceive of it as increased in size with the same proportions
until one was able to enter into its interior, as he would into a
mill. Now, on going into it he would find only pieces working upon
one another, but never would he find anything to explain
perception. [Leibniz 1714].
The philosopher Gilbert Ryle suggested that this was a ``category
mistake'', akin to the question raised by a foreign potentate to whom
one has just shown the University Campus, Halls of Residence, Lecture
Theatres, Library, Laboratories, etc., ``Yes, this is all very
interesting, but when are you going to show me the University
Itself?'' [Ryle 1949].
Books can and have been written on this topic, and recent insights of
neurophysiology and artificial intelligence have stirred up philosophy
of mind into a very active and interesting research area. I propose to
sidestep it here, since there is too much of relevance that we do not
yet know, and there are plenty of suggestive clues that at least some
of our current ideas are nonsense. It is possible to sidestep it here,
since the point of real importance to whether superintelligent
robots could threaten us is whether or not they could be
creatures, not whether or not they could be conscious,
since it is at least possible that there could be such a thing as a
dangerous non-conscious creature.
In this connection it is interesting to note that since we are
conscious, and prefer to explain our own behaviour, and that of
others, in terms of the contents of a conscious mind, we have a habit
of ``anthropomorphising'' the behaviour of lesser creatures, and even
of complex contraptions. For example, we may interpret the naughty
toilet behaviour of our dog as a deliberate act of revenge, or (in a
fit of rage against some Microsoft product) attribute the loss of our
files to computer malevolence. This kind of ``anthropomorphic
metaphor'' has a
lot of explanatory convenience, but there is a lot more to it than
that. Any hunter of animal prey will benefit from being able to
imagine what the prey can see from its point of view, what it
therefore thinks, and how therefore to fool it. I don't know if lions
imagine what a gazelle is thinking. I do know that human hunters do,
and that it is a very useful skill. Consequently evolution may well
have wired into our brains a natural propensity to this kind of useful
anthropomorphising.
Evolution, of course, could never have ``foreseen'' that we would one
day be playing with ``intelligent'' contraptions of our own devising,
such as computers and robots, and when considering how they behave we
are as likely to anthropomorphise as we are when looking at our
dog. This leads to something I call the Weizenbaum Illusion.
The Weizenbaum Illusion
Joseph Weizenbaum of MIT was long ago the author of the famous
Eliza program which simulated the rather simple conversational
responses of a Rogerian psychotherapist.
You: "I got lost on my way here."
Dr Eliza: "That is interesting that you got lost on your way here. Did your
mother ever lose you as a child?"
etc.
One day he found his secretary, who knew very well the whole Eliza
thing was a bag of plausible tricks with the intelligence of a flea,
nevertheless seriously consulting Dr Eliza about the problems she was
having with her husband. He was so appalled by this evidence of the
gullibility of people in the face of apparently knowledgeable
behaviour by computer systems, that he decided that the human race
simply lacked the intellectual maturity to be allowed to pursue
research into the seductive realms of AI. He thinks if we persist in
this kind of research our natural gullibility will cause us to make
awful fools of ourselves, with possibly dangerous consequences
[Weizenbaum 1977].
I had always thought Weizenbaum had gone rather over the top in his alarmist
reaction here. After all, his secretary may simply have seen the ``bag
of tricks'' as a useful way of provoking her to think about the
issues, much as many people will use horoscopes or Tarot cards without
actually believing there is some underlying Cosmic Intelligence
guiding the fall of the cards, simply that the chance fall of highly
ambiguous cards is one way of provoking a useful kind of brainstorming
on the issues. However, faced with the extraordinary claims which
otherwise intelligent and well-educated people are prepared to make
about the imminent future of robotics, I have changed my mind. I have
decided to call this gullible tendency to over-interpret what lies
behind the apparently cute or knowledgeable behaviour of artefacts the
Weizenbaum Illusion, in honour of Weizenbaum's prescience here.
MIT's Kismet ``emotional'' robot face
The Weizenbaum Illusion is not just silliness, it is a compelling
intuition which has been wired into our brains. This idea was
demonstrated very neatly in the science fiction story of the
indestructible robot. A physicist bets a roboticist he can't build an
indestructible robot. Come the day the physicist is shown a small
furry thing running around on a table, given a hammer, and invited to
destroy it. The physicist raises the hammer. The furry thing turns
over on its back and squeaks piteously. It has big eyes. The physicist
finds himself unable to destroy it. Why? Because it displays four
simple aspects of baby creaturehood. It runs around. It has big
eyes. It is small and defenceless. It turns over when threatened and
squeaks piteously. Our brains are wired to recognise that as a baby
animal which needs protecting, and the illusion is very compelling
indeed.
How much more compelling will be the illusion when ``it'' is clearing
the dishes off the table and discussing the day's news with you? How
much more compelling it is when an apparent ``face'' interacts
behaviourally with you in a manner which suggests a comprehensible
interior emotional life, as in MIT AI Lab's Kismet above?
The Intentional Stance
The dangers of the kind of anthropomorphisation going on when we think
a conversational computer program ``understands'' us, or a chess
playing program is ``trying to trap my queen'', or when we imagine we
know what our dog is thinking, used to be avoided by scientists by
simply outlawing all this kind of anthropomorphisation and insisting
on ``objective'' descriptions of behaviour. This is too crude and
procrustean a response, as Dennett explained in The Intentional
Stance [Dennett 1989]. Dennett points out that if a system has been designed
to achieve certain purposes, such as beating people at chess in the
case of a chess program, then it makes sense for us to talk about it
``trying'' to ``occupy the centre'' even if there is no specific
representation inside the system of ``trying'' or of ``occupying the
centre''. It makes sense because, as the designer intended, that kind
of behaviour emerges from the interaction of all the component parts
of the system within the historical context of an ongoing game. It's a
way of talking about a system at a very high level of abstraction
concerning its purposes, or the purposes of its designer, neglecting
the actual details of how these purposes are implemented in the
system. This level of description, which Dennett calls ``the
intentional stance'', i.e., describing the system as though it were
specifically intending to achieve these purposes, is a very useful
abstract summary of what is going on which helps us to understand what
the system is likely to do next.
What Dennett has done with this elucidation of the intentional
stance is to explain that there is a definite utility in adopting
explanations in these kinds of terms, even in cases where these terms
(such as ``trying'') don't refer to any specific identifiable internal
states or representations within the machine. In other words, he has
explained that we can quite properly and usefully describe a
machine in these apparently anthropomorphic terms, provided we
remember that there is not necessarily an implication that the insides
of the machine resemble our insides in any way, or that the terms we
are using have any specific correspondences with internal states or
representations in the machine. In short, he has licensed us to use
anthropomorphic language provided we remember that we are using it in
this ``as if'' sense, without taking the ``as if'' to be true. It is a
subtle distinction that our human language was never designed to
encompass, and never needed to make, until we got around to inventing
machines which could process information and simulate fragments of
human cognitive behaviour.
What we have done in building computers is to make a machine which is
naturally capable of some aspects of human mental behaviour, but not
others. Since it has always historically been the case that aspects
of mental behaviour were only manifested as part of the complete
package of a mind, we have this natural tendency to assume that
something which shows one aspect of mentality is equipped with the
full mental orchestra.
Of course normally when we say that something is ``intelligent'', we
are referring to some kind of live biological creature which, being an
animal with all that that entails about competitive survival, might be
hostile to things which might want to share its dinner. In this case,
however, we are referring to some kind of artificial robotic/computer
system. This will not have the usual animal complement of competitive
survival instincts, etc., unless, of course, we had deliberately built
them in, or they had somehow developed by accident.
We can rule out deliberately building them in. We are
interested in robots, computer technology, etc., because we're
interested in building mechanical slaves, and it would be extremely
silly to give them the capability of revolting against their masters
on purpose.
This is the mistake that our Professor of Hypothetics and his modern
followers make. It is very tempting indeed to suppose that something
of the right shape and physical capabilities which shows aspects of
mental behaviour is in fact some kind of animal, albeit an artificial
one. Because of our common biological evolutionary heritage we share
with our animal fellows an instinct to survive, an instinct to
procreate, territorial behaviour, motivation in terms of pleasure and
pain, and so on. No matter how good it is at chess, table tennis,
and erudite conversation, a general purpose domestic robot of the
future will not share any of this evolutionary heritage with
us, and will not by natural default have any of these
instincts. Even if it happened to be a thousand times more intelligent
than us.
Why a Humanoid Robot Won't Be Dangerous Anyway
We are well used to the Hollywood idea that robots of the future of
roughly human form will of course possess superhuman strength, will be
able to punch holes in walls, tie knots in rifle barrels, and so
on. Of course they'll be much stronger. After all, they're machines,
and you can make machines any strength you like. Our factories are
full of machines much stronger than we are.
This is, however, a silly generalisation. The machines which are so
much stronger than us have external power supplies, such as being
plugged into the electricity supply. If we add in the proper creature
constraints a different picture emerges. These constraints are that
the robot must carry its own power supply around with it, suitable for
a few hours of operation without refuelling. And it can't be built
just for one purpose, such as tying knots in rifle barrels. It must be
general purpose, able to walk up and down stairs, open doors, fetch
things out of cupboards, and so on. Roboticists who are trying to
build robots with today's technology within these kinds of constraints
have discovered an uncomfortable truth. Biological muscles, tendons,
bones, nerves, etc., are in fact extremely efficient in engineering
terms. It is going to be a very demanding task indeed, and one we
certainly can't contemplate with today's technology, to make a robot
of roughly human size with even a small fraction of a human being's
strength, speed, and agility.
Very considerable mechanical engineering development is going to be
required to catch up with biological performance. The problem here is
that mechanical engineering technology is not developing in the crazy
exponential fashion that computers are doing. There are unique reasons
for the way computer technology is developing, largely a consequence
of the fact that computers are mass produced by a kind of photocopying
technique, and perform better the smaller their component parts
get. That isn't true of muscles or motors. To apply a large causally
effective wallop to the physical world they simply have to be of
appropriately large size. If motor cars had partaken of computer-type
development the modern motor car would be smaller than a match box,
travel at supersonic speeds, and carry more passengers than a railway
train on a teaspoon of petrol. In the world of everyday Newtonian
physics where size matters this is simply silly.
The point is that the electromechanical bodies of robots will
be made with mechanical technology, and will have to put up with its
much more pedestrian rate of development. From that
it follows that even if in 50 years a humanoid robot is hundreds of
times more intelligent than you, if it is about the same size as you,
you will be able to run rings round it and knock it over with one hand
behind your back. In fact given the mechanical advantages in terms of
size and power of wheels over legs, it is extremely likely that the
superintelligent robots of the future will roll around on wheels
rather like Daleks, and suffer from the same kind of problems as
Daleks had with staircases. If you were a follower of the Dr Who
stories involving Daleks, you will remember that staircases were one
of the major obstacles in the way of Daleks conquering the
Universe. This is a humorous way of illustrating the general point
that the kind of physically general purpose body we have, which can
easily switch from driving cars to climbing trees, is going to be
very difficult indeed to better mechanically, and certainly will not
be bettered within a human lifetime. Indeed, our very general purpose
body, which
interfaces very well with our tools and weapons, is one of the things
that makes us such formidable soldiers and hunters.
The Contraption/Creature Chasm
There are lots of different ways in which superintelligent robots
might try to take over the world from us, from a direct assault with
automated battlefield weapons, to blackmailing us with a doomsday
weapon such as simultaneous meltdown of all nuclear power stations. We
don't need to consider the pros and cons of all these different
scenarios, because they all trip up over the same fundamental
question: Why would they want to do it?
They will not be animals, they will not have a
competitive survival instinct, they will not have an instinct
to reproduce themselves in competition with us, they will not
compete with us for dinner, oil, or electricity. To be very simple
about it, they will not object to being switched off. We could
of course try to give them these qualities, but that would be
rather silly.
We are tempted to imagine that they would be some kind of
creature because of the Weizenbaum Illusion that tempts us to
suppose that anything with behaviour which is not entirely predictable
and seems to display some aspects of mentality must in fact have a
fully functioning mind of the usual animal kind. However, our clever
technology has enabled us to produce machines which only have
some aspects of mentality, such as memory or intelligence, and
definitely don't have others, such as feelings and their own
purposes. The illusion is fascinating, but they are no more likely to
develop mind and imperial ambitions than is the waxwork simulacrum of
Napoleon in Madame Tussaud's, no matter how lifelike it looks.
Could they develop these other qualities of mind by accident? One of
the favourite scenarios here is accidentally letting the genie of
evolution out of the bottle by deliberately setting up evolutionary
programmes to improve the design of our robots. This scenario
misunderstands the very considerable sophistication of modern
biological evolution, which we are only now beginning to appreciate,
and which we are very far indeed from understanding. It is true that
biological evolution is capable of such ingenious feats of engineering
design as making an eye or a wing. It is doubtful that we know enough
about evolution yet to be able to set up an artificial evolutionary
program which would result in the invention of an eye. Even if we did,
the number of small increments that have to be added together to make
an eye makes it impossible to do this in less than a very, very long
time.
Is it not possible that artificial evolution with the assistance of
extremely powerful computers might work a great deal faster? It is
possible, but in order to happen by accident whatever simple kinds of
evolution we started up to improve this or that aspect of the robots
would have first to evolve improved methods of evolution. That would
take far longer than evolving something as practical and simple and
immediately assessable as the eye. We would have centuries if not
millennia to see this kind of thing coming.
In sum, we no more need fear our superintelligent contraptions of the
future taking over the world from us than we need fear that the
``intelligent'' motor cars of the future will plot to run us over. Of
course given superintelligent contraptions we will no doubt be able to
magnify our own incompetences and errors to an even greater degree,
and I have no doubt that newspapers will soon start reporting human
workers being killed by robots. But they will just be accidents,
analogous to being struck by lightning, no more evidence of nascent
machine malevolence than lightning is of the anger of the gods.
Chris Malcolm 25 November 2000
References
[Brooks 1986] Brooks, Rodney A., Achieving Artificial Intelligence
Through Building Robots, MIT AI Lab Memo 899, May 1986.
[Brooks 1997]
Brooks, R.A., From Earwigs to Humans, Robotics and Autonomous
Systems, Vol. 20, Nos. 2--4, June 1997, pp. 291--304.
[Butler 1901]
Butler, Samuel, Erewhon, edited with an introduction by Peter
Mudford, Penguin English Library, Harmondsworth: Penguin, 1970;
originally published 1901.
[Dennett 1989]
Dennett, Daniel, The Intentional Stance, MIT Press 1989.
[Dennett 1994] Dennett, Daniel, The Practical Requirements for
Making a Conscious Robot, Philosophical Transactions of the Royal
Society, A, v. 349, pp. 133-146, 1994.
[Dennett 1996]
Dennett, Daniel, Darwin's Dangerous Idea : Evolution and the
Meanings of Life, Touchstone Books 1996.
[de Garis 2001]
Hugo de Garis and Don Mooradian, The Artilect War, first draft,
available on-line at http://foobar.starlab.net/~degaris/artilectwar.html;
although not yet conventionally published, it has aroused a lot of
discussion.
[Hofstadter 1979]
Douglas Hofstadter, Godel, Escher, Bach: an Eternal Golden
Braid, Basic Books, 1979.
[Hofstadter & Dennett 1985]
Hofstadter, Douglas and Dennett, Daniel, The Mind's I, Bantam
Books 1985.
[Kuhn 1970]
Kuhn, Thomas S., The Structure of Scientific Revolutions, 2nd
edn, University of Chicago Press, 1970.
[Kurzweil 2000]
Kurzweil, Ray, The Age of Spiritual Machines: When Computers
Exceed Human Intelligence, Penguin USA 2000. http://www.kurzweiltech.com/ray.htm.
[Leibniz 1714] Gottfried Leibniz, The Monadology, 1714.
[Lozano-Perez & Brooks 1985]
Lozano-Perez, T. and R. A. Brooks, An Approach to Automatic Robot
Programming, Solid Modeling by Computers, Pickett and Boyse (ed.),
Plenum Press, New York, pp. 293--328, 1985; also in Proceedings of
1986 ACM Computer Science Conf., Cincinnatti, OH, February 1986,
pp. 61--69; also MIT AI Memo 842, April 1985.
[Malcolm et al 1989]
Chris Malcolm, Tim Smithers, and John Hallam, An Emerging Paradigm in Robot
Architecture, invited paper at the Intelligent Autonomous Systems Conference
(2) in Amsterdam, Dec 11-14, 1989; also available as Edinburgh University DAI
RP 447.
[McCulloch & Pitts 1943]
McCulloch, W. and Pitts, W., A logical calculus of the ideas
immanent in nervous activity, Bulletin of Mathematical Biophysics,
5:115-133, 1943.
[Minsky & Papert 1988] Marvin L. Minsky, Seymour A. Papert,
Perceptrons: An Introduction to Computational Geometry, updated
version of the original 1969 book, MIT Press, 1988.
[Moravec 1998]
Moravec, Hans, Robot: Mere Machine to Transcendent Mind,
Oxford University Press 1998. http://www.frc.ri.cmu.edu/~hpm/.
[Newell & Simon 1958]
H. A. Simon and A. Newell,
Heuristic Problem Solving: The Next Advance in Operations Research,
Operations Research, pages 1-10, January-February 1958.
[Rosenblatt 1958]
Rosenblatt, F., The Perceptron: A Probabilistic Model for Information Storage and
Organization in the Brain, Psychological Review, Volume 65, 1958.
[Ross 1995]
Ross, Philip E., Moore's Second Law, Forbes, March 25 1995,
pp. 116-117.
[Ryle 1949]
Gilbert Ryle, The Concept of Mind, University of Chicago Press
paperback, 1949.
[Searle 1980]
Searle, J., Minds, Brains, and Programs,
Behavioral and Brain Sciences, 3, pp. 417-457, 1980, much of which is
reprinted with some extra commentary in The Mind's I by Hofstadter
and Dennett, q.v.
[Warwick 1998]
Warwick, Kevin, In the Mind of the Machine, Arrow Books 1998.
http://www.cyber.rdg.ac.uk/K.Warwick/.
[Weizenbaum 1977]
Weizenbaum, Joseph, Computer Power and Human Reason, W H
Freeman 1977.