Autonomy and Artificiality
Margaret A. Boden
School of Cognitive & Computing Sciences
University of Sussex
Brighton BN1 9QH
CSRP 307
November 1993
ABSTRACT
What science tells us about human autonomy is practically
important, because it affects the way that ordinary people see
themselves. Denials of one's capacity for self-control are experienced as
threatening.
The sciences of the artificial (AI and A-Life) support two opposing
intuitions concerning autonomy. One, characteristic of "classical" AI,
is that determination of behaviour by the external environment lessens
an agent's autonomy. The other, characteristic of A-Life and situated
robotics, is that to follow a pre-conceived internal plan is to be a
mere puppet (one can no longer say "a mere robot").
These intuitions can be reconciled, since autonomy is not an all-
or-none property. Three dimensions of behavioural control are crucial:
(1) The extent to which response to the environment is direct
(determined only by the present state in the external world) or indirect
(mediated by inner mechanisms partly dependent on the creature's
previous history). (2) The extent to which the controlling mechanisms
were self-generated rather than externally imposed. (3) The extent to
which inner directing mechanisms can be reflected upon, and/or
selectively modified. Autonomy is the greater, the more behaviour is
directed by self-generated (and idiosyncratic) inner mechanisms, nicely
responsive to the specific problem-situation, yet reflexively modifiable
by wider concerns.
An A-Life worker has said: "The field of Artificial Life is
unabashedly mechanistic and reductionist. However, this new mechanism
... is vastly different from the mechanism of the last century." One
difference involves the emphasis on emergent properties. Even classical
AI goes beyond what most think of as "machines". The "reductionism" of
artificiality denies that the only respectable concepts lie at the most
basic ontological level. AI and A-Life help us to understand how human
autonomy is possible.
AUTONOMY AND ARTIFICIALITY
To be published in D. Cliff (ed.), Evolutionary Robotics
and Artificial Life (provisional title), and in AISB
Quarterly, 1993/4.
I: The Problem -- And Why It Matters
Let us begin with a quotation -- or, rather, several:
For the many, there is hardly concealed discontent.... "I'm a
machine," says the spot welder. "I'm caged," says the bank
teller, and echoes the hotel clerk. "I'm a mule," says the
steel worker." "A monkey can do what I can do," says the
receptionist. "I'm less than a farm implement," says the
migrant worker. "I'm an object," says the high fashion model.
Blue collar and white call upon the identical phrase: "I'm a
robot." [Terkel, 1974, p. xi]
Studs Terkel encountered these remarks during his study of American
attitudes to employment. What relevance can they have here? Welders and
fashion models are not best known for an interest in philosophy. Blue
collar and white, surely, have scant interest in the abstract issue of
scientific reductionism?
The "surely", here, is suspect. Admittedly, neither the blue nor
the white feel much at ease with philosophical terminology. But
ignorance of jargon does not imply innocence of issues.
These workers clearly took for granted, as most people do, that
there is a clear distinction between humans on the one hand and animals
-- and machines -- on the other. They took for granted, too, that this
distinction is grounded in the variety of human skills and, above all,
in personal autonomy. When their working-conditions gave no scope for
their skills and autonomy, they experienced not merely frustration but
also personal threat -- not least, to their sense of worth, or human
dignity.
"So much the worse for them, poor deluded fools!", some might
retort, appealing not only to (scientific) truth but also to what they
see as (humanistic) illusion -- specifically, the illusion of freedom
inherent in the notion of human dignity.
The behaviourist B. F. Skinner, for example, argued that "the
literature of dignity ... stands in the way of further human
achievements " [Skinner, 1971, p. 59], the main achievement he had in
mind being the scientific understanding of human behaviour. "Dignity",
he said, is a matter of giving people credit, of admiring them for their
(self-generated) achievements. But his behaviourist principles implied
that "the environment", not "autonomous man", is in control [____ibid., p.
21]. No credit, then, to __us, if we exercise some skill -- whether
bodily, mental, or moral. Spot welder and fashion model can no longer
glory in their dexterity or gracefulness, nor clerk and cleric in their
profession or vocation. Honesty and honest toil alike are de-credited,
de-dignified.
Behaviourism, then, questions our notions of human worth. But it is
at least concerned with life. Animals are living things, and Rattus
norvegicus a moderately merry mammal. Some small shred of our self-
respect can perhaps be retained, if we are classed with rats, or even
pigeons. But artificial intelligence, it seems, is another matter. For
AI compares us with computers, and dead, automatic tin-cannery is all
they are capable of. Sequential or connectionist, it makes no
difference: machines are not even alive. The notion that they could help
us to an adequate account of the mind seems quite absurd.
The absurdity is compounded with threat. For (on this view) it
seems that if human minds were understood in AI-terms, everything we
think of as distinctively human -- freedom, creativity, morals -- would
be explained away. Ultimately, a computational psychology and
neuroscience would reduce these matters to a set of chemical reactions
and electrical pulses. No autonomy there ... and no dignity, either. We
could not exalt human skills and personality above the dexterity of
monkeys or the obstinacy of mules. As for honouring excellence in the
human mind, this would be like preferring a Rolls Royce to a Mini: some
basis in objectivity, no doubt, but merely a matter of ranking machines.
Given these widespread philosophical assumptions, it is no wonder
if AI is feared by ordinary people. They think of it as even more
threatening to their sense of personal worth than either industrial
automation or "mechanical" work-practices, the subjects of the
complaints voiced to Terkel.
What they think matters. Given the central constructive role in
our personal life of the self-concept, we should expect that people who
believe (or even half-believe) they are mere machines may behave
accordingly. Similarly, people who feel they are being treated like
machines, or caged animals, may be not only frustrated and insulted but
also insidiously lessened by the experience. Such malign effects can
indeed be seen, for instance in psychotherapists' consulting rooms.
Thirty years ago, before the general public had even heard of AI, the
therapist Rollo May remarked on some depersonalizing effects of
behaviourism, and of reductionist science in general:
I take very seriously ... the dehumanizing dangers in our
tendency in modern science to make man over into the image of
the machine, into the image of the techniques by which we
study him.... A central core of modern man's "neurosis" is the
undermining of his experience of himself as responsible, the
sapping of his willing and decision. [May, 1961, p. 20].
I have used this quote elsewhere, but make no apology for repeating it.
It shows the practical results of people's defining themselves as (what
they think of as) machines, not only in a felt unhappiness but also in
an observable decline of personal autonomy.
The upshot is that it is practically important, not just
theoretically interesting, to examine the layman's philosophical
assumptions listed above. Are they correct? Or are they mere
sentimental illusion, a pusillanimous refusal to face scientific
reality? In particular, are AI-concepts and AI-explanations compatible
with the notion of human dignity?
II: AI and Ants
At first sight, the answer may appear to be "No". For it is not
only behaviourists who see conditions in the external environment as
causing apparently autonomous behaviour. Only a few years after May's
complaint quoted above, Herbert Simon -- a founding-father of AI -- took
much the same view [Simon, 1969].
Simon described the erratic path of the ant, as it avoided the
obstacles on its way to nest or food, as the result of a series of
simple and immediate reactions to the local details of the terrain. He
did not stop with ants, but tackled humans too. For over twenty years,
Simon has argued that rational thought and skilled behaviour are largely
triggered by specific environmental cues. The extensive psychological
experiments and computer-modelling on which his argument is based were
concerned with chess, arithmetic, and typing [Newell & Simon, 1972;
Card, Moran, & Newell, 1983]. But he would say the same of bank-telling
and spot-welding.
Simon's ant was not taken as a model by most of his AI-colleagues.
Instead, they were inspired by his earliest, and significantly
different, work on the computer simulation of problem-solving [Newell &
Simon, 1961; Newell, Shaw, & Simon, 1963]. This ground-breaking
theoretical research paid no attention to environmental factors, but
conceived of human thought in terms of internal mental/computational
processes, such as hierarchical means-end planning and goal-
representations.
Driven by this "internalist" view, the young AI-community designed
-- and in some cases built -- robots guided top-down by increasingly
sophisticated internal planning and representation [Boden, 1987, ch.
12]. Plans were worked out ahead of time. In the most flexible cases,
certain contingencies could be foreseen, and the detailed movements, and
even the sub-plans, could be decided on at the time of execution. But
even though they inhabited the physical world, these robots were not
real-world, real-time, creatures. Their environments were simple, highly
predictable, "toy-worlds". They typically involved a flat ground-plane,
polyhedral and/or pre-modelled shapes, white surfaces, shadowless
lighting, and -- by human standards -- painfully slow movements.
Moreover, they were easily called to a halt, or trapped into fruitless
perseverative behaviour, by unforeseen environmental details.
Recently, however, the AI-pendulum has swung towards the ant.
Current research in situated robotics sees no need for the symbolic
representations and detailed anticipatory planning typical of earlier
AI-robotics. Indeed, the earlier strategy is seen as not just
unnecessary, but ineffective. Traditional robotics suffers from the
brittleness of classical AI-programs in general: unexpected input can
cause the system to do something highly inappropriate, and there is no
way in which the problem-environment can help guide it back onto the
right track. Accepting that the environment cannot be anticipated in
detail, workers in situated robotics have resurrected the insight --
often voiced within classical AI, but also often forgotten -- that the
best source of information about the real world is the real world
itself.
Accordingly, the "intelligence" of these very recent robots is in
the hardware, not the software [Braitenberg, 1984; Brooks, 1991]. There
is no high-level program doing detailed anticipatory planning. Instead,
the creature is engineered in such a way that, within limits, it
naturally does the right (adaptive) thing at the right time. Behaviour
apparently guided by goals and hierarchical planning can, nevertheless,
occur [Maes, 1991].
Situated robotics is closely related to two other recent forms of
computer modelling, likewise engaged in studying "emergent" behaviours.
These are genetic algorithms (GAs) and artificial life (A-Life).
GA-systems are self-modifying programs, which continually come up
with new rules (new structures) [Holland, 1975; Holland et al., 1986].
They use rule-changing algorithms modelled on genetic processes such as
mutation and crossover, and algorithms for identifying and selecting the
relatively successful rules. Mutation makes a change in a single rule;
crossover brings about a mix of two, so that (for instance) the lefthand
portion of one rule is combined with the righthand portion of the other.
Together, these algorithms (working in parallel) generate a new system
better adapted to the task in hand.
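(To make this cycle concrete, a minimal Python sketch of one
mutation/crossover/selection loop follows. The bit-string "rules" and the
fitness measure -- simply counting 1s -- are illustrative stand-ins, not
Holland's classifier systems.)

    import random

    RULE_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

    def fitness(rule):                       # toy measure of a rule's success
        return sum(rule)

    def mutate(rule):                        # change a single element of one rule
        new = rule[:]
        i = random.randrange(len(new))
        new[i] = 1 - new[i]
        return new

    def crossover(a, b):                     # lefthand portion of one rule, righthand of the other
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(RULE_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # identify and select the relatively successful rules ...
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # ... and breed a new, better-adapted set from them
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children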
One example of a GA-system is a computer-graphics program written
by Karl Sims [1991]. This program uses genetic algorithms to generate
new images, or patterns, from pre-existing images. Unlike most GA-
systems, the selection of the "fittest" examples is not automatic, but
is done by the programmer -- or by someone fortunate enough to be
visiting his office while the program is being run. That is, the human
being selects the images which are aesthetically pleasing, or otherwise
interesting, and these are used to "breed" the next generation. (Sims
could provide automatic selection rules, but has not yet done so -- not
only because of the difficulty of defining aesthetic criteria, but also
because he aims to provide an interactive graphics-environment, in which
human and computer can cooperate in generating otherwise unimaginable
images.)
In a typical run of the program, the first image is generated at
random (but Sims can feed in a real image, such as a picture of a face,
if he wishes). Then the program makes nineteen independent changes
(mutations) in the initial image-generating rule, so as to cover the
VDU-screen with twenty images: the first, plus its nineteen ("asexually"
reproduced) offspring. At this point, the human uses the computer-mouse
to choose either one image to be mutated, or two images to be "mated"
(through crossover). The result is another screenful of twenty images,
of which all but one (or two) are newly-generated by random mutations or
crossovers. The process is then repeated, for as many generations as one
wants.
(The details of this GA-system need not concern us. However, so as
to distinguish it from magic, a few remarks may be helpful. It starts
with a list of twenty very simple LISP-functions. A "function" is not an
actual instruction, but an instruction-schema: more like "x + y" than "2
+ 3". Some of these functions can alter parameters in pre-existing
functions: for example, they can divide or multiply numbers, transform
vectors, or define the sines or cosines of angles. Some can combine two
pre-existing functions, or nest one function inside another (so
multiply-nested hierarchies can eventually result). A few are basic
image-generating functions, capable (for example) of generating an image
consisting of vertical stripes. Others can process a pre-existing image,
for instance by altering the light-contrasts so as to make "lines" or
"surface-edges" more or less visible. When the program chooses a
function at random, it also randomly chooses any missing parts. So if it
decides to add something to an existing number (such as a numerical
parameter inside an image-generating function), and the "something" has
not been specified, it randomly chooses the amount to be added.
Similarly, if it decides to combine the pre-existing function with some
other function, it may choose that function at random.)
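(A Python sketch may help to fix ideas about the shape of such an
interactive run: one parent expression plus nineteen "asexually" mutated
offspring per generation, with a human choosing which to breed from. The
tiny expression language and the console prompt standing in for mouse-
selection are merely illustrative: they do not render images, and they are
not Sims's actual LISP functions.)

    import random

    FUNCTIONS = ["add", "mul", "sin", "stripes"]   # toy stand-ins for image-generating functions

    def random_expr(depth=0):
        # An instruction-schema, not a fixed instruction: missing parts are chosen at random.
        if depth > 2 or random.random() < 0.3:
            return random.choice(["x", "y", round(random.uniform(-1, 1), 2)])
        f = random.choice(FUNCTIONS)
        if f in ("add", "mul"):                    # functions combining two sub-expressions
            return [f, random_expr(depth + 1), random_expr(depth + 1)]
        return [f, random_expr(depth + 1)]         # functions processing one sub-expression

    def mutate(expr):
        # Replace a randomly chosen sub-expression with a freshly generated one.
        if not isinstance(expr, list) or random.random() < 0.3:
            return random_expr()
        new = expr[:]
        i = random.randrange(1, len(new))
        new[i] = mutate(new[i])
        return new

    parent = random_expr()                         # first "image" generated at random
    for generation in range(3):
        screen = [parent] + [mutate(parent) for _ in range(19)]
        for i, expr in enumerate(screen):
            print(i, expr)
        parent = screen[int(input("choose an image to breed from: "))]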
As for A-Life, this is the attempt to discover the abstract
functional principles underlying life in general [Langton, 1989]. A-Life
is closely related to AI (and uses various methods which are also
employed in AI). One might define A-Life as the abstract study of life,
and AI as the abstract study of mind. But if one assumes that life
prefigures mind, that cognition is -- and must be -- grounded in self-
organizing adaptive systems, then the whole of AI may be seen as a sub-
class of A-Life. Work in A-Life is therefore potentially relevant to the
question of how AI relates to human dignity.
Research in A-Life uses computer-modelling to study processes that
start with relatively simple, locally interacting units, and generate
complex individual and/or group behaviours. Examples of such behaviours
include self-organization, reproduction, adaptation, purposiveness, and
evolution.
Self-organization is shown, for instance, in the flocking behaviour
of flocks of birds, herds of cattle, and schools of fish. The entire
group of animals seems to behave as one unit. It maintains its coherence
despite changes in direction, the (temporary) separation of stragglers,
and the occurrence of obstacles -- which the flock either avoids or
"flows around". Yet there is no overall director working out the plan,
no sergeant-major yelling instructions to all the individual animals,
and no reason to think that any one animal is aware of the group as a
whole. The question arises, then, how this sort of behaviour is
possible.
Ethologists argue that communal behaviour of large groups of
animals must depend on local communications between neighbouring
individuals, who have no conception of the group-behaviour as such. But
just what are these "local communications"?
Flocking has been modelled within A-Life, in terms of a collection
of very simple units, called Boids [Reynolds, 1987]. Each Boid follows
three rules: (1) keep a minimum distance from other objects, including
other Boids; (2) match velocity to the average velocity of the Boids in
the immediate neighbourhood; (3) move towards the perceived centre of
mass of the Boids in the neighbourhood. These rules, depending as they
do only on very limited, local, information, result in the holistic
flocking behaviour just described. It does not follow, of course, that
real birds follow just those rules: that must be tested by ethological
studies. But this research shows that it is at least possible for
group-behaviour of this kind to depend on very simple, strictly local,
rules.
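(A minimal Python sketch of these three rules, using numpy for the vector
arithmetic, is given below. The two-dimensional toroidal world and the
parameter values are illustrative choices of convenience, not Reynolds's.)

    import numpy as np

    N, WORLD = 50, 100.0
    NEIGHBOUR_RADIUS, MIN_DIST = 10.0, 2.0

    pos = np.random.rand(N, 2) * WORLD             # random starting positions
    vel = np.random.randn(N, 2)                    # random starting velocities

    def step(pos, vel):
        new_vel = vel.copy()
        for i in range(N):
            offsets = pos - pos[i]
            dist = np.linalg.norm(offsets, axis=1)
            nbrs = (dist < NEIGHBOUR_RADIUS) & (dist > 0)
            if not nbrs.any():
                continue
            # Rule 1: keep a minimum distance from nearby Boids
            too_close = nbrs & (dist < MIN_DIST)
            separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
            # Rule 2: match velocity to the local average
            alignment = vel[nbrs].mean(axis=0) - vel[i]
            # Rule 3: move towards the perceived local centre of mass
            cohesion = pos[nbrs].mean(axis=0) - pos[i]
            new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
        return (pos + new_vel) % WORLD, new_vel

    for _ in range(100):                           # flocking emerges from purely local rules
        pos, vel = step(pos, vel)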
Situated robotics, GAs, and A-Life could be combined, for they
share an emphasis on bottom-up, self-adaptive, parallel processing. At
present, most situated robots are hand-crafted. In principle, they could
be "designed" by evolutionary algorithms from the GA/A-Life stable.
Fully-simulated robots have already been evolved, and real robots are
now being constructed with the help of simulated evolution. The automatic
evolution of real physical robots without any recourse to simulation is
more difficult [Brooks, 1992], but progress is being made in this area
too.
Recent work in evolutionary robotics [Cliff, Harvey, & Husbands,
1993] has simulated insect-like robots, with simple "brains" controlling
their behaviour. The (simulated) neural net controlling the (simulated)
visuomotor system of the robot gradually adapts to its specific
(simulated) task-environment. This automatic adaptation can result in
some surprises. For instance, if -- in the given task-environment -- the
creature does not actually need its (simulated) inbuilt whiskers as well
as its eyes, the initial network-links to the whiskers may eventually be
lost, and the relevant neural units may be taken over by the eyes. Eyes
can even give way to eye: if the task is so simple that only one eye is
needed, one of them may eventually lose its links with the creature's
network-brain.
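(A hedged Python sketch of this kind of selective loss follows. The genome
is simply a vector of sensor-to-motor connection weights, and the toy
fitness function rewards only what the "eye" connections contribute, so a
mutation that severs a "whisker" connection carries no penalty. None of
this is Cliff, Harvey, and Husbands's actual controller, network, or task.)

    import random

    EYES, WHISKERS = 2, 4
    GENOME_LEN = EYES + WHISKERS                   # one connection weight per sensor

    def fitness(genome):
        # A task so simple that only the eyes matter: whisker links are selectively neutral.
        return -sum(abs(w - 1.0) for w in genome[:EYES])

    def mutate(genome):
        g = genome[:]
        i = random.randrange(GENOME_LEN)
        # A mutation may strengthen, weaken, or sever a connection entirely.
        g[i] = 0.0 if random.random() < 0.2 else g[i] + random.gauss(0, 0.2)
        return g

    population = [[random.uniform(0, 1) for _ in range(GENOME_LEN)] for _ in range(40)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:20]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

    best = max(population, key=fitness)
    print("eye weights:    ", [round(w, 2) for w in best[:EYES]])
    print("whisker weights:", [round(w, 2) for w in best[EYES:]])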
Actual (physical) robots of this type can be generated by combining
simulated evolution with hardware-construction [Cliff, Harvey, &
Husbands, 1993]. The detailed physical connections to, and within, the
"brain" of the robot-hardware are adjusted every _n generations (where _n
may be 100, or 1,000, or ...), mirroring the current blueprint evolved
within the simulation. This acts as a cross-check: the real robot should
behave as the simulated robot does. Moreover, the resulting embodied
robot can roam around an actual physical environment, its real-world
task-failures and successes being fed into the background simulation so
as to influence its future evolution. The brain is not the only organ
whose anatomy can be evolved in this way: the placement and visual angle
of the creatures' eyes can be optimized, too. (The same research-team
has begun work on the evolution of physical robots without any
simulation. This takes much longer, because every single evaluation of
every individual in the population has to be done using the real
hardware.)
The three new research-fields outlined above have strong links with
biology: with neuroscience, ethology, genetics, and the theory of
evolution. As a result, animals are becoming theoretically assimilated
to animats [Meyer & Wilson, 1991]. The behaviour of swarms of bees, and
of ant-colonies, is hotly discussed at A-Life conferences, and
entomologists are constantly cited in the A-Life and situated-robotics
literatures [Lestel, 1992]. Environmentally situated (and formally
defined) accounts of apparently goal-seeking behaviour in various
animals, including birds and mammals, are given by (some) ethologists
[McFarland, 1989]. And details of invertebrate psychology, such as
visual tracking in the hoverfly, are modelled by research in
connectionist AI [Cliff, 1990; 1992].
In short, Simon's ant is now sharing the limelight on the AI-stage.
Some current AI is more concerned with artificial insects than with
artificial human minds. But -- what is of particular interest to us here
-- this form of AI sees itself as designing "autonomous agents" (as A-
Life in general seeks to design "autonomous systems").
III: Autonomous Agency
Autonomy is ascribed to these artificial insects because it is
their intrinsic physical structure, adapted as it is to the sorts of
environmental problem they are likely to meet, which enables them to act
appropriately. Unlike traditional robots, their behaviour is not
directed by complex software written for a general-purpose machine,
imposed on their bodies by some alien (human) hand. Rather, they are
specifically constructed to adapt to the particular environment they
inhabit.
We are faced, then, with two opposing intuitions concerning
autonomy. Our (and Skinner's) original intuition was that response
determined by the external environment lessens one's autonomy. But the
nouvelle-AI intuition is that to be in thrall to an internal plan is to
be a mere puppet. (Notice that one can no longer say "a mere robot".)
How can these contrasting intuitions be reconciled?
Autonomy is not an all-or-nothing property. It has several
dimensions, and many gradations. Three aspects of behaviour -- or rather,
of its control -- are crucial. First, the extent to which response to
the environment is direct (determined only by the present state in the
external world) or indirect (mediated by inner mechanisms partly
dependent on the creature's previous history). Second, the extent to
which the controlling mechanisms were self-generated rather than
externally imposed. And third, the extent to which inner directing
mechanisms can be reflected upon, and/or selectively modified in the
light of general interests or the particularities of the current problem
in its environmental context. An individual's autonomy is the greater,
the more its behaviour is directed by self-generated (and idiosyncratic)
inner mechanisms, nicely responsive to the specific problem-situation,
yet reflexively modifiable by wider concerns.
The first aspect of autonomy involves behaviour mediated, in part,
by inner mechanisms shaped by the creature's past experience. These
mechanisms may, but need not, include explicit representations of
current or future states. It is controversial, in ethology as in
philosophy, whether animals have explicit internal representations of
goals [Montefiore & Noble, 1989]. And, as we have seen, AI includes
strong research-programmes on both sides of this methodological fence.
But this controversy is irrelevant here. The important distinction is
between a response wholly dependent on the current environmental state
(given the original, "innate", bodily mechanisms), and one largely
influenced by the creature's experience. The more a creature's past
experience differs from that of other creatures, the more "individual"
its behaviour will appear.
The second aspect of autonomy, the extent to which the controlling
mechanisms were self-generated rather than externally imposed, may seem
to be the same as the first. After all, a mechanism shaped by experience
is sensitive to the past of that particular individual -- which may be
very different from that of other, initially comparable, individuals.
But the distinction, here, is between behaviour which "emerges" as a
result of self-organizing processes, and behaviour which was
deliberately prefigured in the design of the experiencing creature.
In computer-simulation studies within A-Life, and within situated
robotics also, holistic behaviour -- often of an unexpected sort -- may
emerge. It results, of course, from the initial list of simple rules
concerning locally interacting units. But it was neither specifically
mentioned in those rules, nor (often) foreseen when they were written.
A flock, for example, is a holistic phenomenon. A birdwatcher sees
a flock of birds as a unit, in the sense that it shows behaviour that
can be described only at the level of the flock itself. For instance,
when it comes to an obstacle, such as a tall building, the flock divides
and "flows" smoothly around it, reorganizing itself into a single unit
on the far side. But no individual bird is divided in half by the
building. And no bird has any notion of the flock as a whole, still less
any goal of reconstituting it after its division.
Clearly, flocking behaviour must be described on its own level,
even though it can be explained by (reduced to) processes on a lower
level. This point is especially important if "emergence-hierarchies"
evolve as a result of new forms of perception, capable of detecting the
emergent phenomena as such. Once a holistic behaviour has emerged, it,
or its effects, may be detected (perceived) by some creature or other --
including, sometimes, the "unit-creatures" making it up.
(This implies that a creature's perceptual capacities cannot be
fully itemized for all time. In Gibsonian terms, one might say that
evolution does not know what all the relevant affordances will turn out
to be, so cannot know how they will be detected. The current methodology
of AI and A-Life does not allow for "latent" perceptual powers,
actualized only by newly-emerged environmental features. This is one of
the ways in which today's computer-modelling is biologically unrealistic
[Kugler, 1992].)
If the emergent phenomenon can be detected, it can feature in rules
governing the perceiver's behaviour. Holistic phenomena on a higher
level may then result ... and so on. Ethologists, A-Life workers, and
situated roboticists all assume that increasingly complex hierarchical
behaviour can arise in this sort of way. The more levels in the
hierarchy, the less direct the influence of environmental stimuli -- and
the greater the behavioural autonomy.
Even if we can explain a case of emergence, however, we cannot
necessarily understand it. One might speak of intelligible vs.
unintelligible emergence.
Flocking gives us an example of the former. Once we know the three
rules governing the behaviour of each individual Boid, we can see
lucidly how it is that holistic flocking results.
Sims' computer-generated images give us an example of the latter.
One may not be able to say just why this image resulted from that LISP-
expression. Sims himself cannot always explain the changes he sees
appearing on the screen before him, even though he can access the mini-
program responsible for any image he cares to investigate, and for its
parent(s) too. Often, he cannot even "genetically engineer" the
underlying LISP-expression so as to get a particular visual effect. To
be sure, this is partly because his system makes several changes
simultaneously, with every new generation. If he were to restrict it to
making only one change, and studied the results systematically, he could
work out just what was happening. But when several changes are made in
parallel, it is often impossible to understand the generation of the
image even though the "explanation" is available.
Where real creatures are concerned, of course, we have multiple
interacting changes, and no explanation at our finger-tips. At the
genetic level, these multiple changes and simultaneous influences arise
from mutations and crossover. At the psychological level, they arise
from the plethora of ideas within the mind. Think of the many different
thoughts which arise in your consciousness, more or less fleetingly,
when you face a difficult choice or moral dilemma. Consider the
likelihood that many more conceptual associations are being activated
unconsciously in your memory, influencing your conscious musings
accordingly. Even if we had a listing of all these "explanatory"
influences, we might be in much the same position as Sims, staring in
wonder at one of his nth-generation images and unable to say why this
LISP-expression gave rise to it. In fact, we cannot hope to know about
more than a fraction of the ideas aroused in human minds (one's own, or
someone else's) when such choices are faced.
The third criterion of autonomy listed above was the extent to
which a system's inner directing mechanisms can be reflected upon,
and/or selectively modified, by the individual concerned. One way in
which a system can adapt its own processes, selecting the most fruitful
modifications, is to use an "evolutionary" strategy such as the genetic
algorithms mentioned above. It may be that something broadly similar
goes on in human minds. But the mutations and selections carried out by
GAs are modelled on biological evolution, not conscious reflection and
self-modification. And it is conscious deliberation which many people
assume to be the crux of human autonomy.
For the sake of argument, let us accept this assumption at face-
value. Let us ignore the mounting evidence, from Freud to social
psychology [e.g. Nisbett & Ross, 1980], that our conscious thoughts are
less relevant than we like to think. Let us ignore neuroscientists'
doubts about whether our conscious intentions actually direct our
behaviour (as the folk-psychology of "action" assumes) [Libet, 1987].
Let us even ignore the fact that unthinking spontaneity -- the opposite
of conscious reflection -- is often taken as a sign of individual
freedom. (Spontaneity may be based in the sort of multiple constraint
satisfaction modelled by connectionist AI, where many of the constraints
are drawn from the person's idiosyncratic experience.) What do AI, and
AI-influenced psychology, have to say about conscious thinking and
deliberate self-control?
Surprisingly, perhaps, the most biologically realistic (more
accurately: the least biologically unrealistic) forms of AI cannot help
us here. Ants, and artificial ants, are irrelevant. Nor can
connectionism help. It is widely agreed, even by connectionists, that
conscious thought requires a sequential "virtual machine", more like a
von Neumann computer than a parallel-processing neural net. As yet, we
have only very sketchy ideas about how the types of problem-solving best
suited to conscious deliberation might be implemented in connectionist
systems.
The most helpful AI approach so far, where conscious deliberation
is involved, is GOFAI: good old-fashioned AI [Haugeland, 1985] -- much
of which was inspired by human introspection. Consciousness involves
reflection on one level of processes going on at a lower level. Work in
classical AI, such as the work on planning mentioned above, has studied
multi-level problem-solving. Computationally-informed work in
developmental psychology has suggested that flexible self-control, and
eventually consciousness, result from a series of "representational
redescriptions" of lower-level skills [Clark & Karmiloff-Smith, in
press].
Representational redescriptions, many-levelled maps of the mind,
are crucial to creativity [Boden, 1990, esp. ch. 4]. Creativity is an
aspect of human autonomy. Many of Terkel's workers were frustrated
because their jobs allowed them no room for creative ingenuity. Our
ability to think new thoughts in new ways is one of our most salient,
and most valued, characteristics.
This ability involves someone's doing something which they not only
did not do before, but which they could not have done before. To do
this, they must either explore a formerly unrecognized area of some
pre-existing "conceptual space", or transform some dimension of that
generative space. Transforming the space allows novel mental structures
to arise which simply could not have been generated from the initial set
of constraints. The nature of the creative novelties depends on which
feature has been transformed, and how. Conceptual spaces, and procedures
for transforming them, can be clarified by thinking of them in
computational terms. But this does not mean that creativity is
predictable, or even fully explicable post hoc: for various reasons
(including those mentioned above), it is neither [Boden, 1990, ch. 9].
Autonomy in general is commonly associated with unpredictability.
Many people feel AI to be a threat to their self-esteem because they
assume that it involves a deterministic predictability. But they are
mistaken. Some connectionist AI-systems include non-deterministic
(stochastic) processes, and are more efficient as a result.
Moreover, determinism does not always imply predictability. Workers
in A-Life, for instance, justify their use of computer-simulation by
citing chaos theory, according to which a fully deterministic dynamic
process may be theoretically unpredictable [Langton, 1989]. If there is
no analytic solution to the differential equations describing the
changes concerned, the process must simply be "run", and observed, to
know what its implications are. The same is true of many human choices.
We cannot always predict what a person will do. Moreover, predicting
one's own choices is not always possible. One may have to "run one's own
equations" to find out what one will do, since the outcome cannot be
known until the choice is actually made.
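(The logistic map is a standard textbook illustration of this point, though
not one drawn from the A-Life work cited above. The Python sketch below is
fully deterministic, yet in the chaotic regime one must simply run it to
discover where it goes: two imperceptibly different starting states soon
yield entirely different trajectories.)

    def logistic_trajectory(x, r=4.0, steps=50):
        xs = []
        for _ in range(steps):
            x = r * x * (1 - x)                    # the deterministic rule
            xs.append(x)
        return xs

    a = logistic_trajectory(0.4000000)
    b = logistic_trajectory(0.4000001)             # an imperceptibly different start
    print(a[-1], b[-1])                            # after 50 steps the runs bear no resemblance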
IV: Conclusion
One of the pioneers of A-Life has said: "The field of Artificial
Life is unabashedly mechanistic and reductionist. However, this new
mechanism -- based as it is on multiplicities of machines and on recent
results in the fields of nonlinear dynamics, chaos theory, and the
formal theory of computation -- is vastly different from the mechanism
of the last century." [Langton, 1989, p. 6; italics in original].
Our discussion of A-Life and nouvelle AI has suggested just how
vast this difference is. Similarly, the potentialities of classical AI
systems go far beyond what most people -- fashion-models, spot-welders,
bank-tellers -- think of as "machines". If this is reductionism, it is
very different from the sort of reductionism which insists that the only
scientifically respectable concepts lie at the most basic ontological
level (neurones and biochemical processes, or even electrons, mesons,
and quarks).
In sum, AI does not reduce our respect for human minds. If
anything, it increases it. Far from denying human autonomy, it helps us
to understand how it is possible. The autonomy of Terkel's informants
was indeed compromised -- but by inhuman working conditions, not by
science. Science in general, and AI in particular, need not destroy our
sense of human dignity.
REFERENCES
Boden, M. A. [1987] Artificial Intelligence and Natural Man (2nd edn.).
London: MIT Press.
Boden, M. A. [1990] The Creative Mind: Myths and Mechanisms. London:
Weidenfeld & Nicolson.
Braitenberg, V. [1984] Vehicles: Essays in Synthetic Psychology.
Cambridge, Mass.: MIT Press.
Brooks, R. A. [1991] "Intelligence Without Representation", Artificial
Intelligence, 47, 139-159.
Brooks, R. A. [1992] "Artificial Life and Real Robots." In F. J. Varela
& P. Bourgine, eds., Toward a Practice of Autonomous Systems:
Proceedings of the First European Conference on Artificial Life.
Cambridge, Mass.: MIT Press. Pp. 3-10.
Card, S. K., T. P. Moran, & A. Newell. [1983] The Psychology of Human-
Computer Interaction. Hillsdale, N.J.: Erlbaum.
Clark, A., & A. Karmiloff-Smith. [in press] "The Cognizer's Innards",
Mind and Language.
Cliff, D. [1990] "The Computational Hoverfly: A Study in Computational
Neuroethology". In J.-A. Meyer & S. W. Wilson (eds.), From Animals
to Animats: Proceedings of the First International Conference on
Simulation of Adaptive Behaviour. Cambridge, Mass.: MIT Press. Pp.
87-96.
Cliff, D. [1992] "Neural Networks for Visual Tracking in an Artificial
Fly". In F. J. Varela & P. Bourgine, eds., Toward a Practice of
Autonomous Systems: Proceedings of the First European Conference on
Artificial Life. Cambridge, Mass.: MIT Press. Pp. 78-87.
Cliff, D., I. Harvey, & P. Husbands. [1993] "Explorations in
Evolutionary Robotics", Adaptive Behavior, 2(1).
Haugeland, J. [1985] Artificial Intelligence: The Very Idea. Cambridge,
Mass.: MIT Press.
Holland, J. H. [1975] Adaptation in Natural and Artificial Systems: An
Introductory Analysis with Applications to Biology, Control, and
Artificial Intelligence. Ann Arbor: Univ. Michigan Press.
(Reissued MIT Press, 1991.)
Holland, J. H., K. J. Holyoak, R. E. Nisbett, & P. R. Thagard. [1986]
Induction: Processes of Inference, Learning, and Discovery.
Cambridge, Mass.: MIT Press.
Kugler. [1992] Talk given at the Summer-School on "Comparative
Approaches to Cognitive Science", Aix-en-Provence (organizers, J.-
A. Meyer & H. L. Roitblat).
Langton, C. G. [1989] "Artificial Life". In C. G. Langton (ed.),
Artificial Life: Proceedings of an Interdisciplinary Workshop on
the Synthesis and Simulation of Living Systems. New York:
Addison-Wesley. Pp. 1-47.
Lestel, D. [1992] "Fourmis Cybernetiques et Robots-Insectes: Socialite
et Cognition a l'Interface de la Robotique et de l'Ethologie
Experimentale", Information Sur Les Sciences Sociales, 31 (2),
179-211.
Libet, B. [1987] "Are the Mental Experiences of Will and Self-Control
Significant for the Performance of a Voluntary Act?", Behavioral
and Brain Sciences, 10, 783-86.
Maes, P., ed. [1991] Designing Autonomous Agents. Cambridge, Mass.:
MIT Press.
May, R. [1961] Existential Psychology. New York: Random House.
McFarland, D. [1989] "Goals, No-Goals, and Own-Goals". In A. Montefiore
& D. Noble, eds., Goals, No-Goals, and Own-Goals. London: Unwin
Hyman. Pp. 39-57.
Meyer, J.-A., & S. W. Wilson, eds. [1991] From Animals to Animats:
Proceedings of the First International Conference on Simulation of
Adaptive Behaviour. Cambridge, Mass.: MIT Press.
Montefiore, A., & D. Noble, eds. [1989] Goals, No-Goals, and Own-Goals.
London: Unwin Hyman.
Newell, A., J. C. Shaw, & H. A. Simon. [1963] "Empirical Explorations
with the Logic Theory Machine: A Case-Study in Heuristics." In E.
A. Feigenbaum & J. Feldman (eds.), Computers and Thought. New
York: McGraw-Hill. Pp. 109-133.
Newell, A., & H. A. Simon. [1961] "GPS -- A Program That Simulates Human
Thought." In H. Billing (ed.), Lernende Automaten. Munich:
Oldenbourg. Pp. 109-124. Reprinted in E. A. Feigenbaum & J. Feldman
(eds.), Computers and Thought. New York: McGraw-Hill, 1963. Pp.
279-296.
Newell, A., & H. A. Simon. [1972] Human Problem Solving. Englewood
Cliffs, N.J.: Prentice-Hall.
Nisbett, R. E., & L. Ross. [1980] Human Inference: Strategies and
Shortcomings in Social Judgment. Englewood Cliffs, N.J.:
Prentice-Hall.
Reynolds, C. W. [1987] "Flocks, Herds, and Schools: A Distributed
Behavioral Model", Computer Graphics, 21 (4), 25-34.
Simon, H. A. [1969] The Sciences of the Artificial. Cambridge, Mass.:
MIT Press.
Sims, K. [1991] "Artificial Evolution for Computer Graphics", Computer
Graphics, 25 (4), 319-328.
Skinner, B. F. [1971] Beyond Freedom and Dignity. New York: Alfred
Knopf.
Terkel, S. [1974] Working. New York: Pantheon.