An Ominous Worry: Are We Abandoned Remote Control Animals?
Humans have domesticated animals for a long time, and for a variety of purposes: wolves and dogs as guard animals, horses and donkeys as transport animals, others simply as loyal companions. We have shown much creativity in our means of domestication, but most approaches boil down to the old carrot-and-stick principle.
What all of the traditional and familiar approaches to domestication have in common is that they seem compatible with the idea that the animals are agents exercising their free will. Granted, we motivate, coerce, and perhaps even manipulate animals. But, it seems, we only interact with the animal from the outside; in principle the animal could have done otherwise than do our bidding.
This is about to change. Recent advances in microelectronics allow us to domesticate animals in ways we never thought possible. Meet the remote-controlled flower beetle: on its back it carries a miniature radio receiver connected to electrodes implanted in its optic lobes. Oscillating electrical pulses trigger a take-off; a single short pulse ends the flight. Other electrodes, connected to the beetle’s basilar flight muscles, control the direction of the flight. This payload effectively turns the beetle into a remote-controllable cyborg: an animal with machine parts which can be controlled by a human. Similar projects have been conducted with other animals such as roaches, rats, and pigeons.
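As a rough illustration, the control scheme just described can be pictured as a small state machine. The class and method names below are invented for illustration; they do not correspond to any real control interface, and the pulse and steering parameters are made up.

```python
# Hypothetical sketch of the beetle stimulation protocol described above.
# All names and numbers are invented for illustration only.

class BeetleState:
    def __init__(self):
        self.flying = False
        self.heading = 0  # degrees; 0 = straight ahead

    def stimulate_optic_lobes(self, pulses):
        # An oscillating pulse train triggers take-off;
        # a single short pulse ends the flight.
        if pulses > 1:
            self.flying = True
        elif pulses == 1:
            self.flying = False

    def stimulate_flight_muscle(self, side):
        # Stimulating one of the basilar flight muscles steers the
        # beetle (sketched here as a +/- 10 degree heading change).
        if self.flying:
            self.heading += -10 if side == "left" else 10

beetle = BeetleState()
beetle.stimulate_optic_lobes(pulses=5)   # oscillating train: take-off
beetle.stimulate_flight_muscle("right")  # steer while airborne
beetle.stimulate_optic_lobes(pulses=1)   # single pulse: end the flight
```

The point of the sketch is only that the beetle's behavior becomes a function of externally injected inputs, not of its own motivational states.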
Our efforts to create controllable animal cyborgs fall into two categories. They can either be of the we-control-their-sensory-motor-system or of the we-directly-control-their-mind variety. The former is the easier and more common approach. The robo-roach works by stimulating neurons in the antennae of a roach, signaling to the roach that it has touched a wall and hence needs to turn to avoid it. Attempts in the latter category involve modifying an animal’s motivational system, directly causing a desire to perform some action. We humans might not yet be very good at this, but some animals have mastered this approach to an impressive degree: the emerald cockroach wasp, for instance, stings a roach’s brain and can then lead the docile roach to its burrow.
Remote control animals are fascinating. But they are also somewhat unsettling, triggering doubts about the degree to which these animals are really agents, acting on their own decisions and free will. It feels as if the victims of such control are unmasked as fake agents: in ordinary circumstances they keep up a facade of making decisions and going about their own business, but as the possibility of remote controlling them reveals, they are mere automata, acting according to purely mechanical principles.
Douglas Hofstadter has coined the term “sphexishness” for this phenomenon (Hofstadter 1982), after the digger wasp Sphex ichneumoneus. This wasp usually exhibits sophisticated and apparently intelligent behavior, but can easily be tricked into an action loop, in which it mindlessly repeats the same action without end. (Programmers are familiar with this phenomenon: in programs it results from an instruction being repeated until a certain condition is met, when that condition can never be met.)
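The programming analogy can be made concrete. The following deliberately buggy sketch (the function and its "actions" are invented for illustration) loops forever because its exit condition can never become true, a mechanical analogue of the wasp's behavior:

```python
# A deliberately buggy loop: the intended exit condition can never be
# met, so the "wasp" repeats the same routine without end.
def sphexish_wasp(max_checks=5):
    checks_done = 0
    actions = []
    # Intended exit: stop after max_checks burrow inspections. But since
    # checks_done is reset inside the loop, the condition never holds.
    while checks_done < max_checks:
        actions.append("drag cricket to threshold")
        actions.append("inspect burrow")
        checks_done = 0  # bug: progress is thrown away each iteration
        if len(actions) > 20:  # safety valve so this demo terminates
            break
    return actions

print(len(sphexish_wasp()))
```

Like the experimenter moving the wasp's cricket, the reset of `checks_done` wipes out all progress, so the loop only ends via the artificial safety valve.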
But the worry often extends beyond these animals: clearly we are similar to non-human animals in many respects. Granted, we are more complex, but are our brains not just as much mechanical organs as those of the remote controlled animals we create? Couldn’t we too, in principle, be manipulated and controlled in a similar fashion, perhaps by a malevolent neurosurgeon implanting a control device in our brain? And sometimes humans even exhibit “glitches” similar to those of certain animals, namely when they suffer certain types of brain damage (Dennett 1984, p. 12). What if, we might worry, we are in effect a kind of remote control animal, just one where the remote has been abandoned, or perhaps one where purely physical forces hold the remote? Wouldn’t that rob us of our agency, our capability of acting in accordance with our beliefs and desires, and of our free will?
From Thing to Automaton to Agent: A Journey Towards Complexity
Daniel Dennett (Dennett 1984, p. 18f.) has argued that vague worries such as the are-we-abandoned-remote-control-animals worry are the driving force behind the free will debate, and that dispelling these anxieties and worries amounts to solving, or at least dissolving, the problems of free will.
In what follows we want to dispel the worry articulated above. We will attempt to show that there is a gradual transition from simple things such as elementary particles or stones, to what we are inclined to call automata such as a calculator or the robo-roach, to true agents such as ourselves and other complicated animals. This transition corresponds to an increase in complexity, and no (or at most one) new quality is needed to get to the agents. Many of the aspects of agency and free will we cherish show up somewhere along this gradual transition, with all or at least most of them being present at the level of agents. Since we have hit the jackpot and are located at the level of agents, we can sigh with relief: the fate of a robo-roach does not vindicate the conclusion that we are in a similar position.
At the very bottom of the ladder of complexity we find microscopic things like elementary particles and macroscopic things like stones. In one sense a stone is already fairly complex: it consists of countless elementary particles interacting in various ways. But that is not the kind of complexity we are interested in. A stone does not have a variety of internal states which trigger distinct types of behavior, and which are themselves reactions to states in the environment. Due to this simplicity we do not ascribe wishes, desires, and motives to stones or electrons. An electron does not circle the nucleus of an atom because it decided to do so. Most people do not believe that there is any consciousness at this level of the world; there is nothing it is like to be an electron. (Although there are some dissenting voices.)
Somewhere in the middle we find things like calculators, computers, roaches (robo- and ordinary), etc. This short list illustrates that both living and non-living things can belong to this category. These entities are much more complex than elementary particles or stones. They can take on various functional states as a result of external input. These functional states sometimes represent elements in their environment, and determine their reaction to stimuli. To some examples of this category we are inclined to ascribe beliefs and desires. We can, for example, predict the actions of insects to some degree by taking the intentional stance. This suggests that at least some of them have consciousness, though drawing the line is difficult. Yet we would presumably not call a calculator, a computer, or an insect an agent. These things are still too simple to merit this title. Many of them cannot learn anything new; their responses to stimuli are hardwired into their system. In general they are very predictable and can easily be manipulated or even (remote!) controlled. Intuitively we would classify them as sophisticated automata.
Another step up on the ladder of complexity we find highly complex entities such as dogs, cows, and us humans. They possess not just various functional states which determine their response to external stimuli; they are also able to expand their arsenal of functional states based on past experiences. They develop sophisticated representations of their environment and adopt new strategies to deal with obstacles. Moreover, they can entertain multiple hypotheses concerning the state of their environment and adjust their credences in light of new evidence. Their beliefs together with their desires provide them with reasons to act, and uncertainty concerning either their beliefs or desires translates into uncertainty about how to act. Creatures with these capabilities are usually fairly resilient to simple manipulation, because it is so complicated to understand the underlying mechanical explanations of their visible behavior. We do not hesitate to call such creatures agents and to ascribe free will to them, since they have the ability to deliberate about reasons and draw conclusions which translate into action.
This means that we are not in principle different from mere automata or even simple things. (A small caveat: at some point on this journey to complexity consciousness appears. As discussed in the post on reductionism, it is not yet clear whether consciousness is something truly new, or somehow reducible to complicated physical stuff.) We are just much more complex than these simple things and therefore have much more sophisticated behavioral patterns and a much richer internal life involving reasons, deliberations, decisions, efforts, and similar things. This makes us more resilient against simple manipulation and control. Observing the simplicity and manipulability of roaches and rats should therefore not worry us too much: these animals feel more like automata because they lack certain features we have thanks to the higher complexity of our brains. Sphexishness, like complexity, comes in degrees, and we have very little of it.
Is There a Residual Phenomenon?
We have mentioned free will only once in the last section, and described it as the ability to deliberate about reasons and act according to them. This is a rough formulation of a form of compatibilism: free will thus understood is compatible with a mechanistic or deterministic universe, since deliberating about reasons and acting accordingly are compatible with determinism. The ability to deliberate is arguably one of the aspects of free will we cherish most, but does it exhaust what we mean by free will, or is there some residual phenomenon in the vicinity we have ignored so far?
One way of asking this question is as follows. Ordinarily we think people are responsible for their actions, and we blame them if they do something wrong. We blame them for making the wrong decision, for being selfish, etc. There are plenty of reasons to do this in a deterministic universe: we can influence people’s mindsets, perhaps better them, and increase the likelihood that they will act differently next time. But should we believe that they are ultimately responsible (Kane 1996, p. 60) for their actions, and should we put ultimate blame on them for bad decisions? It seems to me that in a deterministic universe we are at least inclined to be a little more lenient and merciful with perpetrators: after all, it was in some sense not in their power to have acted otherwise; they cannot break the laws of nature.
But even given that determinism is probably true (or at least that the indeterminism of microphysics is irrelevant at the macroscopic level of neural networks) and there is no ultimate responsibility, we can ask: couldn’t there be ultimate responsibility and (justified) ultimate blame, even if there is no such thing in our world? And if there is such an epistemic possibility, doesn’t that mean there is an additional, coherent notion of free will which we have not yet captured? Wouldn’t that variety of free will be worth wanting, even though, sadly, we probably just don’t have it?
What we do know is that it is not enough that our actions are undetermined by our previous states: such actions could still be random and thus not free in any interesting sense. They might even be less free than actions which are entirely determined by such things as reasons and desires. This means that the proponent of the residual-phenomenon thesis has to spell out what, in addition to indeterminacy, he wants. Only then can we be sure that there is a coherent notion of free will distinct from ordinary compatibilist notions. One such attempt has been made by Robert Kane (Kane 1996).
Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. MIT Press.
Hofstadter, D. R. (1982). Can Creativity be Mechanized? Scientific American, 247 (September 1982), 18–34.
Kane, R. (1996). The significance of free will. Oxford University Press.
Wooldridge, D. E. (1968). Mechanical man: The physical basis of intelligent life. New York: McGraw-Hill.
This is mostly tangential to the philosophical points you make here, but there is a cool article about some of the misleading empirical details of the Sphex story: Keijzer, F. (2013). “The Sphex story: How the cognitive sciences kept repeating an old and questionable anecdote.” Philosophical Psychology. The paper is available here:
Hey Manuel, thanks for the pointer! I’ll have a look at it and add a reference in the article.
“Similar projects have been conducted with other animals such as roaches, rats, pigeons, and other animals.”
Is the unnecessary duplication intentional?
No, it is unintentional. Thanks for pointing it out!