Wiener’s idea is at once simple and troubling. Information becomes wholly independent of its substance. It is the pattern, not the meaning, that counts. The word “information” may be the single most malleable word in contemporary culture. In 1984, A. M. Schrader published a work in which he found seven hundred definitions of “information science” between 1900 and 1981 and described the general state of affairs as one of “conceptual chaos.”190 Depending on what text you are reading, the word can mean, as it does for Wiener and Shannon, the pattern of communication between source and receiver. It can also mean the content of a so-called cognitive state, the meaning of a sentence in linguistics, or a concept in physics that in some way seems to have been naturalized. In this last definition, no eye or ear or body is needed to take in the information and understand it. It is present before any thinker came along. The very arrangement of atoms and molecules is information. In Information and the Internal Structure of the Universe, Tom Stonier writes, “Information exists. It does not need to be perceived to exist . . . It requires no intelligence to interpret it. It does not have to have meaning to exist. It exists.”191 There is no sense of language use in this statement. I confess I think that how one understands the word “information” must enter into this problem. Without certain turns in the history of science and technology, it might not have occurred to anyone to speak of information as an inherent property of the material world; such an understanding of “information” has a rhetorical history. If one defines “information” as patterns of reality that have the potential to be read and interpreted, then the world is indeed plump with information of all kinds—both natural and unnatural.
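To make the Wiener and Shannon sense of the word concrete: in Shannon’s theory, the information carried by a message depends only on the statistical pattern of its symbols, never on what they mean. A minimal sketch of the standard entropy formula, offered here as illustration rather than as anything these authors wrote:

\[ H = -\sum_{i} p_i \log_2 p_i \]

where \(p_i\) is the probability of the \(i\)-th symbol produced by the source. By this measure, a sonnet and a string of gibberish with the same letter frequencies carry exactly the same information; meaning never enters the calculation.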
Pinker states that information, the appropriate concept for “life,” is also the appropriate concept for “mind.” The mind he is referring to emerged through what is now called the cognitive revolution in the 1950s. Without the cognitive revolution, there would be no evolutionary psychology, no claims that our minds are computers. From this point of view, thinking is computation, and minds are symbolic information-processing machines. Computers can therefore simulate our thought processes as disembodied patterns, without any reference to particular gels and oozes—the biological material stuff underneath.
Rom Harré, the philosopher and psychologist, describes this way of thinking about cognition as essentially dualistic. “The way that the architects of the First Cognitive Revolution constrained their model of the human mind quickly developed into the analogy of the computer and the running of its programmes . . . As a psychology, this form of cognitivism had a number of disturbing aspects. It preserved a generally Cartesian picture of ‘the mind’ as some kind of diaphanous mechanism, a mechanism which operated upon such non-material stuff as ‘information.’ ”192 Darwin would have been surprised to find that after his death, his careful observations of plants and animals and his idea of natural selection would be yoked to machine computation, not to speak of Descartes’s schism between mind and body.
Mind as Literal Computer?
The idea that the mind literally is a computer fascinates me. Unlike “hard-wiring,” “computation” as a description of mental processes is not used as a metaphor. Pinker, for example, refers to the mind as a “neural computer.” Although the roots of the idea may be traced back to Pythagorean mysticism and mathematics, Greek logic, Galileo, mechanistic philosophers of the seventeenth century, and Newton, who was influenced by them, another more recent antecedent is the mathematician, logician, and philosopher Gottlob Frege (1848–1925), who made further advances in logic after Boole, created a formal notation for the movements of reasoning, and had a shaping influence on Anglo-American analytical philosophy. Frege believed that logic and mathematical truths were not the property of the human mind—he was a fervent opponent of “psychologism,” the idea that logic is a mental product—and argued for “a third realm.” Logic is not rooted in our everyday perceptual knowledge of the world but rather in universal principles, an idea with obvious Platonic and Cartesian resonances—a belief in eternal forms that are wholly unrelated to the experiences of our sensual, material bodies. Truth is out there waiting to be found.
Without belaboring the ongoing debates about whether logic, mathematics, and information are fallible or absolute, a product of the human mind or discovered by it, it is vital to understand that a great deal rests on this dispute because it lies at the heart of a definition of mind that has dominated Western philosophy and science for centuries. Without the assumption that in some way the workings of our minds can be reduced to a set of objective, mechanistic, symbolic computational processes that are wholly unrelated to matter, there could be no evolutionary psychology.
In Evolutionary Psychology: A Primer, Leda Cosmides and John Tooby summarize their view: “The mind is a set of information-processing machines that were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors.” Sociobiology and the computational theory of mind (CTM) are married in this sentence. According to this view, the mind is a conglomeration of specific modular mechanisms—the number of which is unknown. Cosmides and Tooby speculate that there may be “hundreds or thousands” of them. They also state clearly that they reject a hard line between nature and nurture: “A defining characteristic of the field,” they write, “is the explicit rejection of the usual nature/nurture dichotomies . . . What effect the environment will have on an organism depends critically on the details of its evolved cognitive architecture.”193 This sounds eminently reasonable to me.
For Cosmides and Tooby, however, this mental “architecture,” which, by their own definition, “houses a stone age mind,” is highly specified and mostly fixed by natural selection. Therefore, one can draw a straight line between those male hunters out for prey and 3-D spatial rotation skills without worrying about the many thousands of years between then and now. Despite their rejection of the nature/nurture divide, Cosmides and Tooby promote minds with a hard evolutionary rigidity reminiscent of Galton, mind machines that have “innate psychological mechanisms.” Indeed, if the mind were flexible, its architecture would probably house a more up-to-date or modern mind. The architecture Cosmides and Tooby are referring to is not brain architecture. Gels and oozes do not particularly bother them.
It is important to mention that the lives of those “hunter-gatherer ancestors” are not open books. We have no access to them because they are long gone. What we know about their lives is based on the hunter-gatherer societies that remain with us on earth, which are not wholly uniform as cultures. Although they all hunt and gather, they are also different from one another. In an essay called “Some Anthropological Objections to Evolutionary Psychology,” the anthropologist C. R. Hallpike writes, “While . . . we are quite well informed about physical conditions in East Africa one or two million years ago, by the standards of ethology and of social anthropology we know virtually nothing about the social relations and organization of our ancestors in those remote epochs, and even less about their mental capacities.”194 Hallpike goes on to say that nobody even knows whether these people had grammatical language, which makes any discussion of evolutionary adaptations extremely difficult. These Stone Age people with their Stone Age minds may be more “real” than Vico’s giants or Bigfoot, but our knowledge of them and the specifics of their lives are cloudy at best.
Does computation serve as a good literal description of our minds? Those hunter-gatherer Pleistocene ancestors on the African savanna, whom evolutionary psychologists are continually evoking to explain our minds today, knew nothing of the computer, but they are described as having something of the sort up in their Stone Age heads long before the machine was invented. There is nothing wrong with projecting the computer backward several millennia to describe the human mind if, in fact, it does function as one. People have “computed” problems for a long time, but the computer as a machine is a recent invention dependent on human beings for its existence. The confidence of this characterization never fails to amaze me, simply because the mind, which has long been with us in some form or other, could not have been understood in this way until the machine came into existence. Although it is undoubtedly true that I am “processing information” daily, is my mind really a computational device, one with hundreds or maybe even thousands of problem-solving modules? Descartes would have balked at the idea that the mind, like the body, is a kind of machine. Nevertheless, as Harré noted, there is a strong Cartesian quality to this computing, problem-solving, curiously dematerialized mind.
The mind as envisioned by Cosmides, Tooby, Pinker, David Buss, and others in the field is characterized by countless naturally selected discrete mechanisms that have evolved to address particular problems. The mind is composed of machine modules. This idea of a “modular mind” comes from the analytical philosopher Jerry Fodor’s influential book The Modularity of Mind (1983). Fodor believes that all human beings share a mental conceptual structure, a fundamental mode of thought that takes logical form, which he calls “mentalese.” This language of thought is not the same as actual spoken language but lies underneath words as its abstract logic. Fodor’s modular theory of cognition argues that some, not all, psychological processes are isolated, their information encapsulated in its own domain. Perceiving an object, according to this view, may not rely on other aspects of cognition, such as language, but rather can be computed in its own distinct bounded realm.
In Pinker’s view, each mind module, “whose logic is specified by our genetic program,” has a special task.195 The computer analogy is built into the prose, as it is into many papers in the cognitive sciences and neurosciences. The program is assumed, and it appears to function much like the one Jacob proposed in 1970 when his book was first published in France and the one Dawkins elaborated in The Selfish Gene. Genes are the “program.” But evolutionary psychology further relies on an idea that has been called “massive modularity.” The entire human mind is domain specific. This resembles the extreme locationist views that have come and gone in neurology. The information-processing model, however, as we have seen, is not dependent on real brains.
The mind is compartmentalized into boxes, each one evolutionarily designed for a special problem, a kind of modern phrenology of mind, not brain. Franz Joseph Gall also proposed modules that could be divined from reading the human skull. The modules of evolutionary psychology are mostly but not entirely inherited, part of an internal human nature or conceptual mental architecture. This does not mean there is no “input” from the environment, but rather that each of these hypothetical mental “modules” carries within it “innate knowledge.” Acquiring language, growing up, our particular personalities, and the psychological differences between the sexes are less about our environments now, although they certainly have an impact, and more about how our minds evolved in relation to the environment over millennia. There is a biological organism, of course, but the psychology of that evolved organism is conceived through discrete machine-mind, quasi-Cartesian modules.
Why are these people so sure about mental modules? A long history and thousands of questions precede this assumption. “How do I know I am actually here sitting by the fire?” is not even on the horizon. Rather, questions generated from one answer after another have resulted in a truism: we have evolved massively modular computational minds. Jerry Fodor, who might be described as Professor Modularity himself, was critical of what he regarded as Pinker’s ungrounded confidence in a wholly modular mind. He responded to Pinker’s How the Mind Works with a book of his own: The Mind Doesn’t Work That Way.196 Fodor, unlike Pinker, does not believe that so-called higher cognitive processes such as analogical thinking are modular. He believes that this kind of thought cannot possibly rely on discrete modules.
Despite many open questions in evolutionary theory about which traits are adaptations and which are not, and about whether we are still evolving or have stopped, many scholars in many fields accept the central Darwinian principle that we are evolved beings. The neo-Darwinian thought of someone like Dawkins is more controversial. In 2012, the analytical philosopher Thomas Nagel published Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. His critique of neo-Darwinian evolutionary theory prompted an instantaneous and often brutal response. In a tweet, Pinker asked, “What has gotten into Thomas Nagel?” and referred to “the shoddy reasoning of a once-great thinker.”197 Nagel is dissatisfied with reductive materialism and argues that it cannot account for conscious subjectivity, a problem he famously described in a 1974 essay called “What Is It Like to Be a Bat?”
In the essay, Nagel argues that the subjective experience of being you, me, or a bat takes place from a particular first-person perspective for you, me, or the bat and that no objective third-person description can fully characterize that reality. He is not arguing against objective positions, but rather that by reducing the subjective to the objective, something goes missing: “Every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.”198 Nagel’s work is a model of lucid philosophical prose. He stands out from many of his peers like a beam from a lighthouse on a foggy night. His style reminds me of Descartes’s in its purity. And like Descartes, Nagel understands that there is something particular about subjective experience.
If we return to my moment at the conference and my triumphant but also guilty response after critiquing the bad paper, Nagel would argue that even if my experience could be perfectly described in terms of the physical processes of my brain and nervous system from a third-person point of view, it would leave out something important—mine-ness. In his Psychology William James describes this “for me” quality of conscious life. The Latin word for it is ipseity. At the very end of his essay, Nagel suggests that it might be possible to devise a new method of phenomenology. Phenomenology seeks to investigate and explain conscious experience itself, a philosophical tradition that started in the early twentieth century with the German philosopher Edmund Husserl. Husserl, who read William James, understood that every experience presupposes a subject. Every perspective has an owner. When Simone de Beauvoir called on Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty as proponents of the idea that the body is a situation, she was referring to a philosophical tradition to which she belonged: phenomenology.
Husserl was profoundly interested in logic and mathematics, and he wrestled with Frege, but he criticized scientific formulations that left out lived experience and relied exclusively on an ideal mathematics in the tradition of Galileo. Nagel’s “objective” phenomenology of the future is one he argues should “not [be] dependent on empathy or the imagination.”199 I would say this is not possible, that empathy and the imagination cannot be siphoned out of phenomenology, and that the desire to do so demonstrates a prejudice against feeling, part of a long rationalist tradition that denigrated the passions. Husserl faced the same problem. He did not advocate a purely subjective or solipsistic theory of consciousness—the idea that each of us, human or bat, is forever stuck in his or her own body’s perspective and can never get out of it. In his late writings, in particular, Husserl offered an idea of transcendental intersubjectivity. What is this? Intersubjectivity refers to our knowing and relating to other people in the world, our being with and understanding them, one subject or person to another, and how we make a shared world through these relations. Reading Husserl is not like reading Descartes, Nagel, or James. Husserl is knotty and difficult. I can say, however, that Husserl’s idea of intersubjectivity necessarily involves empathy, and that for Husserl empathy is an avenue into another person.200
In Mind and Cosmos, Nagel suggests a broad teleological view of nature that includes mind as a possible explanation, one that resonates with Aristotle’s ideas of nature moving toward an end. Although Nagel is not religious, this idea brought him far too close to God for many, which is why he was criticized so severely. He stepped on a paradigm just as sacred to some in science as the Trinity is to Christianity. Nagel is right that subjective conscious experience, the mine-ness or ipseity of being, remains a problem in much scientific thought. Even if we could explicate every aspect of the physical brain in all its complexity, the first-person point of view, the experience of being awake and aware and thinking or asleep and dreaming, would be missing from that account. Consciousness has become a philosophical and scientific monster.
The Wet Brain
But let us ask from a third-person point of view whether the brain, the actual wet organ of neurons and synapses and chemicals, is a digital computational device or even like one. Of course, if that ethereal commodity, information, is superior to or deeper than biology, or if psychology can be severed from biology altogether, if there really are two substances, body and mind, then the question becomes less urgent. But I am interested in the brain that is located inside the mammalian skull inside an animal’s body, and I am also interested in why the computational theory of mind lost its status as a hypothesis in cognitive science and became so widely accepted that it was and still is treated as a fact by many. Isn’t this exactly what Goethe warned against?