This proposal sounds straightforward. It is easy to let it pass as obvious, nod once or twice, and read on. But consider how strange the proposition really is.
A common view of consciousness is that I know I’m conscious because I feel it inside me. It is a direct experience of my own mind. A corollary to this view is that consciousness is by nature a private phenomenon. I can know only my own mind, and I am forced to guess about other people’s minds. I can never know for certain whether you are conscious or what your thoughts may be, though I can form suspicions based on your behavior.
This view may have led Turing to his famous test for computer consciousness. How will we know whether a computer has achieved consciousness? The same way we judge each other’s consciousness. If we talk to that computer and cannot tell from the conversation whether it is a conscious human or a machine, then the machine has, for all practical purposes, achieved consciousness. The Turing test therefore rests on a fundamental assumption about human consciousness—that it is private, that I can directly experience only my own consciousness, and that I must rely on observation and deduction to understand any other consciousness.
I am proposing that this common, seemingly indisputable assumption about human consciousness is wrong. There is no fundamental difference between my perception of someone else’s mind and my perception of my own mind. I do not directly experience my own mind. I perceive it through the same intermediary, the machinery for social perception, that I use to perceive anyone else’s consciousness. That neuronal machinery can collect more data on my own brain and therefore construct a higher-quality model of it, but fundamentally my perception of my own mind belongs to the same class of phenomena as my perception of someone else’s mind. They are both models. They are both proxies for the real thing. They are both useful and also profoundly inaccurate. I do not actually know my own mind, any more than I know anyone else’s mind—I know only the model that my social machinery has constructed of it.
Some of the most intriguing evidence in support of this formulation of consciousness comes from damage to the brain. I will discuss this evidence in greater detail in the second half of the book, especially in Chapter 7. To summarize briefly here: there is a brain region thought to be particularly involved in social perception—in reconstructing the contents of another person’s mind. Yet when this brain region is damaged, such as by stroke, a strange set of symptoms develops that at first glance seems to have nothing to do with social perception. When the region is damaged on the right side of the brain, where it has its largest presence, the person loses conscious awareness of everything to his left. When it is damaged on the left side of the brain, the right side seems able to compensate, and the awareness deficit is not apparent. When it is damaged on both sides of the brain... I am not sure that condition has been studied thoroughly. By hypothesis, the patient becomes a zombie, bereft of conscious awareness, at least until some compensatory rewiring of the brain occurs. The strange overlap between the brain areas involved in social perception and the brain areas that, when damaged, lead to a loss of awareness—a seeming riddle of clinical neuroscience—is readily explained by the principle that consciousness is a specific self-application of social perception.
Each of the following sections in this chapter addresses the same underlying principle—the essential equivalence between social perception and consciousness—but from a different perspective. In that way I can draw a more complete picture of the concept.
Only a brain system expert at perceiving mind would understand the concept of