Zombies, Androids, Pantomimes
The Basic Argument
- Brain states qua physical states exist and mental states exist.
- Mental states possess properties that physical states and systems do not.
- Therefore, brain states are not identical to mental states.
The Indiscernibility of Identicals
Statements of identity are usually analyzed as follows: for any entities x and y, if x and y are really the same thing, then for any property P, P is true of x if and only if P is true of y. Note that, with respect to mind/brain identity, the principle is indifferent to whether the mind is inseparable from the brain and to whether the mind is correlated with the brain in a law-like fashion; neither inseparability nor law-like correlation amounts to identity.
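In standard notation, the principle (often called Leibniz’s Law, or the indiscernibility of identicals) can be written as follows; this is simply a routine formalization of the prose above, not a quotation from any particular text:

```latex
\forall x \,\forall y \,\bigl[\, x = y \;\rightarrow\; \forall P \,\bigl( P(x) \leftrightarrow P(y) \bigr) \,\bigr]
```

Applied to the mind/brain case: if a mental state were identical to a brain state, every property of the one would be a property of the other, so a single property possessed by mental states but not by brain states is enough to block the identity.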
Properties of Physical Things
- Resistance (traditionally, hardness or solidity)
- Mass
- Velocity
- Charge
Possibly Unique Properties of Mental States
- Qualia and Secondary Properties
- Self-Presenting Properties: Privacy and Incorrigibility
- The Subjective Nature of Experience
- The Existence of Secondary Qualities
- Intentionality
- Self-Awareness and Personal Identity through Time
- Indexicals: Irreducibility of the First-Person to the Third-Person
Functionalism and Computers
Ironically, after half a century of creating devices to mimic and extend human operations and purposes, many now think that it is we who are created in the image of computers, not vice versa. On this view, we are basically organic computers, and because no one thinks computers are one part circuits and one part souls, we are likewise neurons without souls. But does the operation and analogous functionality of computers suggest that we are like computers? Are any of the above properties exhibited by computers? Might they be?
The computer analogy is basically a modern version of functionalism. In the philosophy of mind, functionalism is the view that the mind should be defined in terms of causal inputs and outputs. Just as a kidney or a carburetor is defined by what it does, so also the mind should be defined strictly by a certain pattern of causal inputs and outputs. On this view, anything with the right profile of inputs and outputs has a mind; crudely put, anything that outputs 4 when fed 2 + 2 is, to that extent, doing what a mind does.
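To illustrate the functionalist picture (this is my own toy sketch in Python, not an example from the literature), consider two quite different realizations of the same input/output profile; on a strictly functional definition, they count as the same kind of thing:

```python
# Two very different "realizations" of addition with the same
# input/output profile on a small domain. A functional definition
# cares only about that profile, not about how it is produced.

def add_by_arithmetic(a: int, b: int) -> int:
    """Realization 1: ordinary arithmetic."""
    return a + b

# Realization 2: a precomputed lookup table; at "run time" nothing
# resembling arithmetic happens, only table lookup.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_table(a: int, b: int) -> int:
    """Realization 2: pure table lookup."""
    return LOOKUP[(a, b)]

# Functionally indistinguishable on this domain:
assert all(add_by_arithmetic(a, b) == add_by_table(a, b)
           for a in range(10) for b in range(10))
```

The functionalist claim is that mental states are typed the same way: by their causal role, regardless of whether that role is played by neurons or by circuits.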
The Chinese Room Argument
The most famous objection, by far, to the human/computer analogy is John Searle’s Chinese Room Argument (Searle 1990, Scientific American, pp. 26-27). Searle writes:
Consider a language you don’t understand. In my case, I do not understand Chinese. To me, Chinese writing looks like so many meaningless squiggles. Now suppose I am placed in a room containing baskets full of Chinese symbols. Suppose also that I am given a rule book in English for matching Chinese symbols with other Chinese symbols. The rules identify the symbols entirely by their shapes and do not require that I understand any of them. The rules might say such things as “take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two.”
Imagine that people outside the room who understand Chinese hand in small bunches of symbols and that in response I manipulate the symbols according to the rule book and hand back more small bunches of symbols. Now the rule book is the “computer program.” The people who wrote it are the “programmers,” and I am the “computer.” The baskets full of symbols are the “data base,” the small bunches that are handed in to me are “questions” and the bunches I then hand out are “answers.”
Now suppose that the rule book is written in such a way that my “answers” to the “questions” are indistinguishable from those of a native Chinese speaker. For example, the people outside might hand me some symbols that, unknown to me, mean, “What is your favorite color?” and I might after going through the rules give back symbols that, also unknown to me, mean “My favorite color is blue, but I also like green a lot.” I satisfy the Turing test for understanding Chinese. All the same, I am totally ignorant of Chinese. And there is no way I could come to understand Chinese in the system as described, since there is no way that I can learn the meanings of any of the symbols. Like a computer, I manipulate symbols, but I attach no meaning to the symbols.
The point of the thought experiment is this: if I do not understand Chinese solely on the basis of running a computer program for understanding Chinese, then neither does any other digital computer solely on that basis. Digital computers merely manipulate formal symbols according to rules in the program.
What goes for Chinese goes for other forms of cognition as well. Just manipulating symbols is not by itself enough to guarantee cognition, perception, understanding, thinking, and so forth. And since computers, qua computers, are symbol-manipulating devices, merely running the computer program is not enough to guarantee cognition.
This simple argument is decisive against the claims of strong AI.
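To make the structure of the thought experiment concrete, here is a toy sketch of the rule book (my own illustration in Python, not Searle’s): the program hands back fluent-looking answers by matching symbol shapes to symbol shapes, and nothing in it has, or needs, any access to what the symbols mean.

```python
# A toy "Chinese Room": questions are matched to canned answers purely
# by the shape of the symbols (here, string equality). Nothing in this
# program represents what any of the symbols mean.

RULE_BOOK = {
    # "What is your favorite color?" -> "My favorite color is blue,
    # but I also like green a lot." (meanings unknown to the program)
    "你最喜欢什么颜色？": "我最喜欢蓝色，不过我也很喜欢绿色。",
}

def chinese_room(question: str) -> str:
    """Hand back whatever symbols the rule book dictates."""
    # Default reply ("Sorry, I don't understand.") is also just a shape.
    return RULE_BOOK.get(question, "对不起，我不明白。")

print(chinese_room("你最喜欢什么颜色？"))
```

However large the rule book becomes, the program’s relation to the symbols never changes: it matches shapes to shapes, which is exactly Searle’s point about syntax and semantics.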
The Implications
In your original response to my article, “The Captain of My Soul”, you said: “I can write a program that will discriminate between conflicting desires on the basis of reason, desire, and beliefs, and then let it do so. … The same is true for us. We have an internal deterministic faculty which discriminates between external desires on the basis of beliefs, reasons, etc.” I think this is a mistake. The fact that we can make a computer take inputs and return outputs in the same way humans do does not mean that computers are doing what we do, unless functionalism is true. If we managed to build a machine that passed the Turing test perfectly, it would not follow that the machine was “thinking” or had “desires” and “beliefs”. There is no reason to think that the flipping of its switches has any semantic or emotional content. But we know in our own cases that our reasoning and feeling do have such content.
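To see why, consider a deliberately simple, hypothetical sketch in Python of the kind of program described (not your actual code): it “discriminates between conflicting desires on the basis of beliefs” by adding and comparing numbers.

```python
# A hypothetical "desire discriminator": each desire is a string label
# with a numeric weight, and each belief adds to or subtracts from some
# desire's weight. The program then returns the label with the largest
# number. The labels are inert; nothing in the computation is *about*
# cake, diets, or anything else.

desires = {"eat cake": 0.7, "stay on diet": 0.6}
# Belief "cake is unhealthy" weakens the desire to eat cake.
belief_adjustments = {"eat cake": -0.3}

def discriminate(desires: dict, adjustments: dict) -> str:
    adjusted = {d: w + adjustments.get(d, 0.0) for d, w in desires.items()}
    return max(adjusted, key=adjusted.get)

print(discriminate(desires, belief_adjustments))  # -> "stay on diet"
```

The program produces the “right” output, but the only facts about its internal states are facts about strings and numbers; unless functionalism is already assumed, that gives us no reason to credit it with desires or beliefs.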