Why the Passion and Melodrama?
Why couldn’t God just forgive? Why all this melodrama of Jesus, who is nailed to a cross to suffer and die on our behalf? See Keller.
In my recent response to Bill Maher’s Religulous, I made a claim that seems right to me, but of which I’m not certain. In the film, Maher makes the all-too-common accusation that "More people have been killed in the name of god than for anything else." On the face of it, this seems to me obviously incorrect, but my knowledge of history is far from above reproach. I claimed, alternatively: "Because humans throughout history have been so irredeemably religious, religion has played some role in most human conflict. A small subset of wars have been waged for distinctly religious motives." To test this impression, I’m beginning a survey starting in the present and working my way back. I’ll classify each war as either 1) a religious war, 2) a nonreligious war, or 3) a mixed war. I’m especially interested to find wars with distinctly Christian motives. To start, I will rely on conventional wisdom with respect to causes, though conventional wisdom often does not withstand scrutiny. This is a working document and will evolve as I have time. No doubt these classifications will have to be refined.
It’s not often that a corporate ethic surfaces in the mainstream, but Google’s maxim, "Don’t Be Evil", is an exception, no doubt in part because, like Google.com itself, it’s short and sweet. Google’s ethic may seem so obviously self-evident as to induce a smirk, but it’s unusual for a moral imperative to be so significant in a corporation’s self-identity. It brings to mind the kernel of the Hippocratic oath, "Do No Harm", as well as Gandhi’s ethic of nonviolence. Of course, applying so simple a rule to the complexities of being a googleopoly is not easy. Nonetheless, these rules, as good and important as they are, are noteworthy in part because of how different they are from that other moral maxim, the Golden Rule: "Do unto others as you would have them do unto you."
As an empirical matter, it is difficult to deny that there are many who have earnestly sought to know God but have been unable to believe for rational reasons. And yet, by my count, most Christian apologists insist that the earnest seeker who follows the evidence will ultimately be convinced of the reality of God and of his self-revelation in Jesus Christ. In two of his recent podcasts, William Lane Craig addresses this issue: first, in response to a questioner who, by his own account, had honestly investigated the claims of Christianity but could not believe ("Evolution and Skepticism", Nov 30, 2009); and second, in dealing with the phenomenon of some well-known deconversions ("Questions about 'Ex-Christians' and Molinism", Dec 10, 2009). Though Craig recognizes atheism as a rational position, in these instances, instead of conceding the reality of sincere unbelief, he posited the power of self-deception in the first case and of moral failure in the second.

I don’t discount the power of moral considerations, including self-deception, in belief formation. Indeed, virtue epistemology has articulated an extensive catalog of moral obligations relevant to justified belief. Nonetheless, there seem to be cases in which the seeker’s quest has been bona fide and not impeded by an unwillingness to obey God. Kenneth W. Daniels’ spiritual autobiography, Why I Believed, recounts the story of his painful loss of faith in spite of his desperate desire to continue to believe the story that gave his life meaning and animated his vocation as a Christian missionary. If you read his story carefully, it is exceedingly difficult to attribute his deconversion to anything but the sway of the arguments as he appraised them. And my experience suggests that his is but one of countless similar stories that go undocumented.

But what, then, are we to make of a God who is believed to have promised that he who seeks will find him? If sincere and virtuous unbelief is a reality, it presents a unique challenge to the Christian conception of God as described in the Bible. If trust, and thereby belief, in God and Christ is the eternally decisive matter it is claimed to be, how could a good God hide himself from those who seek him?
Statements of identity are usually analyzed as follows: for any entities x and y, if x and y are really the same thing, then for any property P, P is true of x if and only if P is true of y. Note that, with respect to mind/brain identity, the principle is indifferent to whether mind is inseparable from the brain and whether the mind is correlated to the brain in a law-like fashion.
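Put in standard notation, this is the indiscernibility of identicals (one direction of Leibniz’s Law); the x, y, and P below are just the entities and property mentioned above, and the formula adds nothing beyond the prose:

\[
\forall x\,\forall y\,\bigl(x = y \;\rightarrow\; \forall P\,(Px \leftrightarrow Py)\bigr)
\]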
Ironically, after half a century of creating devices to mimic and extend human operations and purposes, many now think that it is we who are created in the image of computers, not vice versa. On this view, we are basically organic computers, and because no one thinks computers are one part circuits and one part souls, we likewise are neurons without souls. But does the operation and analogous functionality of computers suggest that we are like computers? Are any of the above properties exhibited by computers? Might they be?
The computer analogy is basically a modern version of functionalism. In the philosophy of mind, functionalism is the view that the mind should be defined in terms of causal inputs and outputs. Just as a kidney or a carburetor is defined in terms of its function, so also the mind should be defined strictly by a certain kind of causal input and output. Crudely put, anything that outputs 4 when fed 2+2 is, to that extent, a mind.
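To make the functionalist picture concrete, here is a toy Python sketch of my own (the function names and prompts are invented for illustration): two realizations that are built entirely differently count as functionally "the same" because they share an input-output profile.

```python
# A toy, hypothetical illustration of functionalist individuation: what matters
# on this view is the causal input/output profile, not what the state is made of.

def adder_lookup(prompt: str) -> str:
    """'Answers' arithmetic prompts by table lookup -- no calculation at all."""
    table = {"2+2": "4", "3+5": "8"}
    return table.get(prompt, "?")

def adder_compute(prompt: str) -> str:
    """'Answers' the same prompts by actually computing the sum."""
    try:
        left, right = prompt.split("+")
        return str(int(left) + int(right))
    except ValueError:
        return "?"

# For the prompts both handle, the two realizations are indistinguishable by
# their inputs and outputs -- which, on a crude functionalism, is all that matters.
for prompt in ["2+2", "3+5"]:
    assert adder_lookup(prompt) == adder_compute(prompt)
```

Searle’s objection, below, is precisely that sameness of input-output profile is too weak a standard: a lookup table can share a profile with a calculator, or with a Chinese speaker, while understanding nothing.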
The most famous objection, by far, to the human/computer analogy is John Searle’s Chinese Room Argument (Searle, "Is the Brain's Mind a Computer Program?", Scientific American, January 1990, pp. 26–27).
Consider a language you don’t understand. In my case, I do not understand Chinese. To me, Chinese writing looks like so many meaningless squiggles. Now suppose I am placed in a room containing baskets full of Chinese symbols. Suppose also that I am given a rule book in English for matching Chinese symbols with other Chinese symbols. The rules identify the symbols entirely by their shapes and do not require that I understand any of them. The rules might say such things as “take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two.”
Imagine that people outside the room who understand Chinese hand in small bunches of symbols and that in response I manipulate the symbols according to the rule book and hand back more small bunches of symbols. Now the rule book is the “computer program.” The people who wrote it are the “programmers,” and I am the “computer.” The baskets full of symbols are the “data base,” the small bunches that are handed in to me are “questions” and the bunches I then hand out are “answers.”
Now suppose that the rule book is written in such a way that my “answers” to the “questions” are indistinguishable from those of a native Chinese speaker. For example, the people outside might hand me some symbols that, unknown to me, mean, “What is your favorite color?” and I might after going through the rules give back symbols that, also unknown to me, mean “My favorite color is blue, but I also like green a lot.” I satisfy the Turing test for understanding Chinese. All the same, I am totally ignorant of Chinese. And there is no way I could come to understand Chinese in the system as described, since there is no way that I can learn the meanings of any of the symbols. Like a computer, I manipulate symbols, but I attach no meaning to the symbols.
The point of the thought experiment is this: if I do not understand Chinese solely on the basis of running a computer program for understanding Chinese, then neither does any other digital computer solely on that basis. Digital computers merely manipulate formal symbols according to rules in the program.
What goes for Chinese goes for other forms of cognition as well. Just manipulating symbols is not by itself enough to guarantee cognition, perception, understanding, thinking, and so forth. And since computers, qua computers, are symbol-manipulating devices, merely running the computer program is not enough to guarantee cognition.
This simple argument is decisive against the claims of strong AI.
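The thought experiment is easy to make concrete. Here is a hypothetical Python sketch of my own, not Searle’s (the "squiggle" tokens stand in for Chinese symbols whose meanings the program never touches): the "rule book" is just a lookup table, and the exchange can look conversational from the outside even though nothing inside attaches meaning to anything.

```python
# A hypothetical sketch of the Chinese Room, not Searle's: the "program" is a
# rule book that matches incoming symbols to outgoing symbols purely by shape.

RULE_BOOK = {
    # To the "computer" these are uninterpreted shapes; only the people outside
    # the room attach meanings to them.
    "squiggle-squiggle": "squoggle-squoggle",
    "squoggle-squiggle squoggle": "squiggle squoggle-squoggle",
}

def room(symbols_handed_in: str) -> str:
    """Hand back symbols by lookup alone -- no meanings are ever consulted."""
    return RULE_BOOK.get(symbols_handed_in, "squoggle")

# From the outside the exchange can look like fluent conversation; inside there
# is nothing but formal symbol manipulation, and nothing that understands.
print(room("squiggle-squiggle"))
```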
In your original response to my article, “The Captain of My Soul”, you said: “I can write a program that will discriminate between conflicting desires on the basis of reason, desire, and beliefs, and then let it do so. … The same is true for us. We have an internal deterministic faculty which discriminates between external desires on the basis of beliefs, reasons, etc.” I think this is a mistake. Just because we can make a computer take inputs and return outputs in the same way humans do, it does not mean that computers are doing what we do, unless functionalism is true. If we manage to create a machine that passes the Turing test perfectly, it would not mean that the machine was “thinking” or had “desires” and “beliefs”. There is no reason to think that the flips and switches have any semantic or emotional content. But we know in our own cases that our reasoning and feeling do have such content.
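For concreteness, here is a hypothetical sketch of the kind of program you describe (the desire labels, belief flag, and weights are invented). It "discriminates between conflicting desires on the basis of beliefs and reasons" only in the sense that it compares numbers attached to strings; nothing in it means, wants, or believes anything, which is precisely the disanalogy I am pressing.

```python
# A hypothetical sketch of a program that "chooses" among conflicting desires.
# The labels, the belief flag, and the weights are arbitrary strings and numbers;
# nothing here means, wants, or believes anything.

desires = {"eat dessert": 0.7, "keep the diet": 0.6}
beliefs = {"dessert undermines the diet": True}

def choose(desires: dict, beliefs: dict) -> str:
    """Adjust the weights by a 'belief' flag, then return the highest-weighted label."""
    adjusted = dict(desires)
    if beliefs.get("dessert undermines the diet"):
        adjusted["eat dessert"] -= 0.3  # a rule the program follows, not a reason it grasps
    return max(adjusted, key=adjusted.get)

print(choose(desires, beliefs))  # -> keep the diet
```

Whether such a program does what we do is exactly what is at issue, and answering "yes" presupposes functionalism rather than establishing it.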
Rush Limbaugh just said: “I know these liberals. I know these cockroaches.” Get the paddle.