Contact: michael at supermarinesoftware dot com
Github: https://github.com/lyncmi07
   _____ __  ________       ______ 
  / ___//  |/  / ___/____  / __/ /_
  \__ \/ /|_/ /\__ \/ __ \/ /_/ __/
 ___/ / /  / /___/ / /_/ / __/ /_  
/____/_/  /_//____/\____/_/  \__/  
                                   

Supermarine Software

The Continued Failure to Prove Searle’s Chinese Room Argument to be False

Michael Lynch

November 13, 2017

Abstract. Since 1980, John Searle's Chinese Room Argument on why computers cannot be conscious has generated considerable controversy, leading to numerous debates over whether the argument is valid. This document presents two of what I believe to be the most convincing counter-arguments to the Chinese Room thought experiment. The first is the Systems Reply argument, as named by John Searle himself. This is followed by the Brain Simulation Argument, presented by Lawrence Davis and Ned Block.

Contents

1 Searle's Proposition
2 Systems Reply Argument
 2.1 Original Argument
 2.2 Searle’s Response
 2.3 Further responses
3 Brain Simulation Argument
 3.1 Original Argument
 3.2 Searle’s Response
4 “Consciousness is entirely caused by neurobiological processes”

1 Searle's Proposition

In presenting counter-arguments to the Chinese Room Argument, it is important to understand what the argument actually claims. First presented by John Searle in 1980, the Chinese Room Argument is an attempt by Searle to prove that even if a computer passes the Turing Test, it still fundamentally has no conscious thought.

Searle's argument is built on three axioms, as stated in his 1990 article in Scientific American [1]:

 Axiom 1: Computer programs are formal (syntactic).
 Axiom 2: Human minds have mental contents (semantics).
 Axiom 3: Syntax by itself is neither constitutive of nor sufficient for semantics.

From these axioms Searle draws his conclusion: programs are neither constitutive of nor sufficient for minds.

One argument against John Searle's conclusion that a computer cannot have conscious thought, which will be explored in detail next, is the Systems Reply argument. This states that although Searle himself may not understand Chinese, the system as a whole (Searle, the book of rules, paper and pens, etc.) does.

The second argument, which will be explored later in this document, is the Brain Simulation Argument. This argument states that if the program were a full simulation of a Chinese speaker's brain then, due to its functional equivalence to a real Chinese speaker's brain, it must be conscious. Finally, I will elaborate on a key element of John Searle's argument that is largely missed not only by these two arguments, but by nearly all arguments made against the Chinese Room problem.

2 Systems Reply Argument

2.1 Original Argument

The Systems Reply response was first presented by John Searle himself in his original 1980 paper, Minds, Brains, and Programs [2]. It states that it does not matter whether the man inside the Chinese Room understands Chinese or not: the man is just one part of the entire system. Searle's argument treats the man in the room as the computer in the Chinese Room, so that because the man does not understand Chinese, a computer doing the same tasks as the man would not understand Chinese either.

The Systems Reply argues that Searle is wrong to assume that the man in the room represents the computer in the Chinese Room; the man instead represents the central processing unit: an important part of a computer, but a part nonetheless.
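
To make this division of labour concrete, the rule book can be pictured as nothing more than mechanical symbol matching. The sketch below is purely illustrative (RULE_BOOK, chinese_room_step, and the rules themselves are invented placeholders, not a real conversation program), but it shows the sense in which the man, like a CPU, only matches shapes and copies out responses without ever consulting their meaning:

    # A toy sketch of the Chinese Room's rule book: pure symbol manipulation.
    # The rules below are invented placeholders for illustration only.
    RULE_BOOK = {
        "你好吗": "我很好",          # if you see these shapes, emit these shapes
        "你叫什么名字": "我叫乔",    # another hypothetical shape-matching rule
    }

    def chinese_room_step(input_symbols: str) -> str:
        """Act as the man (the CPU): find the rule whose shapes match the
        input and copy out the prescribed reply. Meaning is never consulted."""
        return RULE_BOOK.get(input_symbols, "请再说一遍")  # fallback, equally uninterpreted

    print(chinese_room_step("你好吗"))  # prints 我很好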

B. J. Copeland sums up the Systems Reply argument by setting Searle's inference alongside a second, formally parallel one, to show where Searle's argument becomes invalid [3]. In outline, the first inference (Premiss 1 and Conclusion 1) runs from the premiss that Joe, the name Copeland gives to the man in the room, does not understand Chinese, to the conclusion that the system of which Joe is a part does not understand Chinese. The second inference (Premiss 2 and Conclusion 2) has exactly the same part-to-whole form, but its conclusion is obviously absurd.

Copeland argues that in order to hold that Premiss 1 entails Conclusion 1, you would also have to hold that Premiss 2 entails Conclusion 2, even though the second inference is plainly invalid; the first must therefore be invalid too.

2.2 Searle’s Response

Searle presents his first refutation of this claim in his original paper, in the form of an expansion of the thought experiment. Searle asks us to imagine that the man memorises every aspect of the Chinese Room: the rule books, paper, pens, and states of the program. The man now incorporates the system in its entirety. Yet although the man has memorised the entire system, he still, according to Searle, has no understanding of Chinese, only an understanding of the method of symbol manipulation required to create a coherent Chinese reply. As the man does not understand Chinese, and the system exists entirely in the man, nothing understands Chinese.

2.3 Further responses

This argument by Searle is seen by many as an inadequate refutation of the Systems Reply response. B. J. Copeland discusses his problems with Searle's argument in his 1993 paper, The Curious Case of the Chinese Gym [3].

Copeland asks us to imagine a different scenario in which a group of scientists strap electrodes to a patient's brain. Using these electrodes, the scientists can directly input a neurological program into the brain and read off the program's output. The program they decide to run on the patient's brain is a logic theorem prover. The program runs successfully, and therefore they have shown that part of the patient's brain understands logic theorems. But this by no means shows that the patient themselves understands how to prove the same logic theorems. The patient could very well be presented with those theorems and be completely overwhelmed by the task.

I believe this argument falls somewhat short of truly refuting Searle's response. Copeland has not actually shown that the part of the patient's brain that runs the logic theorem prover actually understands logic theorems, only that it is a symbol-manipulator capable of running a program that proves logic theorems (it has syntax, not semantics).

There is a much simpler way to refute John Searle's claim that, because Joe does not understand Chinese, nothing does. It is that Joe the Body does in fact understand Chinese, even if Joe the Mind does not. Searle has mistaken Joe the Mind for the entirety of Joe. This is not the case: Joe the Mind is the conscious part of Joe's brain, not Joe's brain as a whole. This becomes self-evident when you notice that your conscious thought does not have immediate access to all the memories held in your brain (you may know the name of the actor in the last film you watched, but you are not holding that memory in your conscious thought right now). Joe the Mind still only represents the central processing unit when Joe is running the Chinese program in his head.

Consider that Joe suffers from multiple personality disorder, sharing his body with a personality called Bob who has learnt Chinese. Although it would be clear that the Joe personality has no understanding of Chinese, it would be much more difficult to argue that Joe's body as a whole, which holds the conscious entity of Bob, did not understand Chinese.

3 Brain Simulation Argument

3.1 Original Argument

As with the Systems Reply response, the Brain Simulation Argument was also presented in John Searle's original paper as a reply to his Chinese Room Argument. The Brain Simulation Argument asks us to imagine that the program being performed by the man in the Chinese Room is actually a full simulation of the mind of a native Chinese speaker. The program takes Chinese characters as input (presumably through some sort of simulation of the visual cortex), simulates on the paper provided in the room the neuron firings that would occur in a real Chinese speaker's brain when exposed to those symbols, and then outputs Chinese characters as an answer to be placed in the output slot of the Chinese Room.
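
As a rough sketch of the pipeline this argument describes, consider the toy program below. It is emphatically not a brain model: the 32 threshold units, random weights, and byte-level encoding and decoding are all invented stand-ins for the billions of neurons a real simulation would track. It only illustrates the shape of the computation the man would be carrying out on paper: encode the input symbols, step the simulated firings, and read an answer off the final state:

    # A toy stand-in for the brain-simulation pipeline described above.
    # Every detail (network size, weights, encoding, replies) is a placeholder.
    import random

    random.seed(0)
    N = 32                                        # toy "brain" of 32 threshold units
    weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

    def encode(text: str) -> list[float]:
        """Crude stand-in for the simulated visual cortex: bytes -> activations."""
        data = text.encode("utf-8")
        return [data[i % len(data)] / 255 for i in range(N)]

    def step(act: list[float]) -> list[float]:
        """One round of simulated neuron firings (the man's paper bookkeeping)."""
        return [1.0 if sum(w * a for w, a in zip(row, act)) > 0 else 0.0
                for row in weights]

    def decode(act: list[float]) -> str:
        """Crude stand-in for motor output: activity pattern -> reply symbols."""
        replies = ["我很好", "我叫乔", "请再说一遍"]
        return replies[int(sum(act)) % len(replies)]

    def simulate(text: str, rounds: int = 3) -> str:
        act = encode(text)
        for _ in range(rounds):
            act = step(act)
        return decode(act)

    print(simulate("你好吗"))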

Given that this is a full simulation of the brain of a native Chinese speaker, how could it be argued that it does not understand Chinese? By arguing that this simulation of a Chinese brain does not understand Chinese, you would be arguing, by extension, that a real Chinese brain does not understand Chinese either.

3.2 Searle’s Response

The refutation presented in Minds, Brains, and Programs [2] by Searle is a reconstruction of the Chinese Room to feature a complex set of water pipes and valves operated by the man, rather than the man operating a state machine using pencil and paper. The valves and water pipes in the room correspond to the neurons present in a Chinese brain. The task of the man is to follow a program, written in English, to turn valves on and off in the correct manner so as to simulate the neurons firing in the Chinese brain when it is exposed to the Chinese input symbols.

There is, according to Searle, no component present in this system that understands Chinese. The man operating the mechanism still does not understand Chinese, and the water pipes (being just inanimate objects with water in them) must not understand Chinese either. The valves in this system would have simulated the firing of neurons in the brain, but would not have created the causal properties or intentionality of the brain.

Searle furthers this argument with a pre-emptive argument against the belief that the system as a whole (man and water pipes) is the conscious being that understands Chinese. His argument against this point follows the same logic as his response to the Systems Reply (the man memorises the system).

As a somewhat unrelated note, I find it a little unusual that Searle feels the need to reframe his Chinese Room thought experiment into this new system of water pipes and valves when countering the Brain Simulation argument. The simulation Searle describes, created by the water pipes and valves, can just as easily be achieved using the original room's paper and pencils. Searle never explains how substituting the original room with water pipes is necessary to make his point regarding the system's lack of causal properties.
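
The observation can be made precise: the program's behaviour is fixed by its transition rule, not by the medium that stores its state. In the hypothetical sketch below (the rule and both substrate classes are invented for illustration), the same rule is run once against pencil marks and once against valve positions, and necessarily produces the same output either way:

    # Sketch of the point above: the computation is substrate-independent.
    # The transition rule and both substrates are arbitrary placeholders.

    def transition(state: int, symbol: int) -> int:
        return (state * 31 + symbol) % 997    # placeholder rule, not a real program

    class PaperAndPencil:
        """State stored as tally marks on paper."""
        def __init__(self):
            self.marks = 0
        def write(self, value):
            self.marks = value
        def read(self):
            return self.marks

    class WaterValves:
        """State stored as a pattern of open valves."""
        def __init__(self):
            self.open_valves = 0
        def write(self, value):
            self.open_valves = value
        def read(self):
            return self.open_valves

    def run(substrate, symbols):
        for s in symbols:
            substrate.write(transition(substrate.read(), s))
        return substrate.read()

    symbols = [3, 1, 4, 1, 5]
    assert run(PaperAndPencil(), symbols) == run(WaterValves(), symbols)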

Ray Kurzweil tackles this response by arguing that it does not really matter that the room is a simulation of a Chinese brain. Although the simulation is not an actual Chinese brain, it is functionally equivalent to one. As a functional equivalent, according to Kurzweil, it must be argued that the simulation has “functionally equivalent re-creations of [the brain's] causal powers”. By having recreated those causal powers, the simulation must also have created consciousness, if indeed consciousness is a result of causal powers as Searle suggests.

Searle countered this argument in a 1999 piece called I Married a Computer: An Exchange [4]. “My pocket calculator is functionally equivalent to (indeed better than) me in producing answers to arithmetic problems,” writes Searle. He argues that it does not follow from this fact that the calculator is “functionally equivalent to [him] in producing the conscious thought processes that go with solving arithmetic problems”.

I would argue that Searle is mistaken in his claim that his pocket calculator is functionally equivalent to him in solving arithmetic problems, and he in fact reveals the mistake in the very same sentence. Functional equivalence requires that two functions (in this case Searle's brain and his calculator), when given exactly the same inputs, produce exactly the same outputs. As Searle points out, however, his calculator is in fact better than him at arithmetic problems; it presumably makes mistakes far more rarely than Searle does, and therefore it cannot be functionally equivalent to him by definition. In order to be functionally equivalent to Searle's conscious arithmetic ability, it would have to make exactly the same mistakes that Searle makes when doing arithmetic (likely through simulating Searle's brain).
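
Read this way, functional equivalence is an extensional test: two functions are equivalent only if they agree on every input. The sketch below illustrates the point; the error-prone searle function is an invented caricature of human arithmetic, but any model that makes even one mistake the calculator does not make fails the test:

    # Functional equivalence as an extensional test: agree on every input.
    # The "searle" model below is a hypothetical, invented caricature.

    def calculator(a: int, b: int) -> int:
        return a + b                       # exact arithmetic

    def searle(a: int, b: int) -> int:
        result = a + b
        # hypothetical human slip: occasionally off by ten
        return result + 10 if a > 0 and a % 97 == 0 else result

    def functionally_equivalent(f, g, inputs) -> bool:
        return all(f(a, b) == g(a, b) for a, b in inputs)

    inputs = [(a, b) for a in range(200) for b in range(5)]
    print(functionally_equivalent(calculator, searle, inputs))  # False: not equivalent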

4 “Consciousness is entirely caused by neurobiological processes”

The Systems Reply and Brain Simulation arguments both seem to miss an important aspect of the argument that John Searle makes with his Chinese Room. Searle states that “consciousness is entirely caused by neurobiological processes” in the brain [5]. It is important to explore exactly what he means by this.

Almost nothing is really known about what consciousness is or what causes it. The only concrete information we have is that it is real, and that the human brain possesses the power to create it. This is where the arguments against Searle come apart: in order for these arguments to be valid, an assumption must be treated as if it were fact.

The assumption made by these arguments concerns the method by which consciousness is created. Take the Systems Reply: it is the view of those who hold this argument that the entire Chinese Room consciously understands Chinese. The interactions of the unconscious entities in the room (rule book, pencils, paper, etc.) have become so complex that they have created consciousness simply by virtue of being complex. As Searle points out, however, this is simply an assumption. We do not know that consciousness is caused by complex interactions. The only entity that we know for sure causes consciousness is the human brain or, more precisely, some neurobiological process within the brain.

Rather than consciousness, let us consider another biological process in the human body that is much better understood. The pancreas is an organ that creates the hormone insulin, which is vital for regulating the sugar levels of our blood. Without insulin, the body can develop serious health problems within a very short period of time. Diabetes patients, whose pancreases fail to produce insulin properly, must use insulin injections to regulate the sugar levels within their blood.

Now let us consider the man in the Chinese Room. Rather than simulating a Chinese speaker, or even the brain of a Chinese speaker, we tell him to simulate the atoms in insulin molecules using a book of physics equations along with pencils and paper. Unsurprisingly, a diabetes patient would not feel their treatment was sufficient if they were asked to inject the man's simulation of insulin instead of the real thing. It would not matter how good the simulation was; it would not be a valid substitute. This thought experiment shows that there are some things in the real world that cannot be recreated in a computer simply by simulating their properties. This is an important fact to keep in mind when understanding the basis of Searle's argument.

We have no idea what causes consciousness in the brain, or indeed what consciousness actually is. It is not unreasonable to consider that consciousness is in fact some sort of chemical, made up of some as-yet-unknown observer particle, which the brain produces from a consciousness gland. If this were the case, it would not matter how well you simulated the consciousness chemical in a computer. Simulated consciousness could no more make a computer conscious than simulated insulin could control a diabetes patient's blood sugar levels.

Herein lies Searle's fundamental argument with the Chinese Room. Given the facts we know about consciousness (it is real and the human brain can create it), the only element in the room that we know for certain is conscious is the man. We know that the man does not understand Chinese; therefore we cannot know for certain that there is a conscious entity in the room that understands Chinese. In order to claim that any other element within the Chinese Room consciously understands Chinese, we would need proof of what consciousness is and what causes it.

John Searle's argument seems to generate a great deal of frustration from his critics, and that frustration is certainly warranted. Searle appears adamant that his Chinese Room argument unequivocally proves that there is nothing in the room that consciously understands Chinese. In making this claim, Searle is doing exactly what he accuses his critics of doing, only in the opposite direction. The Systems Reply and Brain Simulation arguments rely on the assumption that complex interactions of unconscious entities cause consciousness; Searle's argument relies on the assumption that they do not.

In truth, with minor amendment, Searle's argument is valid. The Chinese Room argument proves that we cannot know for sure that a consciousness which understands Chinese exists in the room; it does not, however, prove that there isn't one. Until more information can be found on what consciousness is and what creates it, we cannot decide either way whether Searle or his critics are correct.

 

It is clear from the controversy it has created that John Searle's paper Minds, Brains, and Programs presents a very compelling argument against the view that consciousness is possible simply by running a program on a computer. Even with countless arguments against Searle, such as the Systems Reply and Brain Simulation arguments, it cannot be said that this dilemma is even close to being solved. One thing remains clear: a great deal more research must be undertaken on what consciousness is and what causes it before anyone can start deciding which side of this argument is the correct one. The Chinese Room argument has captivated our attention for nearly 40 years since it was first introduced in 1980, and the dilemma will likely dominate the conversation on consciousness for many decades to come.

It is important to note, however, that regardless of whether Searle is proved right or wrong, the idea of consciousness holds very little value in the field of artificial intelligence. A computer that has real consciousness would be of no greater use as a conversation partner than a computer that merely simulated it. We will only ever experience our own consciousness; to worry about whether another entity is conscious too is an exercise in futility. In any case, if the field of artificial intelligence ever enables us even to create simulations of consciousness, it will truly be a most remarkable achievement, eclipsing any discovery or invention that came before it.

References

[1]   J. R. Searle, “Is the brain’s mind a computer program?” Scientific American, vol. 262, no. 1, pp. 25–31, 1990.

[2]   J. R. Searle, “Minds, brains, and programs,” Behavioral and Brain Sciences, vol. 3, no. 3, pp. 417–457, 1980.

[3]   B. J. Copeland, “The curious case of the Chinese gym,” Synthese, vol. 95, no. 2, pp. 173–186, 1993.

[4]   J. R. Searle, “I married a computer: An exchange,” The New York Review of Books, 1999.

[5]   J. R. Searle, “Consciousness,” Annual Review of Neuroscience, vol. 23, no. 1, pp. 557–578, 2000, PMID: 10845075. [Online]. Available: https://doi.org/10.1146/annurev.neuro.23.1.557