== Replies == Replies to Searle's argument may be classified according to what they claim to show:{{efn|David Cole combines the second and third categories, as well as the fourth and fifth.{{sfn|Cole|2004|pp=5–6}}}} * Those which identify who speaks Chinese * Those which demonstrate how meaningless symbols can become meaningful * Those which suggest that the Chinese room should be redesigned in some way * Those which contend that Searle's argument is misleading * Those which argue that the argument makes false assumptions about subjective conscious experience and therefore proves nothing Some of the arguments (robot and brain simulation, for example) fall into multiple categories. ===Systems and virtual mind replies: finding the mind=== These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the mind that does? These replies address the key [[ontological]] issues of [[mind/body problem|mind versus body]] and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply". ==== System reply ==== The basic version of the system reply argues that it is the "whole system" that understands Chinese.<ref>{{Harvnb|Searle|1980|pp=5–6}}; {{Harvnb|Cole|2004|pp=6–7}}; {{Harvnb|Hauser|2006|pp=2–3}}; {{Harvnb|Dennett|1991|p=439}}; {{Harvnb|Fearn|2007|p=44}}; {{Harvnb|Crevier|1993|p=269}}.</ref>{{efn|Versions of the system reply are held by [[Ned Block]], [[Jack Copeland]], [[Daniel Dennett]], [[Jerry Fodor]], [[John Haugeland]], [[Ray Kurzweil]], and [[Georges Rey]], among others.{{sfn|Cole|2004|p=6}}}} While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part" Searle explains.{{sfn|Searle|1980|p=6}} Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper"{{sfn|Searle|1980|p=6}} without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology;"{{sfn|Searle|1980|p=6}} In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain. Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself. 
Searle argues that if the man does not understand Chinese then the system does not understand Chinese either because now "the system" and "the man" both describe exactly the same object.{{sfn|Searle|1980|p=6}} Critics of Searle's response argue that the program has allowed the man to have two minds in one head.{{who|date=March 2011}} If we assume a "mind" is a form of information processing, then the [[theory of computation]] can account for two computations occurring at once, namely (1) the computation for [[Universal Turing machine|universal programmability]] (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program).{{sfn|Yee|1993|loc=p. 44, footnote 2}} The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's.{{sfn|Yee|1993|pp=42–47}} However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption. More sophisticated versions of the systems reply try to identify more precisely what "the system" is and they differ in exactly how they describe it. According to these replies,{{who|date=March 2011}} the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "[[strong emergence|emergent]] property", or "a virtual mind". ==== Virtual mind reply ==== [[Marvin Minsky]] suggested a version of the system reply known as the "virtual mind reply".{{efn|The virtual mind reply is held by Minsky, {{sfn|Minsky|1980|p=440}}{{sfn|Cole|2004|p=7}} [[Tim Maudlin]], [[David Chalmers]] and David Cole.{{sfn|Cole|2004|pp=7–9}}}} The term "[[virtual artifact|virtual]]" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky argues that a computer may contain a "mind" that is virtual in the same sense as [[virtual machine]]s, [[virtual communities]] and [[virtual reality]]. To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds," thus the "system" cannot be the "mind".{{sfn|Cole|2004|p=8}} Searle responds that such a mind is at best a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."{{sfn|Searle|1980|p=12}} Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen.
We don't complain that it isn't really a calculator, because the physical attributes of the device do not matter."{{sfn|Fearn|2007|p=47}} The question is, is the human mind like the pocket calculator, essentially composed of information, where a perfect simulation of the thing just <em>is</em> the thing? Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "[[synthetic intelligence]]" is more appropriate than the common description of such intelligences as "artificial." These replies provide an explanation of exactly who it is that understands Chinese. If there is something ''besides'' the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.{{efn|David Cole writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."{{sfn|Cole|2004|p=21}}}} These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider Strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes "the systems reply simply begs the question by insisting that the system must understand Chinese"{{sfn|Searle|1980|p=6}} and thus is dodging the question or hopelessly circular. ===Robot and semantics replies: finding the meaning=== As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about [[intentionality]], [[symbol grounding]] and [[syntax]] vs. [[semantic]]s. ==== Robot reply ==== Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. 
This would allow a "[[causal]] connection" between the symbols and things they represent.<ref>{{Harvnb|Searle|1980|p=7}}; {{Harvnb|Cole|2004|pp=9–11}}; {{Harvnb|Hauser|2006|p=3}}; {{Harvnb|Fearn|2007|p=44}}.</ref>{{efn|This position is held by [[Margaret Boden]], [[Tim Crane]], [[Daniel Dennett]], [[Jerry Fodor]], [[Stevan Harnad]], [[Hans Moravec]], and [[Georges Rey]], among others.{{sfn|Cole|2004|p=9}}}} [[Hans Moravec]] comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."<ref>Quoted in {{Harvnb|Crevier|1993|p=272}}</ref>{{efn| David Cole calls this the "externalist" account of meaning.{{sfn|Cole|2004|p=18}}}} Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't <em>see</em> what comes into the robot's eyes."{{sfn|Searle|1980|p=7}} ==== Derived meaning ==== Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the [[knowledge base]] in his file cabinet. The symbols Searle manipulates are already meaningful, they are just not meaningful to him.<ref>{{Harvnb|Hauser|2006|p=11}}; {{Harvnb|Cole|2004|p=19}}.</ref>{{efn|The derived meaning reply is associated with [[Daniel Dennett]] and others.}} Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.{{efn|Searle distinguishes between "intrinsic" intentionality and "derived" intentionality. "Intrinsic" intentionality is the kind that involves "conscious understanding" like you would have in a human mind. [[Daniel Dennett]] doesn't agree that there is a distinction. David Cole writes "derived intentionality is all there is, according to Dennett."{{sfn|Cole|2004|p=19}}}} ====Contextualist reply==== Some have argued that the meanings of the symbols would come from a vast "background" of [[commonsense knowledge]] encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.{{sfn|Cole|2004|p=18}}{{efn|David Cole describes this as the "internalist" approach to meaning.{{sfn|Cole|2004|p=18}} Proponents of this position include [[Roger Schank]], [[Doug Lenat]], [[Marvin Minsky]] and (with reservations) [[Daniel Dennett]], who writes "The fact is that any program [that passed a Turing test] would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge." {{sfn|Dennett|1991|p=438}}}} Searle agrees that this background exists, but he does not agree that it can be built into programs. 
[[Hubert Dreyfus]] has also criticized the idea that the "background" can be represented symbolically.{{sfn|Dreyfus|1979|loc="The [[epistemological]] assumption"}} To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."{{sfn|Searle|1984}}{{efn|Searle also writes "Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or [[Interpretation (logic)|interpretation]], or semantics) except insofar as someone outside the system gives it to them."{{sfn|Motzkin|Searle|1989|p=45}}}} However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that [[roboticist]]s can supply. === Brain simulation and connectionist replies: redesigning the room === These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.) ==== Brain simulator reply ==== Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.<ref>{{Harvnb|Searle|1980|pp=7–8}}; {{Harvnb|Cole|2004|pp=12–13}}; {{Harvnb|Hauser|2006|pp=3–4}}; {{Harvnb|Churchland|Churchland|1990}}.</ref>{{efn|The brain simulation reply has been made by [[Paul Churchland]], [[Patricia Churchland]] and [[Ray Kurzweil]].{{sfn|Cole|2004|p=12}}}} This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain. Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."{{sfn|Searle|1980|p=13}} Moreover, he argues: {{blockquote|[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. 
But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.{{sfn|Searle|1980|p={{Page needed|date=January 2019}}}}}} =====China brain===== What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between [[axon]]s and [[dendrite]]s? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.<ref>{{Harvnb|Cole|2004|p=4}}; {{Harvnb|Hauser|2006|p=11}}.</ref>{{efn|Early versions of this argument were put forward in 1974 by [[Lawrence Davis (scientist)|Lawrence Davis]] and in 1978 by [[Ned Block]]. Block's version used walkie talkies and was called the "Chinese Gym". Paul and Patricia Churchland described this scenario as well.{{sfn|Churchland|Churchland|1990}}}} It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious. =====Brain replacement scenario===== In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.<ref>{{Harvnb|Cole|2004|p=20}}; {{Harvnb|Moravec|1988}}; {{Harvnb|Kurzweil|2005|p=262}}; {{Harvnb|Crevier|1993|pp=271 and 279}}.</ref>{{efn|An early version of the brain replacement scenario was put forward by [[Clark Glymour]] in the mid-70s and was touched on by [[Zenon Pylyshyn]] in 1980. [[Hans Moravec]] presented a vivid version of it,{{sfn|Moravec|1988}} and it is now associated with [[Ray Kurzweil]]'s version of [[transhumanism]].}}{{efn|Searle does not consider the brain replacement scenario as an argument against the CRA, however in another context, Searle examines several possible solutions, including the possibility that "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely outside of your control, 'I see a red object in front of me.' [...] [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."{{sfn|Searle|1992}}}} (See [[Ship of Theseus]] for a similar thought experiment.) 
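The brain replacement scenario can be put in computational terms. The sketch below is purely illustrative: the class and function names are invented for this example rather than taken from the literature, and the point is only that when a simulated unit reproduces a biological unit's input-output behavior, replacing the units one at a time never changes the behavior an outside observer can measure.
<syntaxhighlight lang="python">
# A purely illustrative sketch of the brain replacement scenario. The class
# and function names are invented for this example; the only point it makes
# is that if a "simulated" unit reproduces a "biological" unit's input-output
# behavior, swapping them one at a time never changes the network's
# externally observable responses.
import random


class BiologicalNeuron:
    """Stands in for a real neuron: fires when its input exceeds a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def fire(self, weighted_input):
        return weighted_input > self.threshold


class SimulatedNeuron:
    """The 'tiny computer' of the scenario, reproducing the same behavior."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def fire(self, weighted_input):
        return weighted_input > self.threshold


def observable_behavior(neurons, stimulus):
    """What an outside observer can measure: which units fire for a stimulus."""
    return [neuron.fire(signal) for neuron, signal in zip(neurons, stimulus)]


brain = [BiologicalNeuron() for _ in range(10)]
stimulus = [random.random() for _ in range(10)]
baseline = observable_behavior(brain, stimulus)

# Replace one neuron at a time. Behavior is identical after every step, so
# critics ask at which replacement conscious awareness could have vanished.
for i in range(len(brain)):
    brain[i] = SimulatedNeuron()
    assert observable_behavior(brain, stimulus) == baseline
</syntaxhighlight>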
====Connectionist replies==== :Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.{{efn|The connectionist reply is made by [[Andy Clark]] and [[Ray Kurzweil]],{{sfn|Cole|2004|pp=12 & 17}} as well as [[Paul Churchland|Paul]] and [[Patricia Churchland]].{{sfn|Hauser|2006|p=7}}}} Modern [[deep learning]] is massively parallel and has successfully displayed intelligent behavior in many domains. [[Nils John Nilsson|Nils Nilsson]] argues that modern AI is using digitized "dynamic signals" rather than symbols of the kind used by AI in 1980.{{sfn|Nilsson|2007}} Here it is the [[sample (signal)|sampled]] signal which would have the semantics, not the individual numbers manipulated by the program. This is a different kind of machine than the one that Searle visualized. ====Combination reply==== :This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.<ref>{{Harvnb|Searle|1980|pp=8–9}}; {{Harvnb|Hauser|2006|p=11}}.</ref> ====Many mansions / wait till next year reply==== :Better technology in the future will allow computers to understand.{{sfn|Searle|1980|p=8}}{{efn|{{harvtxt|Searle|2009}} uses the name "Wait 'Til Next Year Reply".}} Searle agrees that this is possible, but considers this point irrelevant. Searle agrees that there may be other hardware besides brains that have conscious understanding. These arguments (and the robot or common-sense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it. In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned.{{efn|Searle writes that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation." {{sfn|Searle|1980|p=7}} Stevan Harnad makes the same point, writing: "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the 'right' kind of implementation, whereas Searle's is the 'wrong' kind."{{sfn|Harnad|2001|p=14}}}} The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument. 
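The claim that the room has all the elements of a Turing complete machine can be illustrated with a minimal sketch of the step-by-step rule following the argument describes. The rule table and names below are invented placeholders (no real Chinese conversation is modeled); each rule has the form "if the user writes ''S'', reply with ''P'' and goto X" that Ned Block's lookup-table argument, discussed below, makes explicit.
<syntaxhighlight lang="python">
# A purely illustrative sketch of the room as a step-by-step rule follower.
# The rule table and its contents are invented placeholders; no real Chinese
# conversation is being modeled. Each rule matches the incoming symbols,
# emits a reply, and moves to the next state, with no understanding anywhere
# in the process.

# state -> {incoming symbols: (reply symbols, next state)}
RULE_BOOK = {
    "X1": {"你好": ("你好", "X2")},
    "X2": {"你会说中文吗": ("会", "X1")},
}


def operator_step(state, symbols):
    """The man in the room: look up the rule for the symbols and apply it."""
    reply, next_state = RULE_BOOK[state][symbols]
    return reply, next_state


state = "X1"
for incoming in ["你好", "你会说中文吗"]:
    reply, state = operator_step(state, incoming)
    print(reply)  # replies emitted without the operator attaching any meaning
</syntaxhighlight>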
The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."{{sfn|Searle|1980|p=8}} If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle. Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section). In fact, the room can just as easily be redesigned to weaken our intuitions. [[Ned Block]]'s [[Blockhead argument]]{{sfn|Block|1981}} suggests that the program could, in theory, be rewritten into a simple [[lookup table]] of [[Production system (computer science)|rules]] of the form "if the user writes ''S'', reply with ''P'' and goto X". At least in principle, any program can be rewritten (or "[[refactored]]") into this form, even a brain simulation.{{efn|That is, any program running on a machine with a finite amount of memory.}} In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a [[memory address]]—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be overly specific. Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."{{sfn|Searle|1980|p=3}} === Speed and complexity: appeals to intuition === The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese-speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires. Several critics believe that Searle's argument relies entirely on intuitions.
Block writes "Searle's argument depends for its force on intuitions that certain entities do not think."<ref>Quoted in {{Harvnb|Cole|2004|p=13}}.</ref> [[Daniel Dennett]] describes the Chinese room argument as a misleading "[[intuition pump]]"{{sfn|Dennett|1991|pp=437–440}} and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the obvious conclusion from it."{{sfn|Dennett|1991|pp=437–440}} Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as [[Daniel Dennett]] explains.{{sfn|Dennett|1991|p=438}} ==== Speed and complexity replies ==== Many of these critiques emphasize speed and complexity of the human brain,{{efn|Speed and complexity replies are made by [[Daniel Dennett]], [[Tim Maudlin]], [[David Chalmers]], [[Steven Pinker]], [[Paul Churchland]], [[Patricia Churchland]] and others.{{sfn|Cole|2004|p=14}} Daniel Dennett points out the complexity of world knowledge.{{sfn|Dennett|1991|p=438}}}} which processes information at 100 billion operations per second (by some estimates).{{sfn|Crevier|1993|p=269}} Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions.<ref>{{Harvnb|Cole|2004|pp=14–15}}; {{Harvnb|Crevier|1993|pp=269–270}}; {{Harvnb|Pinker|1997|p=95}}.</ref> This brings the clarity of Searle's intuition into doubt. An especially vivid version of the speed and complexity reply is from [[Paul Churchland|Paul]] and [[Patricia Churchland]]. They propose this analogous thought experiment: "Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to [[James Clerk Maxwell|Maxwell]]'s theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!"{{sfn|Churchland|Churchland|1990}} Churchland's point is that the problem is that he would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.<ref>{{Harvnb|Churchland|Churchland|1990}}; {{Harvnb|Cole|2004|p=12}}; {{Harvnb|Crevier|1993|p=270}}; {{Harvnb|Fearn|2007|pp=45–46}}; {{Harvnb|Pinker|1997|p=94}}.</ref> [[Stevan Harnad]] is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a [[phase transition]] into the mental. 
It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"{{sfn|Harnad|2001|p=7}}{{efn|Critics of the "phase transition" form of this argument include Stevan Harnad, [[Tim Maudlin]], [[Daniel Dennett]] and David Cole.{{sfn|Cole|2004|p=14}} This "phase transition" idea is a version of [[strong emergentism]] (what Dennett derides as "Woo woo West Coast emergence"{{sfn|Crevier|1993|p=275}}). Harnad accuses Churchland and [[Patricia Churchland]] of espousing strong emergentism. Ray Kurzweil also holds a form of strong emergentism.{{sfn|Kurzweil|2005}}}} Searle argues that his critics are also relying on intuitions; however, his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology".{{sfn|Searle|1980|p=6}} The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness. === Other minds and zombies: meaninglessness ===<!-- Linked to in a footnote above --> Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental."{{sfn|Searle|1980|p=10}} The replies below question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), the epiphenomena replies question whether we can make any argument at all about something like consciousness which cannot, by definition, be detected by any experiment, and the eliminative materialist reply argues that Searle's own personal consciousness does not "exist" in the sense that Searle thinks it does. ==== Other minds reply ==== The "Other Minds Reply" points out that Searle's argument is a version of the [[problem of other minds]], applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.<ref>{{Harvnb|Searle|1980|p=9}}; {{Harvnb|Cole|2004|p=13}}; {{Harvnb|Hauser|2006|pp=4–5}}; {{Harvnb|Nilsson|1984}}.</ref>{{efn|The "other minds" reply has been offered by Dennett, Kurzweil and [[Hans Moravec]], among others.{{sfn|Cole|2004|pp=12–13}}}} [[Nils Nilsson (researcher)|Nils Nilsson]] writes "If a program behaves <em>as if</em> it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving <em>as if</em> he were thinking deeply about these matters.
But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought."{{sfn|Nilsson|1984}} Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and makes the other minds reply.{{sfn|Turing|1950|pp=11–12}} He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks."{{sfn|Turing|1950|p=11}} The [[Turing test]] simply extends this "polite convention" to machines. He does not intend to solve the problem of other minds (for machines or people) and he does not think we need to.{{efn|One of Turing's motivations for devising the [[Turing test]] is to avoid precisely the kind of philosophical problems that Searle is interested in. He writes "I do not wish to give the impression that I think there is no mystery ... [but] I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper."{{sfn|Turing|1950|p=12}}}} ==== Replies considering that Searle's "consciousness" is undetectable ==== If we accept Searle's description of intentionality, consciousness, and the mind, we are forced to accept that consciousness is [[epiphenomenal]]: that it "casts no shadow" i.e. is undetectable in the outside world. Searle's "causal properties" cannot be detected by anyone outside the mind, otherwise the Chinese Room could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. Thus, Searle's "causal properties" and consciousness itself is undetectable, and anything that cannot be detected either does not exist or does not matter. [[Mike Alder]] calls this the "Newton's Flaming Laser Sword Reply". He argues that the entire argument is frivolous, because it is non-[[verificationist]]: not only is the distinction between <em>simulating</em> a mind and <em>having</em> a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.{{sfn|Alder|2004}} Daniel Dennett provides this illustration: suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. This is a [[philosophical zombie]], as formulated in the [[philosophy of mind]]. This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.<ref>{{Harvnb|Cole|2004|p=22}}; {{Harvnb|Crevier|1993|p=271}}; {{Harvnb|Harnad|2005|p=4}}.</ref> ==== Eliminative materialist reply ==== Several philosophers argue that consciousness, as Searle describes it, does not exist. 
[[Daniel Dennett]] describes consciousness as a "[[user illusion]]".{{sfn|Dennett|1991|loc={{page needed|date=February 2011}}}} This position is sometimes referred to as [[eliminative materialism]]: the view that consciousness is not a concept that can "enjoy reduction" to a strictly mechanical description, but rather is a concept that will be simply ''eliminated'' once the way the ''material'' brain works is fully understood, in just the same way as the concept of a [[Demon (thought experiment)|demon]] has already been eliminated from science rather than enjoying reduction to a strictly mechanical description. Other mental properties, such as original intentionality (also called "meaning", "content", and "semantic character"), are also commonly regarded as special properties related to beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition then the assumption of the Chinese room argument that "minds have mental contents ([[semantics]])" must be rejected.{{sfn|Ramsey|2022}} Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers."{{sfn|Searle|1980|p=7}} He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point. === Other replies === [[Margaret Boden]] argued in her paper "Escaping from the Chinese Room" that even if the person in the room does not understand Chinese, it does not mean there is no understanding in the room. The person in the room at least understands the rule book used to provide output responses. She then points out that the same applies to machine languages: a natural language sentence is understood by the programming language code that instantiates it, which in turn is understood by the lower-level compiler code, and so on. This implies that the distinction between syntax and semantics is not fixed, as Searle presupposes, but relative: the semantics of natural language is realized in the syntax of the programming language, and the semantics of the programming language is in turn realized in the syntax of the compiler code. Searle's problem, on this view, is that he assumes a binary notion of understanding (a system either understands or it does not), rather than a graded one in which each lower level of the system understands less than the level above it.<ref>{{Citation |last=Boden |first=Margaret A.
|title=Computer Models of Mind |year=1988 |editor-last=Heil |editor-first=John |chapter=Escaping from the Chinese Room |publisher=Cambridge University Press |isbn=978-0-521-24868-6}}</ref> ==== Carbon chauvinism ==== Searle's conclusion that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains"{{sfn|Searle|1980|p=13}} has sometimes been described as a form of "[[carbon chauvinism]]".{{sfn|Graham|2017|p=168}} [[Steven Pinker]] suggested that a response to that conclusion would be to construct a counter thought experiment to the Chinese Room, in which the incredulity runs the other way.{{sfn|Pinker|1997|pp=94–96}} He cites as an example the short story ''[[They're Made Out of Meat]]'', which depicts an alien race of electronic beings who, upon discovering Earth, express disbelief that the meat brains of humans can experience consciousness and thought.<ref>{{Cite web |last=Bisson |first=Terry |date=1990 |title=They're Made Out of Meat |url=http://www.terrybisson.com/theyre-made-out-of-meat-2/ |access-date=2024-11-07 |archive-url=https://web.archive.org/web/20190501130711/http://www.terrybisson.com/theyre-made-out-of-meat-2/ |archive-date=May 1, 2019 }}</ref> However, Searle himself denied being a "carbon chauvinist".<ref name=":1">{{Cite book |last=Vicari |first=Giuseppe |url=https://books.google.com/books?id=NA6e6LhEnAMC&dq=was+searle+a+carbon+chauvinist&pg=PA49 |title=Beyond Conceptual Dualism: Ontology of Consciousness, Mental Causation, and Holism in John R. Searle's Philosophy of Mind |date=2008 |publisher=Rodopi |isbn=978-90-420-2466-3 |page=49 |language=en}}</ref> He said "I have not tried to show that only biological based systems like our brains can think. [...] I regard this issue as up for grabs".<ref>{{Cite book |last=Fellows |first=Roger |url=https://books.google.com/books?id=CixGDHrR-uEC&pg=PA86 |title=Philosophy and Technology |date=1995 |publisher=Cambridge University Press |isbn=978-0-521-55816-7 |page=86 |language=en}}</ref> He said that even silicon machines could theoretically have human-like consciousness and thought, if the actual physical–chemical properties of silicon could be used in a way that can produce consciousness and thought, but "until we know how the brain does it we are not in a position to try to do it artificially".{{sfn|Preston|Bishop|2002|p=351}}