=== Brain simulation and connectionist replies: redesigning the room ===
These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)

==== Brain simulator reply ====
Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.<ref>{{Harvnb|Searle|1980|pp=7–8}}; {{Harvnb|Cole|2004|pp=12–13}}; {{Harvnb|Hauser|2006|pp=3–4}}; {{Harvnb|Churchland|Churchland|1990}}.</ref>{{efn|The brain simulation reply has been made by [[Paul Churchland]], [[Patricia Churchland]] and [[Ray Kurzweil]].{{sfn|Cole|2004|p=12}}}} This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain. Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains."{{sfn|Searle|1980|p=13}} Moreover, he argues:

{{blockquote|[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.{{sfn|Searle|1980|p={{Page needed|date=January 2019}}}}}}

===== China brain =====
What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between [[axon]]s and [[dendrite]]s? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying.<ref>{{Harvnb|Cole|2004|p=4}}; {{Harvnb|Hauser|2006|p=11}}.</ref>{{efn|Early versions of this argument were put forward in 1974 by [[Lawrence Davis (scientist)|Lawrence Davis]] and in 1978 by [[Ned Block]]. Block's version used walkie-talkies and was called the "Chinese Gym". Paul and Patricia Churchland described this scenario as well.{{sfn|Churchland|Churchland|1990}}}} It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.
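Both the brain simulator and the China brain turn on the idea that each unit—a simulated neuron, a water valve, or a telephone-wielding citizen—performs only a simple local update and passes the result along its connections. The following is a minimal, purely illustrative sketch of that kind of per-unit bookkeeping; the leaky integrate-and-fire update and every parameter in it are assumptions made for this illustration, not something specified by Searle or his critics.

<syntaxhighlight lang="python">
# Illustrative sketch only: a handful of "citizens", each simulating one
# neuron with a crude leaky integrate-and-fire rule, passing spikes along
# fixed "telephone line" connections standing in for synapses.
import random

NUM_NEURONS = 5
THRESHOLD = 1.0      # potential at which a neuron "fires"
LEAK = 0.9           # per-step decay toward rest
WEIGHT = 0.4         # strength of every connection, for simplicity

# Each neuron is wired to two randomly chosen others.
connections = {i: random.sample([j for j in range(NUM_NEURONS) if j != i], 2)
               for i in range(NUM_NEURONS)}
potential = [0.0] * NUM_NEURONS

for step in range(20):
    potential[0] += 0.5                        # external input drives neuron 0
    fired = [i for i in range(NUM_NEURONS) if potential[i] >= THRESHOLD]
    for i in fired:
        potential[i] = 0.0                     # reset after firing
        for target in connections[i]:
            potential[target] += WEIGHT        # "phone call" to a connected neuron
    potential = [v * LEAK for v in potential]  # leak toward rest
    print(step, fired)
</syntaxhighlight>

The reply's intuition is that scaling this sort of bookkeeping up to a whole brain would reproduce whatever the brain does; Searle's response, as in the water-pipe passage above, is that no amount of such bookkeeping amounts to understanding.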
===== Brain replacement scenario =====
In this scenario, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced the neurons of a brain one at a time? Replacing a single neuron would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.<ref>{{Harvnb|Cole|2004|p=20}}; {{Harvnb|Moravec|1988}}; {{Harvnb|Kurzweil|2005|p=262}}; {{Harvnb|Crevier|1993|pp=271 and 279}}.</ref>{{efn|An early version of the brain replacement scenario was put forward by [[Clark Glymour]] in the mid-70s and was touched on by [[Zenon Pylyshyn]] in 1980. [[Hans Moravec]] presented a vivid version of it,{{sfn|Moravec|1988}} and it is now associated with [[Ray Kurzweil]]'s version of [[transhumanism]].}}{{efn|Searle does not consider the brain replacement scenario an argument against the CRA; however, in another context, Searle examines several possible outcomes, including the possibility that "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely outside of your control, 'I see a red object in front of me.' [...] [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."{{sfn|Searle|1992}}}} (See [[Ship of Theseus]] for a similar thought experiment.)

==== Connectionist replies ====
:Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.{{efn|The connectionist reply is made by [[Andy Clark]] and [[Ray Kurzweil]],{{sfn|Cole|2004|pp=12 & 17}} as well as [[Paul Churchland|Paul]] and [[Patricia Churchland]].{{sfn|Hauser|2006|p=7}}}} Modern [[deep learning]] is massively parallel and has successfully displayed intelligent behavior in many domains. [[Nils John Nilsson|Nils Nilsson]] argues that modern AI uses digitized "dynamic signals" rather than symbols of the kind used by AI in 1980.{{sfn|Nilsson|2007}} Here it is the [[sample (signal)|sampled]] signal which would have the semantics, not the individual numbers manipulated by the program. This is a different kind of machine than the one that Searle visualized.

==== Combination reply ====
:This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.<ref>{{Harvnb|Searle|1980|pp=8–9}}; {{Harvnb|Hauser|2006|p=11}}.</ref>

==== Many mansions / wait till next year reply ====
:Better technology in the future will allow computers to understand.{{sfn|Searle|1980|p=8}}{{efn|{{harvtxt|Searle|2009}} uses the name "Wait 'Til Next Year Reply".}} Searle agrees that this is possible but considers the point irrelevant; he accepts that there may be hardware other than brains that has conscious understanding.

These arguments (and the robot or commonsense knowledge replies) identify some special technology that would help create conscious understanding in a machine.
They may be interpreted in two ways: either they claim that (1) this technology is required for consciousness, that the Chinese room does not or cannot implement it, and that therefore the Chinese room cannot pass the Turing test (or, even if it did, would not have conscious understanding); or they claim that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.

In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned.{{efn|Searle writes that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation." {{sfn|Searle|1980|p=7}} Stevan Harnad makes the same point, writing: "Now just as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details do matter after all, and that the computer's is the 'right' kind of implementation, whereas Searle's is the 'wrong' kind."{{sfn|Harnad|2001|p=14}}}} The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test, then no other digital technology could pass the Turing test. If Searle's room could pass the Turing test but still does not have a mind, then the Turing test is not sufficient to determine whether the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes: "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."{{sfn|Searle|1980|p=8}} If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.

Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that it is difficult to see—Searle's description is correct but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. [[Ned Block]]'s [[Blockhead argument]]{{sfn|Block|1981}} suggests that the program could, in theory, be rewritten into a simple [[lookup table]] of [[Production system (computer science)|rules]] of the form "if the user writes ''S'', reply with ''P'' and goto X". At least in principle, any program can be rewritten (or "[[refactored]]") into this form, even a brain simulation.{{efn|That is, any program running on a machine with a finite amount of memory.}} In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a [[memory address]]—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims.
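A minimal, purely illustrative sketch of a program in this blockhead form follows; the handful of rules, the input strings, and the state numbers are invented for this example, and any table that actually passed the Turing test would be astronomically larger.

<syntaxhighlight lang="python">
# Illustrative sketch of Block's "if the user writes S, reply with P and
# goto X" form: the program's whole "mental state" is the current rule
# number X. The rules below are invented for illustration only.
rules = {
    # (current state X, input S): (reply P, next state X)
    (0, "你好"): ("你好！", 1),            # "Hello" -> "Hello!"
    (1, "你会说中文吗"): ("会一点。", 2),   # "Do you speak Chinese?" -> "A little."
    (2, "再见"): ("再见！", 0),            # "Goodbye" -> "Goodbye!"
}

state = 0
for message in ["你好", "你会说中文吗", "再见"]:
    # Unknown (state, input) pairs fall back to a noncommittal reply.
    reply, state = rules.get((state, message), ("……", state))
    print(reply)
</syntaxhighlight>

Between turns, everything the program "is thinking" is the single integer held in <code>state</code>, which plays the role of Block's X.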
On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be overly specific.

Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."{{sfn|Searle|1980|p=3}}