===Systems and virtual mind replies: finding the mind===

These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the mind that does? These replies address the key [[ontological]] issues of [[mind/body problem|mind versus body]] and simulation versus reality. All of the replies that identify the mind in the room are versions of "the system reply".

==== System reply ====

The basic version of the system reply argues that it is the "whole system" that understands Chinese.<ref>{{Harvnb|Searle|1980|pp=5–6}}; {{Harvnb|Cole|2004|pp=6–7}}; {{Harvnb|Hauser|2006|pp=2–3}}; {{Harvnb|Dennett|1991|p=439}}; {{Harvnb|Fearn|2007|p=44}}; {{Harvnb|Crevier|1993|p=269}}.</ref>{{efn|Versions of the system reply are held by [[Ned Block]], [[Jack Copeland]], [[Daniel Dennett]], [[Jerry Fodor]], [[John Haugeland]], [[Ray Kurzweil]], and [[Georges Rey]], among others.{{sfn|Cole|2004|p=6}}}} While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part", Searle explains.{{sfn|Searle|1980|p=6}}

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper"{{sfn|Searle|1980|p=6}} without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology".{{sfn|Searle|1980|p=6}} In order for this reply to be even remotely plausible, one must take it for granted that consciousness can be the product of an information-processing "system" and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head. Then the whole system consists of just one object: the man himself. Searle argues that if the man does not understand Chinese, then the system does not understand Chinese either, because now "the system" and "the man" both describe exactly the same object.{{sfn|Searle|1980|p=6}}

Critics of Searle's response argue that the program has allowed the man to have two minds in one head.{{who|date=March 2011}} If we assume a "mind" is a form of information processing, then the [[theory of computation]] can account for two computations occurring at once, namely (1) the computation for [[Universal Turing machine|universal programmability]] (which is the function instantiated by the person and note-taking materials, independently of any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program).{{sfn|Yee|1993|loc=p. 44, footnote 2}} The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's.{{sfn|Yee|1993|pp=42–47}}
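The two-computation distinction can be pictured with a small sketch (not drawn from Searle or Yee; the function and table names are hypothetical): a fixed interpreter loop plays the role of the rule-following person, the "universal" computation, while the rule table it is handed defines a second, separate computation.

<syntaxhighlight lang="python">
# Illustrative sketch only (hypothetical names), assuming we model the room as
# a lookup-table interpreter.
# Level 1: the interpreter loop below stands in for the rule-following person,
#          the "universal" computation that stays the same whatever program it runs.
# Level 2: the rule table it is given defines a distinct computation; its
#          behaviour, not the interpreter's, is what the critics point to.

def run_rule_table(rule_table, symbols):
    """Blindly apply a lookup table to each input symbol (the level-1 computation)."""
    output = []
    for symbol in symbols:
        # The interpreter matches shapes against the table without knowing what
        # they mean, as the man matches Chinese characters against his rule book.
        output.append(rule_table.get(symbol, "?"))
    return output

# A hypothetical, trivially small "program": the level-2 computation is defined
# entirely by this table, not by the interpreter above.
chinese_room_rules = {"你": "好", "好": "吗"}

print(run_rule_table(chinese_room_rules, ["你", "好"]))  # ['好', '吗']
</syntaxhighlight>

On the critics' view, any understanding of Chinese would be a property of that second computation, the one defined by the program, rather than of the interpreter that runs it.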
However, from Searle's perspective, this argument is circular: the question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.

More sophisticated versions of the systems reply try to identify more precisely what "the system" is, and they differ in exactly how they describe it. According to these replies,{{who|date=March 2011}} the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "[[strong emergence|emergent]] property", or "a virtual mind".

==== Virtual mind reply ====

[[Marvin Minsky]] suggested a version of the system reply known as the "virtual mind reply".{{efn|The virtual mind reply is held by Minsky,{{sfn|Minsky|1980|p=440}}{{sfn|Cole|2004|p=7}} [[Tim Maudlin]], [[David Chalmers]] and David Cole.{{sfn|Cole|2004|pp=7–9}}}} The term "[[virtual artifact|virtual]]" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky argues that a computer may contain a "mind" that is virtual in the same sense as [[virtual machine]]s, [[virtual communities]] and [[virtual reality]].

To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds", thus the "system" cannot be the "mind".{{sfn|Cole|2004|p=8}}
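Cole's point can be illustrated with a toy sketch (hypothetical names, not drawn from Cole): a single program hosts two independent "virtual" conversers at once, so the one system cannot be identified with either of them.

<syntaxhighlight lang="python">
# Illustrative sketch only (hypothetical names): one physical "system" (this
# single interpreter process) hosting two independent virtual agents at once,
# echoing Cole's point that the system and the virtual minds cannot be the same
# thing, since there is one of the former and two of the latter.

class VirtualAgent:
    """A trivial stand-in for a simulated converser defined purely by its rule table."""

    def __init__(self, name, rules):
        self.name = name
        self.rules = rules

    def reply(self, message):
        # The agent's behaviour is fixed entirely by its own rules,
        # not by the shared machinery that runs it.
        return self.rules.get(message, "...")

# Two "virtual minds" running on the same underlying system.
chinese_agent = VirtualAgent("Chinese speaker", {"你好": "你好吗?"})
korean_agent = VirtualAgent("Korean speaker", {"안녕하세요": "잘 지내세요?"})

for agent, greeting in [(chinese_agent, "你好"), (korean_agent, "안녕하세요")]:
    print(agent.name, "->", agent.reply(greeting))
</syntaxhighlight>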
Searle responds that such a mind is, at best, a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched."{{sfn|Searle|1980|p=12}} Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that it isn't really a calculator, because the physical attributes of the device do not matter."{{sfn|Fearn|2007|p=47}} The question is, is the human mind like the pocket calculator, essentially composed of information, where a perfect simulation of the thing just ''is'' the thing? Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "[[synthetic intelligence]]" is more appropriate than the common description of such intelligences as "artificial".

These replies provide an explanation of exactly who it is that understands Chinese. If there is something ''besides'' the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.{{efn|David Cole writes "From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound."{{sfn|Cole|2004|p=21}}}}

These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, beyond the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese",{{sfn|Searle|1980|p=6}} and thus it is dodging the question or is hopelessly circular.