== Complete argument ==

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990.<ref>{{Harvnb|Searle|1984}}; {{Harvnb|Searle|1990a}}.</ref>{{efn|The wording of each axiom and conclusion is from Searle's presentation in ''[[Scientific American]]''.{{sfn|Searle|1990a}}{{sfn|Hauser|2006|p=5}} (A1-3) and (C1) are described as 1, 2, 3 and 4 in David Cole.{{sfn|Cole|2004|p=5}}}} The Chinese room thought experiment is intended to prove point A3.{{efn|Paul and Patricia Churchland write that the Chinese room thought experiment is intended to "shore up axiom 3".{{sfn|Churchland|Churchland|1990|p=34}} }}

He begins with three axioms:<!--these should maybe be a description list but gotta put it together as one or else it defeats the entire purpose!-->
:(A1) "Programs are formal (syntactic)."
::A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it does not know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
:(A2) "Minds have mental contents (semantics)."
::Unlike the symbols used by a program, our thoughts have meaning: they represent things, and we know what it is they represent.
:(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
::This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around), but it has no semantics (because, according to Searle, no one and nothing in the room understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:
:(C1) Programs are neither constitutive of nor sufficient for minds.
::This should follow without controversy from the first three: programs have only syntax, and syntax is insufficient for semantics, so programs do not have semantics; every mind has semantics; therefore no programs are minds. (A formal sketch of this syllogism is given at the end of this section.)

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue: is the human brain running a program? In other words, is the [[computational theory of mind]] correct?{{efn|name=Computationalism}} He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:
:(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially"{{sfn|Searle|1990a}} that:
:(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
::Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers": whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers": whatever ''else'' could be used to make a mind.

And from this he derives the further conclusions:
:(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
::This follows from C1 and C2: since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers".
:(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
::Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.

Refutations of Searle's argument take many different forms (see below). Computationalists and functionalists reject A3, arguing that "syntax" (as Searle describes it) ''can'' have "semantics" if the syntax has the right functional structure. Eliminative materialists reject A2, arguing that minds do not actually have "semantics": thoughts and other mental phenomena are inherently meaningless but nevertheless function as if they had meaning.
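As an illustration only (it is not part of Searle's presentation), the core syllogism from (A1)–(A3) to (C1) can be written out formally. The sketch below uses Lean 4 with hypothetical predicate names chosen for this example, and it reads (A3) in the strong form used in the gloss above: anything that has ''only'' syntax has no semantics.

<syntaxhighlight lang="lean">
-- Illustrative sketch: Searle's (A1)-(A3) entail (C1).
-- Predicate names (Program, Mind, HasSyntaxOnly, HasSemantics) are
-- hypothetical labels for this sketch, not Searle's terminology.
theorem c1 {α : Type}
    (Program Mind HasSyntaxOnly HasSemantics : α → Prop)
    (a1 : ∀ x, Program x → HasSyntaxOnly x)           -- (A1) programs are purely syntactic
    (a2 : ∀ x, Mind x → HasSemantics x)                -- (A2) minds have semantics
    (a3 : ∀ x, HasSyntaxOnly x → ¬ HasSemantics x) :   -- (A3) strong reading: syntax alone yields no semantics
    ∀ x, Program x → ¬ Mind x :=                       -- (C1) no program is a mind
  fun x hp hm => a3 x (a1 x hp) (a2 x hm)
</syntaxhighlight>

The sketch makes explicit that the inference itself is trivial; the refutations described above dispute the axioms (A2 or A3), not the validity of the derivation.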