== Philosophy ==
Although the Chinese Room argument was originally presented in reaction to the statements of [[artificial intelligence]] researchers, philosophers have come to consider it an important part of the [[philosophy of mind]]. It is a challenge to [[functionalism (philosophy of mind)|functionalism]] and the [[computational theory of mind]],{{efn|name=Computationalism|Harnad holds that Searle's argument is against the thesis that "has since come to be called 'computationalism,' according to which cognition is just computation, hence mental states are just computational states".{{sfn|Harnad|2005|p=1}} David Cole agrees that "the argument also has broad implications for functionalist and computational theories of meaning and of mind".{{sfn|Cole|2004|p=1}}}} and is related to such questions as the [[mind–body problem]], the [[problem of other minds]], the [[symbol grounding]] problem, and the [[hard problem of consciousness]].{{efn|name=Consciousness}}

=== Strong AI ===<!--This section title is linked to from several places -->
Searle identified a philosophical position he calls "strong AI":

{{Blockquote|The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.{{efn|name="Strong AI"|This version is from Searle's ''Mind, Language and Society''{{sfn|Searle|1999|p={{Page needed|date=February 2012}}}} and is also quoted in [[Daniel Dennett]]'s ''[[Consciousness Explained]]''.{{sfn|Dennett|1991|p=435}} Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."{{sfn|Searle|1980|p=1}} Strong AI is defined similarly by [[Stuart J. Russell]] and [[Peter Norvig]]: "weak AI—the idea that machines could act <em>as if</em> they were intelligent—and strong AI—the assertion that machines that do so are <em>actually</em> consciously thinking (not just <em>simulating</em> thinking)."{{sfn|Russell|Norvig|2021|p=981}}}}}}

The definition depends on the distinction between simulating a mind and actually having one. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."{{sfn|Searle|2009|p=1}}

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder [[Herbert A. Simon]] declared that "there are now in the world machines that think, that learn and create".<ref>Quoted in {{Harvnb|McCorduck|2004|p=138}}.</ref> Simon, together with [[Allen Newell]] and [[Cliff Shaw]], after having completed the first program that could do [[formal reasoning]] (the [[Logic Theorist]]), claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind."<ref>Quoted in {{Harvnb|Crevier|1993|p=46}}</ref> [[John Haugeland]] wrote that "AI wants only the genuine article: <em>machines with minds</em>, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, <em>computers ourselves</em>."{{sfn|Haugeland|1985|p=2|ps= (Italics his)}}

Searle also ascribes the following claims to advocates of strong AI:
* AI systems can be used to explain the mind;{{sfn|Searle|1980|p=1}}
* The study of the brain is irrelevant to the study of the mind;{{efn|Searle believes that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter."{{sfn|Searle|1980|p=13}} He writes elsewhere, "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works."{{sfn|Searle|1980|p=8}} This position owes its phrasing to Stevan Harnad.{{sfn|Harnad|2001}}}} and
* The [[Turing test]] is adequate for establishing the existence of mental states.{{efn|"One of the points at issue," writes Searle, "is the adequacy of the Turing test."{{sfn|Searle|1980|p=6}}}}

=== Strong AI as computationalism or functionalism ===
In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer [[functionalism (philosophy of mind)|functionalism]]" (a term he attributes to [[Daniel Dennett]]).{{sfn|Searle|1992|p=44}}{{sfn|Searle|2004|p=45}} Functionalism is a position in modern [[philosophy of mind]] that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately [[knowledge representation and reasoning|represent]] functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

[[Stevan Harnad]] argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of <em>computationalism</em>, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."{{sfn|Harnad|2001|p=3|ps= (Italics his)}} [[Computationalism]]{{efn|Computationalism is associated with [[Jerry Fodor]] and [[Hilary Putnam]],{{sfn|Horst|2005|p=1}} and is held by [[Allen Newell]],{{sfn|Harnad|2001}} [[Zenon Pylyshyn]]{{sfn|Harnad|2001}} and [[Steven Pinker]],{{sfn|Pinker|1997}} among others.}} is the position in the philosophy of mind which argues that the mind can be accurately described as an [[Information processing (psychology)|information-processing]] system. Each of the following, according to Harnad, is a "tenet" of computationalism (the sketch below illustrates the first two):{{sfn|Harnad|2001|pp=3–5}}
* Mental states are computational states (which is why computers can have mental states and help to explain the mind);
* Computational states are [[multiple realizability|implementation-independent]]—in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and
* Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.
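The following Python sketch is illustrative only and is not drawn from the cited sources; the rules, symbols, and replies are invented. It shows what the first two tenets amount to in miniature: a purely syntactic rulebook that maps input symbols to output symbols without regard to their meaning, and the same input–output function (the same "computational states") realized by two different mechanisms.

<syntaxhighlight lang="python">
# Toy illustration (invented rules): a purely syntactic "rulebook" in the
# spirit of the Chinese room. Applying it requires no grasp of what the
# symbols mean -- only the ability to match shapes and copy out replies.
RULEBOOK = {
    "你好吗": "我很好",         # input shape -> output shape
    "你会说中文吗": "会一点",
}

def respond_by_lookup(symbols: str) -> str:
    """One 'hardware': a hash-table lookup."""
    return RULEBOOK.get(symbols, "对不起")

def respond_by_scan(symbols: str) -> str:
    """A different 'hardware': a linear scan over the same rules."""
    for pattern, reply in RULEBOOK.items():
        if pattern == symbols:
            return reply
    return "对不起"

# Multiple realizability in miniature: two distinct mechanisms realize the
# same input-output function, so on the computationalist view they are in
# the same computational (and hence, per tenet one, mental) state.
assert respond_by_lookup("你好吗") == respond_by_scan("你好吗") == "我很好"
</syntaxhighlight>

Searle's contention, on this framing, is that nothing in either mechanism understands the symbols it manipulates.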
Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence. Goldstein and Levinstein explore whether [[large language model]]s (LLMs) like [[ChatGPT]] can possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. They argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world, but they find the evidence that LLMs have the action dispositions necessary for belief–desire psychology inconclusive. They also respond to common skeptical challenges, such as the "[[Stochastic parrot|stochastic parrots]]" argument and concerns over memorization, arguing that LLMs exhibit structured internal representations that meet these philosophical criteria.{{sfn|Goldstein|Levinstein|2024}}

[[David Chalmers]] suggests that while current LLMs lack features like recurrent processing and unified agency, advances in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle's original claim that purely "syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.{{sfn|Chalmers|2023}}

=== Strong AI vs. biological naturalism ===
Searle holds a philosophical position he calls "[[biological naturalism]]": that consciousness{{efn|name=Consciousness}} and understanding require specific biological machinery that is found in brains. He writes "brains cause minds"{{sfn|Searle|1990a|p=29}} and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains".{{sfn|Searle|1990a|p=29}} Searle argues that this machinery (known in [[neuroscience]] as the "[[neural correlates of consciousness]]") must have some causal powers that permit the human experience of consciousness.{{sfn|Searle|1990b}} Searle's belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".{{sfn|Searle|1980|p=11}} Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both [[behaviorism]] and [[functionalism (philosophy of mind)|functionalism]] (including "computer functionalism" or "strong AI").{{sfn|Hauser|2006|p=8}} Biological naturalism is similar to [[identity theory of mind|identity theory]] (the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory.{{sfn|Searle|1992|loc=chpt. 5}}{{efn|Larry Hauser writes that "biological naturalism is either confused (waffling between identity theory and dualism) or else it ''just is'' identity theory or dualism."{{sfn|Hauser|2006|p=8}}}} Searle's biological naturalism and strong AI are both opposed to [[Cartesian dualism]],{{sfn|Hauser|2006|p=8}} the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter".{{sfn|Searle|1980|p=13}}

=== Consciousness ===
Searle's original presentation emphasized understanding—that is, [[mental state]]s with [[intentionality]]—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations, Searle has included consciousness as the real target of the argument.{{sfn|Searle|1992|p=44}}

{{blockquote|Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.{{sfn|Searle|2002}}|John R. Searle|''Consciousness and Language'', p. 16}}
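Searle's rainstorm analogy can be made concrete with a toy example (illustrative only; the figures and function name are invented, not taken from the cited sources). A computational model of a rainstorm manipulates numbers that represent rain; it yields a description of the domain, not an instance of it:

<syntaxhighlight lang="python">
# Toy illustration (invented figures): a computational model of rainfall.
def simulate_rainstorm(hours: int, mm_per_hour: float = 2.5) -> float:
    """Model a London rainstorm as accumulated millimetres of rain."""
    total_mm = 0.0
    for _ in range(hours):
        total_mm += mm_per_hour   # the model updates a number, not water
    return total_mm

print(simulate_rainstorm(3))  # prints 7.5 -- and leaves everyone dry
</syntaxhighlight>

On Searle's view, a computational model of consciousness stands to consciousness exactly as this loop stands to rain.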
[[David Chalmers]] writes, "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.{{sfn|Chalmers|1996|p=322}} [[Colin McGinn]] argues that the Chinese room provides strong evidence that the [[hard problem of consciousness]] is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever [[simulation]] inhabits the room.{{sfn|McGinn|2000}}

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese. In Searle's words, "the computer has nothing more than I have in the case where I understand nothing".{{sfn|Searle|1980|p=418}}

=== Applied ethics ===
[[File:USS Vincennes (CG-49) Aegis large screen displays.jpg|thumb|right|Sitting in the combat information center aboard [[USS Vincennes (CG-49)|a warship]]—proposed as a real-life analog to the Chinese room]]
Patrick Hew used the Chinese Room argument to deduce requirements for military [[command and control]] systems if they are to preserve a commander's [[moral agency]]. He drew an analogy between a commander in their [[command center]] and the person in the Chinese Room, and analyzed it under a reading of [[Nicomachean Ethics|Aristotle's notions of "compulsory" and "ignorance"]]. Information could be "down converted" from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate "up conversion" into meaning. Hew cited examples from the [[Iran Air Flight 655|USS ''Vincennes'' incident]].{{sfn|Hew|2016}}