====Artificial intelligence====
{{See also|Chinese room|philosophy of artificial intelligence}}

Biological naturalism implies that if humans want to create a conscious being, they will have to duplicate whatever physical processes the brain goes through to cause consciousness. Searle thereby means to contradict what he calls "[[Chinese room#Strong AI|Strong AI]]", defined by the assumption that "the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to ''understand'' and have other cognitive states."<ref name=":2" />

In 1980, Searle presented the "[[Chinese room]]" argument, which purports to prove the falsity of strong AI.<ref name=":2">[http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html "Minds, Brains and Programs"] {{webarchive|url=https://web.archive.org/web/20010221025515/http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html |date=2001-02-21}}, ''The Behavioral and Brain Sciences'' 3, pp. 417–424 (1980).</ref> A person is in a room with two slits, and they have a book and some scratch paper. This person does not know any Chinese. Someone outside the room slides some Chinese characters in through the first slit; the person in the room follows the instructions in the book, transcribing the characters as instructed onto the scratch paper, and slides the resulting sheet out through the second slit. To people outside the room, it appears that the room understands Chinese: they have slid Chinese statements into one slit and received valid responses in English, yet the 'room' does not understand a word of Chinese. This suggests, according to Searle, that no computer can ever understand Chinese or English, because, as the [[thought experiment]] suggests, being able to 'translate' Chinese into English does not entail 'understanding' either Chinese or English: all that the person in the thought experiment, and hence a computer, is able to do is execute certain syntactic manipulations.<ref>{{Cite web|url=http://globetrotter.berkeley.edu/people/Searle/searle-con4.html|title=Conversation with John Searle, p.4 of 6|website=globetrotter.berkeley.edu}}</ref><ref name="Roberts">{{cite journal |last1=Roberts |first1=Jacob |title=Thinking Machines: The Search for Artificial Intelligence |journal=Distillations |date=2016 |volume=2 |issue=2 |pages=14–23 |url=https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence |access-date=March 22, 2018 |archive-url=https://web.archive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence |archive-date=August 19, 2018 |url-status=dead}}</ref>

[[Douglas Hofstadter]] and [[Daniel Dennett]], in their book ''[[The Mind's I]]'', criticize Searle's view of AI, particularly the Chinese room argument.<ref>Hofstadter, D. (1981), "Reflections on Searle", in Hofstadter, D.; Dennett, D. (eds.), ''The Mind's I'', New York: Basic Books, pp. 373–382.</ref>

[[Stevan Harnad]] argues that Searle's "Strong AI" is really just another name for [[Functionalism (philosophy of mind)|functionalism]] and [[computationalism]], and that these positions are the real targets of his critique.<ref>[http://cogprints.org/4023/ Harnad, Stevan (2001)], "What's Wrong and Right About Searle's Chinese Room Argument", in Bishop, M.; Preston, J. (eds.), ''Essays on Searle's Chinese Room Argument'', Oxford University Press.</ref> Functionalists argue that consciousness can be defined as a set of informational processes inside the brain. It follows that anything carrying out the same informational processes as a human would also be conscious. Thus, if humans wrote a computer program that was conscious, they could run it on, say, a system of ping-pong balls and beer cans, and that system would be equally conscious, because it would be running the same informational processes.

Searle argues that this is impossible, contending that consciousness is a physical property, like digestion or fire. No matter how good a simulation of digestion is built on a computer, it will not digest anything; no matter how well it simulates fire, nothing will get burnt. By contrast, informational processes are ''observer-relative'': observers pick out certain patterns in the world and regard them as informational processes, but informational processes are not themselves things in the world. Since they do not exist at a physical level, Searle argues, they cannot have ''causal efficacy'' and thus cannot cause consciousness. There is no physical law, Searle insists, under which a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system, all implementing the same program, count as equivalent.<ref>Searle 1980</ref>
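The purely syntactic character of the room's procedure can be made concrete with a short program (a minimal sketch, not drawn from Searle's text; the rulebook entries and the names <code>RULEBOOK</code> and <code>operate_room</code> are invented for illustration): the operator's task amounts to matching incoming symbol strings against a table and copying out whatever the table prescribes, so the meanings of the symbols never enter into the computation.

<syntaxhighlight lang="python">
# Minimal sketch of the Chinese room as pure symbol manipulation.
# The RULEBOOK pairs are invented for illustration; neither the
# operator nor the program ever consults what any symbol means.
RULEBOOK = {
    "你好吗？": "I am fine, thank you.",
    "你叫什么名字？": "My name is not important.",
}

def operate_room(symbols: str) -> str:
    """Match the incoming string against the rulebook and copy out
    the prescribed response. Only string identity is checked; no
    meaning is represented anywhere in the procedure."""
    return RULEBOOK.get(symbols, "I do not follow.")

print(operate_room("你好吗？"))  # a fluent reply, produced without understanding
</syntaxhighlight>

Over the table's domain the room's replies are behaviorally adequate, which is precisely the gap the argument exploits: on Searle's view, syntax alone, however elaborate, does not fix semantics.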