=== Strong AI as computationalism or functionalism ===
In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer [[functionalism (philosophy of mind)|functionalism]]" (a term he attributes to [[Daniel Dennett]]).{{sfn|Searle|1992|p=44}}{{sfn|Searle|2004|p=45}} Functionalism is a position in modern [[philosophy of mind]] that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately [[knowledge representation and reasoning|represent]] functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

[[Stevan Harnad]] argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of <em>computationalism</em>, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."{{sfn|Harnad|2001|p=3|ps= (Italics his)}} [[Computationalism]]{{efn|Computationalism is associated with [[Jerry Fodor]] and [[Hilary Putnam]],{{sfn|Horst|2005|p=1}} and is held by [[Allen Newell]],{{sfn|Harnad|2001}} [[Zenon Pylyshyn]]{{sfn|Harnad|2001}} and [[Steven Pinker]],{{sfn|Pinker|1997}} among others.}} is the position in the philosophy of mind which argues that the mind can be accurately described as an [[Information processing (psychology)|information-processing]] system.

Each of the following, according to Harnad, is a "tenet" of computationalism:{{sfn|Harnad|2001|pp=3–5}}
* Mental states are computational states (which is why computers can have mental states and help to explain the mind);
* Computational states are [[multiple realizability|implementation-independent]]; in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and
* Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.

Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence. Goldstein and Levinstein explore whether [[large language model]]s (LLMs) like [[ChatGPT]] can possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world. However, they note that the evidence for LLMs having the action dispositions required for belief-desire psychology remains inconclusive. Additionally, they rebut common skeptical challenges, such as the "[[Stochastic parrot|stochastic parrots]]" argument and concerns over memorization, asserting that LLMs exhibit structured internal representations that align with these philosophical criteria.{{sfn|Goldstein|Levinstein|2024}} [[David Chalmers]] suggests that while current LLMs lack features like recurrent processing and unified agency, advancements in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle's original claim that purely "syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.{{sfn|Chalmers|2023}}