== Non-human ==

=== Animal ===
It is commonly acknowledged today that animals have some form of mind, but it is controversial which animals this applies to and how their minds differ from the human mind.<ref>{{multiref | {{harvnb|Carruthers|2019|pp=ix, 29–30}} | {{harvnb|Griffin|1998|pp=53–55}} }}</ref> Different conceptions of the mind lead to different responses to this problem. When understood in a very wide sense as the capacity to process information, the mind is present in all forms of life, including insects, plants, and individual cells.<ref>{{harvnb|Spradlin|Porterfield|2012|pp=[https://books.google.com/books?id=aynUBwAAQBAJ&pg=PA17 17–18]}}</ref> On the other side of the spectrum are views that deny the existence of mentality in most or all non-human animals based on the idea that they lack key mental capacities, like abstract [[rationality]] and symbolic language.<ref>{{multiref | {{harvnb|Carruthers|2019|pp=29–30}} | {{harvnb|Steiner|2014|p=[https://books.google.com/books?id=nHBEBQAAQBAJ&pg=PA93 93]}} | {{harvnb|Thomas|2020|pp=999–1000}} }}</ref> The status of [[Animal cognition|animal minds]] is highly relevant to the field of [[ethics]] since it affects the treatment of animals, including the topic of [[animal rights]].<ref>{{multiref | {{harvnb|Griffin|2013|p=[https://books.google.com/books?id=K2uXAwAAQBAJ&pg=PR9 ix]}} | {{harvnb|Carruthers|2019|p=ix}} | {{harvnb|Fischer|2021|pp=28–29}} }}</ref>

Discontinuity views state that the minds of non-human animals are fundamentally different from human minds and often point to higher mental faculties, like thinking, reasoning, and deliberate decision-making.<ref>{{multiref | {{harvnb|Fischer|2021|pp=30–32}} | {{harvnb|Lurz|loc=Lead Section}} | {{harvnb|Carruthers|2019|pp=ix, 29–30}} | {{harvnb|Penn|Holyoak|Povinelli|2008|pp=109–110}} }}</ref> This outlook is reflected in the traditionally influential position of defining humans as "[[rational animal]]s" as opposed to all other animals.<ref>{{multiref | {{harvnb|Melis|Monsó|2023|pp=1–2}} | {{harvnb|Rysiew|2012}} }}</ref> Continuity views, by contrast, emphasize similarities and see cognitive differences as a matter of degree rather than kind. Central considerations for this position are the shared evolutionary origin and organic similarities on the level of the brain and nervous system. Observable behavior is another key factor, such as problem-solving skills, [[animal communication]], and reactions to and expressions of pain and pleasure. Of particular importance are the questions of consciousness and [[sentience]], that is, to what extent non-human animals have a subjective experience of the world and are capable of suffering and feeling joy.<ref>{{multiref | {{harvnb|Fischer|2021|pp=32–35}} | {{harvnb|Lurz|loc=Lead Section}} | {{harvnb|Griffin|1998|pp=53–55}} | {{harvnb|Carruthers|2019|pp=ix–x}} | {{harvnb|Penn|Holyoak|Povinelli|2008|pp=109–110}} }}</ref>

=== Artificial ===
{{main|Philosophy of artificial intelligence}}
Some of the difficulties of assessing animal minds are also reflected in the topic of artificial minds, which includes the question of whether computer systems implementing [[artificial intelligence]] should be considered a form of mind.<ref>{{multiref | {{harvnb|McClelland|2021|p=[https://books.google.com/books?id=viQqEAAAQBAJ&pg=PT81 81]}} | {{harvnb|Franklin|1995|pp=1–2}} | {{harvnb|Anderson|Piccinini|2024|pp=[https://books.google.com/books?id=o68FEQAAQBAJ&pg=PA232 232–233]}} | {{harvnb|Carruthers|2004|pp=267–268}} }}</ref> This idea is consistent with some theories of the nature of mind, such as functionalism and its claim that mental concepts describe functional roles. On this view, the functions implemented by biological brains could in principle also be implemented by artificial devices.<ref>{{multiref | {{harvnb|Carruthers|2004|pp=267–268}} | {{harvnb|Levin|2023|loc=Lead Section, § 1. What Is Functionalism?}} | {{harvnb|Searle|2004|p=62}} | {{harvnb|Jaworski|2011|pp=136–137}} }}</ref>

[[File:Turing test diagram.png|thumb|alt=Diagram of a person receiving two written pages, one from a computer and one from a human|The [[Turing test]] aims to determine whether a computer can imitate human linguistic behavior to the degree that it is not possible to tell the difference between human and computer.<ref name="auto2">{{multiref | {{harvnb|Biever|2023|pp=686–689}} | {{harvnb|Carruthers|2004|pp=248–249, 269–270}} | {{harvnb|Bansal|2022|loc=§ 6. The Plastic Mind}} | {{harvnb|Hodges|2013|pp=[https://books.google.com/books?id=cgqrCAAAQBAJ&pg=PA3 3, 6]}} }}</ref>]]
The [[Turing test]], proposed by [[Alan Turing]] (1912–1954), is a traditionally influential procedure to test artificial intelligence: a person exchanges messages with two parties, one a human and the other a computer. The computer passes the test if it is not possible to reliably tell which party is the human and which the computer. While there are computer programs today that may pass the Turing test, this alone is usually not accepted as conclusive proof of mindedness.<ref name="auto2"/> It is controversial whether computers can, in principle, implement some aspects of mind, such as desires, feelings, consciousness, and free will.<ref>{{harvnb|Carruthers|2004|pp=270–273}}</ref> This problem is often discussed through the contrast between [[Weak artificial intelligence|weak]] and strong artificial intelligence. Weak or narrow artificial intelligence is limited to specific mental capacities or functions, focusing on a particular task or a narrow set of tasks, like [[Vehicular automation|autonomous driving]], [[speech recognition]], or [[Automated theorem proving|theorem proving]].
The goal of strong AI, also termed ''[[artificial general intelligence]]'', is to create a complete artificial person that has all the mental capacities of humans, including consciousness, emotion, and reason.<ref>{{multiref | {{harvnb|Chen|2023|p=[https://books.google.com/books?id=C8vAEAAAQBAJ&pg=PA1141 1141]}} | {{harvnb|Bringsjord|Govindarajulu|2024|loc=§ 8. Philosophy of Artificial Intelligence}} | {{harvnb|Butz|2021|pp=91–92}} }}</ref> It is controversial whether strong AI is possible; influential arguments against it include [[John Searle]]'s [[Chinese Room Argument]] and [[Hubert Dreyfus]]'s critique based on [[Martin Heidegger|Heideggerian]] philosophy.<ref>{{multiref | {{harvnb|Bringsjord|Govindarajulu|2024|loc=§ 8. Philosophy of Artificial Intelligence}} | {{harvnb|Fjelland|2020|pp=1–2}} }}</ref>