==Outside human adults==

===In children===
{{See also|Theory of mind}}
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator David Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven,<ref>{{cite book|editor1=[[Arnold J. Sameroff]]|editor2=Marshall M. Haith|date=1996|title=The Five to Seven Year Shift: The Age of Reason and Responsibility|location=Chicago|publisher=University of Chicago Press}}</ref> children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection".<ref>{{cite book|last=Foulkes|first=David|date=1999|title=Children's Dreaming and the Development of Consciousness|page=13|location=Cambridge, Massachusetts|publisher=Harvard University Press|quote=In defining 'consciousness' as a self-reflective act, psychology loses much of the glamour and mystery of other areas of consciousness-study, but it also can proceed on a workaday basis without becoming paralyzed in pure abstraction.}}</ref>

In a 2020 paper, [[Katherine Nelson]] and [[Robyn Fivush]] use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of its acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness".<ref>{{cite journal|last1=Nelson|first1=Katherine|last2=Fivush|first2=Robyn|title=The Development of Autobiographical Memory, Autobiographical Narratives, and Autobiographical Consciousness|journal=Psychological Reports|year=2020|volume=123|issue=1|page=74|doi=10.1177/0033294119852574|pmid=31142189|s2cid=169038149|doi-access=free}}</ref> [[Julian Jaynes]] had staked out these positions decades earlier.<ref>{{cite book|last=Jaynes|first=Julian|title=The Origin of Consciousness in the Breakdown of the Bicameral Mind|publisher=Houghton Mifflin|orig-year=1976|year=2000|page=447|quote=''Consciousness is based on language''.... Consciousness is not the same as cognition and should be sharply distinguished from it.|isbn=0-618-05707-2}}</ref><ref>{{cite book|last=Jaynes|first=Julian|title=The Origin of Consciousness in the Breakdown of the Bicameral Mind|publisher=Houghton Mifflin|orig-year=1976|year=2000|page=450|quote=The basic connotative definition of consciousness is thus an analog 'I' narratizing in a functional mind-space. The denotative definition is, as it was for Descartes, Locke, and Hume, what is introspectable.|isbn=0-618-05707-2}}</ref> Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "[[theory of mind]]", calling it "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts". They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age".<ref>{{cite journal|last1=Nelson|first1=Katherine|last2=Fivush|first2=Robyn|title=The Development of Autobiographical Memory, Autobiographical Narratives, and Autobiographical Consciousness|journal=Psychological Reports|year=2020|volume=123|issue=1|pages=80–83|doi=10.1177/0033294119852574|pmid=31142189|s2cid=169038149|doi-access=free}}</ref>

===In animals===
{{Main|Animal consciousness}}
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences.<ref name=Allen>{{cite web|author=Colin Allen|title=Animal consciousness|publisher=Stanford Encyclopedia of Philosophy (Summer 2011 Edition)|editor=Edward N. Zalta|url=http://plato.stanford.edu/archives/sum2011/entries/consciousness-animal/|access-date=2011-10-25|archive-date=2019-07-31|archive-url=https://web.archive.org/web/20190731010951/https://plato.stanford.edu/archives/sum2011/entries/consciousness-animal/|url-status=live}}</ref> It is also difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, that its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals because he believed that only humans have a non-physical mind.<ref>{{cite journal|author=Peter Carruthers|title=Sympathy and subjectivity|journal=Australasian Journal of Philosophy|year=1999|volume=77|issue=4|pages=465–482|doi=10.1080/00048409912349231|author-link=Peter Carruthers (philosopher)}}</ref> Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and it often rests on personal interactions with pets and other animals people have observed.<ref name=Allen/>

[[File:Big-eared-townsend-fledermaus.jpg|right|thumb|[[Thomas Nagel]] argues that while a human might be able to imagine what it is like to be a [[bat]] by taking "the bat's point of view", it would still be impossible "to know what it is like for a bat to be a bat". (''[[Townsend's big-eared bat]] pictured.'')]]
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled "[[What Is it Like to Be a Bat?]]". He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like ''for'' the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself.<ref name=NagelBat>{{cite book|author=Thomas Nagel|title=Mortal Questions|chapter=Ch. 12 What is it like to be a bat?|publisher=Cambridge University Press|year=1991|isbn=978-0-521-40676-5|author-link=Thomas Nagel}}</ref> Other thinkers, such as [[Douglas Hofstadter]], dismiss this argument as incoherent.<ref>{{cite book|author=Douglas Hofstadter|chapter=Reflections on ''What Is It Like to Be a Bat?''|pages=[https://archive.org/details/mindsifantasiesr00hofs/page/403 403–414]|title=The Mind's I|editor=Douglas Hofstadter|editor2=[[Daniel Dennett]]|publisher=Basic Books|year=1981|isbn=978-0-7108-0352-8|title-link=The Mind's I|author-link=Douglas Hofstadter}}</ref> Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—[[Donald Griffin]]'s 2001 book ''Animal Minds'' reviews a substantial portion of the evidence.<ref name=Griffin2001>{{cite book|title=Animal Minds: Beyond Cognition to Consciousness|author=Donald Griffin|publisher=University of Chicago Press|year=2001|isbn=978-0-226-30865-4|author-link=Donald Griffin}}</ref>

On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the [[University of Cambridge]] for the Francis Crick Memorial Conference, which dealt with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of [[Stephen Hawking]], the 'Cambridge Declaration on Consciousness', which summarizes the conference's most important findings: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."<ref>{{cite AV media|url=https://www.youtube.com/watch?v=RSbom5MsfNM|archive-url=https://ghostarchive.org/varchive/youtube/20211028/RSbom5MsfNM|archive-date=2021-10-28|title=Animal Consciousness Officially Recognized by Leading Panel of Neuroscientists|date=3 September 2012|via=YouTube}}{{cbignore}}</ref> "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."<ref>{{cite web|url=http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf|archive-url=https://ghostarchive.org/archive/20221009/http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf|archive-date=2022-10-09|url-status=live|title=Cambridge Declaration on Consciousness}}</ref>

===In artificial intelligence===
{{Main|Artificial consciousness}}
The idea of an [[wikt:artifact|artifact]] made conscious is an ancient theme of mythology, appearing for example in the Greek myth of [[Pygmalion (mythology)|Pygmalion]], who carved a statue that was magically brought to life, and in medieval Jewish stories of the [[Golem]], a magically animated [[homunculus]] built of clay.<ref>{{cite book|author=Moshe Idel|title=Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid|year=1990|publisher=SUNY Press|isbn=978-0-7914-0160-6}} Note: In many stories the Golem was mindless, but some gave it emotions or thoughts.</ref> However, the possibility of actually constructing a conscious machine was probably first discussed by [[Ada Lovelace]], in a set of notes written in 1842 about the [[Analytical Engine]] invented by [[Charles Babbage]], a never-built precursor to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:

{{blockquote|It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever to ''originate'' anything. It can do whatever we ''know how to order it'' to perform. It can ''follow'' analysis; but it has no power of ''anticipating'' any analytical relations or truths. Its province is to assist us in making ''available'' what we are already acquainted with.<ref>{{cite web|author=Ada Lovelace|title=Sketch of The Analytical Engine, Note G|url=http://www.fourmilab.ch/babbage/sketch.html|author-link=Ada Lovelace|access-date=2011-09-10|archive-date=2010-09-13|archive-url=https://web.archive.org/web/20100913042032/http://www.fourmilab.ch/babbage/sketch.html|url-status=live}}</ref>}}

One of the most influential contributions to this question was an essay written in 1950 by the pioneering computer scientist [[Alan Turing]], titled "[[Computing Machinery and Intelligence]]". Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the [[Turing test]].<ref name="tu">{{cite book|author=Stuart Shieber|title=The Turing Test: Verbal Behavior as the Hallmark of Intelligence|publisher=MIT Press|year=2004|isbn=978-0-262-69293-9}}</ref> To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay, Turing discussed a variety of possible objections and presented a counterargument to each. The Turing test is commonly cited in discussions of [[artificial intelligence]] as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate.
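The test's structure lends itself to a schematic illustration. The following Python sketch is illustrative only, not an implementation drawn from Turing's essay or the sources cited here; the respondent and interrogator functions are hypothetical placeholders:

<syntaxhighlight lang="python">
import random

def machine_respondent(question: str) -> str:
    # Stand-in for the program under test (hypothetical).
    return "That is hard to say."

def human_respondent(question: str) -> str:
    # Stand-in for a real person's replies (hypothetical).
    return "Let me think about that for a moment."

def imitation_game(questions, interrogator) -> bool:
    """Run one round; return True if the interrogator caught the machine."""
    respondents = [machine_respondent, human_respondent]
    random.shuffle(respondents)  # hide which label, "A" or "B", is the machine
    transcripts = {label: [respond(q) for q in questions]
                   for label, respond in zip("AB", respondents)}
    guess = interrogator(transcripts)  # interrogator names "A" or "B" as the machine
    return respondents["AB".index(guess)] is machine_respondent

# The machine "passes" to the extent that, over many rounds, interrogators
# identify it no better than chance.
if __name__ == "__main__":
    caught = imitation_game(["Can you write me a sonnet?"],
                            lambda transcripts: random.choice("AB"))
    print("machine identified correctly:", caught)
</syntaxhighlight>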
In that debate, Daniel Dennett and [[Douglas Hofstadter]] argue that anything capable of passing the Turing test is necessarily conscious,<ref name=MindsI>{{cite book|author=Daniel Dennett|author2=Douglas Hofstadter|year=1985|title=The Mind's I|publisher=Basic Books|isbn=978-0-553-34584-1|author2-link=Douglas Hofstadter|author-link=Daniel Dennett|url=https://archive.org/details/mindsifantasiesr1982hofs}}</ref> while [[David Chalmers]] argues that a [[philosophical zombie]] could pass the test yet fail to be conscious.<ref name=Chalmers>{{cite book|author=David Chalmers|year=1997|title=The Conscious Mind: In Search of a Fundamental Theory|publisher=Oxford University Press|isbn=978-0-19-511789-9|author-link=David Chalmers}}</ref> A third group of scholars argue that, as technology advances, once machines begin to display substantial signs of human-like behavior, the dichotomy between human consciousness and human-like machine consciousness becomes passé, and questions of machine autonomy, already observable in nascent form within contemporary industry and [[technology]], begin to prevail.<ref name="Ridley Scott pp. 133-144"/><ref name="Machine Morals 2010"/>

[[Jürgen Schmidhuber]] argues that consciousness is the result of compression.<ref name="Schmidhuber2009">{{cite book|author=Jürgen Schmidhuber|year=2009|title=Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes|url=https://archive.org/details/arxiv-0812.4360|author-link=Jürgen Schmidhuber|bibcode=2008arXiv0812.4360S|arxiv=0812.4360}}</ref> As an agent sees representations of itself recurring in the environment, the compression of those representations can be called consciousness.
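The underlying idea of compression progress admits a toy illustration with a general-purpose compressor. The zlib-based proxy and the toy data below are assumptions made for illustration, not anything drawn from Schmidhuber's papers:

<syntaxhighlight lang="python">
import os
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a crude proxy for the length of a description.
    return len(zlib.compress(data, 9))

def compression_progress(history: bytes, observation: bytes) -> int:
    """Bytes saved by describing the observation jointly with the history.

    A large positive value means the observation is highly compressible in
    light of prior experience: a recurring regularity (in Schmidhuber's
    account, possibly the agent's own representation) has been recognized
    and folded into a shorter description.
    """
    separate = description_length(history) + description_length(observation)
    joint = description_length(history + observation)
    return separate - joint

history = b"observe-act-observe-act-" * 8  # toy sensory history with a recurring pattern
print(compression_progress(history, b"observe-act-observe-act-"))  # pattern recurs: large saving
print(compression_progress(history, os.urandom(24)))  # random noise: little saving beyond fixed overhead
</syntaxhighlight>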
[[File:John searle2.jpg|thumb|upright|John Searle in December 2005]]
In a lively exchange over what has come to be referred to as "the [[Chinese room]] argument", [[John Searle]] sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he agrees with advocates of "weak AI" that computer programs can be made to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers because it is essentially intentional, owing to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational in the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room containing one monolingual English speaker (Searle himself, in fact), a rulebook that pairs each Chinese-symbol input with the combination of Chinese symbols to be output, and boxes filled with Chinese symbols. Here the English speaker acts as the computer and the rulebook as the program. Searle argues that with such a machine he could process inputs into outputs perfectly, without understanding any Chinese and without any idea of what the questions and answers might mean. If the experiment were done in English, then, since Searle knows English, he could take questions and give answers without following any algorithm for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test in both languages, but he would be conscious of what he is doing only when speaking English. Another way of putting the argument is to say that a computer program can pass the Turing test for processing the syntax of a language, but that syntax alone cannot produce semantic meaning in the way strong AI advocates hoped.<ref name=Searle1990>{{cite journal|author=John R. Searle|title=Is the brain's mind a computer program?|journal=Scientific American|year=1990|volume=262|issue=1|pages=26–31|url=http://www.cs.princeton.edu/courses/archive/spr06/cos116/Is_The_Brains_Mind_A_Computer_Program.pdf|archive-url=https://ghostarchive.org/archive/20221009/http://www.cs.princeton.edu/courses/archive/spr06/cos116/Is_The_Brains_Mind_A_Computer_Program.pdf|archive-date=2022-10-09|url-status=live|doi=10.1038/scientificamerican0190-26|pmid=2294583|bibcode=1990SciAm.262a..26S|author-link=John R. Searle}}</ref><ref name=SearleSEP>{{cite book|title=The Chinese Room Argument|url=http://plato.stanford.edu/entries/chinese-room|publisher=Metaphysics Research Lab, Stanford University|year=2019|access-date=2012-02-20|archive-date=2012-01-12|archive-url=https://web.archive.org/web/20120112034000/http://plato.stanford.edu/entries/chinese-room/|url-status=live}}</ref>
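The purely syntactic character of the room can be caricatured in a few lines of code. The rulebook entries below are invented for illustration and are not part of Searle's argument; the point is that the lookup pairs input symbols with output symbols by pattern matching alone, encoding nothing about what the symbols mean:

<syntaxhighlight lang="python">
# Hypothetical rulebook: input symbols paired with output symbols.
# ("How are you?" -> "I am fine, thanks."; "What color is the sky?" -> "The sky is blue.")
RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",
    "天空是什么颜色?": "天空是蓝色的。",
}

def chinese_room(input_symbols: str) -> str:
    # Match the input shapes and copy out the paired shapes. The function
    # would behave identically if every symbol were an arbitrary token.
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好吗?"))  # fluent output; zero understanding inside the room
</syntaxhighlight>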
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated.<ref name=Searle1980>{{cite journal|author=John Searle|title=Minds, brains, and programs|journal=Behavioral and Brain Sciences|year=1980|volume=3|issue=3|pages=417–457|doi=10.1017/S0140525X00005756|display-authors=etal|citeseerx=10.1.1.83.5248|s2cid=55303721|author-link=John Searle}}</ref> Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test but the [[Robotics|robotic]] version,<ref>{{cite web|author1=Graham Oppy|author2=David Dowe|year=2011|title=The Turing test|url=http://plato.stanford.edu/archives/spr2011/entries/turing-test|publisher=Stanford Encyclopedia of Philosophy (Spring 2011 Edition)|access-date=2011-10-26|archive-date=2013-12-02|archive-url=https://web.archive.org/web/20131202073948/http://plato.stanford.edu/archives/spr2011/entries/turing-test/|url-status=live}}</ref> which requires [[Symbol grounding|grounding]] the robot's words in the robot's sensorimotor capacity to [[categorize]] and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on [[embodied cognition]] and [[situated cognition]].<ref>{{cite journal|author=Margaret Wilson|title=Six views of embodied cognition|journal=Psychonomic Bulletin & Review|volume=9|issue=4|year=2002|pages=625–636|doi=10.3758/BF03196322|pmid=12613670|doi-access=free}}</ref>

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments.<ref>{{cite journal|author=Victor Argonov|title=Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach|journal=Journal of Mind and Behavior|volume=35|year=2014|pages=51–70|url=http://philpapers.org/rec/ARGMAA-2|access-date=2016-12-06|archive-date=2016-10-20|archive-url=https://web.archive.org/web/20161020014221/http://philpapers.org/rec/ARGMAA-2|url-status=live}}</ref> He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (since such models may implicitly or explicitly contain knowledge about those creatures' consciousness). However, this test can only detect, not refute, the existence of consciousness: a positive result proves that a machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.