====Artificial intelligence====
{{Further |case-based reasoning}}
{{Further |structure-mapping theory}}
A computer algorithm has achieved human-level performance on multiple-choice analogy questions from the [[SAT]] test. The algorithm measures the similarity of relations between pairs of words (e.g., the similarity between the pairs HAND:PALM and FOOT:SOLE) by statistically analysing a large collection of text. It answers SAT questions by selecting the choice with the highest relational similarity (a schematic sketch of this selection step is given below).<ref>Turney 2006</ref>

Analogical reasoning in the human mind is free of the false inferences that plague conventional [[artificial intelligence]] models (a property called ''systematicity''). Steven Phillips and [[William H. Wilson]]<ref>{{Cite journal | last1 = Phillips | first1 = Steven | last2 = Wilson | first2 = William H. | date = July 2010 | title = Categorial Compositionality: A Category Theory Explanation for the Systematicity of Human Cognition | journal = PLOS Computational Biology | volume = 6 | issue = 7 | pages = e1000858| doi = 10.1371/journal.pcbi.1000858 | pmid = 20661306 | pmc = 2908697 | bibcode =2010PLSCB...6E0858P | doi-access = free }}</ref><ref>{{Cite journal | last1 = Phillips | first1 = Steven | last2 = Wilson | first2 = William H. | date = August 2011 | title = Categorial Compositionality II: Universal Constructions and a General Theory of (Quasi-)Systematicity in Human Cognition | journal = PLOS Computational Biology | volume = 7 | issue = 8 | pages = e1002102| doi = 10.1371/journal.pcbi.1002102 | pmid=21857816 | pmc=3154512| bibcode =2011PLSCB...7E2102P | doi-access = free }}</ref> use [[category theory]] to demonstrate mathematically how such reasoning can arise naturally from relationships between the internal arrows that preserve the internal structure of each category, rather than from mere relationships between the objects (called "representational states"). Thus the mind, and more intelligent AI systems, may use analogies between domains whose internal structures [[natural transformation|transform naturally]] and reject those that do not.

[[Keith Holyoak]] and [[Paul Thagard]] (1997) developed their multiconstraint theory within structure-mapping theory. They argue that the "[[coherence theory of truth|coherence]]" of an analogy depends on structural consistency, [[semantic similarity]] and purpose. Structural consistency is highest when the analogy is an [[isomorphism]], although lower levels are admissible. Similarity demands that the mapping connect similar elements and relations of source and target, at any level of abstraction; it is highest when the relations are identical and the connected elements share many identical attributes. An analogy achieves its purpose if it helps solve the problem at hand. The multiconstraint theory faces some difficulties when there are multiple sources, but these can be overcome.<ref name="Shelley"/> Hummel and Holyoak (2005) recast the multiconstraint theory within a [[Artificial neural network|neural network]] architecture. A problem for the multiconstraint theory arises from its concept of similarity, which, in this respect, is not obviously different from analogy itself. Computer applications demand some ''identical'' attributes or relations at some level of abstraction.
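A minimal sketch of the selection step in Turney's approach, assuming a toy, hand-built stand-in for the corpus-derived relational-similarity score (the feature vectors and function names below are illustrative assumptions, not the published method):

<syntaxhighlight lang="python">
from math import sqrt

# Toy relation vectors for word pairs. In the published method these features
# are estimated statistically from a large text corpus; the numbers here are
# invented purely for illustration.
RELATION_VECTORS = {
    ("hand", "palm"): [0.9, 0.1, 0.2],
    ("foot", "sole"): [0.8, 0.2, 0.1],
    ("eye", "lash"):  [0.3, 0.7, 0.4],
    ("arm", "leg"):   [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def relational_similarity(pair_a, pair_b):
    """Similarity of the relations expressed by two word pairs."""
    return cosine(RELATION_VECTORS[pair_a], RELATION_VECTORS[pair_b])

def answer_sat_analogy(stem, choices):
    """Select the choice whose relation is most similar to the stem's relation."""
    return max(choices, key=lambda choice: relational_similarity(stem, choice))

# HAND : PALM is to ... ?
print(answer_sat_analogy(("hand", "palm"),
                         [("foot", "sole"), ("eye", "lash"), ("arm", "leg")]))
# -> ('foot', 'sole')
</syntaxhighlight>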
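The phrase "transform naturally" above has a precise meaning in category theory: a natural transformation <math>\eta</math> between functors <math>F, G : \mathcal{C} \to \mathcal{D}</math> assigns to every object <math>X</math> a morphism <math>\eta_X : F(X) \to G(X)</math> such that, for every arrow <math>f : X \to Y</math>,

: <math>\eta_Y \circ F(f) = G(f) \circ \eta_X ,</math>

so the correspondence between the two domains commutes with their internal arrows rather than merely pairing up objects.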
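A schematic sketch of how the three constraints of the multiconstraint theory could be combined into a single coherence score; the weights, field names and scoring scale below are assumptions made for illustration and are not part of Holyoak and Thagard's actual model:

<syntaxhighlight lang="python">
def coherence(candidate, weights=(0.5, 0.3, 0.2)):
    """Toy combination of the three multiconstraint factors.

    `candidate` is assumed to carry three scores in [0, 1]; the names and
    weights are illustrative, not taken from the published theory.
    """
    w_structure, w_semantics, w_purpose = weights
    return (w_structure * candidate["structural_consistency"]   # closeness to an isomorphism
            + w_semantics * candidate["semantic_similarity"]    # similar elements and relations
            + w_purpose * candidate["pragmatic_relevance"])      # helps solve the problem at hand

# Example: choosing between two candidate analogies for the same target problem.
candidate_a = {"structural_consistency": 0.9, "semantic_similarity": 0.4, "pragmatic_relevance": 0.8}
candidate_b = {"structural_consistency": 0.6, "semantic_similarity": 0.9, "pragmatic_relevance": 0.5}
best = max([candidate_a, candidate_b], key=coherence)
</syntaxhighlight>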
Hummel and Holyoak's model was later extended (Doumas, Hummel, and Sandhofer, 2008) to learn relations from unstructured examples, providing the only current account of how symbolic representations can be learned from examples.<ref>Doumas, Hummel, and Sandhofer, 2008</ref>

[[Mark Keane (cognitive scientist)|Mark Keane]] and Brayshaw (1988) developed their ''Incremental Analogy Machine'' (IAM) to include working-memory constraints as well as structural, semantic and pragmatic constraints, so that a subset of the base analogue is selected and the mapping from base to target proceeds serially (a schematic sketch is given at the end of this section).<ref>Keane, M.T. and Brayshaw, M. (1988). The Incremental Analogical Machine: a computational model of analogy. In [[Derek H. Sleeman|D. H. Sleeman]] (Ed.), European working session on learning (pp. 53–62). London: Pitman.</ref><ref>{{Cite journal | last1 = Keane | first1 = M.T. | last2 = Ledgeway | first2 = T. | last3 = Duff | first3 = S. | year = 1994 | title = Constraints on analogical mapping: a comparison of three models | url = http://www.tara.tcd.ie/bitstream/2262/12939/1/TCD-CS-93-24.pdf | journal = Cognitive Science | volume = 18 | issue = 3 | pages = 387–438 | doi = 10.1016/0364-0213(94)90015-9 | doi-access = free }}</ref> [[Empirical evidence]] shows that humans are better at using and creating analogies when the information is presented in an order in which an item and its analogue appear together.<ref>{{Cite journal | last1 = Keane | first1 = M.T. | year = 1997 | title = What makes an analogy difficult? The effects of order and causal structure in analogical mapping | journal = Journal of Experimental Psychology: Learning, Memory, and Cognition | volume = 23 | issue = 4 | pages = 946–967 | doi = 10.1037/0278-7393.23.4.946 | pmid = 9231438 }}</ref>

Chalmers and his colleagues<ref>See Chalmers et al. 1991</ref> challenged the shared-structure theory, and especially its applications in computer science. They argue that there is no clear line between [[perception]], including high-level perception, and analogical thinking: analogy occurs not only after, but also before and at the same time as high-level perception. In high-level perception, humans make [[Knowledge representation|representations]] by selecting relevant information from low-level [[stimulus (physiology)|stimuli]]. Perception is necessary for analogy, but analogy is also necessary for high-level perception. Chalmers et al. conclude that analogy actually is high-level perception. Forbus et al. (1998) claim that this is only a metaphor.<ref>Forbus et al., 1998</ref> It has been argued (Morrison and Dietrich, 1995) that Hofstadter's and Gentner's groups do not defend opposite views, but instead deal with different aspects of analogy.<ref>Morrison and Dietrich, 1995</ref>
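A rough sketch of the incremental, serial mapping strategy attributed to IAM above; the control flow, the <code>compatible</code> test and all names below are illustrative assumptions rather than the published program:

<syntaxhighlight lang="python">
def incremental_map(base_items, target_items, compatible, seed_size=3):
    """Map base to target a few items at a time.

    `compatible(b, t, mapping)` is assumed to encapsulate the structural,
    semantic and pragmatic constraints on adding the pair (b, t) to the
    mapping built so far.
    """
    mapping = {}
    working_set = list(base_items[:seed_size])    # working-memory-sized subset of the base
    remaining = list(base_items[seed_size:])
    while working_set:
        b = working_set.pop(0)
        for t in target_items:
            if t not in mapping.values() and compatible(b, t, mapping):
                mapping[b] = t                    # extend the mapping in series
                break
        if remaining:                             # bring in the next base item
            working_set.append(remaining.pop(0))
    return mapping

# Trivial usage with a permissive compatibility test, purely for illustration.
base = ["sun", "planet", "attracts"]
target = ["nucleus", "electron", "attracts"]
print(incremental_map(base, target, compatible=lambda b, t, m: True))
# -> {'sun': 'nucleus', 'planet': 'electron', 'attracts': 'attracts'}
</syntaxhighlight>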