===Capabilities forecasting===
In the [[intelligence explosion]] scenario hypothesized by [[I. J. Good]], recursively self-improving AI systems quickly transition from subhuman general intelligence to [[superintelligence]]. [[Nick Bostrom]]'s 2014 book ''[[Superintelligence: Paths, Dangers, Strategies]]'' sketches out Good's argument in detail, while citing Yudkowsky on the risk that [[Anthropomorphism|anthropomorphizing]] advanced AI systems will cause people to misunderstand the nature of an intelligence explosion. "AI might make an ''apparently'' sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general."<ref name="aima"/><ref name="gcr"/><ref>{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|title-link=Superintelligence: Paths, Dangers, Strategies|publisher=Oxford University Press }}</ref> In ''Artificial Intelligence: A Modern Approach'', Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from [[computational complexity theory]]; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.<ref name="aima"/>