==== Existential risk ====
{{Main|Existential risk from artificial intelligence}}

It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist [[Stephen Hawking]] stated, "[[Global catastrophic risk|spell the end of the human race]]".{{Sfnp|Cellan-Jones|2014}} This scenario is common in science fiction, in which a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.{{Efn|Sometimes called a "[[robopocalypse]]"{{Sfn|Russell|Norvig|2021|p=1001}}}} These sci-fi scenarios are misleading in several ways.

First, AI does not require human-like [[sentience]] to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher [[Nick Bostrom]] argued that if one gives ''almost any'' goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a [[Instrumental convergence#Paperclip maximizer|paperclip factory manager]]).{{Sfnp|Bostrom|2014}} [[Stuart J. Russell|Stuart Russell]] gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."{{Sfnp|Russell|2019}} In order to be safe for humanity, a [[superintelligence]] would have to be genuinely [[AI alignment|aligned]] with humanity's morality and values so that it is "fundamentally on our side".<ref>{{Harvtxt|Bostrom|2014}}; {{Harvtxt|Müller|Bostrom|2014}}; {{Harvtxt|Bostrom|2015}}.</ref>

Second, [[Yuval Noah Harari]] argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like [[ideologies]], [[law]], [[government]], [[money]] and the [[economy]] are built on [[language]]; they exist because there are stories that billions of people believe. The current prevalence of [[misinformation]] suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.{{Sfnp|Harari|2023}}

<!-- Warnings of existential risk -->
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.{{Sfnp|Müller|Bostrom|2014}} Personalities such as [[Stephen Hawking]], [[Bill Gates]], and [[Elon Musk]],<ref>Leaders' concerns about the existential risks of AI around 2015: {{Harvtxt|Rawlinson|2015}}, {{Harvtxt|Holley|2015}}, {{Harvtxt|Gibbs|2014}}, {{Harvtxt|Sainato|2015}}</ref> as well as AI pioneers such as [[Yoshua Bengio]], [[Stuart J. Russell|Stuart Russell]], [[Demis Hassabis]], and [[Sam Altman]], have expressed concerns about existential risk from AI.
In May 2023, [[Geoffrey Hinton]] announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".<ref>{{Cite news |date=25 March 2023 |title="Godfather of artificial intelligence" talks impact and potential of new AI |url=https://www.cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai |url-status=live |archive-url=https://web.archive.org/web/20230328225221/https://www.cbsnews.com/video/godfather-of-artificial-intelligence-talks-impact-and-potential-of-new-ai |archive-date=28 March 2023 |access-date=2023-03-28 |work=CBS News}}</ref> He notably mentioned risks of an [[AI takeover]],<ref>{{Cite news |last=Pittis |first=Don |date=May 4, 2023 |title=Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover |url=https://www.cbc.ca/news/business/ai-doom-column-don-pittis-1.6829302 |work=CBC |access-date=5 October 2024 |archive-date=7 July 2024 |archive-url=https://web.archive.org/web/20240707032135/https://www.cbc.ca/news/business/ai-doom-column-don-pittis-1.6829302 |url-status=live }}</ref> and stressed that, in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in the use of AI.<ref>{{Cite web |date=2024-06-14 |title='50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says |url=https://www.bnnbloomberg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394 |access-date=2024-07-06 |website=Bloomberg BNN |archive-date=14 June 2024 |archive-url=https://web.archive.org/web/20240614144506/https://www.bnnbloomberg.ca/50-50-chance-that-ai-outsmarts-humanity-geoffrey-hinton-says-1.2085394 |url-status=live }}</ref> In 2023, many leading AI experts endorsed [[Statement on AI risk of extinction|the joint statement]] that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".{{Sfnp|Valance|2023}}

<!-- Arguments against existential risk -->
Some other researchers were more optimistic.
AI pioneer [[Jürgen Schmidhuber]] did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."<ref>{{Cite news |last=Taylor |first=Josh |date=7 May 2023 |title=Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says |url=https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says |access-date=26 May 2023 |work=The Guardian |archive-date=23 October 2023 |archive-url=https://web.archive.org/web/20231023061228/https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says |url-status=live }}</ref> While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."<ref>{{Cite news |last=Colton |first=Emma |date=7 May 2023 |title='Father of AI' says tech fears misplaced: 'You cannot stop it' |url=https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop |access-date=26 May 2023 |work=Fox News |archive-date=26 May 2023 |archive-url=https://web.archive.org/web/20230526162642/https://www.foxnews.com/tech/father-ai-jurgen-schmidhuber-says-tech-fears-misplaced-cannot-stop |url-status=live }}</ref><ref>{{Cite news |last=Jones |first=Hessie |date=23 May 2023 |title=Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia |url=https://www.forbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-lead-to-dystopia |access-date=26 May 2023 |work=Forbes |archive-date=26 May 2023 |archive-url=https://web.archive.org/web/20230526163102/https://www.forbes.com/sites/hessiejones/2023/05/23/juergen-schmidhuber-renowned-father-of-modern-ai-says-his-lifes-work-wont-lead-to-dystopia/ |url-status=live }}</ref> [[Andrew Ng]] also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."<ref>{{Cite news |last=McMorrow |first=Ryan |date=19 Dec 2023 |title=Andrew Ng: 'Do we think the world is better off with more or less intelligence?' |url=https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3 |access-date=30 Dec 2023 |work=Financial Times |archive-date=25 January 2024 |archive-url=https://web.archive.org/web/20240125014121/https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3 |url-status=live }}</ref> [[Yann LeCun]] "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."<ref>{{Cite magazine |last=Levy |first=Steven |date=22 Dec 2023 |title=How Not to Be Stupid About AI, With Yann LeCun |url=https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview |access-date=30 Dec 2023 |magazine=Wired |archive-date=28 December 2023 |archive-url=https://web.archive.org/web/20231228152443/https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/ |url-status=live }}</ref> In the early 2010s, experts argued that the risks were too distant in the future to warrant research, or that humans would be valuable from the perspective of a superintelligent machine.<ref>Arguments that AI is not an imminent risk: {{Harvtxt|Brooks|2014}}, {{Harvtxt|Geist|2015}}, {{Harvtxt|Madrigal|2015}}, {{Harvtxt|Lee|2014}}</ref> However, after 2016, the study of current and future risks and possible solutions became a serious area of research.{{Sfnp|Christian|2020|pp=67, 73}}