=== ''Time'' op-ed ===
In a 2023 op-ed for [[Time (magazine)|''Time'' magazine]], Yudkowsky discussed the risk of artificial intelligence and advocated for international agreements to limit it, including a total halt on the development of AI.<ref>{{Cite news |last=Moss |first=Sebastian |date=2023-03-30 |title="Be willing to destroy a rogue data center by airstrike" - leading AI alignment researcher pens Time piece calling for ban on large GPU clusters |work=Data Center Dynamics |url=https://www.datacenterdynamics.com/en/news/be-willing-to-destroy-a-rogue-data-center-by-airstrike-leading-ai-alignment-researcher-pens-time-piece-calling-for-ban-on-large-gpu-clusters/ |access-date=2023-04-17 |archive-date=April 17, 2023 |archive-url=https://web.archive.org/web/20230417223624/https://www.datacenterdynamics.com/en/news/be-willing-to-destroy-a-rogue-data-center-by-airstrike-leading-ai-alignment-researcher-pens-time-piece-calling-for-ban-on-large-gpu-clusters/ |url-status=live }}</ref><ref>{{Cite news |last=Ferguson |first=Niall |author-link=Niall Ferguson |date=2023-04-09 |title=The Aliens Have Landed, and We Created Them |work=[[Bloomberg News|Bloomberg]] |url=https://www.bloomberg.com/opinion/articles/2023-04-09/artificial-intelligence-the-aliens-have-landed-and-we-created-them |access-date=2023-04-17 |archive-date=April 9, 2023 |archive-url=https://web.archive.org/web/20230409160604/https://www.bloomberg.com/opinion/articles/2023-04-09/artificial-intelligence-the-aliens-have-landed-and-we-created-them |url-status=live }}</ref> He suggested that participating countries should be willing to take military action, such as "destroy[ing] a rogue datacenter by airstrike", to enforce such a moratorium.<ref name=":1">{{Cite magazine |last=Hutson |first=Matthew |date=2023-05-16 |title=Can We Stop Runaway A.I.? |language=en-US |magazine=The New Yorker |url=https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity |access-date=2023-05-19 |issn=0028-792X |quote=Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. ... That may require quitting A.I. cold turkey before we feel it's time to stop, rather than getting closer and closer to the edge, tempting fate. But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for ''Time'', that we should "be willing to destroy a rogue datacenter by airstrike," even at the risk of sparking "a full nuclear exchange." |archive-date=May 19, 2023 |archive-url=https://web.archive.org/web/20230519014111/https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity |url-status=live }}</ref> The article helped introduce the debate about [[AI alignment]] to the mainstream, leading a reporter to ask President [[Joe Biden]] a question about AI safety at a press briefing.<ref name=":0" />