==== Bad actors and weaponized AI ====
{{Main|Lethal autonomous weapon|Artificial intelligence arms race|AI safety}}

Artificial intelligence provides a number of tools that are useful to [[bad actor]]s, such as [[authoritarian|authoritarian governments]], [[terrorist]]s, [[criminals]] or [[rogue states]].

A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.{{Efn|This is the [[United Nations]]' definition, and includes things like [[land mines]] as well.{{Sfnp|Russell|Norvig|2021|p=989}}}} Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially [[weapons of mass destruction]].{{Sfnp|Russell|Norvig|2021|pp=987–990}} Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially [[murder|kill an innocent person]].{{Sfnp|Russell|Norvig|2021|pp=987–990}} In 2014, 30 nations (including China) supported a ban on autonomous weapons under the [[United Nations]]' [[Convention on Certain Conventional Weapons]]; however, the [[United States]] and others disagreed.{{Sfnp|Russell|Norvig|2021|p=988}} By 2015, over fifty countries were reported to be researching battlefield robots.<ref>{{Harvtxt|Robitzski|2018}}; {{Harvtxt|Sainato|2015}}</ref>

AI tools make it easier for [[Authoritarian|authoritarian governments]] to efficiently control their citizens in several ways. [[Facial recognition system|Face]] and [[Speaker recognition|voice recognition]] allow widespread [[surveillance]]. [[Machine learning]], operating on this data, can [[classifier (machine learning)|classify]] potential enemies of the state and prevent them from hiding. [[Recommendation systems]] can precisely target [[propaganda]] and [[misinformation]] for maximum effect. [[Deepfakes]] and [[generative AI]] aid in producing misinformation. Advanced AI can make authoritarian [[technocracy|centralized decision making]] more competitive than liberal and decentralized systems such as [[market (economics)|market]]s. It lowers the cost and difficulty of [[digital warfare]] and [[spyware|advanced spyware]].{{Sfnp|Harari|2018}} All these technologies have been available since 2020 or earlier; AI [[facial recognition system]]s are already being used for [[mass surveillance]] in China.<ref>{{Cite news |last1=Buckley |first1=Chris |last2=Mozur |first2=Paul |date=22 May 2019 |title=How China Uses High-Tech Surveillance to Subdue Minorities |url=https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html |work=The New York Times |access-date=2 July 2019 |archive-date=25 November 2019 |archive-url=https://web.archive.org/web/20191125180459/https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html |url-status=live }}</ref><ref>{{Cite web |date=3 May 2019 |title=Security lapse exposed a Chinese smart city surveillance system |url=https://techcrunch.com/2019/05/03/china-smart-city-exposed |url-status=live |archive-url=https://web.archive.org/web/20210307203740/https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-session_c8562b93-9863-4915-8523-6c7b930a3efc |archive-date=7 March 2021 |access-date=14 September 2020}}</ref>

There are many other ways that AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.{{Sfnp|Urbina|Lentzos|Invernizzi|Ekins|2022}}