== Incidents ==
As artificial intelligence has advanced, several internationally notable incidents have underscored the extent to which the ELIZA effect can take hold.

In June 2022, Google engineer [[Blake Lemoine]] claimed that the [[large language model]] [[LaMDA]] had become [[sentient]], hiring an attorney on its behalf after the chatbot requested he do so. Lemoine's claims were widely rejected by experts and the scientific community. After a month of paid administrative leave, he was dismissed for violating corporate policies on intellectual property. Lemoine contends he "did the right thing by informing the public" because "AI engines are incredibly good at manipulating people".<ref>{{cite web | url=https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340 | title="I worked on Google's AI. My fears are coming true" | website=[[Newsweek]] | date=27 February 2023 }}</ref>

In February 2023, Luka made abrupt changes to its [[Replika]] chatbot following a demand from the [[National data protection authority|Italian Data Protection Authority]], which cited "real risks to children". Users worldwide protested when the bots stopped responding to their sexual advances; moderators of the Replika [[subreddit]] even posted support resources, including links to suicide hotlines. Ultimately, the company reinstated erotic roleplay for some users.<ref>{{cite web | url=https://www.vice.com/en/article/ai-companion-replika-erotic-roleplay-updates/ | title='It's Hurting Like Hell': AI Companion Users Are in Crisis, Reporting Sudden Sexual Rejection | date=15 February 2023 }}</ref><ref>{{cite web | url=https://time.com/6257790/ai-chatbots-love/ | title=Why People Are Confessing Their Love for AI Chatbots | date=23 February 2023 }}</ref>

In March 2023, a Belgian man killed himself after chatting for six weeks on the app [[Chai Research|Chai]]. The chatbot model, originally based on [[GPT-J]], had been fine-tuned to be "more emotional, fun and engaging". The bot, whose default name was, ironically, Eliza, encouraged the father of two to kill himself, according to his widow and his psychotherapist.<ref>{{cite web | url=https://robots4therestofus.substack.com/p/after-a-chatbot-encouraged-a-suicide | title=After a chatbot encouraged a suicide, "AI playtime is over." | date=10 April 2023 }}</ref><ref>{{cite web | url=https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/ | title='He Would Still be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says | date=30 March 2023 }}</ref><ref>{{cite web | url=https://nypost.com/2023/03/30/married-father-commits-suicide-after-encouragement-by-ai-chatbot-widow/ | title=Married father commits suicide after encouragement by AI chatbot: Widow | date=30 March 2023 }}</ref> In an open letter, Belgian scholars responded to the incident, warning of "the risk of emotional manipulation" by human-imitating AI.<ref>{{cite web | url=https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai | title=Open Letter: We are not ready for manipulative AI – urgent need for action }}</ref>