== Limitations of chatbots ==
{{expand section|date=December 2024}}
Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.<ref>{{citation |last1=Grudin |first1=Jonathan |title=Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems – CHI '19 |pages=209–219 |year=2019 |chapter=Chatbots, Humbots, and the Quest for Artificial General Intelligence |publisher=ACM CHI 2020 |doi=10.1145/3290605.3300439 |isbn=978-1-4503-5970-2 |s2cid=140274744 |last2=Jacques |first2=Richard |author-link=Jonathan Grudin}}</ref> [[Large language model]]s are more versatile, but require a large amount of conversational data to train. These models generate new responses word by word based on user input and are usually trained on a large dataset of natural-language phrases.<ref name="Caldarini-20223" />

They sometimes provide plausible-sounding but incorrect or nonsensical answers, and can make up names, dates, historical events, and even answers to simple math problems.<ref>{{Cite journal |last=Stover |first=Dawn |date=2023-09-03 |title=Will AI make us crazy? |url=https://www.tandfonline.com/doi/full/10.1080/00963402.2023.2245247 |journal=Bulletin of the Atomic Scientists |language=en |volume=79 |issue=5 |pages=299–303 |doi=10.1080/00963402.2023.2245247 |bibcode=2023BuAtS..79e.299S |issn=0096-3402}}</ref> When large language models produce coherent-sounding but inaccurate or fabricated content, this is referred to as "[[Hallucination (artificial intelligence)|hallucination]]". When humans use and apply chatbot content contaminated with hallucinations, the result has been termed "botshit".<ref>{{Cite journal |last1=Hannigan |first1=Timothy R. |last2=McCarthy |first2=Ian P. |last3=Spicer |first3=André |date=2024-03-20 |title=Beware of botshit: How to manage the epistemic risks of generative chatbots |url=https://www.sciencedirect.com/science/article/pii/S0007681324000272 |journal=Business Horizons |volume=67 |issue=5 |pages=471–486 |doi=10.1016/j.bushor.2024.03.001 |issn=0007-6813}}</ref> Given the increasing adoption of chatbots for generating content, there are concerns that this technology will significantly reduce the cost for humans to generate [[misinformation]].<ref>{{Cite news |date=2023-01-06 |title=Transcript: Ezra Klein Interviews Gary Marcus |url=https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html |access-date=2024-04-21 |work=The New York Times |language=en-US |issn=0362-4331}}</ref>
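The word-by-word generation described above can be illustrated with a minimal sketch; it is not drawn from any particular chatbot, and the table <code>NEXT_WORD_PROBS</code> and function <code>generate</code> are hypothetical stand-ins for a trained model. The point is that each word is sampled from a probability distribution over plausible continuations, so a fluent-sounding but false sentence can be produced.

<syntaxhighlight lang="python">
import random

# Hypothetical next-word distributions standing in for a trained language model.
NEXT_WORD_PROBS = {
    "the": {"capital": 0.6, "moon": 0.4},
    "capital": {"of": 1.0},
    "of": {"France": 0.7, "Mars": 0.3},  # implausible continuations still carry probability
    "France": {"is": 1.0},
    "Mars": {"is": 1.0},
    "is": {"Paris": 0.8, "Lyon": 0.2},
}

def generate(prompt_word, max_words=6):
    """Sample one word at a time; each choice depends only on the preceding word."""
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
# A possible output is "the capital of Mars is Paris": fluent-sounding but false,
# illustrating how plausibility-driven generation can yield hallucinations.
</syntaxhighlight>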