==== Lack of transparency ====
{{See also|Explainable AI|Algorithmic transparency|Right to explanation}}

Many AI systems are so complex that their designers cannot explain how they reach their decisions.{{Sfnp|Sample|2017}} This is especially true of [[deep neural networks]], in which there are a large number of non-[[linear]] relationships between inputs and outputs, although some popular explainability techniques exist.<ref>{{Cite web |date=16 June 2023 |title=Black Box AI |url=https://www.techopedia.com/definition/34940/black-box-ai |access-date=5 October 2024 |archive-date=15 June 2024 |archive-url=https://web.archive.org/web/20240615100800/https://www.techopedia.com/definition/34940/black-box-ai |url-status=live }}</ref>

It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests but nevertheless learned something different from what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to have a strong tendency to classify images that included a [[ruler]] as "cancerous", because pictures of malignancies typically include a ruler to show the scale.{{Sfnp|Christian|2020|p=110}} Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since patients with asthma usually received much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and a low risk of dying from pneumonia was real, but misleading.{{Sfnp|Christian|2020|pp=88–91}}

People who have been harmed by an algorithm's decision have a right to an explanation.<ref>{{Harvtxt|Christian|2020|p=83}}; {{Harvtxt|Russell|Norvig|2021|p=997}}</ref> Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's [[General Data Protection Regulation]] in 2016 included an explicit statement that this right exists.{{Efn|When the law was passed in 2018, it still contained a form of this provision.}} Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that the harm is nevertheless real: if the problem has no solution, the tools should not be used.{{Sfnp|Christian|2020|p=91}}

[[DARPA]] established the [[Explainable Artificial Intelligence|XAI]] ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.{{Sfnp|Christian|2020|p=83}}

Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the model's output.{{Sfnp|Verma|2021}} LIME can locally approximate a model's outputs with a simpler, interpretable model.{{Sfnp|Rothman|2020}} [[Multitask learning]] provides a large number of outputs in addition to the target classification.
These other outputs can help developers deduce what the network has learned.{{Sfnp|Christian|2020|pp=105–108}} [[Deconvolution]], [[DeepDream]] and other [[generative AI|generative]] methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.{{Sfnp|Christian|2020|pp=108–112}} For [[generative pre-trained transformer]]s, [[Anthropic]] developed a technique based on [[dictionary learning]] that associates patterns of neuron activations with human-understandable concepts.<ref>{{Cite web |last=Ropek |first=Lucas |date=2024-05-21 |title=New Anthropic Research Sheds Light on AI's 'Black Box' |url=https://gizmodo.com/new-anthropic-research-sheds-light-on-ais-black-box-1851491333 |access-date=2024-05-23 |website=Gizmodo |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005170309/https://gizmodo.com/new-anthropic-research-sheds-light-on-ais-black-box-1851491333 |url-status=live }}</ref>
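The following is a minimal sketch of how a SHAP-style feature attribution might be computed with the open-source <code>shap</code> Python library; the choice of model, dataset, and number of explained samples is an illustrative assumption and is not taken from the sources cited above.

<syntaxhighlight lang="python">
# Minimal illustrative sketch of SHAP-style feature attribution.
# The model, dataset and number of explained samples are assumptions
# chosen for brevity, not taken from the sources cited in this section.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple "black box" model on a standard benchmark dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Estimate each feature's contribution to individual predictions,
# using the training data as the background distribution.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# Summarise which features influence the model most, on average,
# across the explained predictions.
shap.plots.bar(shap_values)
</syntaxhighlight>

The resulting chart shows the mean absolute contribution of each input feature across the explained predictions, giving one view into the behaviour of an otherwise opaque model.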