=== ''Perceptrons'' (1969) ===
{{Main|Perceptrons (book)}}
Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of [[neural network (machine learning)|neural network]] research to stagnate for many years before it was recognised that a [[feedforward neural network]] with two or more layers (also called a [[multilayer perceptron]]) had greater processing power than a perceptron with one layer (also called a [[Feedforward neural network#A threshold (e.g. activation function) added|single-layer perceptron]]). Single-layer perceptrons are only capable of learning [[linearly separable]] patterns.<ref name="Sejnowski">{{Cite book |last=Sejnowski |first=Terrence J. |author-link=Terry Sejnowski |url=https://books.google.com/books?id=9xZxDwAAQBAJ |title=The Deep Learning Revolution |date=2018 |publisher=MIT Press |isbn=978-0-262-03803-4 |language=en |page=47}}</ref> For a classification task with a step activation function, a single node divides the data points with a single line. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems.

In 1969, the famous book ''[[Perceptrons (book)|Perceptrons]]'' by [[Marvin Minsky]] and [[Seymour Papert]] showed that it was impossible for this class of network to learn an [[XOR]] function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multilayer perceptron network. However, this is not true, as both Minsky and Papert already knew that multilayer perceptrons were capable of producing an XOR function. (See the page on ''[[Perceptrons (book)]]'' for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.<ref name="Sejnowski"/>{{Verify source|date=October 2024|reason=Does the source support all of the preceding text and is "often incorrectly believed" true today or was it only true in the past?}} The text was reprinted in 1987 as ''Perceptrons: Expanded Edition'', in which some errors in the original text are identified and corrected.
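The separability argument can be made concrete with a short sketch (illustrative only, not drawn from the cited sources): the classic perceptron learning rule never converges on XOR, because no single line separates its classes, while a hand-wired two-layer threshold network computes XOR directly.

<syntaxhighlight lang="python">
def step(z):
    """Heaviside step activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def predict(w, b, x):
    """Single threshold unit: step(w . x + b)."""
    return step(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train_perceptron(data, epochs=100, lr=1.0):
    """Classic perceptron learning rule. It converges only when the
    classes are linearly separable; on XOR it cycles forever."""
    w, b = [0.0, 0.0], 0.0
    errors = 0
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            e = y - predict(w, b, x)
            if e != 0:
                errors += 1
                w = [wi + lr * e * xi for wi, xi in zip(w, x)]
                b += lr * e
    return w, b, errors

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Single layer: no straight line separates {(0,1), (1,0)} from
# {(0,0), (1,1)}, so some points stay misclassified in every epoch.
_, _, errors = train_perceptron(XOR)
print("misclassified in final epoch:", errors)  # always > 0

# Two layers with hand-set weights: hidden units compute OR and AND,
# and the output fires when OR is on but AND is off, which is XOR.
def xor_two_layer(x):
    h_or = step(x[0] + x[1] - 0.5)    # 1 unless both inputs are 0
    h_and = step(x[0] + x[1] - 1.5)   # 1 only when both inputs are 1
    return step(h_or - h_and - 0.5)   # "OR and not AND" = XOR

print([xor_two_layer(x) for x, _ in XOR])  # prints [0, 1, 1, 0]
</syntaxhighlight>

The hidden OR and AND units each draw one line through the input space; the output unit combines the two half-planes into the non-convex XOR region, which is exactly the combination step a single-layer perceptron cannot perform.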