== Definition ==
[[Image:Svm separating hyperplanes.png|thumb|right|In this case, the solid and empty dots can be correctly classified by any number of linear classifiers. H1 (blue) classifies them correctly, as does H2 (red). H2 could be considered "better" in the sense that it is also furthest from both groups. H3 (green) fails to correctly classify the dots.]]
If the input feature vector to the classifier is a [[real number|real]] vector <math>\vec x</math>, then the output score is
:<math>y = f(\vec{w}\cdot\vec{x}) = f\left(\sum_j w_j x_j\right),</math>
where <math>\vec w</math> is a real vector of weights and ''f'' is a function that converts the [[dot product]] of the two vectors into the desired output. (In other words, <math>\vec{w}</math> is a [[one-form]] or [[linear functional]] mapping <math>\vec x</math> onto '''R'''.) The weight vector <math>\vec w</math> is learned from a set of labeled training samples. Often ''f'' is a '''threshold function''', which maps all values of <math>\vec{w}\cdot\vec{x}</math> above a certain threshold to the first class and all other values to the second class; e.g.,
:<math>f(\mathbf{x}) = \begin{cases}1 & \text{if }\ \mathbf{w}^T \mathbf{x} > \theta,\\0 & \text{otherwise.}\end{cases}</math>
The superscript T indicates the transpose and <math>\theta</math> is a scalar threshold. A more complex ''f'' might give the probability that an item belongs to a certain class.

For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a [[High-dimensional space|high-dimensional]] input space with a [[hyperplane]]: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".

A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when <math>\vec x</math> is sparse. Also, linear classifiers often work very well when the number of dimensions in <math>\vec x</math> is large, as in [[document classification]], where each element in <math>\vec x</math> is typically the number of occurrences of a word in a document (see [[document-term matrix]]). In such cases, the classifier should be well-[[regularization (machine learning)|regularized]].
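The threshold rule above amounts to a dot product followed by a comparison, which can be made concrete in a few lines of code. The following is a minimal sketch in Python with NumPy; the weight vector, feature vector, and threshold are hypothetical values chosen only for illustration and do not come from any trained model.

<syntaxhighlight lang="python">
import numpy as np

def linear_score(w, x):
    """Raw classifier score: the dot product w . x = sum_j w_j * x_j."""
    return np.dot(w, x)

def threshold_classify(w, x, theta=0.0):
    """Threshold function f: class 1 if the score exceeds theta, else class 0."""
    return 1 if linear_score(w, x) > theta else 0

# Hypothetical weights and feature vector, for illustration only.
w = np.array([0.4, -0.2, 0.7])
x = np.array([1.0, 3.0, 0.5])

print(linear_score(w, x))        # 0.4*1.0 + (-0.2)*3.0 + 0.7*0.5 = 0.15
print(threshold_classify(w, x))  # 1, since 0.15 > 0.0
</syntaxhighlight>

In practice <code>w</code> would be learned from labeled training samples rather than set by hand, and for sparse, high-dimensional inputs such as document vectors the dot product would be computed over the nonzero entries only (e.g., using a sparse-matrix representation such as <code>scipy.sparse</code>).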