==Introduction==
Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of [[feature vector|feature]] values, where the class labels are drawn from some finite set. There is not a single [[algorithm]] for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is [[Independence (probability theory)|independent]] of the value of any other feature, given the class variable.

For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible [[Correlation and dependence|correlations]] between the color, roundness, and diameter features.

In many practical applications, parameter estimation for naive Bayes models uses the method of [[maximum likelihood]]; in other words, one can work with the naive Bayes model without accepting [[Bayesian probability]] or using any Bayesian methods.

Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible [[efficacy]] of naive Bayes classifiers.<ref>{{cite conference | first = Harry | last = Zhang | title = The Optimality of Naive Bayes | conference = FLAIRS2004 conference | url = http://www.cs.unb.ca/profs/hzhang/publications/FLAIRS04ZhangH.pdf }}</ref> Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as [[boosted trees]] or [[random forests]].<ref>{{cite conference | last1 = Caruana | first1 = R. | last2 = Niculescu-Mizil | first2 = A. | title = An empirical comparison of supervised learning algorithms | conference = Proc. 23rd International Conference on Machine Learning | year = 2006 | citeseerx = 10.1.1.122.5901 }}</ref> An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters necessary for classification.<ref>{{cite web |title=Why does Naive Bayes work better when the number of features >> sample size compared to more sophisticated ML algorithms? |url=https://stats.stackexchange.com/q/379383 |website=Cross Validated Stack Exchange |access-date=24 January 2023}}</ref>
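The independence assumption and the fruit example above can be sketched as a few lines of Python. All the priors and per-feature likelihoods below are made-up illustrative numbers, not estimates from real data; the point is only the decision rule, which scores each class by multiplying its prior with the per-feature likelihoods as if the features were conditionally independent.

```python
from math import prod

# Hypothetical priors P(class) and likelihoods P(feature | class).
# These values are assumptions chosen for illustration only.
priors = {"apple": 0.5, "orange": 0.5}
likelihoods = {
    "apple":  {"red": 0.8, "round": 0.9, "about_10cm": 0.7},
    "orange": {"red": 0.1, "round": 0.9, "about_10cm": 0.6},
}

def naive_bayes_score(cls, features):
    """Unnormalised posterior: P(class) * prod_i P(feature_i | class).

    The product over features encodes the 'naive' conditional-independence
    assumption: correlations between color, roundness, and size are ignored.
    """
    return priors[cls] * prod(likelihoods[cls][f] for f in features)

def classify(features):
    """Return the class with the highest naive Bayes score."""
    return max(priors, key=lambda c: naive_bayes_score(c, features))

observed = ["red", "round", "about_10cm"]
print(classify(observed))  # prints "apple"
```

The normalising constant P(features) is the same for every class, so it can be dropped when only the argmax is needed, which is why the scores above are left unnormalised.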