=== Estimation of parameters ===
{{see also|Beta distribution#Bayesian inference}}

When {{math|''n''}} is known, the parameter {{math|''p''}} can be estimated using the proportion of successes:

: <math> \widehat{p} = \frac{x}{n}.</math>

This estimator is found using the [[maximum likelihood estimator]] and also the [[method of moments (statistics)|method of moments]]. It is [[Bias of an estimator|unbiased]] and uniformly has [[Minimum-variance unbiased estimator|minimum variance]], as proven using the [[Lehmann–Scheffé theorem]], since it is based on a [[minimal sufficient]] and [[Completeness (statistics)|complete]] statistic (i.e., {{math|''x''}}). It is also [[Consistent estimator|consistent]] both in probability and in [[Mean squared error|MSE]]. This statistic is [[Asymptotic distribution|asymptotically]] [[normal distribution|normal]] by the [[central limit theorem]], because it is the same as taking the [[arithmetic mean|mean]] over Bernoulli samples. It has a variance of <math>\operatorname{var}(\widehat{p}) = \frac{p(1-p)}{n}</math>, a property which is used in various ways, such as in [[Binomial_proportion_confidence_interval#Wald_interval|Wald's confidence intervals]].

A closed-form [[Bayes estimator]] for {{math|''p''}} also exists when using the [[Beta distribution]] as a [[Conjugate prior|conjugate]] [[prior distribution]]. When using a general <math>\operatorname{Beta}(\alpha, \beta)</math> prior, the [[Bayes estimator#Posterior mean|posterior mean]] estimator is:

: <math> \widehat{p}_b = \frac{x+\alpha}{n+\alpha+\beta}.</math>

The Bayes estimator is [[Asymptotic efficiency (Bayes)|asymptotically efficient]], and as the sample size approaches infinity ({{math|''n'' → ∞}}) it approaches the [[Maximum likelihood estimation|MLE]] solution.<ref>{{Cite journal |last=Wilcox |first=Rand R. |date=1979 |title=Estimating the Parameters of the Beta-Binomial Distribution |url=http://journals.sagepub.com/doi/10.1177/001316447903900302 |journal=Educational and Psychological Measurement |language=en |volume=39 |issue=3 |pages=527–535 |doi=10.1177/001316447903900302 |s2cid=121331083 |issn=0013-1644}}</ref> The Bayes estimator is [[Bias of an estimator|biased]] (how much depends on the prior), [[Bayes estimator#Admissibility|admissible]], and [[Consistent estimator|consistent]] in probability. The Bayes estimator with a Beta prior can also be used with [[Thompson sampling]].

For the special case of using the [[standard uniform distribution]] as a [[non-informative prior]], <math>\operatorname{Beta}(\alpha=1, \beta=1) = U(0,1)</math>, the posterior mean estimator becomes:

:<math> \widehat{p}_b = \frac{x+1}{n+2}.</math>

(Using the [[Bayes estimator#Posterior mode|posterior mode]] instead leads back to the standard estimator.) This method is called the [[rule of succession]], which was introduced in the 18th century by [[Pierre-Simon Laplace]].

When relying on the [[Jeffreys prior]], the prior is <math>\operatorname{Beta}(\alpha=\frac{1}{2}, \beta=\frac{1}{2})</math>,<ref>Marko Lalovic (https://stats.stackexchange.com/users/105848/marko-lalovic), Jeffreys prior for binomial likelihood, URL (version: 2019-03-04): https://stats.stackexchange.com/q/275608</ref> which leads to the estimator:

: <math> \widehat{p}_{Jeffreys} = \frac{x+\frac{1}{2}}{n+1}.</math>
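As a minimal numerical sketch (in Python; the helper name <code>estimate_p</code> is illustrative, not a standard library function), the point estimators above can be compared directly:

<syntaxhighlight lang="python">
def estimate_p(x, n, alpha=1.0, beta=1.0):
    """Point estimates of p from x successes in n trials."""
    p_mle = x / n                               # maximum likelihood / method of moments
    var_mle = p_mle * (1 - p_mle) / n           # plug-in variance, as used in Wald intervals
    p_bayes = (x + alpha) / (n + alpha + beta)  # posterior mean under a Beta(alpha, beta) prior
    p_laplace = (x + 1) / (n + 2)               # uniform prior: Laplace's rule of succession
    p_jeffreys = (x + 0.5) / (n + 1)            # Jeffreys prior, Beta(1/2, 1/2)
    return p_mle, var_mle, p_bayes, p_laplace, p_jeffreys

# Example: 3 successes in 10 trials.
print(estimate_p(3, 10))  # (0.3, 0.021, 0.3333..., 0.3333..., 0.3181...)
</syntaxhighlight>

With the default <code>alpha = beta = 1</code> the posterior mean coincides with the rule of succession, as described above.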
When estimating {{math|''p''}} with very rare events and a small {{math|''n''}} (e.g., if {{math|1=''x'' = 0}}), the standard estimator gives <math> \widehat{p} = 0,</math> which is sometimes unrealistic and undesirable. In such cases there are various alternative estimators.<ref>{{cite journal |last=Razzaghi |first=Mehdi |title=On the estimation of binomial success probability with zero occurrence in sample |journal=Journal of Modern Applied Statistical Methods |volume=1 |issue=2 |year=2002 |pages=326–332 |doi=10.22237/jmasm/1036110000 |doi-access=free }}</ref> One way is to use the Bayes estimator <math> \widehat{p}_b</math>, leading to:

: <math> \widehat{p}_b = \frac{1}{n+2}.</math>

Another method is to use the upper bound of the [[confidence interval]] obtained using the [[Rule of three (statistics)|rule of three]]:

: <math> \widehat{p}_{\text{rule of 3}} = \frac{3}{n}.</math>
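Continuing the sketch above (again an illustrative Python helper, not a standard function), the zero-occurrence alternatives can be computed as:

<syntaxhighlight lang="python">
def estimate_p_zero(n):
    """Alternative estimates of p when x = 0 successes are observed in n trials."""
    p_bayes = 1 / (n + 2)    # posterior mean under a uniform prior
    p_rule3 = 3 / n          # upper bound of an approximate 95% confidence interval
    return p_bayes, p_rule3

# Example: no successes in 100 trials; the plain estimator x/n would give 0.
print(estimate_p_zero(100))  # (0.00980..., 0.03)
</syntaxhighlight>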