==Definition==

===General===
Suppose that we have a [[statistical model]] with [[Statistical parameter|parameter space]] <math>\Theta</math>. A [[null hypothesis]] is often stated by saying that the parameter <math>\theta</math> lies in a specified subset <math>\Theta_0</math> of <math>\Theta</math>. The [[alternative hypothesis]] is thus that <math>\theta</math> lies in the [[Complement (set theory)|complement]] of <math>\Theta_0</math>, i.e. in <math>\Theta \setminus \Theta_0</math>, which is denoted by <math>\Theta_0^\text{c}</math>. The likelihood-ratio test statistic for the null hypothesis <math>H_0 \, : \, \theta \in \Theta_0</math> is given by:<ref>{{cite book |first=Karl-Rudolf |last=Koch |author-link=Karl-Rudolf Koch |title=Parameter Estimation and Hypothesis Testing in Linear Models |url=https://archive.org/details/parameterestimat0000koch |url-access=registration |location=New York |publisher=Springer |year=1988 |isbn=0-387-18840-1 |page=[https://archive.org/details/parameterestimat0000koch/page/306 306]}}</ref>

:<math>\lambda_\text{LR} = -2 \ln \left[ \frac{~ \sup_{\theta \in \Theta_0} \mathcal{L}(\theta) ~}{~ \sup_{\theta \in \Theta} \mathcal{L}(\theta) ~} \right]</math>

where the quantity inside the brackets is called the likelihood ratio. Here, the <math>\sup</math> notation refers to the [[supremum]]. As all likelihoods are positive, and as the constrained maximum cannot exceed the unconstrained maximum, the likelihood ratio is [[Bounded set|bounded]] between zero and one.

Often the likelihood-ratio test statistic is expressed as a difference between the [[log-likelihood]]s

:<math>\lambda_\text{LR} = -2 \left[~ \ell( \theta_0 ) - \ell( \hat{\theta} ) ~\right]</math>

where

:<math>\ell( \hat{\theta} ) \equiv \ln \left[~ \sup_{\theta \in \Theta} \mathcal{L}(\theta) ~\right]~</math>

is the logarithm of the maximized likelihood function <math>\mathcal{L}</math>, and <math>\ell(\theta_0)</math> is the maximal value in the special case that the null hypothesis is true (but not necessarily a value that maximizes <math>\mathcal{L}</math> for the sampled data) and

:<math>\theta_0 \in \Theta_0 \qquad \text{ and } \qquad \hat{\theta} \in \Theta~</math>

denote the respective [[arg max|arguments of the maxima]] and the allowed ranges in which they are embedded. Multiplying by −2 ensures mathematically that (by [[Wilks' theorem]]) <math>\lambda_\text{LR}</math> converges asymptotically to being [[chi-squared distribution|{{mvar|χ}}²-distributed]] if the null hypothesis happens to be true.<ref>{{cite book |first=S.D. |last=Silvey |title=Statistical Inference |location=London |publisher=Chapman & Hall |year=1970 |pages=112–114 |isbn=0-412-13820-4}}</ref> The [[Sampling distribution|finite-sample distribution]]s of likelihood-ratio statistics are generally unknown.<ref>{{cite book |first1=Ron C. |last1=Mittelhammer |author-link=Ron C. Mittelhammer |first2=George G. |last2=Judge |author-link2=George Judge |first3=Douglas J. |last3=Miller |title=Econometric Foundations |location=New York |publisher=Cambridge University Press |year=2000 |isbn=0-521-62394-4 |page=66}}</ref>

The likelihood-ratio test requires that the models be [[Statistical model#Nested models|nested]], i.e. that the more complex model can be transformed into the simpler model by imposing constraints on the former's parameters.
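As an illustration, the following sketch (in Python, assuming NumPy and SciPy are available; the data and parameter values are invented for the example) computes <math>\lambda_\text{LR}</math> for a nested pair of normal models, testing <math>H_0: \mu = 0</math> against an unrestricted mean, with the variance estimated under both models:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=100)  # illustrative sample

# Unconstrained MLE over Theta: mean and variance both free
mu_hat = x.mean()
sigma_hat = x.std()  # MLE of sigma (ddof = 0)
ll_full = stats.norm.logpdf(x, loc=mu_hat, scale=sigma_hat).sum()

# Constrained MLE over Theta_0: mu fixed at 0, variance free
sigma0_hat = np.sqrt(np.mean(x**2))
ll_null = stats.norm.logpdf(x, loc=0.0, scale=sigma0_hat).sum()

# lambda_LR = -2 [ l(theta_0) - l(theta_hat) ]
lam = -2.0 * (ll_null - ll_full)

# By Wilks' theorem, lambda_LR is asymptotically chi-squared under H_0,
# with degrees of freedom equal to the number of constraints (here 1)
p_value = stats.chi2.sf(lam, df=1)
print(f"lambda_LR = {lam:.3f}, p = {p_value:.4f}")
</syntaxhighlight>

Since the constrained maximum cannot exceed the unconstrained one, <code>lam</code> is always nonnegative, matching the bound on the likelihood ratio noted above.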
Many common test statistics are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof: e.g. the [[Z-test|''Z''-test]], the [[F-test|''F''-test]], the [[G-test|''G''-test]], and [[Pearson's chi-squared test]]; for an illustration with the [[Student's t-test#One-sample t-test|one-sample ''t''-test]], see below. If the models are not nested, then instead of the likelihood-ratio test, there is a generalization of the test that can usually be used: for details, see ''[[relative likelihood]]''.

===Case of simple hypotheses===
{{Main|Neyman–Pearson lemma}}

A simple-vs.-simple hypothesis test has completely specified models under both the null hypothesis and the alternative hypothesis, which for convenience are written in terms of fixed values of a notional parameter <math>\theta</math>:

:<math>
\begin{align}
H_0 &:& \theta=\theta_0 ,\\
H_1 &:& \theta=\theta_1 .
\end{align}
</math>

In this case, under either hypothesis, the distribution of the data is fully specified: there are no unknown parameters to estimate. For this case, a variant of the likelihood-ratio test is available:<ref>{{cite book |last1=Mood |first1=A.M. |last2=Graybill |first2=F.A. |first3=D.C. |last3=Boes |year=1974 |title=Introduction to the Theory of Statistics |edition=3rd |publisher=[[McGraw-Hill]] |at=§9.2}}</ref><ref name="Stuart et al. 20.10–20.13">{{citation |last1=Stuart |first1=A. |last2=Ord |first2=K. |last3=Arnold |first3=S. |year=1999 |title=Kendall's Advanced Theory of Statistics |volume=2A |publisher=[[Edward Arnold (publisher)|Arnold]] |at=§§20.10–20.13}}</ref>

:<math>\Lambda(x) = \frac{~\mathcal{L}(\theta_0\mid x) ~}{~\mathcal{L}(\theta_1\mid x) ~}.</math>

Some older references may use the reciprocal of the function above as the definition.<ref>{{citation |author1-last=Cox |author1-first=D. R. |author1-link=David Cox (statistician) |author2-last=Hinkley |author2-first=D. V. |author2-link=David Hinkley |title=Theoretical Statistics |publisher=[[Chapman & Hall]] |year=1974 |isbn=0-412-12420-3 |page=92}}</ref> Thus, the likelihood ratio is small if the alternative model is better than the null model.

The likelihood-ratio test provides the decision rule as follows:

:If <math>~\Lambda > c ~</math>, do not reject <math>H_0</math>;
:If <math>~\Lambda < c ~</math>, reject <math>H_0</math>;
:If <math>~\Lambda = c ~</math>, reject <math>H_0</math> with probability <math>~q~</math>.

The values <math>c</math> and <math>q</math> are usually chosen to obtain a specified [[significance level]] <math>\alpha</math>, via the relation

:<math>q \operatorname{P}(\Lambda=c \mid H_0) ~+~ \operatorname{P}(\Lambda < c \mid H_0) ~=~ \alpha~.</math>

The [[Neyman–Pearson lemma]] states that this likelihood-ratio test is the [[Statistical power|most powerful]] among all level-<math>\alpha</math> tests for this case.<ref name="NeymanPearson1933"/><ref name="Stuart et al. 20.10–20.13"/>
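A minimal sketch of the simple-vs.-simple case, again in Python with invented data (a normal location model with known unit variance; the threshold <math>c = 1</math> is illustrative rather than calibrated to a significance level):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=50)  # data drawn under H_0 here

theta0, theta1 = 0.0, 1.0  # fully specified null and alternative means

# Lambda(x) = L(theta_0 | x) / L(theta_1 | x), computed on the log scale
# to avoid floating-point underflow for large samples
log_Lambda = (stats.norm.logpdf(x, loc=theta0).sum()
              - stats.norm.logpdf(x, loc=theta1).sum())

c = 1.0                            # illustrative threshold
reject = log_Lambda < np.log(c)    # reject H_0 when Lambda < c
print(f"log Lambda = {log_Lambda:.3f}, reject H_0: {reject}")
</syntaxhighlight>

Because <math>\Lambda</math> is continuous in this example, <math>\operatorname{P}(\Lambda = c \mid H_0) = 0</math> and the randomization probability <math>q</math> plays no role; it matters only for discrete data, where <math>\Lambda</math> can equal <math>c</math> with positive probability.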