==Properties==

===Mean, variance, moments, and median===
[[File:Mean exp.svg|thumb|The mean is the probability mass centre, that is, the [[first moment]].]]
[[File:Median exp.svg|thumb|The median is the [[preimage]] ''F''<sup>−1</sup>(1/2).]]
The mean or [[expected value]] of an exponentially distributed random variable ''X'' with rate parameter ''λ'' is given by
<math display="block">\operatorname{E}[X] = \frac{1}{\lambda}.</math>

In light of the examples given [[#Occurrence and applications|below]], this makes sense: a person who receives an average of two telephone calls per hour can expect the time between consecutive calls to be 0.5 hour, or 30 minutes.

The [[variance]] of ''X'' is given by
<math display="block">\operatorname{Var}[X] = \frac{1}{\lambda^2},</math>
so the [[standard deviation]] is equal to the mean.

The [[Moment (mathematics)|moments]] of ''X'', for <math>n \in \N</math>, are given by
<math display="block">\operatorname{E}\left[X^n\right] = \frac{n!}{\lambda^n}.</math>

The [[central moment]]s of ''X'', for <math>n \in \N</math>, are given by
<math display="block">\mu_n = \frac{!n}{\lambda^n} = \frac{n!}{\lambda^n}\sum^n_{k=0}\frac{(-1)^k}{k!},</math>
where !''n'' is the [[subfactorial]] of ''n''.

The [[median]] of ''X'' is given by
<math display="block">\operatorname{m}[X] = \frac{\ln(2)}{\lambda} < \operatorname{E}[X],</math>
where {{math|ln}} refers to the [[natural logarithm]]. Thus the [[absolute difference]] between the mean and median is
<math display="block">\left|\operatorname{E}\left[X\right] - \operatorname{m}\left[X\right]\right| = \frac{1 - \ln(2)}{\lambda} < \frac{1}{\lambda} = \operatorname{\sigma}[X],</math>
in accordance with the [[median-mean inequality]].

===Memorylessness property of exponential random variable===
An exponentially distributed random variable ''T'' obeys the relation
<math display="block">\Pr \left (T > s + t \mid T > s \right ) = \Pr(T > t), \qquad \forall s, t \ge 0.</math>

This can be seen by considering the [[complementary cumulative distribution function]]:
<math display="block">
\begin{align}
\Pr\left(T > s + t \mid T > s\right) &= \frac{\Pr\left(T > s + t \cap T > s\right)}{\Pr\left(T > s\right)} \\[4pt]
&= \frac{\Pr\left(T > s + t \right)}{\Pr\left(T > s\right)} \\[4pt]
&= \frac{e^{-\lambda(s + t)}}{e^{-\lambda s}} \\[4pt]
&= e^{-\lambda t} \\[4pt]
&= \Pr(T > t).
\end{align}
</math>

When ''T'' is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if ''T'' is conditioned on a failure to observe the event over some initial period of time ''s'', the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, the [[conditional probability]] that it will take at least 10 more seconds to occur is equal to the unconditional probability of observing the event more than 10 seconds after the initial time.

The exponential distribution and the [[geometric distribution]] are [[memorylessness|the only memoryless probability distributions]]. The exponential distribution is consequently also necessarily the only continuous probability distribution that has a constant [[failure rate]].
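These identities can be checked by simulation. The following is a minimal sketch using the third-party [[NumPy]] library; the rate <code>lam</code>, the probe times <code>s</code> and <code>t</code>, and the sample size are arbitrary illustrative choices. Note that NumPy's exponential sampler is parametrized by the scale 1/''λ'' rather than the rate.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative values: rate λ = 2 (e.g. two calls per hour), one million draws.
rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1 / lam, size=1_000_000)  # NumPy uses scale = 1/λ

print(x.mean())      # ≈ E[X] = 1/λ = 0.5
print(x.var())       # ≈ Var[X] = 1/λ² = 0.25
print(np.median(x))  # ≈ ln(2)/λ ≈ 0.347

# Memorylessness: P(T > s + t | T > s) ≈ P(T > t) = e^{-λt}
s, t = 0.3, 0.2
lhs = (x[x > s] > s + t).mean()  # empirical conditional probability
rhs = (x > t).mean()             # empirical unconditional probability
print(lhs, rhs)      # both ≈ e^{-0.4} ≈ 0.670
</syntaxhighlight>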
===Quantiles===
[[File:Tukey anomaly criteria for Exponential PDF.png|500px|thumb|alt=Tukey anomaly criteria for exponential probability distribution function.|Tukey criteria for anomalies.{{citation needed|date=September 2017}}]]
The [[quantile function]] (inverse cumulative distribution function) for Exp(''λ'') is
<math display="block">F^{-1}(p;\lambda) = \frac{-\ln(1-p)}{\lambda},\qquad 0 \le p < 1.</math>

The [[quartile]]s are therefore:
* first quartile: ln(4/3)/''λ''
* [[median]]: ln(2)/''λ''
* third quartile: ln(4)/''λ''
As a consequence, the [[interquartile range]] is ln(3)/''λ''.

===Conditional value at risk (expected shortfall)===
The conditional value at risk (CVaR), also known as the [[expected shortfall]] or superquantile, for Exp(''λ'') is derived as follows:<ref name="Norton-2019">{{cite journal |last1=Norton |first1=Matthew |last2=Khokhlov |first2=Valentyn |last3=Uryasev |first3=Stan |year=2019 |title=Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation |journal=Annals of Operations Research |volume=299 |issue=1–2 |pages=1281–1315 |publisher=Springer |doi=10.1007/s10479-019-03373-1 |arxiv=1811.11301 |url=http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf |access-date=2023-02-27 |archive-date=2023-03-31 |archive-url=https://web.archive.org/web/20230331230821/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf |url-status=dead }}</ref>
<math display="block">\begin{align}
\bar{q}_\alpha(X) &= \frac{1}{1-\alpha} \int_{\alpha}^{1} q_p(X) \, dp \\
&= \frac{1}{1-\alpha} \int_{\alpha}^{1} \frac{-\ln(1 - p)}{\lambda} \, dp \\
&= \frac{-1}{\lambda(1-\alpha)} \int_{1-\alpha}^{0} -\ln(y) \, dy \\
&= \frac{-1}{\lambda(1-\alpha)} \int_{0}^{1-\alpha} \ln(y) \, dy \\
&= \frac{-1}{\lambda(1-\alpha)} \left[ (1-\alpha) \ln(1-\alpha) - (1-\alpha) \right] \\
&= \frac{-\ln(1-\alpha) + 1}{\lambda}.
\end{align}</math>

===Buffered probability of exceedance (bPOE)===
{{Main|Buffered probability of exceedance}}
The buffered probability of exceedance is one minus the probability level at which the CVaR equals the threshold <math>x</math>. It is derived as follows:<ref name="Norton-2019" />
<math display="block">\begin{align}
\bar{p}_x(X) &= \left\{ 1 - \alpha \mid \bar{q}_\alpha(X) = x \right\} \\
&= \left\{ 1 - \alpha \,\middle|\, \frac{-\ln(1-\alpha) + 1}{\lambda} = x \right\} \\
&= \left\{ 1 - \alpha \mid \ln(1-\alpha) = 1 - \lambda x \right\} \\
&= \left\{ 1 - \alpha \mid e^{\ln(1-\alpha)} = e^{1-\lambda x} \right\} = \left\{ 1 - \alpha \mid 1-\alpha = e^{1-\lambda x} \right\} = e^{1-\lambda x}.
\end{align}</math>

===Kullback–Leibler divergence===
The directed [[Kullback–Leibler divergence]] in [[nat (unit)|nats]] of <math>\operatorname{Exp}(\lambda)</math> ("approximating" distribution) from <math>\operatorname{Exp}(\lambda_0)</math> ("true" distribution) is given by
<math display="block">\begin{align}
\Delta(\lambda_0 \parallel \lambda) &= \mathbb{E}_{\lambda_0}\left( \log \frac{p_{\lambda_0}(x)}{p_\lambda(x)}\right)\\
&= \mathbb{E}_{\lambda_0}\left( \log \frac{\lambda_0 e^{-\lambda_0 x}}{\lambda e^{-\lambda x}}\right)\\
&= \log(\lambda_0) - \log(\lambda) - (\lambda_0 - \lambda) \mathbb{E}_{\lambda_0}(x)\\
&= \log(\lambda_0) - \log(\lambda) + \frac{\lambda}{\lambda_0} - 1.
\end{align}</math>
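The quantile, CVaR, and bPOE formulas above can be cross-checked numerically; the following NumPy sketch uses arbitrary illustrative values of ''λ'' and ''α'', and approximates the CVaR integral by averaging quantiles over a uniform grid on [''α'', 1).

<syntaxhighlight lang="python">
import numpy as np

lam, alpha = 2.0, 0.95  # illustrative rate and probability level

# Quantile function F⁻¹(p; λ) = -ln(1 - p)/λ
def quantile(p):
    return -np.log1p(-p) / lam

print(quantile(0.25), np.log(4 / 3) / lam)  # first quartile, both ways
print(quantile(0.75), np.log(4) / lam)      # third quartile

# CVaR / expected shortfall: closed form vs. averaged quantiles over [α, 1)
cvar_closed = (1 - np.log(1 - alpha)) / lam
p = np.linspace(alpha, 1, 100_000, endpoint=False)
print(cvar_closed, quantile(p).mean())      # both ≈ 2.00

# bPOE at threshold x = CVaR recovers 1 - α
print(np.exp(1 - lam * cvar_closed))        # ≈ 0.05
</syntaxhighlight>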
===Maximum entropy distribution===
Among all continuous probability distributions with [[Support (mathematics)#In probability and measure theory|support]] {{closed-open|0, ∞}} and mean ''μ'', the exponential distribution with ''λ'' = 1/''μ'' has the largest [[differential entropy]]. In other words, it is the [[maximum entropy probability distribution]] for a [[random variate]] ''X'' which is greater than or equal to zero and for which E[''X''] is fixed.<ref>{{cite journal |last1=Park |first1=Sung Y. |last2=Bera |first2=Anil K. |year=2009 |title=Maximum entropy autoregressive conditional heteroskedasticity model |journal=Journal of Econometrics |volume=150 |issue=2 |pages=219–230 |publisher=Elsevier |doi=10.1016/j.jeconom.2008.12.014 |url=http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf |access-date=2011-06-02 |archive-url=https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf |archive-date=2016-03-07 |url-status=dead }}</ref>

===Distribution of the minimum of exponential random variables===
Let ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> be [[Independent random variables|independent]] exponentially distributed random variables with rate parameters ''λ''<sub>1</sub>, ..., ''λ<sub>n</sub>''. Then
<math display="block">\min\left\{X_1, \dotsc, X_n \right\}</math>
is also exponentially distributed, with parameter
<math display="block">\lambda = \lambda_1 + \dotsb + \lambda_n.</math>

This can be seen by considering the [[complementary cumulative distribution function]]:
<math display="block">\begin{align}
&\Pr\left(\min\{X_1, \dotsc, X_n\} > x\right) \\
={} &\Pr\left(X_1 > x, \dotsc, X_n > x\right) \\
={} &\prod_{i=1}^n \Pr\left(X_i > x\right) \\
={} &\prod_{i=1}^n \exp\left(-x\lambda_i\right) = \exp\left(-x\sum_{i=1}^n \lambda_i\right).
\end{align}</math>

The index of the variable which achieves the minimum is distributed according to the categorical distribution
<math display="block">\Pr\left(X_k = \min\{X_1, \dotsc, X_n\}\right) = \frac{\lambda_k}{\lambda_1 + \dotsb + \lambda_n}.</math>

This can be seen by letting <math>I = \operatorname{argmin}_{i \in \{1, \dotsc, n\}}\{X_1, \dotsc, X_n\}</math>. Then
<math display="block">\begin{align}
\Pr(I = k) &= \int_{0}^{\infty} f_{X_k}(x) \Pr(\forall_{i\neq k} X_i > x) \, dx \\
&= \int_{0}^{\infty} \lambda_k e^{-\lambda_k x} \left(\prod_{i=1, i\neq k}^{n} e^{-\lambda_i x}\right) dx \\
&= \lambda_k \int_{0}^{\infty} e^{-\left(\lambda_1 + \dotsb + \lambda_n\right) x} dx \\
&= \frac{\lambda_k}{\lambda_1 + \dotsb + \lambda_n}.
\end{align}</math>

In contrast,
<math display="block">\max\{X_1, \dotsc, X_n\}</math>
is not exponentially distributed for <math>n \ge 2</math>.<ref>{{cite web |last1=Michael |first1=Lugo |title=The expectation of the maximum of exponentials |url=http://www.stat.berkeley.edu/~mlugo/stat134-f11/exponential-maximum.pdf |access-date=13 December 2016 |archive-url=https://web.archive.org/web/20161220132822/https://www.stat.berkeley.edu/~mlugo/stat134-f11/exponential-maximum.pdf |archive-date=20 December 2016 |url-status=dead}}</ref>
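Both results, the distribution of the minimum and the categorical distribution of the argmin, can be illustrated by simulation; a NumPy sketch with arbitrarily chosen rates:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.5, 1.0, 2.5])  # illustrative rates λ₁, λ₂, λ₃
n = 1_000_000
x = rng.exponential(scale=1 / lam, size=(n, 3))  # one column per variable

# The minimum is Exp(λ₁ + λ₂ + λ₃), so its mean is 1/Σλ = 0.25
print(x.min(axis=1).mean(), 1 / lam.sum())

# The argmin is categorical with probabilities λ_k/Σλ = [0.125, 0.25, 0.625]
print(np.bincount(x.argmin(axis=1)) / n)
print(lam / lam.sum())
</syntaxhighlight>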
===Joint moments of i.i.d. exponential order statistics===
Let <math> X_1, \dotsc, X_n </math> be <math> n </math> [[independent and identically distributed]] exponential random variables with rate parameter ''λ'', and let <math> X_{(1)}, \dotsc, X_{(n)} </math> denote the corresponding [[order statistic]]s. For <math> i < j </math>, the joint moment <math> \operatorname E\left[X_{(i)} X_{(j)}\right] </math> of the order statistics <math> X_{(i)} </math> and <math> X_{(j)} </math> is given by
<math display="block">\begin{align}
\operatorname E\left[X_{(i)} X_{(j)}\right]
&= \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} \operatorname E\left[X_{(i)}\right] + \operatorname E\left[X_{(i)}^2\right] \\
&= \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda}\sum_{k=0}^{i-1}\frac{1}{(n - k)\lambda} + \sum_{k=0}^{i-1}\frac{1}{((n - k)\lambda)^2} + \left(\sum_{k=0}^{i-1}\frac{1}{(n - k)\lambda}\right)^2.
\end{align}</math>

This can be seen by invoking the [[law of total expectation]] and the memoryless property:
<math display="block">\begin{align}
\operatorname E\left[X_{(i)} X_{(j)}\right]
&= \int_0^\infty \operatorname E\left[X_{(i)} X_{(j)} \mid X_{(i)}=x\right] f_{X_{(i)}}(x) \, dx \\
&= \int_{x=0}^\infty x \operatorname E\left[X_{(j)} \mid X_{(i)} = x\right] f_{X_{(i)}}(x) \, dx \\
&= \int_{x=0}^\infty x \left[ x + \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} \right] f_{X_{(i)}}(x) \, dx &&\left(\text{by the memoryless property}\right) \\
&= \sum_{k=i}^{j-1}\frac{1}{(n - k)\lambda} \operatorname E\left[X_{(i)}\right] + \operatorname E\left[X_{(i)}^2\right].
\end{align}</math>

The first equation follows from the [[law of total expectation]]. The second equation exploits the fact that once we condition on <math> X_{(i)} = x </math>, it must follow that <math> X_{(j)} \geq x </math>. The third equation relies on the memoryless property: given <math> X_{(i)} = x </math>, the excesses of the remaining <math> n - i </math> variables over <math> x </math> are independent Exp(''λ'') random variables, so <math> X_{(j)} - x </math> is distributed as the <math> (j-i) </math>-th order statistic of a fresh sample of size <math> n - i </math>, whence <math>\operatorname E\left[ X_{(j)} \mid X_{(i)} = x\right] = x + \sum_{k=i}^{j-1}\tfrac{1}{(n-k)\lambda}</math>.
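A simulation sketch of this joint moment, again using NumPy; the sample size and the choices ''n'' = 5, ''i'' = 2, ''j'' = 4 are arbitrary:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
lam, n, i, j = 1.0, 5, 2, 4          # illustrative: n = 5, i = 2, j = 4 (1-based)
x = np.sort(rng.exponential(1 / lam, size=(1_000_000, n)), axis=1)
emp = (x[:, i - 1] * x[:, j - 1]).mean()   # empirical E[X_(i) X_(j)]

h = 1 / ((n - np.arange(n)) * lam)         # h[k] = 1/((n-k)λ) for k = 0..n-1
mean_i = h[:i].sum()                       # E[X_(i)]
second_i = (h[:i] ** 2).sum() + mean_i**2  # E[X_(i)²] = Var + mean²
theory = h[i:j].sum() * mean_i + second_i  # formula above
print(emp, theory)                         # both ≈ 0.680
</syntaxhighlight>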
===Sum of two independent exponential random variables===
The [[probability density function]] (PDF) of a sum of two independent random variables is the [[convolution of probability distributions|convolution of their individual PDFs]]. If <math>X_1</math> and <math>X_2</math> are independent exponential random variables with respective rate parameters <math>\lambda_1</math> and <math>\lambda_2,</math> then the probability density of <math>Z = X_1 + X_2</math> is given by
<math display="block">
\begin{align}
f_Z(z) &= \int_{-\infty}^\infty f_{X_1}(x_1) f_{X_2}(z - x_1)\,dx_1\\
&= \int_0^z \lambda_1 e^{-\lambda_1 x_1} \lambda_2 e^{-\lambda_2(z - x_1)} \, dx_1 \\
&= \lambda_1 \lambda_2 e^{-\lambda_2 z} \int_0^z e^{(\lambda_2 - \lambda_1)x_1}\,dx_1 \\
&= \begin{cases}
\dfrac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1} \left(e^{-\lambda_1 z} - e^{-\lambda_2 z}\right) & \text{ if } \lambda_1 \neq \lambda_2 \\[4pt]
\lambda^2 z e^{-\lambda z} & \text{ if } \lambda_1 = \lambda_2 = \lambda.
\end{cases}
\end{align}
</math>

The entropy of this distribution is available in closed form: assuming <math>\lambda_1 > \lambda_2</math> (without loss of generality),
<math display="block">H(Z) = 1 + \gamma + \ln \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 \lambda_2} \right) + \psi \left( \frac{\lambda_1}{\lambda_1 - \lambda_2} \right),</math>
where <math>\gamma</math> is the [[Euler–Mascheroni constant]] and <math>\psi(\cdot)</math> is the [[digamma function]].<ref>{{cite arXiv |last1=Eckford |first1=Andrew W. |last2=Thomas |first2=Peter J. |date=2016 |title=Entropy of the sum of two independent, non-identically-distributed exponential random variables |class=cs.IT |eprint=1609.02911}}</ref>

In the case of equal rate parameters, the result is an [[Erlang distribution]] with shape 2 and parameter <math>\lambda,</math> which in turn is a special case of the [[gamma distribution]]. More generally, the sum of ''n'' independent Exp(''λ'') random variables is Gamma(''n'', ''λ'') distributed.
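The piecewise density above can be compared against a histogram of simulated sums; a NumPy sketch with arbitrary illustrative rates:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, n = 1.0, 3.0, 1_000_000
z = rng.exponential(1 / lam1, n) + rng.exponential(1 / lam2, n)

# Closed-form density of the sum for λ₁ ≠ λ₂
def f_z(t):
    return lam1 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

# Compare a normalized histogram of simulated sums with the formula
hist, edges = np.histogram(z, bins=50, range=(0, 5), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f_z(mid))))  # small (on the order of 0.01 or less)

# Equal rates: the sum is Erlang(2, λ) = Gamma(2, λ), with mean 2/λ
z_eq = rng.exponential(1 / lam1, n) + rng.exponential(1 / lam1, n)
print(z_eq.mean(), 2 / lam1)
</syntaxhighlight>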