{{Short description|Mathematical operation that predicts future values of a discrete-time signal}}
'''Linear prediction''' is a mathematical operation where future values of a [[discrete time and continuous time|discrete-time]] [[Signal processing|signal]] are estimated as a [[linear transformation|linear function]] of previous samples.

In [[digital signal processing]], linear prediction is often called [[linear predictive coding]] (LPC) and can thus be viewed as a subset of [[filter theory]]. In [[system analysis]], a subfield of [[mathematics]], linear prediction can be viewed as a part of [[mathematical model]]ling or [[optimization (mathematics)|optimization]].

== The prediction model ==
The most common representation is
:<math>\widehat{x}(n) = \sum_{i=1}^p a_i x(n-i)\,</math>
where <math>\widehat{x}(n)</math> is the predicted signal value, <math>x(n-i)</math> the previous observed values, with <math>p \leq n</math>, and <math>a_i</math> the predictor coefficients. The error generated by this estimate is
:<math>e(n) = x(n) - \widehat{x}(n)\,</math>
where <math>x(n)</math> is the true signal value.

These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the predictor coefficients <math>a_i</math> are chosen.

For multi-dimensional signals the error metric is often defined as
:<math>e(n) = \|x(n) - \widehat{x}(n)\|\,</math>
where <math>\|\cdot\|</math> is a suitably chosen vector [[norm (mathematics)|norm]]. Predictions such as <math>\widehat{x}(n)</math> are routinely used within [[Kalman filter]]s and smoothers to estimate current and past signal values, respectively, from noisy measurements.<ref>{{Cite web |title=Kalman Filter - an overview {{!}} ScienceDirect Topics |url=https://www.sciencedirect.com/topics/earth-and-planetary-sciences/kalman-filter |access-date=2022-06-24 |website=www.sciencedirect.com}}</ref>

=== Estimating the parameters ===
The most common choice in optimizing the parameters <math>a_i</math> is the [[root mean square]] criterion, which is also called the [[autocorrelation]] criterion. In this method the expected value of the squared error <math>E[e^2(n)]</math> is minimized, which yields the equations
:<math>\sum_{i=1}^p a_i R(j-i) = R(j),</math>
for 1 ≤ ''j'' ≤ ''p'', where ''R'' is the [[autocorrelation]] of the signal ''x''(''n''), defined as
:<math>R(i) = E\{x(n)x(n-i)\},</math>
and ''E'' is the [[expected value]]. In the multi-dimensional case this corresponds to minimizing the [[Lp space|L<sub>2</sub> norm]].

The above equations are called the [[normal equations]] or [[Autoregressive model#Yule–Walker equations|Yule–Walker equations]]. In matrix form they can be equivalently written as
:<math>\mathbf{R A} = \mathbf{r}</math>
where the autocorrelation matrix <math>\mathbf{R}</math> is a symmetric <math>p \times p</math> [[Toeplitz matrix]] with elements <math>r_{ij} = R(i-j)</math>, <math>0 \leq i, j < p</math>; the vector <math>\mathbf{r}</math> is the autocorrelation vector with elements <math>r_j = R(j)</math>, <math>0 < j \leq p</math>; and <math>\mathbf{A} = [a_1, a_2, \,\cdots\, , a_{p-1}, a_p]</math> is the parameter vector.

Another, more general, approach is to minimize the sum of squares of the errors defined in the form
:<math>e(n) = x(n) - \widehat{x}(n) = x(n) - \sum_{i=1}^p a_i x(n-i) = - \sum_{i=0}^p a_i x(n-i)</math>
where the optimization problem searching over all <math>a_i</math> must now be constrained with <math>a_0 = -1</math>.
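As a concrete illustration of the autocorrelation method, the following is a minimal sketch in Python, assuming NumPy and SciPy are available; the names <code>lpc_autocorrelation</code> and <code>predict</code> are illustrative rather than standard library routines. It forms the sample autocorrelations <math>R(0), \dots, R(p)</math> and solves the Toeplitz system <math>\mathbf{R A} = \mathbf{r}</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorrelation(x, p):
    """Estimate predictor coefficients a_1..a_p by the autocorrelation
    method: solve the Yule-Walker normal equations R A = r, where R is
    the symmetric p-by-p Toeplitz autocorrelation matrix."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Biased sample autocorrelations R(0)..R(p).
    R = np.array([np.dot(x[:n - i], x[i:]) / n for i in range(p + 1)])
    # solve_toeplitz exploits the Toeplitz structure; R[:p] is the first
    # column of the matrix and R[1:] is the right-hand side vector r.
    return solve_toeplitz(R[:p], R[1:])

def predict(x, a):
    """One-step predictions xhat(n) = sum_{i=1}^p a_i x(n-i) for n >= p."""
    p = len(a)
    return np.array([np.dot(a, x[n - 1::-1][:p]) for n in range(p, len(x))])

# Check on a synthetic AR(2) process x(n) = 0.75 x(n-1) - 0.5 x(n-2) + noise:
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + rng.standard_normal()
print(lpc_autocorrelation(x, 2))  # approximately [0.75, -0.5]
</syntaxhighlight>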
On the other hand, if the mean square prediction error is constrained to be unity and the prediction error equation is included on top of the normal equations, the augmented set of equations is obtained as
:<math>\mathbf{R A} = [1, 0, \dots, 0]^{\mathrm{T}}</math>
where the index <math>i</math> ranges from 0 to <math>p</math>, and <math>\mathbf{R}</math> is a <math>(p+1)\times(p+1)</math> matrix.

Specification of the parameters of the linear predictor is a wide topic and a large number of other approaches have been proposed. In fact, the autocorrelation method is the most common<ref>{{Cite web |title=Linear Prediction - an overview {{!}} ScienceDirect Topics |url=https://www.sciencedirect.com/topics/mathematics/linear-prediction |access-date=2022-06-24 |website=www.sciencedirect.com}}</ref> and it is used, for example, for [[speech coding]] in the [[Global System for Mobile Communications|GSM]] standard.

Solution of the matrix equation <math>\mathbf{R A} = \mathbf{r}</math> is computationally a relatively expensive process. [[Gaussian elimination]] for matrix inversion is probably the oldest solution, but this approach does not efficiently use the symmetry of <math>\mathbf{R}</math>. A faster algorithm is the [[Levinson recursion]] proposed by [[Norman Levinson]] in 1947, which recursively calculates the solution.{{Citation needed|date=October 2010}} In particular, the autocorrelation equations above may be more efficiently solved by the Durbin algorithm.<ref>{{cite journal | last1 = Ramirez | first1 = M. A. | year = 2008 | title = A Levinson Algorithm Based on an Isometric Transformation of Durbin's | doi = 10.1109/LSP.2007.910319 | journal = IEEE Signal Processing Letters | volume = 15 | pages = 99–102 | bibcode = 2008ISPL...15...99R | s2cid = 18906207 |url=http://www.producao.usp.br/bitstream/handle/BDPI/18665/lts2r1f.pdf}}</ref>

In 1986, Philippe Delsarte and Y. V. Genin proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions.<ref>{{cite journal |last1=Delsarte |first1=P. |last2=Genin |first2=Y. V. |year=1986 |title=The split Levinson algorithm |journal=IEEE Transactions on Acoustics, Speech, and Signal Processing |volume=ASSP-34 |issue=3 |pages=470–478}}</ref> It uses a special symmetry property of the parameter vectors on subsequent recursion levels: calculations for the optimal predictor containing <math>p</math> terms make use of similar calculations for the optimal predictor containing <math>p-1</math> terms.
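This order-recursive structure can be made explicit. Below is a minimal Python sketch of the Levinson–Durbin recursion, assuming a real-valued autocorrelation sequence; the name <code>levinson_durbin</code> is illustrative, not a standard library routine. Each pass extends the order-<math>(m-1)</math> predictor to order <math>m</math> with a single reflection coefficient, solving the normal equations in <math>O(p^2)</math> operations rather than the <math>O(p^3)</math> of general elimination.

<syntaxhighlight lang="python">
import numpy as np

def levinson_durbin(R, p):
    """Solve sum_{i=1}^p a_i R(j-i) = R(j), j = 1..p, recursively.
    R holds the autocorrelations R(0), R(1), ..., R(p).
    Returns the coefficients a_1..a_p and the final error power."""
    a = np.zeros(p)
    E = R[0]                      # prediction error power at order 0
    for m in range(1, p + 1):
        # Reflection coefficient for order m.
        k = (R[m] - np.dot(a[:m - 1], R[m - 1:0:-1])) / E
        # Order update: a_i <- a_i - k * a_{m-i}, reusing the
        # order-(m-1) solution (copied, then reversed).
        prev = a[:m - 1].copy()
        a[:m - 1] = prev - k * prev[::-1]
        a[m - 1] = k
        E *= 1.0 - k * k          # error power never increases
    return a, E
</syntaxhighlight>

Applied to the sample autocorrelations <math>R(0), \dots, R(p)</math> from the earlier sketch, this returns the same coefficients as the <code>solve_toeplitz</code> call, together with the residual error power.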
Another way of identifying model parameters is to iteratively calculate state estimates using [[Kalman filter]]s and obtain [[maximum likelihood estimation|maximum likelihood]] estimates within [[expectation–maximization algorithm]]s.

For equally spaced values, a polynomial interpolation is a [[Polynomial interpolation#Linear combination of the given values|linear combination of the known values]]. If the discrete-time signal is estimated to obey a polynomial of degree <math>p-1,</math> then the predictor coefficients <math>a_i</math> are given by the corresponding row of the [[Pascal's triangle#The Triangle of Binomial Transform Coefficients is like Pascal's Triangle.|triangle of binomial transform coefficients]]. This estimate might be suitable for a slowly varying signal with low noise. The predictions for the first few values of <math>p</math> are
:<math>\begin{array}{lcl}
p=1 & : & \widehat{x}(n) = 1x(n-1) \\
p=2 & : & \widehat{x}(n) = 2x(n-1) - 1x(n-2) \\
p=3 & : & \widehat{x}(n) = 3x(n-1) - 3x(n-2) + 1x(n-3) \\
p=4 & : & \widehat{x}(n) = 4x(n-1) - 6x(n-2) + 4x(n-3) - 1x(n-4)
\end{array}</math>

== See also ==
* [[Autoregressive model]]
* [[Linear predictive analysis]]
* [[Minimum mean square error]]
* [[Prediction interval]]
* [[Rasta filtering]]

== References ==
{{reflist}}
{{More footnotes|date=November 2010}}

== Further reading ==
*{{cite book |first=M. H. |last=Hayes |author-link=Monson H. Hayes |title=Statistical Digital Signal Processing and Modeling |publisher=J. Wiley & Sons |location=New York |year=1996 |isbn=978-0471594314 }}
*{{cite journal |first=N. |last=Levinson |title=The Wiener RMS (root mean square) error criterion in filter design and prediction |journal=[[Journal of Mathematics and Physics]] |volume=25 |issue=4 |pages=261–278 |year=1947 |doi=10.1002/sapm1946251261 }}
*{{cite journal |first=J. |last=Makhoul |title=Linear prediction: A tutorial review |journal=Proceedings of the IEEE |volume=63 |issue=5 |pages=561–580 |year=1975 |doi=10.1109/PROC.1975.9792 }}
*{{cite journal |first=G. U. |last=Yule |title=On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer's Sunspot Numbers |journal=[[Philosophical Transactions of the Royal Society A|Phil. Trans. Roy. Soc. A]] |volume=226 |issue=636–646 |pages=267–298 |year=1927 |jstor=91170 |doi=10.1098/rsta.1927.0007 |doi-access=free |bibcode=1927RSPTA.226..267Y }}

== External links ==
* [http://labrosa.ee.columbia.edu/matlab/rastamat/ PLP and RASTA (and MFCC, and inversion) in Matlab]

{{DEFAULTSORT:Linear prediction}}
[[Category:Signal estimation]]
[[Category:Statistical forecasting]]