Editing Cholesky decomposition (section)
== Applications ==

===Numerical solution of system of linear equations===
The Cholesky decomposition is mainly used for the numerical solution of [[system of linear equations|linear equations]] <math display=inline>\mathbf{Ax} = \mathbf{b}</math>. If {{math|'''A'''}} is symmetric and positive definite, then <math display=inline>\mathbf{Ax} = \mathbf{b}</math> can be solved by first computing the Cholesky decomposition <math display=inline>\mathbf{A} = \mathbf{LL}^\mathrm{*}</math>, then solving <math display=inline>\mathbf{Ly} = \mathbf{b}</math> for {{math|'''y'''}} by [[forward substitution]], and finally solving <math display=inline>\mathbf{L^*x} = \mathbf{y}</math> for {{math|'''x'''}} by [[back substitution]].

An alternative way to eliminate taking square roots in the <math display=inline>\mathbf{LL}^\mathrm{*}</math> decomposition is to compute the LDL decomposition <math display=inline>\mathbf{A} = \mathbf{LDL}^\mathrm{*}</math>, then solve <math display=inline>\mathbf{Ly} = \mathbf{b}</math> for {{math|'''y'''}}, and finally solve <math display=inline>\mathbf{DL}^\mathrm{*}\mathbf{x} = \mathbf{y}</math> for {{math|'''x'''}}.

For linear systems that can be put into symmetric form, the Cholesky decomposition (or its LDL variant) is the method of choice, for superior efficiency and numerical stability. Compared to the [[LU decomposition]], it is roughly twice as efficient.<ref name="NR"/>

===Linear least squares===
In the [[linear least squares (mathematics)|linear least squares]] problem one seeks a solution {{math|'''x'''}} of an over-determined system {{math|1='''Ax''' = '''l'''}} such that the quadratic norm of the residual vector {{math|'''Ax''' − '''l'''}} is minimized. This may be accomplished by solving the normal equations <math>\mathbf{Nx}=\mathbf{A}^\mathsf{T}\mathbf{l}</math>, where <math>\mathbf{N}=\mathbf{A}^\mathsf{T}\mathbf{A}</math> is symmetric positive definite, by Cholesky decomposition.
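The two triangular solves described above can be sketched as follows. This is a minimal illustration using NumPy; the factorization is delegated to <code>numpy.linalg.cholesky</code>, while the forward and back substitutions are spelled out explicitly, and the example matrix and right-hand side are illustrative choices, not part of the article.

```python
import numpy as np

def solve_spd(A, b):
    """Solve Ax = b for symmetric positive-definite A via A = L L^T:
    forward substitution for L y = b, then back substitution for L^T x = y."""
    L = np.linalg.cholesky(A)          # lower triangular, A = L @ L.T
    n = len(b)
    # Forward substitution: L y = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Back substitution: L^T x = y
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])             # symmetric positive definite
b = np.array([6.0, 5.0])
x = solve_spd(A, b)                    # x = [1, 1] for this example
```

In practice one would call an optimized triangular solver (e.g. LAPACK's <code>potrs</code>) rather than Python loops; the loops here only make the substitution order explicit.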
A symmetric equation matrix may also come from an energy functional, which must be positive from physical considerations; this happens frequently in the numerical solution of [[partial differential equation]]s. This method is economical and works well in many applications; however, it fails for a near-singular {{math|1='''N'''}}. This is best illustrated by the pathological case of a square <math>\mathbf{A}</math>, where the determinant of {{math|1='''N'''}} is the square of that of the original system {{math|1='''Ax''' = '''l'''}}. In that case it is best to apply an SVD or QR decomposition. Givens QR has the advantage that, as with the normal equations, there is no need to keep the whole matrix {{math|1='''A'''}} in memory, since the Cholesky factor can be updated with consecutive rows of {{math|1='''A'''}}.

===Non-linear optimization===
[[Non-linear least squares]] are a particular case of non-linear optimization. Let <math display=inline>\mathbf{f}(\mathbf{x})=\mathbf{l}</math> be an over-determined system of equations with a non-linear function <math>\mathbf{f}</math> returning vector results. The aim is to minimize the squared norm of the residuals <math display=inline>\mathbf{v}=\mathbf{f}(\mathbf{x})-\mathbf{l}</math>. An approximate [[Newton's method]] solution is obtained by expanding <math>\mathbf{f}</math> into a truncated Taylor series <math>\mathbf{f}(\mathbf{x}_0+\delta\mathbf{x})\approx \mathbf{f}(\mathbf{x}_0)+(\partial\mathbf{f}/\partial\mathbf{x})\,\delta\mathbf{x}</math>, yielding a linear least squares problem for <math>\delta\mathbf{x}</math>:
: <math>(\partial\mathbf{f}/\partial\mathbf{x})\,\delta\mathbf{x}=\mathbf{l}-\mathbf{f}(\mathbf{x}_0)=\mathbf{v},\qquad \min_{\delta\mathbf{x}}\left\|(\partial\mathbf{f}/\partial\mathbf{x})\,\delta\mathbf{x}-\mathbf{v}\right\|^2.</math>
Because the higher Taylor terms are neglected, such a solution is only approximate, if it exists at all. One could then update the expansion point to <math>\mathbf{x}_{n+1}=\mathbf{x}_n+\delta\mathbf{x}</math> and repeat the whole procedure, hoping that (i) the iterations converge to a solution and (ii) the solution is the one needed. Neither is guaranteed, and both must be verified.
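The iteration described above can be sketched as a Gauss–Newton loop in which each linearized step is solved through the normal equations, factored by Cholesky. The model <math>f_i(a,b)=a\,e^{b t_i}</math>, the noise-free data, and the starting point are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Gauss-Newton sketch for min ||f(x) - l||^2 with model f_i(a, b) = a*exp(b*t_i).
# Each step linearizes f, forms the normal matrix N = J^T J, factors it by
# Cholesky, and solves for the correction delta via two triangular systems.
t = np.linspace(0.0, 1.0, 8)
l = 2.0 * np.exp(0.5 * t)                 # synthetic "observations" (a=2, b=0.5)

def f(p):
    a, b = p
    return a * np.exp(b * t)

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])  # columns: df/da, df/db

p = np.array([1.5, 0.3])                   # initial expansion point x_0
for _ in range(20):
    J = jacobian(p)
    v = l - f(p)                           # residual vector
    L = np.linalg.cholesky(J.T @ J)        # N = J^T J is SPD
    y = np.linalg.solve(L, J.T @ v)        # forward solve  L y = J^T v
    delta = np.linalg.solve(L.T, y)        # back solve     L^T delta = y
    p = p + delta
    if np.linalg.norm(delta) < 1e-12:      # convergence must be checked
        break
```

Since the data here are noise-free, the iterations recover the generating parameters; with real data the convergence caveats in the text apply.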
[[Non-linear least squares]] may also be applied to the linear least squares problem by setting <math>\mathbf{x}_0=\mathbf{0}</math> and <math>\mathbf{f}(\mathbf{x})=\mathbf{Ax}</math>. This may be useful if the Cholesky decomposition yields an inaccurate inverse triangular matrix <math>\mathbf{R}^{-1}</math>, where <math>\mathbf{R}^\mathsf{T}\mathbf{R}=\mathbf{N}</math>, because of rounding errors. Such a procedure is called ''differential correction'' of the solution. As long as the iterations converge, by virtue of the Banach fixed-point theorem they yield a solution whose precision is limited only by the precision of the calculated residuals <math>\mathbf{v}=\mathbf{Ax}-\mathbf{l}</math> and is independent of the rounding errors in <math>\mathbf{R}^{-1}</math>. A poor <math>\mathbf{R}^{-1}</math> may restrict the region of initial points <math>\mathbf{x}_0</math> for which the iterations converge, or prevent convergence altogether. Usually convergence is slower, e.g. linear, so that <math>\|\delta\mathbf{x}_{n+1}\|\approx\alpha\|\delta\mathbf{x}_n\|</math> for some constant <math>\alpha<1</math>. Such slow convergence may be sped up by the ''Aitken <math>\delta^2</math>'' method. If the calculation of <math>\mathbf{R}^{-1}</math> is very costly, the factor from previous iterations may be reused as long as convergence is maintained. Such a Cholesky procedure may work even for Hilbert matrices, which are notoriously difficult to invert.<ref>{{cite journal | last1 = Schwarzenberg-Czerny | first1 = A. | journal = Astronomy and Astrophysics Supplement | pages = 405–410 | title = On matrix factorization and efficient least squares solution | volume = 110 | year = 1995| bibcode = 1995A&AS..110..405S }}</ref>

Non-linear multi-variate functions may be minimized over their parameters using variants of [[Newton's method]] called ''quasi-Newton'' methods.
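The differential-correction idea can be sketched numerically: an inexact factor is simulated by perturbing the exact Cholesky factor, and the iteration still converges to the exact least-squares solution, with only the convergence rate degraded. The matrix sizes, random data, and perturbation magnitude are illustrative assumptions.

```python
import numpy as np

# Differential-correction sketch: refine a least-squares solution using a
# deliberately perturbed Cholesky factor R_bad (R^T R = N = A^T A).  Each
# sweep solves N dx ~ A^T v for the residual v = l - A x with the inexact
# factor; the fixed point is the exact least-squares solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
l = rng.standard_normal(10)

N = A.T @ A
R = np.linalg.cholesky(N).T                  # upper triangular, R^T R = N
R_bad = R + 1e-3 * np.triu(rng.standard_normal((3, 3)))  # simulated rounding error

x = np.zeros(3)                              # starting point x_0 = 0
for _ in range(50):
    v = l - A @ x                            # residuals of the current iterate
    g = A.T @ v                              # right-hand side of normal equations
    dx = np.linalg.solve(R_bad, np.linalg.solve(R_bad.T, g))
    x = x + dx
    if np.linalg.norm(dx) < 1e-13:           # linear convergence, rate ~ alpha
        break
```

The corrections shrink geometrically, consistent with the linear convergence rate <math>\alpha</math> described in the text; a larger perturbation of the factor slows or destroys convergence.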
At iteration ''k'', the search steps in a direction <math display=inline> p_k </math> defined by solving <math display=inline> B_k p_k = -g_k </math> for <math display=inline> p_k </math>, where <math display=inline> p_k </math> is the step direction, <math display=inline> g_k </math> is the [[gradient]], and <math display=inline> B_k </math> is an approximation to the [[Hessian matrix]] formed by repeated rank-1 updates at each iteration. Two well-known update formulas are called [[Davidon–Fletcher–Powell]] (DFP) and [[BFGS method|Broyden–Fletcher–Goldfarb–Shanno]] (BFGS). Loss of the positive-definite condition through round-off error is avoided if, rather than updating an approximation to the inverse of the Hessian, one updates the Cholesky decomposition of an approximation of the Hessian matrix itself.<ref>{{Cite book |last=Arora |first=Jasbir Singh |url=https://books.google.com/books?id=9FbwVe577xwC&pg=PA327 |title=Introduction to Optimum Design |date=2004-06-02 |publisher=Elsevier |isbn=978-0-08-047025-2 |language=en}}</ref>

===Monte Carlo simulation===
The Cholesky decomposition is commonly used in the [[Monte Carlo method]] for simulating systems with multiple correlated variables. The [[covariance matrix]] is decomposed to give the lower-triangular {{math|'''L'''}}. Applying this to a vector of uncorrelated observations in a sample {{math|'''u'''}} produces a sample vector {{math|'''Lu'''}} with the covariance properties of the system being modeled.<ref name="Matlab documentation">[http://www.mathworks.com/help/techdoc/ref/randn.html Matlab randn documentation]. mathworks.com.</ref>

The following simplified example shows the economy one gets from the Cholesky decomposition: suppose the goal is to generate two correlated normal variables <math display=inline>x_1</math> and <math display=inline>x_2</math> with given correlation coefficient <math display=inline>\rho</math>.
To accomplish that, it is necessary to first generate two uncorrelated Gaussian random variables <math display=inline>z_1</math> and <math display=inline>z_2</math> (for example, via a [[Box–Muller transform]]). Given the required correlation coefficient <math display=inline>\rho</math>, the correlated normal variables can be obtained via the transformations <math display=inline>x_1 = z_1</math> and <math display="inline">x_2 = \rho z_1 + \sqrt{1 - \rho^2} z_2</math>.

===Kalman filters===
[[Unscented Kalman filter]]s commonly use the Cholesky decomposition to choose a set of so-called sigma points. The Kalman filter tracks the average state of a system as a vector {{math|'''x'''}} of length {{mvar|N}} and covariance as an {{math|''N'' × ''N''}} matrix {{math|'''P'''}}. The matrix {{math|'''P'''}} is always positive semi-definite and can be decomposed into '''LL'''<sup>T</sup>. The columns of {{math|'''L'''}} can be added to and subtracted from the mean {{math|'''x'''}} to form a set of {{math|2''N''}} vectors called ''sigma points''. These sigma points completely capture the mean and covariance of the system state.

===Matrix inversion===
The explicit [[inverse matrix|inverse]] of a Hermitian matrix can be computed by Cholesky decomposition, in a manner similar to solving linear systems, using <math display=inline>n^3</math> operations (<math display=inline>\tfrac{1}{2} n^3</math> multiplications).<ref name="kri"/> The entire inversion can even be efficiently performed in-place.

A non-Hermitian matrix {{math|'''B'''}} can also be inverted using the following identity, where '''BB'''* will always be Hermitian:
<math display=block>\mathbf{B}^{-1} = \mathbf{B}^* (\mathbf{B B}^*)^{-1}.</math>
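Both inversion routes above can be checked numerically. The sketch below, with illustrative matrices, inverts a real symmetric positive-definite matrix through its Cholesky factor, and a non-Hermitian matrix through the identity '''B'''<sup>−1</sup> = '''B'''<sup>*</sup>('''BB'''<sup>*</sup>)<sup>−1</sup>.

```python
import numpy as np

# Inverse of a Hermitian (here real symmetric) positive-definite matrix:
# A = L L^T, so A^{-1} = L^{-T} L^{-1}, obtained via two triangular solves
# against the identity.
A = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 2.0]])
L = np.linalg.cholesky(A)
A_inv = np.linalg.solve(L.T, np.linalg.solve(L, np.eye(3)))

# Non-Hermitian inverse via B^{-1} = B^* (B B^*)^{-1}: B B^* is Hermitian
# positive definite, so it admits a Cholesky factorization.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Lm = np.linalg.cholesky(B @ B.T)
M_inv = np.linalg.solve(Lm.T, np.linalg.solve(Lm, np.eye(2)))  # (B B^*)^{-1}
B_inv = B.T @ M_inv                                            # B^* (B B^*)^{-1}
```

For real matrices the conjugate transpose '''B'''<sup>*</sup> reduces to the transpose, as used here.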