== Factored form ==

One problem with the Kalman filter is its [[numerical stability]]. If the process noise covariance '''Q'''<sub>''k''</sub> is small, round-off error often causes a small positive eigenvalue of the state covariance matrix '''P''' to be computed as a negative number. This renders the numerical representation of '''P''' [[Positive-semidefinite matrix|indefinite]], while its true form is [[Positive-definite matrix|positive-definite]].

Positive definite matrices have the property that they can be factored into the product of a [[Non-singular matrix|non-singular]], [[lower-triangular matrix]] '''S''' and its [[Matrix transpose|transpose]]: '''P''' = '''S'''·'''S'''<sup>T</sup>. The factor '''S''' can be computed efficiently using the [[Cholesky factorization]] algorithm. This product form of the covariance matrix '''P''' is guaranteed to be symmetric, and for all 1 ≤ ''k'' ≤ ''n'', the ''k''-th diagonal element '''P'''<sub>''kk''</sub> is equal to the square of the [[euclidean norm]] of the ''k''-th row of '''S''', which is necessarily positive.

An equivalent form, which avoids many of the [[square root]] operations involved in the [[Cholesky factorization]] algorithm yet preserves the desirable numerical properties, is the U-D decomposition form, '''P''' = '''U'''·'''D'''·'''U'''<sup>T</sup>, where '''U''' is a [[unit triangular matrix]] (with unit diagonal) and '''D''' is a diagonal matrix. Between the two, the U-D factorization uses the same amount of storage and somewhat less computation, and it is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,<ref name=thornton />{{rp|69}} while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton.<ref name=thornton>{{cite thesis|title=Triangular Covariance Factorizations for Kalman Filtering |url=https://ntrs.nasa.gov/citations/19770005172 |type=PhD |first=Catherine L. |last=Thornton |date=15 October 1976 |id=NASA Technical Memorandum 33-798 |publisher=[[NASA]]}}</ref><ref name="bierman" />

The [[LDL decomposition|'''L'''·'''D'''·'''L'''<sup>T</sup> decomposition]] of the innovation covariance matrix '''S'''<sub>''k''</sub> is the basis for another type of numerically efficient and robust square root filter.<ref name=barshalom>{{cite book|last1=Bar-Shalom |first1=Yaakov |author-link1=Yaakov Bar-Shalom |last2=Li |first2=X. Rong |last3=Kirubarajan |first3=Thiagalingam |date=July 2001 |title=Estimation with Applications to Tracking and Navigation |publisher=[[John Wiley & Sons]] |place=New York |isbn=978-0-471-41655-5 |pages=308–317}}</ref> The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage ([[LAPACK]]). These results are further factored into the '''L'''·'''D'''·'''L'''<sup>T</sup> structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.<ref name=golub>{{cite book|last1=Golub |first1=Gene H. |last2=Van Loan |first2=Charles F. |year=1996 |title=Matrix Computations |publisher=[[Johns Hopkins University]] |edition=Third |page=139 |isbn=978-0-8018-5414-9 |place=Baltimore, Maryland |series=Johns Hopkins Studies in the Mathematical Sciences}}</ref> Any singular covariance matrix is [[Pivot element|pivoted]] so that the first diagonal partition is [[Invertible matrix|nonsingular]] and [[Condition number|well-conditioned]]. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state variables '''H'''<sub>''k''</sub>·'''x'''<sub>''k''|''k''−1</sub> that are associated with auxiliary observations in '''y'''<sub>''k''</sub>. The '''L'''·'''D'''·'''L'''<sup>T</sup> square-root filter requires [[orthogonalization]] of the observation vector.<ref name="bierman" /><ref name="barshalom" /> This may be done with the inverse square root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).<ref name=higham>{{cite book|first=Nicholas J. |last=Higham |year=2002 |title=Accuracy and Stability of Numerical Algorithms |edition=Second |isbn=978-0-89871-521-7 |page=680 |publisher=[[Society for Industrial and Applied Mathematics]] |place=Philadelphia, PA}}</ref>
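The Cholesky and U-D factorizations above can be illustrated with a short numerical sketch. The following Python/[[NumPy]] snippet is illustrative only: the example covariance values and the <code>udu_factor</code> helper are hypothetical and are not taken from the cited references, though the downdating loop follows the general pattern of a U-D factorization.

<syntaxhighlight lang="python">
import numpy as np

def udu_factor(P):
    """U-D factorization of a symmetric positive-definite matrix:
    P = U @ diag(d) @ U.T, with U unit upper triangular (illustrative sketch)."""
    P = np.array(P, dtype=float)   # work on a copy
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Downdate the leading block that remains to be factored.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

# Hypothetical covariance matrix.
P = np.array([[4.0, 2.0, 0.6],
              [2.0, 2.0, 0.5],
              [0.6, 0.5, 1.0]])

# Cholesky factor: P = S S^T with S lower triangular.
S = np.linalg.cholesky(P)
# Each diagonal element of P is the squared Euclidean norm of the
# corresponding row of S, hence positive.
assert np.allclose(np.diag(P), np.sum(S**2, axis=1))

# U-D factors: P = U D U^T, computed without square roots.
U, d = udu_factor(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)
</syntaxhighlight>

In a square-root filter the prediction and update recursions are carried out directly on '''S''' or on ('''U''', '''D''') rather than on '''P''', so the reconstructed covariance '''S'''·'''S'''<sup>T</sup> or '''U'''·'''D'''·'''U'''<sup>T</sup> remains symmetric by construction and cannot acquire the spurious negative eigenvalues described above.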