=== Unscented Kalman filter ===
When the state transition and observation models (that is, the predict and update functions <math>f</math> and <math>h</math>) are highly nonlinear, the extended Kalman filter can give particularly poor performance.<ref name="JU2004">{{cite journal | author1 = Julier, Simon J. | author2 = Uhlmann, Jeffrey K. | year = 2004 | title = Unscented filtering and nonlinear estimation | journal = Proceedings of the IEEE | volume = 92 | issue = 3 | pages = 401–422 | url = https://ieeexplore.ieee.org/document/1271397 | doi=10.1109/JPROC.2003.823141 | s2cid = 9614092 }}</ref><ref name="JU97">{{cite book | author1 = Julier, Simon J. | author2 = Uhlmann, Jeffrey K. | year = 1997 | title = Signal Processing, Sensor Fusion, and Target Recognition VI | volume = 3 | pages = 182–193 | chapter-url = http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf | access-date = 2008-05-03 | bibcode = 1997SPIE.3068..182J | doi=10.1117/12.280797 | series=Proceedings of SPIE | citeseerx=10.1.1.5.2891 | chapter = New extension of the Kalman filter to nonlinear systems | s2cid = 7937456 | editor1-last = Kadar | editor1-first = Ivan }}</ref> This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)<ref name="JU2004" /> uses a deterministic sampling technique known as the [[unscented transform|unscented transformation (UT)]] to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points is used. It is always possible to construct new UKFs in a consistent way.<ref>{{Cite journal|last1=Menegaz|first1=H. M. T.|last2=Ishihara|first2=J. Y.|last3=Borges|first3=G. A.|last4=Vargas|first4=A. N.|date=October 2015|title=A Systematization of the Unscented Kalman Filter Theory|journal=IEEE Transactions on Automatic Control|volume=60|issue=10|pages=2583–2598|doi=10.1109/tac.2015.2404511|issn=0018-9286|hdl=20.500.11824/251|s2cid=12606055|hdl-access=free}}</ref> For certain systems, the resulting UKF more accurately estimates the true mean and covariance.<ref name="GH2012">{{cite journal | author = Gustafsson, Fredrik | author2 = Hendeby, Gustaf | year = 2012 | title = Some Relations Between Extended and Unscented Kalman Filters | journal = IEEE Transactions on Signal Processing | volume = 60 | issue = 2 | pages = 545–555 | doi = 10.1109/tsp.2011.2172431 | bibcode= 2012ITSP...60..545G | s2cid = 17876531 | url = http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75272 }}</ref> This can be verified with [[Monte Carlo sampling]] or [[Taylor series]] expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).

==== Sigma points ====
For a [[Random variable|random]] vector <math>\mathbf{x}=(x_1, \dots, x_L)</math>, sigma points are any set of vectors
:<math> \{\mathbf{s}_0,\dots, \mathbf{s}_N \}=\bigl\{\begin{pmatrix} s_{0,1}& s_{0,2}&\ldots& s_{0,L} \end{pmatrix}, \dots, \begin{pmatrix} s_{N,1}& s_{N,2}&\ldots& s_{N,L} \end{pmatrix}\bigr\}</math>
together with
* first-order weights <math>W_0^a, \dots, W_N^a</math> that fulfill
*# <math> \sum_{j=0}^N W_j^a=1 </math>
*# for all <math>i=1, \dots, L</math>: <math> E[x_i]=\sum_{j=0}^N W_j^a s_{j,i} </math>
* second-order weights <math>W_0^c, \dots, W_N^c</math> that fulfill
*# <math> \sum_{j=0}^N W_j^c=1 </math>
*# for all pairs <math> (i,l) \in \{1,\dots, L\}^2: E[x_ix_l]=\sum_{j=0}^N W_j^c s_{j,i}s_{j,l} </math>.
A simple choice of sigma points and weights for <math>\mathbf{x}_{k-1\mid k-1}</math> in the UKF algorithm is
:<math>\begin{align} \mathbf{s}_0&=\hat \mathbf{x}_{k-1\mid k-1}\\ -1&<W_0^a=W_0^c<1\\ \mathbf{s}_j&=\hat \mathbf{x}_{k-1\mid k-1} + \sqrt{\frac{L}{1-W_0}} \mathbf{A}_j, \quad j=1, \dots, L\\ \mathbf{s}_{L+j}&=\hat \mathbf{x}_{k-1\mid k-1} - \sqrt{\frac{L}{1-W_0}} \mathbf{A}_j, \quad j=1, \dots, L\\ W_j^a&=W_j^c=\frac{1-W_0}{2L}, \quad j=1, \dots, 2L \end{align} </math>
where <math>\hat \mathbf{x}_{k-1\mid k-1}</math> is the mean estimate of <math>\mathbf{x}_{k-1\mid k-1}</math>. The vector <math>\mathbf{A}_j</math> is the ''j''th column of <math>\mathbf{A}</math> where <math>\mathbf{P}_{k-1\mid k-1}=\mathbf{AA}^\textsf{T}</math>. Typically, <math>\mathbf{A}</math> is obtained via [[Cholesky decomposition]] of <math>\mathbf{P}_{k-1\mid k-1}</math>. With some care the filter equations can be expressed in such a way that <math>\mathbf{A}</math> is evaluated directly without intermediate calculations of <math>\mathbf{P}_{k-1\mid k-1}</math>. This is referred to as the ''square-root unscented Kalman filter''.<ref>{{cite book |last1=Van der Merwe |first1=R. |last2=Wan |first2=E.A. |title=2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221) |chapter=The square-root unscented Kalman filter for state and parameter-estimation |date=2001 |volume=6 |pages=3461–3464 |doi=10.1109/ICASSP.2001.940586|isbn=0-7803-7041-4 |s2cid=7290857 }}</ref> The weight of the mean value, <math>W_0</math>, can be chosen arbitrarily.
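The simple sigma-point choice above can be implemented and its moment-matching conditions checked numerically. The following is a minimal sketch assuming NumPy; the function name and the example mean and covariance are illustrative, not part of the algorithm.

```python
import numpy as np

def simple_sigma_points(x_mean, P, W0=0.5):
    """Sigma points and weights for the simple choice above (W^a = W^c)."""
    L = x_mean.size
    A = np.linalg.cholesky(P)           # P = A A^T, lower triangular
    c = np.sqrt(L / (1.0 - W0))
    s = np.empty((2 * L + 1, L))
    s[0] = x_mean
    s[1:L + 1] = x_mean + c * A.T       # rows are x_mean + c * A_j (columns of A)
    s[L + 1:] = x_mean - c * A.T
    W = np.full(2 * L + 1, (1.0 - W0) / (2 * L))
    W[0] = W0
    return s, W

# verify the first- and second-order conditions for a 2-D example
mu = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])
s, W = simple_sigma_points(mu, P)
assert np.allclose(W @ s, mu)                            # sum_j W_j s_j = E[x]
assert np.allclose((W * s.T) @ s, P + np.outer(mu, mu))  # sum_j W_j s_j s_j^T = E[x x^T]
```

The second assertion uses the raw second moment <math>E[x_i x_l]</math>, which for a distribution with mean <math>\mu</math> and covariance <math>P</math> equals <math>P + \mu\mu^\textsf{T}</math>.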
Another popular parameterization (which generalizes the above) is
:<math>\begin{align} \mathbf{s}_0&=\hat \mathbf{x}_{k-1\mid k-1}\\ W_0^a&= \frac{\alpha^2\kappa-L}{\alpha^2\kappa}\\ W_0^c&= W_0^a + 1-\alpha^2+\beta \\ \mathbf{s}_j&=\hat \mathbf{x}_{k-1\mid k-1} + \alpha\sqrt{\kappa} \mathbf{A}_j, \quad j=1, \dots, L\\ \mathbf{s}_{L+j}&=\hat \mathbf{x}_{k-1\mid k-1} - \alpha\sqrt{\kappa} \mathbf{A}_j, \quad j=1, \dots, L\\ W_j^a&=W_j^c=\frac{1}{2\alpha^2\kappa}, \quad j=1, \dots, 2L. \end{align} </math>
<math>\alpha</math> and <math>\kappa</math> control the spread of the sigma points, while <math>\beta</math> encodes prior knowledge of the distribution of <math>x</math>. Note that this is an overparameterization in the sense that any one of <math>\alpha</math>, <math>\beta</math> and <math>\kappa</math> can be chosen arbitrarily. Appropriate values depend on the problem at hand, but a typical recommendation is <math>\alpha = 1</math>, <math>\beta = 0</math>, and <math>\kappa \approx 3L/2</math>.{{cn|date=January 2025}} If the true distribution of <math>x</math> is Gaussian, <math>\beta = 2</math> is optimal.<ref>{{Cite book |doi=10.1109/ASSPCC.2000.882463 |chapter-url=http://www.lara.unb.br/~gaborges/disciplinas/efe/papers/wan2000.pdf |chapter=The unscented Kalman filter for nonlinear estimation |title=Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373) |page=153 |year=2000 |last1=Wan |first1=E.A. |last2=Van Der Merwe |first2=R. |isbn=978-0-7803-5800-3 |citeseerx=10.1.1.361.9373 |s2cid=13992571 |access-date=2010-01-31 |archive-date=2012-03-03 |archive-url=https://web.archive.org/web/20120303020429/http://www.lara.unb.br/~gaborges/disciplinas/efe/papers/wan2000.pdf |url-status=dead }}</ref>

==== Predict ====
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
Given estimates of the mean and covariance, <math> \hat\mathbf{x}_{k-1\mid k-1}</math> and <math>\mathbf{P}_{k-1\mid k-1}</math>, one obtains <math> N = 2L+1 </math> sigma points as described in the section above. The sigma points are propagated through the transition function ''f'':
:<math>\mathbf{x}_{j} = f\left(\mathbf{s}_{j}\right), \quad j = 0, \dots, 2L. </math>
The propagated sigma points are weighted to produce the predicted mean and covariance:
:<math>\begin{align} \hat{\mathbf{x}}_{k \mid k-1} &= \sum_{j=0}^{2L} W_j^a \mathbf{x}_j \\ \mathbf{P}_{k \mid k-1} &= \sum_{j=0}^{2L} W_j^c \left(\mathbf{x}_j - \hat{\mathbf{x}}_{k \mid k-1}\right)\left(\mathbf{x}_j - \hat{\mathbf{x}}_{k \mid k-1}\right)^\textsf{T}+\mathbf{Q}_k \end{align}</math>
where <math>W_j^a</math> are the first-order weights of the original sigma points, and <math>W_j^c</math> are the second-order weights. The matrix <math> \mathbf{Q}_k </math> is the covariance of the transition noise, <math>\mathbf{w}_k</math>.

==== Update ====
Given prediction estimates <math>\hat{\mathbf{x}}_{k \mid k-1}</math> and <math>\mathbf{P}_{k \mid k-1}</math>, a new set of <math>N = 2L+1</math> sigma points <math>\mathbf{s}_0, \dots, \mathbf{s}_{2L}</math> with corresponding first-order weights <math> W_0^a,\dots, W_{2L}^a</math> and second-order weights <math>W_0^c,\dots, W_{2L}^c</math> is calculated.<ref>{{cite journal |last1=Sarkka |first1=Simo |title=On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems |journal=IEEE Transactions on Automatic Control |date=September 2007 |volume=52 |issue=9 |pages=1631–1641 |doi=10.1109/TAC.2007.904453}}</ref> These sigma points are transformed through the measurement function <math>h</math>:
:<math> \mathbf{z}_j=h(\mathbf{s}_j), \quad j=0,1, \dots, 2L. </math>
Then the empirical mean and covariance of the transformed points are calculated:
:<math>\begin{align} \hat{\mathbf{z}} &= \sum_{j=0}^{2L} W_j^a \mathbf{z}_j \\[6pt] \hat{\mathbf{S}}_k &= \sum_{j=0}^{2L} W_j^c (\mathbf{z}_j-\hat{\mathbf{z}})(\mathbf{z}_j-\hat{\mathbf{z}})^\textsf{T} + \mathbf{R}_k \end{align}</math>
where <math>\mathbf{R}_k</math> is the covariance matrix of the observation noise, <math>\mathbf{v}_k</math>. The cross-covariance matrix is also needed:
:<math>\mathbf{C}_{xz} = \sum_{j=0}^{2L} W_j^c (\mathbf{s}_j-\hat\mathbf{x}_{k\mid k-1})(\mathbf{z}_j-\hat{\mathbf{z}})^\textsf{T}.</math>
The Kalman gain is
:<math>\mathbf{K}_k=\mathbf{C}_{xz}\hat{\mathbf{S}}_k^{-1}.</math>
The updated mean and covariance estimates are
:<math>\begin{align} \hat\mathbf{x}_{k\mid k}&=\hat\mathbf{x}_{k\mid k-1}+\mathbf{K}_k(\mathbf{z}_k-\hat{\mathbf{z}})\\ \mathbf{P}_{k\mid k}&=\mathbf{P}_{k\mid k-1}-\mathbf{K}_k\hat{\mathbf{S}}_k\mathbf{K}_k^\textsf{T}. \end{align} </math>
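The predict and update steps above can be sketched in code. The following is a hedged illustration assuming NumPy and the additive-noise model of this section; the function names are invented here, the sigma points use the simple choice described earlier, and <math>f</math>, <math>h</math>, <math>\mathbf{Q}_k</math>, and <math>\mathbf{R}_k</math> are supplied by the caller.

```python
import numpy as np

def sigma_points(x_mean, P, W0=0.5):
    # simple sigma-point choice from the "Sigma points" subsection
    L = x_mean.size
    A = np.linalg.cholesky(P)                       # P = A A^T
    c = np.sqrt(L / (1.0 - W0))
    s = np.vstack([x_mean, x_mean + c * A.T, x_mean - c * A.T])
    W = np.full(2 * L + 1, (1.0 - W0) / (2 * L))
    W[0] = W0
    return s, W                                      # W^a = W^c = W here

def ukf_step(x, P, z, f, h, Q, R):
    # --- predict: propagate sigma points through the transition function f
    s, W = sigma_points(x, P)
    X = np.array([f(sj) for sj in s])
    x_pred = W @ X
    P_pred = (W * (X - x_pred).T) @ (X - x_pred) + Q

    # --- update: new sigma points around the prediction, mapped through h
    s, W = sigma_points(x_pred, P_pred)
    Z = np.array([h(sj) for sj in s])
    z_hat = W @ Z
    S = (W * (Z - z_hat).T) @ (Z - z_hat) + R        # innovation covariance
    C = (W * (s - x_pred).T) @ (Z - z_hat)           # cross covariance C_xz
    K = C @ np.linalg.inv(S)                         # Kalman gain
    x_new = x_pred + K @ (z - z_hat)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new

# example: one step on a linear system (illustrative matrices)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x_new, P_new = ukf_step(np.zeros(2), np.eye(2), np.array([0.5]),
                        lambda x: F @ x, lambda x: H @ x,
                        0.01 * np.eye(2), np.array([[0.1]]))
```

For a linear <math>f</math> and <math>h</math>, as in the example call, the sigma points capture the mean and covariance exactly, so this sketch reproduces the ordinary Kalman filter step up to floating-point error.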