Kalman Filter: EKF, UKF
Pieter Abbeel
UC Berkeley EECS
Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
Kalman Filter
n Kalman Filter = special case of a Bayes' filter with dynamics model and
sensory model being linear Gaussian:

    x_t = A_t x_{t-1} + B_t u_t + w_t,   w_t ~ N(0, Q_t)
    z_t = C_t x_t + v_t,                 v_t ~ N(0, R_t)
Kalman Filtering Algorithm
n At time 0: belief is N(μ_0, Σ_0)
n For t = 1, 2, …
n Dynamics update:
    μ̄_t = A_t μ_{t-1} + B_t u_t
    Σ̄_t = A_t Σ_{t-1} A_tᵀ + Q_t
n Measurement update:
    K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + R_t)⁻¹
    μ_t = μ̄_t + K_t (z_t − C_t μ̄_t)
    Σ_t = (I − K_t C_t) Σ̄_t
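A minimal NumPy sketch of one filter step under this model (the function and variable names are mine, not the slides'):

```python
import numpy as np

# Sketch of one Kalman filter step for the linear-Gaussian model above.
def kf_step(mu, Sigma, u, z, A, B, C, Q, R):
    # Dynamics update (predict)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Measurement update (correct)
    S = C @ Sigma_bar @ C.T + R               # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)    # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```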
Linearity Assumption Revisited
[Figure: a Gaussian p(x) passed through a linear function; the resulting p(y) is again Gaussian.]
Non-linear Function
[Figure: a Gaussian p(x) passed through a non-linear function; the resulting p(y) is non-Gaussian. The "Gaussian of p(y)" shown is the Gaussian with the mean and variance of y under p(y).]
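Although not on the slides, the idea can be illustrated numerically: sample from p(x), push the samples through a non-linear function, and form the Gaussian with the mean and variance of y under p(y) (the function f below is an arbitrary example):

```python
import numpy as np

# Illustrative sketch: propagate a Gaussian p(x) through a non-linear function
# and report the mean and variance of y under p(y), i.e. the "Gaussian of p(y)".
rng = np.random.default_rng(0)

mu_x, sigma_x = 0.5, 1.0                 # p(x) = N(mu_x, sigma_x^2)
f = lambda x: np.sin(x) + 0.1 * x**2     # arbitrary non-linear function

xs = rng.normal(mu_x, sigma_x, size=100_000)
ys = f(xs)                               # samples from p(y)

print("mean of y:", ys.mean(), " variance of y:", ys.var())
```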
EKF Linearization (1)–(3)
[Figure slides illustrating EKF linearization of the non-linear function and the resulting Gaussian approximation.]
EKF Linearization: Numerical
n Here e_i is the basis vector with all entries equal to zero, except for the
i-th entry, which equals 1.
n To approximate F_t as closely as possible, ε is chosen to be a small number,
but not so small that numerical issues arise.
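A hedged sketch of such a numerical Jacobian, using central differences (the helper name and the choice of central differences are assumptions, not prescribed by the slides):

```python
import numpy as np

# Sketch: approximate the Jacobian of f at x, one basis vector e_i at a time.
def numerical_jacobian(f, x, eps=1e-5):
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e_i = np.zeros_like(x)
        e_i[i] = 1.0                  # all zeros except the i-th entry
        J[:, i] = (np.asarray(f(x + eps * e_i)) - np.asarray(f(x - eps * e_i))) / (2 * eps)
    return J
```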
Ordinary Least Squares
n Recall our objective: fit the line a_0 + a_1 x to data points (x^(j), y^(j)) by minimizing

    min_{a_0, a_1} Σ_j ( y^(j) − (a_0 + a_1 x^(j)) )²

n Let's write this in vector notation: a = [a_0; a_1], b^(j) = [1; x^(j)], giving:

    min_a Σ_j ( y^(j) − (b^(j))ᵀ a )²
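A small NumPy illustration of this objective (the data and names are made up for the example):

```python
import numpy as np

# Sketch: ordinary least squares fit of y ≈ a0 + a1*x.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=50)    # noisy line

B = np.column_stack([np.ones_like(x), x])          # rows b^(j) = [1, x^(j)]
a, *_ = np.linalg.lstsq(B, y, rcond=None)          # solves min_a ||B a - y||^2
print("a0, a1 =", a)
```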
Ordinary Least Squares
[Figure: least-squares fit to sample data points.]
n More generally, with several inputs: fit y ≈ a_0 + a_1 x_1 + … + a_n x_n with the same least-squares objective.
Vector Valued Ordinary Least Squares Problems
n For vector-valued outputs, each output dimension defines its own (independent) OLS problem.
n Solving the OLS problem for each row gives us the full coefficient matrix, one row per output dimension.
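A sketch of the vector-valued case (illustrative data and names; np.linalg.lstsq handles all output dimensions in one call):

```python
import numpy as np

# Sketch: vector-valued OLS. Each row of A solves an independent scalar OLS
# problem; lstsq solves them all at once.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))                         # inputs, one sample per row
A_true = np.array([[1.0, -2.0, 0.5],
                   [0.0,  3.0, 1.0]])
Y = X @ A_true.T + 0.05 * rng.normal(size=(100, 2))   # vector-valued targets

A_fit_T, *_ = np.linalg.lstsq(X, Y, rcond=None)       # solves min ||X A^T - Y||^2
print(A_fit_T.T)                                      # ≈ A_true, row by row
```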
OLS and EKF Linearization: Sample Point Selection
n OLS vs. traditional (tangent) linearization:
[Figure: OLS linearization, fit to sample points spread over the region of interest, vs. traditional (tangent) linearization at a single point.]
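A toy comparison of the two choices (the function and names below are illustrative assumptions, not slide code):

```python
import numpy as np

# Sketch: tangent linearization at the mean vs. OLS linearization fit over
# sample points drawn from p(x).
f = np.sin
mu, sigma = 1.0, 0.8

# Traditional (tangent) linearization: slope is f'(mu), evaluated only at mu.
slope_tangent = np.cos(mu)

# OLS linearization: fit a line to f over samples spread across p(x)'s mass.
rng = np.random.default_rng(3)
xs = rng.normal(mu, sigma, size=1000)
B = np.column_stack([np.ones_like(xs), xs])
(a0, a1), *_ = np.linalg.lstsq(B, f(xs), rcond=None)

print("tangent slope:", slope_tangent, " OLS slope:", a1)  # they differ when f is curved over the region
```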
Analytical vs. Numerical Linearization
n Numerical linearization (based on least squares or finite differences) can
give a more accurate "regional" approximation; the size of the region is
determined by the evaluation points.
n Computational efficiency:
n Analytical derivatives can be cheaper or more expensive
than function evaluations
n Development hint:
n Numerical derivatives tend to be easier to implement
n If you decide to use analytical derivatives, also implementing the finite-difference
derivative and comparing the two can help debug the analytical derivatives
(see the sketch below)
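A hedged example of that debugging check, comparing an analytical Jacobian against central finite differences (the test function is made up):

```python
import numpy as np

# Sketch: check a hand-derived Jacobian against central finite differences.
def f(x):                                   # example function: (x0*x1, sin(x0))
    return np.array([x[0] * x[1], np.sin(x[0])])

def jac_analytical(x):                      # hand-derived Jacobian of f
    return np.array([[x[1], x[0]],
                     [np.cos(x[0]), 0.0]])

def jac_numerical(f, x, eps=1e-5):          # central differences, column by column
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = 1.0
        J[:, i] = (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    return J

x0 = np.array([0.3, -1.2])
print(np.max(np.abs(jac_analytical(x0) - jac_numerical(f, x0))))  # should be tiny
```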
EKF Algorithm
n At time 0: belief is N(μ_0, Σ_0)
n For t = 1, 2, …
n Dynamics update: linearize the dynamics f around μ_{t-1} (Jacobian F_t), then
    μ̄_t = f(μ_{t-1}, u_t)
    Σ̄_t = F_t Σ_{t-1} F_tᵀ + Q_t
n Measurement update: linearize the measurement function h around μ̄_t (Jacobian H_t), then
    K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + R_t)⁻¹
    μ_t = μ̄_t + K_t (z_t − h(μ̄_t))
    Σ_t = (I − K_t H_t) Σ̄_t
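A corresponding EKF step sketch in NumPy (the function and argument names are assumptions, not the slides' code):

```python
import numpy as np

# Sketch of one EKF step: nonlinear mean propagation, Jacobian-based covariances.
def ekf_step(mu, Sigma, u, z, f, F_jac, h, H_jac, Q, R):
    # Dynamics update: propagate the mean through f, the covariance through F_t
    F = F_jac(mu, u)
    mu_bar = f(mu, u)
    Sigma_bar = F @ Sigma @ F.T + Q
    # Measurement update: linearize h around the predicted mean
    H = H_jac(mu_bar)
    S = H @ Sigma_bar @ H.T + R
    K = Sigma_bar @ H.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```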
EKF Summary
n Highly efficient: Polynomial in measurement dimensionality k
and state dimensionality n:
O(k^2.376 + n^2)
UKF Sigma-Point Estimate (1)–(4)
[Figure slides, panels labeled EKF vs. UKF: sigma points X are drawn from the Gaussian, propagated through the non-linear function, Y = f(X), and a Gaussian is fit to the transformed points; shown alongside the EKF's linearization-based estimate.]
Self-quiz
n When would the UKF significantly outperform the EKF?
Original unscented transform
n Picks a minimal set of sample points that match the 1st, 2nd and 3rd moments
of a Gaussian:

    X^0 = μ,                          w^0 = κ / (n + κ)
    X^i = μ + ( √((n + κ) Σ) )_i,     w^i = 1 / (2(n + κ)),      i = 1, …, n
    X^{i+n} = μ − ( √((n + κ) Σ) )_i, w^{i+n} = 1 / (2(n + κ)),  i = 1, …, n

where ( · )_i denotes the i-th column of the matrix square root.
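A NumPy sketch of this construction (the parameter κ and the use of a Cholesky factor as the matrix square root are implementation assumptions):

```python
import numpy as np

# Sketch: 2n+1 sigma points and weights for N(mu, Sigma), propagated through
# a non-linear function f, then refit as a Gaussian.
def unscented_transform(mu, Sigma, f, kappa=1.0):
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * Sigma)         # matrix square root
    X = np.vstack([mu, mu + L.T, mu - L.T])             # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)

    Y = np.array([f(x) for x in X])                     # Y = f(X): propagate each point
    mu_y = w @ Y
    Sigma_y = (w[:, None] * (Y - mu_y)).T @ (Y - mu_y)  # weighted covariance of Y
    return mu_y, Sigma_y
```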
[Table 3.4 in Probabilistic Robotics]
UKF Summary
n Highly efficient: same complexity as EKF, a constant factor slower in
typical practical applications
n Better linearization than EKF: accurate in the first two terms of the
Taylor expansion (EKF only the first term), and captures more aspects of
the higher-order terms
n Derivative-free: No Jacobians needed
n Still not optimal!