Lecture4 2023 Annotated


Bayes Filters are really important!

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

Bel(xt−1) → [Prediction step] → Bel̄(xt) → [Update step] → Bel(xt)

How do we represent the probability density
functions, in particular bel(xt)?
• Particle filter (non-parametric)
• Kalman filter (parametric)
Kalman filter (Thrun Chp 3)
• A parametric Bayes Filter that represents probability density functions with Gaussians
• Bel(xt) is updated by exploiting the properties of Gaussians with respect to the mean, μ, and variance, σ²
• We will be considering discrete-time Kalman Filters (KFs)
General 1-D Kalman Filter

Prediction step:  p(xt | xt−1, ut)    xt = a xt−1 + b ut + εt
Update step:      p(zt | xt)          zt = c xt + δt
Belief:           Bel(xt) = p(xt | z1:t, u1:t) ~ N(μt, σt²)
Predicted belief: Bel̄(xt) ~ N(μ̄t, σ̄t²)

Update timestep: Bel̄(xt), produced from Bel(xt−1) by the prediction step via p(xt | xt−1, ut), is combined with p(zt | xt) to give Bel(xt)
Multivariate Gaussians

p(x) ~ N(μ, Σ):

p(x) = (2π)^(−d/2) |Σ|^(−1/2) exp( −½ (x − μ)^T Σ^−1 (x − μ) )

x = [x]     μ = [μx]     Σ = [σx²   σxy²]
    [y]         [μy]         [σyx²  σy² ]

σx²  = (1/n) Σi=1..n (xi − μx)²
σxy² = (1/n) Σi=1..n (xi − μx)(yi − μy) = σyx²

Σ = Σ^T  [Covariance matrix]
Multivariate Gaussian Systems:
Initialization

• Initial belief is normally distributed:

bel(x0) = N(x0; μ0, Σ0)
Kalman Filter Algorithm

1. Algorithm Kalman_filter(μt−1, Σt−1, ut, zt):

2. Prediction:
3.    μ̄t = At μt−1 + Bt ut
4.    Σ̄t = At Σt−1 At^T + Rt

5. Update:
6.    Kt = Σ̄t Ct^T (Ct Σ̄t Ct^T + Qt)^−1
7.    μt = μ̄t + Kt (zt − Ct μ̄t)
8.    Σt = (I − Kt Ct) Σ̄t

9. Return μt, Σt
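The nine-line algorithm above maps almost directly onto code. Below is a minimal numpy sketch of one filter iteration; passing the model matrices At, Bt, Ct, Rt, Qt as arguments (and the function name itself) are illustrative choices, not from the slides.

```python
# A minimal sketch of one Kalman filter iteration (numpy).
# Matrix names (A, B, C, R, Q) match the slide: R is motion noise,
# Q is measurement noise covariance.
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction: propagate mean and covariance through the linear model
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Update: Kalman gain, then correct the prediction with measurement z
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```

In the scalar case this reduces to the 1-D filter from the earlier slide, with a, b, c as 1×1 matrices.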
Kalman Filter Example 1: Correction

Kalman Filter for mobile robots

• Let’s apply what we’ve learned to the fundamental problem we’re working on in this course!
• Define our vectors
• Define our models (i.e. motion model, sensor model)
Velocity Motion Model
(Thrun Chp 5.3)
• Assume that over a short timespan, Δt, a robot maintains a commanded constant linear velocity, v, and constant angular velocity, ω
Velocity Motion Model: Ideal case

• Assume actual v & ω known precisely
• Recall, from mechanics: ω = v / r
• For a given v & ω, they are related by a unique r (Note: ω → 0 implies r → ∞)
• This r defines an arc of a circle the robot is traveling on (during Δt). The centre of this circle is called the instantaneous centre of rotation, <xc, yc>
Velocity Motion Model:
Real / non-ideal case

[x't]   [xt−1]   [ −(vt/ωt) sin θt−1 + (vt/ωt) sin(θt−1 + ωt Δt) ]
[y't] = [yt−1] + [  (vt/ωt) cos θt−1 − (vt/ωt) cos(θt−1 + ωt Δt) ] + N(0, Rt)
[θ't]   [θt−1]   [  ωt Δt                                        ]

xt = [xt, yt, θt]^T        ut = [vt, ωt]^T

Not in the form: xt = At xt−1 + Bt ut + εt

Nonlinear!?  xt = g(ut, xt−1) — no A matrix
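The deterministic part of g(ut, xt−1) above can be sketched directly in code. This is an illustrative implementation, assuming state (x, y, θ) and command (v, ω); the noise term N(0, Rt) is omitted, and the ω → 0 case falls back to straight-line motion.

```python
# Sketch of the velocity motion model g(u_t, x_{t-1}) (Thrun eq. 5.9),
# deterministic part only. x = (x, y, theta), u = (v, omega), dt = Δt.
import math

def velocity_motion_model(x, u, dt):
    px, py, theta = x
    v, w = u
    if abs(w) < 1e-9:
        # omega -> 0: the arc degenerates to a straight line (r -> infinity)
        return (px + v * dt * math.cos(theta),
                py + v * dt * math.sin(theta),
                theta)
    r = v / w  # radius of the arc (instantaneous centre of rotation)
    return (px - r * math.sin(theta) + r * math.sin(theta + w * dt),
            py + r * math.cos(theta) - r * math.cos(theta + w * dt),
            theta + w * dt)
```

For example, commanding v = π/2, ω = π/2 for Δt = 1 from the origin drives a quarter circle of radius 1, ending at (1, 1) with heading π/2.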
Nonlinear motion models

Mean for linear motion model:
xt = At xt−1 + Bt ut + εt   →   μ̄t = At μt−1 + Bt ut

Mean for nonlinear motion model:
xt = g(ut, xt−1)   →   μ̄t = g(ut, μt−1)
Can still be computed directly

Variance for linear motion model:
Σ̄t = At Σt−1 At^T + Rt

For nonlinear, what do we do without A?


Nonlinear motion models
• Don’t need A to revise estimate of μ — can compute directly
• Need A to revise estimate of Σ
• Note: Also need C for Kt, Σt
• Can we approximate A? (and C?)
• What is A? (or a?)
• How does it relate to Σ? (or σ?)
Approximating nonlinear motion

• In a linear transition, a Gaussian stays a Gaussian
• Remember, maintaining this Gaussian parameterization is the fundamental assumption of the Kalman Filter
• a is for 1-D input & output state “vectors”
• A is equivalent for >1 input and >1 output
Approximating nonlinear motion
• What is A? (or a?)
• a defines the line for relating 1-D input & output state “vectors”
• A is equivalent (plane) for >1-D input
Relating changes in input to output

Linear

1-D: y = ax + b
small change in x → (× a) → corresponding change in y

2-D: y = Ax
[y1]   [a11  a12] [x1]
[y2] = [a21  a22] [x2]
small change in xj → (× aij) → corresponding change in yi
• So… what if motion model is nonlinear?

Approximating nonlinear motion

• So… what if motion model is nonlinear?
• No longer maintains bel(xt) as a Gaussian
• Can no longer track state estimate using the Gaussian parameters (μ, σ²)
• We broke the Kalman Filter!
• One fix is to linearize
  • Extended Kalman Filter
Extended Kalman Filter
(Thrun Chp 3.3)
• Treat a non-linear system like a linear one by linearizing (Taylor Series expansion)
Linearizing nonlinear motion

# of inputs (n) | # of outputs (m) | Linear / Nonlinear           | A tangent to this… | …is defined by its:
1               | 1                | Linear (y = mx + b)          | Line               | Slope m → 1-D KF
1               | 1                | Nonlinear (y = g(x))         | Curve              | Derivative dg/dx(x) → 1-D EKF
>1              | 1                | Linear (y = m1x1 + m2x2 + …) | Plane              | Normal vector
>1              | 1                | Nonlinear (y = g(x1, x2, …)) | Surface            | Gradient vector

∇g(x1, x2, …) = [ ∂g/∂x1 (x1, x2, …)   ∂g/∂x2 (x1, x2, …)   … ]
Gradient vector and tangent plane

# of inputs (n) | # of outputs (m) | Linear / Nonlinear           | A tangent to this…                    | …is defined by its:
1               | 1                | Linear (y = ax + b)          | Line                                  | Slope m → 1-D KF
1               | 1                | Nonlinear (y = g(x))         | Curve                                 | Derivative dg/dx(x) → 1-D EKF
>1              | 1                | Linear (y = a1x1 + a2x2 + …) | Plane                                 | Normal vector
>1              | 1                | Nonlinear (y = g(x1, x2, …)) | Surface                               | Gradient vector ∇g = [∂g/∂x1  ∂g/∂x2  …]
>1              | >1 (in general)  | Nonlinear                    | Manifold (a collection of m surfaces) | Jacobian matrix G(x1, x2, …) = [∇g1; ∇g2; …] → General EKF
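The derivative, gradient, and Jacobian entries in the table can all be approximated numerically, which is a handy way to check a hand-derived Gt or Ht. The helper below is an illustration (not from the slides), using simple forward differences.

```python
# Forward-difference approximation of the Jacobian of g at x0.
# g maps an n-vector to an m-vector; the returned Jacobian is m x n,
# with J[i][j] = ∂g_i/∂x_j evaluated (approximately) at x0.
def numerical_jacobian(g, x0, eps=1e-6):
    y0 = g(x0)
    m, n = len(y0), len(x0)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        x = list(x0)
        x[j] += eps            # perturb one input at a time
        y = g(x)
        for i in range(m):
            J[i][j] = (y[i] - y0[i]) / eps
    return J
```

For g(x1, x2) = (x1², x1·x2) at (2, 3), this returns approximately [[4, 0], [3, 2]], matching the analytic partial derivatives.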
Relating changes in input to output

Nonlinear

1-D: y = g(x)  →  y ≈ g(x0) + dg/dx|x0 · (x − x0)

small change in x → (× dg/dx) → corresponding change in y
(in the vicinity of x0)
Relating changes in input to output

Nonlinear

2-D: y = g(x)  →  Δy ≈ G(x0) Δx

[y1]   [g1(x1, x2)]       [Δy1]   [∂g1/∂x1  ∂g1/∂x2] [Δx1]
[y2] = [g2(x1, x2)]   →   [Δy2] ≈ [∂g2/∂x1  ∂g2/∂x2] [Δx2]

small change in xj → (× ∂gi/∂xj) → corresponding change in yi
(in the vicinity of x0)
Relating changes in input to output

Compare to linear:

2-D: y = Ax
[y1]   [a11  a12] [x1]
[y2] = [a21  a22] [x2]

small change in xj → (× aij) → corresponding change in yi

Jacobian G(x0) approximates A
(in the vicinity of x0)
EKF with Velocity Motion Model

• Compute Jacobian of velocity motion model

xt = g(ut, xt−1):

[x't]   [ xt−1 − (vt/ωt) sin θt−1 + (vt/ωt) sin(θt−1 + ωt Δt) ]
[y't] = [ yt−1 + (vt/ωt) cos θt−1 − (vt/ωt) cos(θt−1 + ωt Δt) ]
[θ't]   [ θt−1 + ωt Δt                                        ]
[x't]   [xt−1]   [ −(vt/ωt) sin θt−1 + (vt/ωt) sin(θt−1 + ωt Δt) ]
[y't] = [yt−1] + [  (vt/ωt) cos θt−1 − (vt/ωt) cos(θt−1 + ωt Δt) ] + N(0, Rt)
[θ't]   [θt−1]   [  ωt Δt                                        ]
EKF Algorithm

1. Extended_Kalman_filter(μt−1, Σt−1, ut, zt):

2. Prediction:
3.    μ̄t = g(ut, μt−1)              (KF: μ̄t = At μt−1 + Bt ut)
4.    Σ̄t = Gt Σt−1 Gt^T + Rt        (KF: Σ̄t = At Σt−1 At^T + Rt)

5. Update:
6.    Kt = Σ̄t Ht^T (Ht Σ̄t Ht^T + Qt)^−1
7.    μt = μ̄t + Kt (zt − h(μ̄t))
8.    Σt = (I − Kt Ht) Σ̄t

9. Return μt, Σt

Gt = ∂g(ut, μt−1) / ∂xt−1
Nonlinear Dynamic Systems
• Most realistic robotic problems involve nonlinear functions (for motion and/or sensor models)

xt = g(ut, xt−1)
zt = h(xt)
EKF Linearization: First Order
Taylor Series Expansion
• Prediction:

g(ut, xt−1) ≈ g(ut, μt−1) + ∂g(ut, μt−1)/∂xt−1 · (xt−1 − μt−1)
g(ut, xt−1) ≈ g(ut, μt−1) + Gt (xt−1 − μt−1)

• Update:

h(xt) ≈ h(μ̄t) + ∂h(μ̄t)/∂xt · (xt − μ̄t)
h(xt) ≈ h(μ̄t) + Ht (xt − μ̄t)
EKF Algorithm

1. Extended_Kalman_filter(μt−1, Σt−1, ut, zt):

2. Prediction:
3.    μ̄t = g(ut, μt−1)                          (KF: μ̄t = At μt−1 + Bt ut)
4.    Σ̄t = Gt Σt−1 Gt^T + Rt                    (KF: Σ̄t = At Σt−1 At^T + Rt)

5. Update:
6.    Kt = Σ̄t Ht^T (Ht Σ̄t Ht^T + Qt)^−1         (KF: Kt = Σ̄t Ct^T (Ct Σ̄t Ct^T + Qt)^−1)
7.    μt = μ̄t + Kt (zt − h(μ̄t))                 (KF: μt = μ̄t + Kt (zt − Ct μ̄t))
8.    Σt = (I − Kt Ht) Σ̄t                       (KF: Σt = (I − Kt Ct) Σ̄t)

9. Return μt, Σt

Ht = ∂h(μ̄t) / ∂xt        Gt = ∂g(ut, μt−1) / ∂xt−1
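The EKF algorithm above differs from the KF only in lines 3–4 (pushing the mean through g and the covariance through Gt) and in using h, Ht in the update. A minimal numpy sketch, assuming the caller supplies g, h and functions that evaluate the Jacobians Gt and Ht at the current mean (the signature is an illustrative choice):

```python
# Sketch of one EKF iteration. g, h are the nonlinear motion/sensor models;
# G(u, mu) and H(mu) return their Jacobians evaluated at the mean,
# as in lines 3-8 of the algorithm above.
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    # Prediction: mean through g, covariance through the Jacobian Gt
    mu_bar = g(u, mu)
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Update: linearize h at mu_bar, then the usual Kalman correction
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new
```

With linear g and h (and constant Jacobians), this reduces exactly to the KF of the earlier slide.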
Linearity Assumption Revisited

Non-linear Function

EKF Linearization (1)

EKF Linearization (2)

EKF Linearization (3)
Determining if EKF is a good choice

• EKFs work best when:


• The underlying dynamics are close to
linear
• The errors can be kept manageably small

Velocity Motion Model:
Real / non-ideal case
• Next, we need to find Rt — additional noise/errors introduced during motion
• How do we model velocity motion model errors?
• What do they depend on?
What does Rt depend on?
• Are these robots likely to generate the same amount of absolute error in x (after the same Δt)?

[Figure: two robots with the same commanded velocity vt — one driving along x, one stationary]
Noise for Velocity Motion Model
• How does our belief about the actual velocities (denoted here with ^) correspond to commanded velocities (vt, ωt)?

v̂t = N(vt, α3 vt² + α4 ωt²)
ω̂t = N(ωt, α2 vt² + α1 ωt²)

recall N(μ, σ²) is a Gaussian with mean μ and variance σ²


Noise for Velocity Motion Model

v̂t = N(vt, α3 vt² + α4 ωt²)
ω̂t = N(ωt, α2 vt² + α1 ωt²)

• α1 is a measure of rotational error
• α3 is a measure of translational error
• α2 is a measure of rotational error due to translation
• α4 is a measure of translational error due to rotation
Propagated uncertainty

[Figure: sampled pose clouds — low α1, α2 / high α3, α4 vs. high α1, α2 / low α3, α4]

Typical αi’s

Coefficient | describes error in | due to
α1          | rotation           | rotation
α2          | rotation           | translation
α3          | translation        | translation
α4          | translation        | rotation

Note: coefficient numbering different from textbook
Noise Model for Odometry

v̂t = N(vt, α3 vt² + α4 ωt²)
ω̂t = N(ωt, α2 vt² + α1 ωt²)

• Define a related matrix:

Mt = [ α3 vt² + α4 ωt²          0            ]
     [ 0                 α2 vt² + α1 ωt²     ]
Relating velocity errors to Rt
• The noise model gives us the noise with respect to commanded velocities, i.e. ut
• How do we relate this to noise with respect to the current state?

Noise wrt ut → ? → Noise wrt xt
Relating velocity errors to Rt
• Recall, to relate noise wrt xt−1 to noise wrt xt, we used the Jacobian:

Gt = ∂g(ut, μt−1) / ∂xt−1

Noise wrt xt−1 → Noise wrt xt

• Revised post-motion noise (covariance), assume for now Rt = 0:

Σ̄t = Gt Σt−1 Gt^T
Relating velocity errors to Rt
• We can do something similar for noise wrt ut:

Vt = ∂g(ut, μt−1) / ∂ut

Noise wrt ut → Noise wrt xt

• Post-motion noise (covariance):

Σ̄t = Gt Σt−1 Gt^T + Vt Mt Vt^T,   where Rt = Vt Mt Vt^T
Relating velocity errors to Rt

     [ ∂xt/∂vt   ∂xt/∂ωt ]   [ ∂g1/∂vt   ∂g1/∂ωt ]
Vt = [ ∂yt/∂vt   ∂yt/∂ωt ] = [ ∂g2/∂vt   ∂g2/∂ωt ]
     [ ∂θt/∂vt   ∂θt/∂ωt ]   [ ∂g3/∂vt   ∂g3/∂ωt ]

(see eq. 7.11 in Thrun)

Note: Rt = Vt Mt Vt^T = Rt(ut, μt−1, Δt)
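Putting Mt and Vt together gives Rt. The sketch below is an illustrative implementation, assuming the velocity motion model of this lecture; the analytic Vt entries follow eq. 7.11 in Thrun, and the alpha values are placeholders you would tune for your robot (it assumes ω ≠ 0).

```python
# Sketch of Rt = Vt Mt Vt^T for the velocity motion model.
# u = (v, omega), mu_prev = (x, y, theta), dt = Δt, alphas = (a1, a2, a3, a4)
# with the slide's (non-textbook) coefficient numbering. Assumes omega != 0.
import math
import numpy as np

def motion_noise_Rt(u, mu_prev, dt, alphas):
    v, w = u
    theta = mu_prev[2]
    a1, a2, a3, a4 = alphas
    # Mt: noise in command space (variances of v-hat and omega-hat)
    M = np.diag([a3 * v**2 + a4 * w**2, a2 * v**2 + a1 * w**2])
    # Vt: 3x2 Jacobian of g wrt (v, omega), per Thrun eq. 7.11
    s, c = math.sin(theta), math.cos(theta)
    s_, c_ = math.sin(theta + w * dt), math.cos(theta + w * dt)
    V = np.array([
        [(-s + s_) / w,  v * (s - s_) / w**2 + v * c_ * dt / w],
        [( c - c_) / w, -v * (c - c_) / w**2 + v * s_ * dt / w],
        [0.0, dt],
    ])
    return V @ M @ V.T  # Rt, a symmetric 3x3 covariance in state space
```

Note the result depends on ut (through M and V), on μt−1 (through θ), and on Δt — exactly the dependence Rt(ut, μt−1, Δt) noted above.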
[x't]   [xt−1]   [ −(vt/ωt) sin θt−1 + (vt/ωt) sin(θt−1 + ωt Δt) ]
[y't] = [yt−1] + [  (vt/ωt) cos θt−1 − (vt/ωt) cos(θt−1 + ωt Δt) ] + N(0, Rt)
[θ't]   [θt−1]   [  ωt Δt                                        ]
What does Rt depend on?
• Are these robots likely to generate the same amount of absolute error in x and y (after the same Δt)?

Rt = Vt Mt Vt^T = Rt(ut, μt−1, Δt)
Assignment 1 tips (commands)
Motion commands:
provided with ut = [vt, ωt]^T

• Note: Curvature refers to arc of overall robot motion (not the Ackerman steering arc)

Recall: ω = v / r, i.e. v = ω r
Assignment 1 tips (commands)

ut = [vt, ωt]^T        ω = v / r, i.e. v = ω r

If v = 0 (ω = 0):
Don’t modify μ, Σ:
μ̄t = μt−1
Σ̄t = Σt−1
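The tip above (leave μ and Σ untouched when the robot is commanded to stay still) can be sketched as a small guard around the prediction step; the function and parameter names here are illustrative, not from the assignment.

```python
# Guard for the EKF prediction step: if the command is (v=0, omega=0),
# keep mu and Sigma unchanged instead of running the motion model.
def predict_or_hold(mu, Sigma, u, predict_fn, eps=1e-12):
    v, w = u
    if abs(v) < eps and abs(w) < eps:
        return mu, Sigma  # mu_bar = mu_{t-1}, Sigma_bar = Sigma_{t-1}
    return predict_fn(mu, Sigma, u)
```

This also sidesteps the v/ω division in the velocity motion model for the stationary case.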
Assignment 1 tips (state)

Keep the heading component of μ in the range θ ∈ [π/2, 5π/2] (≈ [1.5708, 7.8540]):

If μt,θ > 5π/2  →  μt,θ = μt,θ − 2π
If μt,θ < π/2   →  μt,θ = μt,θ + 2π
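The wrapping rule above can be sketched as a small helper (headings in radians; the function name is an illustrative choice):

```python
# Keep a heading angle in the interval [pi/2, 5*pi/2] by adding or
# subtracting 2*pi, as in the assignment tip above.
import math

def wrap_theta(theta):
    while theta > 5 * math.pi / 2:
        theta -= 2 * math.pi
    while theta < math.pi / 2:
        theta += 2 * math.pi
    return theta
```

For example, a heading of 3π wraps down to π, and a heading of 0 wraps up to 2π.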
