
Estimation of Infinite-Dimensional Systems

Amir Issaei

Spring 2014

Abstract

The purpose of this research paper is to discuss the observation systems in an infinite-

dimensional space. Since finite-dimensional approximations are used to design an

estimator for infinite-dimensional systems, it is necessary to discuss the conditions

under which the finite-dimensional estimator converges to the one that estimates the

infinite-dimensional system. On the other hand, the measurements of the states of an

infinite-dimensional system are often taken discretely in time. The convergence of

discrete-time observers is presented for an infinite-dimensional system. The results

are applied to a one-dimensional parabolic partial differential equation.

Acknowledgement
I would like to thank my supervisor, Prof. Kirsten Morris, who led me to become

more independent, and through which I found my passion. I must express my grat-

itude to Isabel, my fiancee, for her continued support and encouragement. I was

amazed by the patience of my father, mother, brother and sister who experienced all

of the ups and downs of my post-graduate studies.

Contents
1 Introduction 5

2 Optimal control problem with linear quadratic cost 6

3 Kalman filter 11

4 Approximation theory for infinite-dimensional estimator 14

5 Infinite-dimensional sampled-data observer 20

6 Numerical application 23

6.1 One-dimensional heat equation . . . . . . . . . . . . . . . . . . . . . 24

6.2 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

6.3 Continuous-time finite difference method . . . . . . . . . . . . . . . 27

6.4 Continuous-time Fourier method . . . . . . . . . . . . . . . . . . . . 33

6.5 Transient response of the sample-data observer . . . . . . . . . . . . 37

6.6 Discrete-time observer . . . . . . . . . . . . . . . . . . . . . . . . . 40

7 Conclusion 45

Appendix A 46

List of Figures
1 Observer gain for different number of grid points. . . . . . . . . . . . 32

2 Snapshot of true state and estimated state at t = 50s. . . . . . . . . . 32

3 b1 (x) and b2 (x) along with their Fourier approximations, N = 80. . . 35

4 c(x) along with its Fourier approximation, N = 80. . . . . . . . . . . 35

5 Observer gain for different number of eigenfunctions. . . . . . . . . . 36

6 True solution and observer for different number of eigenfunctions. . . 36

7 Transient response for different sampling periods (Finite difference method). . . . . . . . 38

8 Transient response for different sampling periods (Fourier method). . 39

9 Finite difference, discrete-time Kalman filter. . . . . . . . . . . . . . 43

10 Finite difference, transient response at x = 0.25m, N = 32. . . . . . 43

11 Fourier approximation, discrete-time Kalman filter. . . . . . . . . . . 44

12 Fourier method, transient response at x = 0.25m, N = 25 modes. . . 44

1 Introduction
Some of the most important practical problems in control theory have a statistical and probabilistic nature. Such problems include: 1) prediction of random functions (signals); 2) separation of noise and signal; and 3) detection of signals of known form in the presence of random noise. To find solutions to such problems, knowledge of statistics, mathematics and probability is needed.

Consider a dynamical system where the measurements and the states of the system are contaminated with noise. Different methods have been proposed to estimate the states of such a system (see [1], [3], [4] and [15]). Kalman approached state estimation from the point of view of conditional distributions and expectations. He treated linear systems as systems of coupled first-order difference (or differential) equations. Since Kalman considered the state-transition formulation for a dynamical system, a single derivation covered a large variety of problems, including non-stationary statistics, non-linear problems and non-Gaussian processes. The new formulation of the problem brought it into contact with the growing new theory of control systems based on the state point of view. The Kalman filter is easy to implement in practice: it is a recursive algorithm, which does not require a large amount of memory on digital machines [14].

The structure of this research paper is as follows: first, an optimal control problem with linear quadratic cost is discussed. Then the results from that problem are extended to the design of Kalman filters for infinite-dimensional systems. The conditions under which the sampled-data finite-dimensional estimator converges are discussed, and the results are applied to a one-dimensional heat equation.

2 Optimal control problem with linear quadratic cost

The linear quadratic control criterion for placement of actuators is a popular choice among researchers. In many applications, such as vibration control and diffusion modeling, the mathematical formulation of the control system is given by a partial differential equation. The state space for such systems is infinite-dimensional. In this section the control system is formulated; then the existence and uniqueness of the optimal input function, uoptimal(·), on an infinite-dimensional space and an infinite time interval is discussed. The material in this section can be found in [2].

Definition 1. A strongly continuous (C0-)semigroup S(t) on a Hilbert space Z is a family S(t) ∈ L(Z; Z), where L(Z; Z) indicates the bounded linear operators from Z to Z, such that

(i) S(0) = I,

(ii) S(t + s) = S(t)S(s), ∀t, s ≥ 0,

(iii) lim_{t→0+} S(t)z = z, for all z ∈ Z.

Definition 2. The infinitesimal generator A of a C0-semigroup on Z is defined by

Az = lim_{t→0+} (1/t)(S(t)z − z).   (1)

The domain of A, D(A), is defined as the set of elements z ∈ Z for which the limit exists.

Consider a dynamical system described by

ż(t) = Az(t) + Bu(t), t ≥ 0, z(0) = z0 , (2)

where A, with domain D(A), generates a C0-semigroup S(t) on a Hilbert space Z, u(t) ∈ L2([0, ∞); U) is the input function, U is a finite-dimensional space (for instance, R^p, where p is a positive integer) and B ∈ L(U; Z), where L(U; Z) indicates the set of bounded linear operators from U to Z. The trajectories of (2) can be written as

z(t) = S(t)z0 + ∫_0^t S(t − s)Bu(s) ds,   (3)

using semigroup theory [2]. In the following example, a partial differential equation

(PDE) is formulated as an abstract differential equation (2). This example can be

found in [2].

Consider a heat transfer problem

∂z/∂t = ∂²z/∂x² + u(x, t),   z(x, 0) = z0(x),
∂z/∂x(0, t) = ∂z/∂x(1, t) = 0,   (4)

where z(x, t) represents the temperature of a bar at time t and position x, and

u(x, t) is the control function. Choose Z = L2 (0, 1) as the state space and

z(., t) = {z(x, t), 0 ≤ x ≤ 1} as the state of the system. Define

Ah = d²h/dx²,
D(A) = {h ∈ L2(0, 1) | h, dh/dx are absolutely continuous, d²h/dx² ∈ L2(0, 1) and dh/dx(0) = dh/dx(1) = 0},   (5)
B = I,

and regard the input trajectory, u(., t), as the input and the function z0 (.) ∈ L2 (0, 1)

as the initial state. The PDE (4) can be written as

ż(t) = Az(t) + Bu(t), t ≥ 0, z(0) = z0 . (6)

The trajectories of (4) can also be written as in (3). It is shown in [2, page 107] that

the solution to (2) is

z(x, t) = ∫_0^1 g(t, x, y)z0(y) dy + ∫_0^t ∫_0^1 g(t − s, x, y)u(y, s) dy ds,   (7)

where z0 satisfies the boundary conditions and u(x, t) ∈ L2 ([0, τ ]; L2 (0, 1)) is the

input function. The kernel, g(t, x, y), is the Green's function


g(t, x, y) = 1 + 2 Σ_{n=1}^{∞} e^{−n²π²t} cos(nπx) cos(nπy).   (8)

To interpret (7) abstractly on L2 (0, 1), consider the following bounded operator on

L2 (0, 1)

z(t) = S(t)z0 , t ≥ 0, (9)

where for each t ≥ 0, S(t) ∈ L(L2 (0, 1)) is defined by

S(t)z0(x) = ∫_0^1 g(t, x, y)z0(y) dy.   (10)

Then, the abstract formulation of the solution (7) on L2 (0, 1) is

z(t) = S(t)z0 + ∫_0^t S(t − s)u(s) ds.   (11)

The partial differential equation (4) was thus formulated as an abstract differential equation (2) on the infinite-dimensional state space Z = L2(0, 1), where A is the unbounded operator on Z defined by (5), B is the identity on Z, z0 and u(·, t) are elements of Z, and the solution is given by (11). The operator S(t) plays the role of e^{At} in finite dimensions.
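The action of S(t) in (10) can be checked numerically by truncating the Green's function series (8). The sketch below is illustrative only: the grid size, the number of modes and the test profile are arbitrary choices, not values from the paper. It verifies that, under the Neumann boundary conditions of (4), a uniform temperature profile is an equilibrium.

```python
import numpy as np

def heat_semigroup(z0, t, n_modes=50, n_grid=200):
    """Apply S(t) of (10) to a profile z0 by truncating the Green's
    function series (8) after n_modes terms; the integral over y is
    approximated by a midpoint rule on n_grid points."""
    x = (np.arange(n_grid) + 0.5) / n_grid          # midpoints of [0, 1]
    g = np.ones((n_grid, n_grid))                   # constant term of (8)
    for n in range(1, n_modes + 1):
        g += 2.0 * np.exp(-(n * np.pi) ** 2 * t) * np.outer(
            np.cos(n * np.pi * x), np.cos(n * np.pi * x))
    return g @ z0(x) / n_grid                       # midpoint quadrature in y

# Under Neumann boundary conditions a uniform temperature never changes:
zt = heat_semigroup(lambda y: np.ones_like(y), t=0.1)
```

Applying the same routine to cos(πy) reproduces the decay e^{−π²t} cos(πx), mirroring the eigenfunction expansion behind (8).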

Theorem 2.1. [12, page 104] Every bounded, self-adjoint, non-negative operator T

has a unique bounded, self-adjoint, non-negative square root G such that G2 = T .

Furthermore, G commutes with any linear operator that commutes with T.

Consider the dynamical system (2) with output

y(t) = Q^{1/2} z(t),   (12)

where Q ∈ L(Z; Z) is a self-adjoint, non-negative operator, Q^{1/2} ∈ L(Z; Y), and

Y is a Hilbert space. In linear quadratic optimal control, a cost function J(z0 ; u) is

associated with the trajectories in (3)

J(z0; u) = ∫_0^∞ ⟨Q^{1/2} z(s), Q^{1/2} z(s)⟩ + ⟨u(s), Ru(s)⟩ ds,   (13)

where z(s) is given by (3), u ∈ L2([0, ∞); U) and R ∈ L(U; U) is a self-adjoint, coercive operator, i.e. R ≥ εI for some ε > 0.

The control problem is, given the initial condition z0, to find an optimal control uoptimal ∈ L2([0, ∞); U) that minimizes the cost function J over all trajectories of the control system (2). Since, given any initial condition z0, the trajectories of the system are completely determined by the input, the minimization problem is to find the u that minimizes the cost function J.

Definition 3. The linear system (2) with the cost function (13) is optimizable if, for

every z0 ∈ Z, there exists an input function u ∈ L2 ([0, ∞); U ) such that the cost

function (13) is finite.

Definition 4. A C0-semigroup, S(t), on Z is exponentially stable if there exist positive constants M and α such that

||S(t)|| ≤ M e^{−αt}

where ||.|| is the operator norm on L(Z; Z).

Definition 5. The system (A, B) is stabilizable if there exists K ∈ L(Z; U ) such that

A − BK generates an exponentially stable semigroup.

Definition 6. The pair (A, Q^{1/2}) is detectable if there exists F ∈ L(Y; Z) such that A − F Q^{1/2} generates an exponentially stable semigroup.

Theorem 2.2. [2, Thm 6.2.4, 6.2.7] If the system (2) is optimizable and (A, Q^{1/2}) is detectable, then the cost function (13) has a minimum for every z0 ∈ Z. Furthermore,

there exists a self-adjoint non-negative operator Π ∈ L(Z) such that

min_{u ∈ L2([0,∞); U)} J(z0; u) = ⟨z0, Πz0⟩.   (14)

The operator Π is the unique non-negative solution to the Riccati operator equation

⟨Az1, Πz2⟩ + ⟨Πz1, Az2⟩ + ⟨Q^{1/2} z1, Q^{1/2} z2⟩ − ⟨B*Πz1, R⁻¹B*Πz2⟩ = 0,  z1, z2 ∈ D(A).   (15)

Let K = R⁻¹B*Π. The optimal control is uoptimal(t) = −Kz(t), and A − BK generates an exponentially stable semigroup.
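In finite dimensions, Theorem 2.2 reduces to the familiar matrix LQR result, which can be checked with a standard Riccati solver. The two-state system below is an arbitrary illustration, not a system from the paper; scipy's solve_continuous_are returns the stabilizing solution Π of the matrix version of (15), from which K = R⁻¹B*Π.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Arbitrary unstable 2-state system (an illustration, not from the paper).
A = np.array([[0.0, 1.0],
              [3.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)              # so Q^(1/2) = I: state weighting in (13)
R = np.array([[1.0]])      # coercive input weighting

Pi = solve_continuous_are(A, B, Q, R)   # stabilizing solution of (15)
K = np.linalg.inv(R) @ B.T @ Pi         # optimal feedback, u = -K z
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

In finite dimensions, "A − BK generates an exponentially stable semigroup" reduces to A − BK being Hurwitz, i.e. every eigenvalue of A − BK has negative real part.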

3 Kalman filter
Consider the infinite-dimensional dynamical system

ż(t) = Az(t) + Bu(t) + w(t),  z(0) = z0,
y(t) = Cz(t) + v(t),   (16)

where A, B and u(·) are defined in (2). Since it is not possible to measure all the states of the system, the output space Y is a finite-dimensional space (for instance, R^q, where q is a positive integer). The operator C is a bounded linear operator that maps Z to Y, i.e. C ∈ L(Z, Y). It is assumed that w(t) and v(t) are uncorrelated white Gaussian noises with mean zero and covariances M = M* ≥ 0 and V = V* > 0, respectively. The goal is to estimate the states of the system (16) in an optimal sense.

Definition 7. Let X be a continuous random variable. The expectation of X is given

by
E[X] = ∫_{−∞}^{∞} x f(x) dx,   (17)

where f is the probability density function of X.

Definition 8. Consider the linear system (16) with state space Z, input space U, and

output space Rq . The observer

ẑ˙(t) = Aẑ(t) + Bu(t) + L[y(t) − C ẑ(t)],  ẑ(0) = ẑ0,
ŷ(t) = C ẑ(t),   (18)

that minimizes the estimation cost

Je = lim_{t→∞} E[ ||z(t) − ẑ(t)||² ],   (19)

is called “minimum-variance estimator” or “Kalman filter” and L ∈ L(Rq ; Z) is

called “observer gain” or “Kalman gain”.

Problem 1. Minimum-Variance estimator design (Kalman filter design)

Given the system (16), find an L such that the observer (18) minimizes (19).

It is shown in [5] and [9] that the Riccati equation associated with the estimation problem above is

AΠ̂ + Π̂A* − Π̂C*V⁻¹CΠ̂ + M = 0,   (20)

and the observer gain is

L = Π̂C*V⁻¹.   (21)
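For a finite-dimensional (matrix) surrogate, (20) and (21) can be solved with a standard Riccati routine. The system matrices below are arbitrary illustrative values, not from the paper; scipy's solver handles the regulator form of the equation, so the filter equation is obtained by passing the transposed data.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_gain(A, C, M, V):
    """Solve the filter Riccati equation (20) for matrices and return
    (Pi_hat, L) as in (21).  solve_continuous_are solves the regulator
    form, so the filter equation is solved with the transposed data."""
    Pi_hat = solve_continuous_are(A.T, C.T, M, V)
    L = Pi_hat @ C.T @ np.linalg.inv(V)
    return Pi_hat, L

# Arbitrary 2-state illustration (not a system from the paper).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
M = np.eye(2)              # state-noise covariance, M = M* >= 0
V = np.array([[0.1]])      # measurement-noise covariance, V = V* > 0

Pi_hat, L = kalman_gain(A, C, M, V)
```

The returned solution satisfies (20), and A − LC is Hurwitz, the finite-dimensional counterpart of exponentially stable error dynamics.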

The similarity between the linear stochastic filtering and the deterministic linear

regulator problems was mentioned by Kalman and Bucy in [7] and [8]. The duality is

usually described as a correspondence between the Kalman filter gain, the regulator

feedback gain and the associated Riccati equations. The differences between the

results of the two problems are characterized by a reversal of time and transposition

(adjoint) of system matrices (operators) as explained in [13]. The duality between the

two problems can be summarized by noting that the gain and the Riccati equation of

the filter for the system

ż(t) = A*z(t) + B*u(t) + w(t),
y(t) = C*z(t) + v(t),   (22)

are the same as those for the linear regulator for the system

ż(t) = Az(t) + Cu(t), z(0) = z0 , (23)

Jr = ∫_0^∞ z*(t)Mz(t) + u*(t)Vu(t) dt.   (24)

Note that A, B and C are matrices with appropriate dimensions, * denotes the conjugate transpose, u(t) is an input, and w(t) and v(t) are uncorrelated white Gaussian noises with mean zero and covariances

M = M* ≥ 0,
V = V* > 0,   (25)

respectively [13]. Substituting A*, C*, V and M^{1/2} for A, B, R and Q^{1/2} in (15) results in (20).

Due to duality, the theorems regarding the existence and uniqueness of the optimal estimator on an infinite-dimensional space follow from the corresponding results for the optimal control problem. To design an estimator numerically, the infinite-dimensional space is usually projected onto a finite-dimensional space. The convergence criteria for finite-dimensional approximations of controllers are summarized in [11]. The results can be extended to derive conditions under which the finite-dimensional approximations of the estimator also converge.

4 Approximation theory for infinite-dimensional estimator
For most practical examples, a closed-form solution of the partial differential equation or of the transfer function is not available, and an approximation must be used. The approximation is generally calculated using one of the many standard approaches developed for solving partial differential equations, such as the finite difference, finite element and Fourier methods. The resulting system of ordinary differential equations is used in the controller and observer design. The advantage of this approach is that the wide body of synthesis methods available for finite-dimensional systems can be used. Three main concerns arise. First, the controlled (observer) system may not perform as predicted, because a finite-dimensional design is applied to an infinite-dimensional system. Second, the sequence of controller gains (observer gains) may not converge. Third, the original infinite-dimensional system may not be stabilizable even if the approximated finite-dimensional systems are stabilizable (the finite-dimensional approximation of the estimator may not converge to the infinite-dimensional estimator). The conditions under which the sequence of controller gains (observer gains) converges should be discussed. It is also necessary to investigate the conditions under which the finite-dimensional approximation stabilizes the original infinite-dimensional system and the finite-dimensional estimator converges to the one that estimates the infinite-dimensional system.

Suppose the approximation lies in some finite-dimensional subspace Zn of Z with an orthogonal projection Pn : Z → Zn, where for each z ∈ Z, lim_{n→∞} ||Pn z − z|| = 0. It is assumed that the subspace Zn is equipped with the norm of the original space Z. Define Bn = Pn B, Cn = C|Zn (the restriction of C to Zn), and define An ∈ L(Zn, Zn), Qn ∈ L(Zn, Zn) and Mn ∈ L(Zn, Zn) using finite difference, finite element or Fourier methods. A sequence of finite-dimensional approximations of the linear quadratic optimal control problem is

żn(t) = An zn(t) + Bn u(t),  zn(0) = Pn z0,  n ≥ 1.   (26)

The linear quadratic cost associated with (26) is

J(z0; u) = ∫_0^∞ ⟨Qn^{1/2} zn(s), Qn^{1/2} zn(s)⟩ + ⟨u(s), Ru(s)⟩ ds,   (27)

and the Riccati equation associated with (26) and (27) is

An*Πn + Πn An − Πn Bn R⁻¹Bn*Πn + Qn = 0.   (28)

A sequence of finite-dimensional approximations can also be written for the observer system (18):

ẑ˙n(t) = An ẑn(t) + Bn u(t) + Ln[y(t) − Cn ẑn(t)],  ẑn(0) = Pn ẑ0,  n ≥ 1,
ŷ(t) = Cn ẑn(t),   (29)

where Ln = Π̂n Cn* V⁻¹ and Π̂n satisfies

An Π̂n + Π̂n An* − Π̂n Cn* V⁻¹ Cn Π̂n + Mn = 0.   (30)

The conditions under which the state-feedback approximation of an infinite-dimensional system converges are discussed in [2] and [11]. The following results are from [11].

Definition 9. The control systems (An, Bn) are uniformly stabilizable if there exists a sequence of feedback operators Kn ∈ L(Zn; U) with ||Kn|| ≤ M1 for some constant M1 such that An − Bn Kn generates SKn(t) and ||SKn(t)|| ≤ M2 e^{−α1 t}, M2 ≥ 1, α1 > 0.

Definition 10. The pairs (An, Qn^{1/2}) are uniformly detectable if there exists a sequence of feedback operators Fn ∈ L(Y; Zn) with ||Fn|| ≤ M3 for some constant M3 such that An − Fn Qn^{1/2} generates SFn(t) and ||SFn(t)|| ≤ M4 e^{−α2 t}, M4 ≥ 1, α2 > 0.

Theorem 4.1. [11, Thm. 4.3] Let Sn(t) indicate the semigroup generated by An and S(t) the semigroup generated by A. Assume lim_{n→∞} sup_{t∈[t1,t2]} ||Sn(t)Pn z − S(t)z|| = 0 and lim_{n→∞} sup_{t∈[t1,t2]} ||Sn*(t)Pn z − S*(t)z|| = 0 for all z ∈ Z and all intervals of time [t1, t2]. In addition, assume

||Bn u − Bu|| → 0 for all u ∈ U,
||Bn* Pn z − B* z|| → 0 for all z ∈ Z,   (31)
||Qn^{1/2} Pn z − Q^{1/2} z|| → 0 for all z ∈ Z.

If (An, Bn) is uniformly stabilizable and (An, Qn^{1/2}) is uniformly detectable, then for each n the Riccati equation (28) has a unique non-negative solution Πn with sup_n ||Πn|| < ∞. There exist constants M5 ≥ 1 and α5 > 0, independent of n, such that the semigroups SnK(t) generated by An − Bn Kn satisfy

||SnK(t)|| ≤ M5 e^{−α5 t},   (32)

where Kn = R⁻¹Bn*Πn. For sufficiently large n, the semigroups SKn(t) generated by A − BKn are uniformly exponentially stable; in other words, there exist constants M6 ≥ 1 and α6 > 0 such that

||SKn(t)|| ≤ M6 e^{−α6 t}.   (33)

Furthermore,

lim_{n→+∞} ||Πn Pn z − Πz|| = 0,   (34)

where Π is the solution to the Riccati equation

A*Π + ΠA − ΠBR⁻¹B*Π + Q = 0.   (35)

Moreover,

lim_{n→+∞} ||Kn Pn z − Kz|| = 0,   (36)

where K = R⁻¹B*Π. The cost associated with the feedback −Kn z(t) converges to the optimal cost:

lim_{n→∞} J(z0; −Kn z(t)) = ⟨Πz0, z0⟩.   (37)

The following corollary gives a result for the minimum-variance estimation problem that uses the duality between the linear regulator and the minimum-variance estimation problems.

Corollary 4.2. Let Sn(t) and An be defined as in Theorem 4.1 and satisfy the assumptions of that theorem. Assume

||Cn* y − C* y|| → 0 for all y ∈ Y,
||Cn Pn z − Cz|| → 0 for all z ∈ Z,   (38)
||Mn^{1/2} Pn z − M^{1/2} z|| → 0 for all z ∈ Z.

If (An*, Cn*) is uniformly stabilizable and (An*, Mn^{1/2}) is uniformly detectable, then for each n the Riccati equation (30) has a unique non-negative solution Π̂n, and

lim_{n→+∞} ||Π̂n Pn z − Π̂z|| = 0,   (39)

where Π̂ is the solution to the Riccati equation (20). Define Ln = Π̂n Cn* V⁻¹ and L = Π̂C* V⁻¹. Then the semigroups generated by An − Ln Cn and A − Ln C are uniformly exponentially stable, and

lim_{n→+∞} ||Ln y − Ly|| = 0.   (40)

Proof. Consider the dynamical system

ż(t) = A*z(t) + C*u(t),  t ≥ 0,  z(0) = z0,   (41)

with the quadratic cost

J(z0; u) = ∫_0^∞ ⟨M^{1/2} z(s), M^{1/2} z(s)⟩ + ⟨u(s), V u(s)⟩ ds.   (42)

A sequence of finite-dimensional approximations of (41) and (42) is

żn(t) = An* zn(t) + Cn* u(t),  zn(0) = Pn z0,  n ≥ 1,   (43)

and

J(z0; u) = ∫_0^∞ ⟨Mn^{1/2} zn(s), Mn^{1/2} zn(s)⟩ + ⟨u(s), V u(s)⟩ ds,   (44)

respectively. The Riccati equation associated with (41) and (42) is (20), and a sequence of finite-dimensional approximations of (20) is (30). The results of Theorem 4.1 apply to this corollary. Thus, lim_{n→+∞} ||Π̂n Pn z − Π̂z|| = 0, and the semigroups generated by An* − Cn* Ln* and A* − C* Ln* are uniformly exponentially stable. It is known that (A*, C*) is uniformly stabilizable if and only if (A, C) is uniformly detectable. Similarly, (An*, Cn*) is uniformly stabilizable if and only if (An, Cn) is uniformly detectable [11]. Hence, the semigroups generated by An − Ln Cn and A − Ln C are uniformly exponentially stable. In addition, (38) and (39) imply

lim_{n→+∞} ||Ln y − Ly|| = 0.   (45)
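Corollary 4.2 can be illustrated numerically with finite difference approximations of the kind used later in Section 6. In the sketch below, the sensor interval, the covariances Mn = I and V, and the grid sizes are placeholder choices, not values from the paper: at every resolution the computed gain Ln makes An − Ln Cn Hurwitz, the matrix counterpart of the uniform exponential stability asserted in the corollary.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def fd_observer_gain(n, alpha=1.0, v=0.1):
    """Observer gain L_n for an n-point finite-difference approximation
    of the Dirichlet heat equation on [0, 1].  The sensor averages the
    state over [0.4, 0.6] and M_n = I; both are illustrative assumptions."""
    h = 1.0 / n
    x = h * np.arange(1, n)                          # interior grid points
    A = (alpha / h**2) * (np.diag(-2.0 * np.ones(n - 1))
                          + np.diag(np.ones(n - 2), 1)
                          + np.diag(np.ones(n - 2), -1))
    c = np.where((x >= 0.4) & (x <= 0.6), 1.0 / 0.2, 0.0)
    C = (h * c)[None, :]                             # C_n z approximates the output integral
    Pi = solve_continuous_are(A.T, C.T, np.eye(n - 1), np.array([[v]]))
    return A, C, Pi @ C.T / v

for n in (16, 32, 64):
    A, C, L = fd_observer_gain(n)
    # A_n - L_n C_n must be Hurwitz at every resolution:
    assert np.linalg.eigvals(A - L @ C).real.max() < 0
```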

5 Infinite-dimensional sampled-data observer
Since measurements are taken discretely in time, the criteria under which the sampled-data observer estimates the state of the original infinite-dimensional system should be investigated. The stability of infinite-dimensional sampled-data feedback control systems is investigated in [10]. The results can be extended to design sampled-data infinite-dimensional observers that estimate the state of the original infinite-dimensional system. The material of this section can be found in [10].

Consider the dynamical system

ż(t) = Az(t) + Bu(t), z(0) = z0 , (46)

where A and B are defined in (2). Suppose that the feedback control u(t) = Kz(t), where K ∈ L(Z; U), is an exponentially stabilizing state feedback control for (46), in the sense that A + BK generates an exponentially stable, strongly continuous semigroup on Z. A natural implementation of this continuous-time control u(t) = Kz(t) is the sample-and-hold method, i.e. sample the continuously varying state z(t) and hold its value at a constant level for a specified minimum period of time:

u(t) = Kz(kτ),  t ∈ [kτ, (k + 1)τ),   (47)

where k is an integer, τ > 0 is the sampling period, and integer multiples of τ are the sampling times. The control (47) is called the sampled-data feedback control, and the overall system, (46) and (47), is called the sampled-data feedback system. It is expected that for all sufficiently small sampling periods, (47) is a stabilizing control for (46) in the sense that there exist N1 ≥ 1 and v1 > 0 such that

||z(t)|| ≤ N1 e^{−v1 t} ||z0||.   (48)

The following theorem gives conditions under which the sampled-data feedback controller stabilizes the system (46).

Theorem 5.1. [10, Thm. 3.1] Assume that A generates a strongly continuous semigroup S(t) on Z, and B is a bounded operator which maps U to Z. In addition, assume K is a compact operator and the semigroup generated by A + BK is exponentially stable. Then there exists τ* > 0 such that for every τ ∈ (0, τ*) there exist N2 ≥ 1 and v2 > 0 such that all solutions of the sampled-data feedback system (46), (47) satisfy

||z(t)|| ≤ N2 e^{−v2 t} ||z0||   (49)

for all z0 ∈ Z and all t ≥ 0.

Consider the dynamical system (46) with the discrete measurement,

ż(t) = Az(t) + Bu(t),  z(0) = z0,
y(t) = Cz(kT),  t ∈ [kT, (k + 1)T),   (50)

and an observer

ẑ˙(t) = Aẑ(t) + Bu(t) + L[y(t) − ŷ(t)],  ẑ(0) = ẑ0,
ŷ(t) = C ẑ(kT),  t ∈ [kT, (k + 1)T),   (51)

where k is an integer, T > 0 is the sampling period and integer multiples of T are the

sampling times. The observer (51) is called the sampled-data observer.

Definition 11. The estimation error is defined as

e(t) = z(t) − ẑ(t), (52)

where z(t) and ẑ(t) are defined in (50) and (51).

The goal is to find conditions under which the sampled-data estimator estimates the

state of the original infinite-dimensional system (50). The following corollary gives

a result for the sampled-data observer.

Corollary 5.2. Let A and S(t) be defined as in Theorem 5.1 and let C be a bounded operator. In addition, assume L is a compact operator and the semigroup generated by A − LC is exponentially stable. Then there exists τ* > 0 such that for every T ∈ (0, τ*) there exist N3 ≥ 1 and v3 > 0 such that all solutions to

ė(t) = Ae(t) − L(y(t) − ŷ(t)),
y(t) = Cz(kT),  t ∈ [kT, (k + 1)T),   (53)
ŷ(t) = C ẑ(kT),  t ∈ [kT, (k + 1)T),

satisfy

||e(t)|| ≤ N3 e^{−v3 t} ||e(0)||   (54)

for all e(0) = z0 − ẑ0 and all t ≥ 0.

Proof. Using (50), (51) and (52),

ė(t) = ż(t) − ẑ˙(t)
     = Az(t) + Bu(t) − [Aẑ(t) + Bu(t) + L(y(t) − ŷ(t))]
     = A(z(t) − ẑ(t)) − L(y(t) − ŷ(t))   (55)
     = Ae(t) − LC(z(kT) − ẑ(kT))
     = Ae(t) − LCe(kT).

By the assumptions in the corollary statement, A − LC generates an exponentially stable semigroup, L is a compact operator and C is bounded. Therefore, Theorem 5.1 applies to (55): there exists τ* > 0 such that for every T ∈ (0, τ*) there exist N3 ≥ 1 and v3 > 0 such that

||e(t)|| ≤ N3 e^{−v3 t} ||e(0)||,   (56)

for all e(0) = z0 − ẑ0 and all t ≥ 0. 
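The error dynamics (53) can be simulated directly for a finite-dimensional (matrix) surrogate; everything below (the system, the gain, the sampling period and the Euler step) is an illustrative assumption, not data from the paper. Between samples the innovation is held constant, which is exactly the sample-and-hold mechanism of (51).

```python
import numpy as np

# Matrix surrogate of the sampled-data error dynamics (53):
#   e_dot(t) = A e(t) - L C e(kT)   for t in [kT, (k+1)T).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0],
              [1.0]])          # chosen so that A - L C is Hurwitz

T, dt = 0.05, 1e-3             # sampling period and Euler step
hold = int(round(T / dt))      # Euler steps per sampling interval
e = np.array([[1.0], [0.0]])   # initial estimation error e(0)
e_sample = e.copy()
norms = [float(np.linalg.norm(e))]
for i in range(4000):          # simulate 4 seconds
    if i % hold == 0:
        e_sample = e.copy()    # sample-and-hold of the innovation
    e = e + dt * (A @ e - L @ (C @ e_sample))
    norms.append(float(np.linalg.norm(e)))
```

For this gain, A − LC has eigenvalues −1.25 ± 1.56i; with T small, the sampled-data error inherits the exponential decay of Corollary 5.2, so ||e(t)|| is reduced by orders of magnitude over the run.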

6 Numerical application
In this section a Kalman filter is designed for the one-dimensional heat equation with

Dirichlet boundary conditions. Continuous-time finite difference, continuous-time

Fourier, discrete-time finite difference and discrete-time Fourier methods are used to

investigate the convergence of the Kalman filter and its gain.

Finite difference and Fourier methods are used to solve partial differential equations. Finite difference methods are easy to code, cheap to implement on digital machines, and easy to formulate for simple geometries. It is not easy to generalize finite difference methods to complex geometries. In addition, the solution is only defined pointwise, so reconstruction at arbitrary locations is not uniquely defined. Finite difference methods are not Galerkin methods, so convergence may be more difficult to prove. The Fourier method satisfies the orthogonality conditions; hence, convergence is easy to prove. The method can be generalized to complex geometries and usually yields high-order accuracy. Moreover, boundary conditions are easy to implement with the Fourier method, although its implementation on computers is usually slower than that of finite difference methods. To choose between finite difference and Fourier methods, one should consider factors such as the required accuracy and the geometry of the domain.

6.1 One-dimensional heat equation

Consider a one-dimensional rod of length L that can be heated along its length according to

∂z(x, t)/∂t = α ∂²z(x, t)/∂x² + B1 u(t) + B2 w2(t),
z(0, t) = 0,
z(L, t) = 0,   (57)
z(x, 0) = z0(x),

where z(x, t) represents the temperature of the rod at time t and position x, z0(x) is the initial temperature profile, and u(t) ∈ L2([0, ∞); R) is the addition of heat along the bar. Choose Z = L2(0, L) as the state space and z(·, t) = {z(x, t), 0 ≤ x ≤ L} as the state of the system. The operator B1 ∈ L(R; Z) describes the input of the system, B2 ∈ L(R; Z) describes the noise on the state of the system, w2(t) : [0, ∞) → R is a scalar white Gaussian noise with mean zero and variance Q, and α is the thermal diffusivity.
Define Az = α ∂²z/∂x². Since not all elements of L2(0, L) are differentiable, and the boundary conditions need to be considered, define the domain of the operator A as

D(A) = {z ∈ H2(0, L) ; z(0) = z(L) = 0},   (58)

where H2(0, L) indicates the Sobolev space of functions with weak second derivatives. The operator B1 ∈ L(R; Z) is defined by B1 u(t) = b1(x)u(t) for some b1(x) ∈ L2(0, L) [11]. Similarly, the operator B2 ∈ L(R; Z) is defined by B2 w2(t) = b2(x)w2(t) for some b2(x) ∈ L2(0, L). The dynamical system (57) can be written as
ż(t) = Az(t) + B1 u(t) + B2 w2 (t),

z(0, t) = 0,
(59)
z(L, t) = 0,

z(x, 0) = z0 (x).

The operator A generates a strongly continuous semigroup S(t) on Z = L2 (0, L).

The state z, the temperature of the rod, evolves on the infinite-dimensional space L2(0, L) [2].

Define the output of the dynamical system (59) as

y(t) = ∫_0^L c(x)z(x, t) dx + v(t),   (60)

where c(x) ∈ L2 (0, L) and v(t) : [0, ∞) → R is a scalar white Gaussian noise with

mean zero and variance V. The dynamical system (59) with output (60) can be written

as

ż(t) = Az(t) + B1 u(t) + B2 w2(t),
y(t) = Cz(t) + v(t),
Cz = ∫_0^L c(x)z(x, t) dx,   (61)
z(0, t) = 0,
z(L, t) = 0,
z(x, 0) = z0(x).

6.2 Problem statement

Consider a rod with length L and thermal diffusivity α. Assume

b1(x) = { 1, x ∈ [x1, x2];  0, otherwise },   (62)

and

b2(x) = { 1, x ∈ [x3, x4];  0, otherwise },   (63)

where [x1 , x2 ] and [x3 , x4 ] indicate the intervals where the input and noise are present.

Suppose one sensor with half-width ds is placed at x = xs to measure the temperature. The kernel of the integral in (60) is

c(x) = { 1/(2ds), |x − xs| < ds;  0, |x − xs| > ds }.   (64)

Even for finite-dimensional systems, the entire state cannot generally be measured. Measurement of the entire state is never possible for systems described by partial differential equations. Hence, designing an observer is necessary for estimating the state of the dynamical system (59). Consider the observer

ẑ˙(t) = Aẑ(t) + B1 u(t) + L[y(t) − ŷ(t)],  ẑ(0) = ẑ0,
y(t) = Cz(kT) + v(kT),  t ∈ [kT, (k + 1)T),   (65)
ŷ(t) = C ẑ(kT),  t ∈ [kT, (k + 1)T),

where k is an integer, T > 0 is the sampling period, and integer multiples of T are the sampling times. The output space Y is finite-dimensional, C ∈ L(Z, Y) is defined in (61), A, u(t) and B1 are defined in (59), and L is the estimator gain. The goal is to find the L that minimizes (19). To design the estimator (65) for the system (57), given (62), (63) and (64), a sequence of finite-dimensional approximations of the infinite-dimensional system is required. In the following sections, the finite difference and Fourier methods are used to design an estimator for the infinite-dimensional system (57).

6.3 Continuous-time finite difference method

A finite difference method can be used to find the finite-dimensional approximations of the operators A, B1, B2 and C in (61). The state of the system is discretized so that it becomes the temperature of the rod at distinct points. Define

xi = x0 + i∆x,  i = 1, 2, 3, ..., N,
∆x = L/N,   (66)
x0 = 0,  xN = L,

where N + 1 is the number of mesh points in [0, L] and ∆x indicates the distance

between adjacent mesh points. A sequence of finite-dimensional approximations of

(61) can be written as

żn(t) = An zn(t) + B1n u(t) + B2n w2(t),   (67)

with output

y(t) = Cn zn(kT) + v(kT),  t ∈ [kT, (k + 1)T),   (68)

where

zn = [z0(t), z1(t), z2(t), ..., zN(t)]^T.   (69)

The finite difference approximations of B1, B2, C and A are (N+1)×1, (N+1)×1, 1×(N+1) and (N+1)×(N+1) matrices:

An = r ×
[ −2   1   0  ...   0
   1  −2   1  ...   0
  ... ... ... ...  ...
   0  ...   1  −2   1
   0  ...   0   1  −2 ],

B1n = [0, ..., 0, 1, ..., 1, 0, ..., 0]^T,   B2n = [0, ..., 0, 1, ..., 1, 0, ..., 0]^T,

Cn = [0, ..., 0, 1/a, ..., 1/a, 0, ..., 0],   (70)
where r = α/∆x². The non-zero elements of B1n are placed where the mesh points lie in [x1, x2]. Similarly, the non-zero elements of B2n are placed where the mesh points lie in [x3, x4]. The non-zero entries of Cn are placed where the mesh points lie in [xs − ds, xs + ds]. Since the distance between adjacent mesh points is ∆x = L/N and the width of the sensor is 2ds, there are approximately 2ds/∆x = 2Nds/L mesh points that lie in [xs − ds, xs + ds]. The positive integer a is the number of mesh points in [xs − ds, xs + ds], and 1/a is used for normalization. The central difference method is used to find the approximation of the operator A in (61),

∂ 2 z(xi , t) z(xi + ∆x, t) − 2z(xi , t) + z(xi − ∆x, t)


Az(xi , t) = α =α + O(∆x).
∂x2i ∆x2
(71)
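The assembly of these finite difference matrices can be sketched compactly. The following is a minimal illustration in Python/NumPy (the appendix performs the actual computations in MATLAB); the function name fd_matrices and its default parameter values, taken from Tables 1 and 2, are assumptions for illustration.

```python
import numpy as np

def fd_matrices(N, L=1.0, alpha=0.001, x_in=(0.125, 0.25),
                x_w=(0.25, 0.75), x_s=0.5, d_s=0.005):
    """Finite difference approximation of A, B1, B2, C on the
    interior mesh points x_1, ..., x_{N-1} (Dirichlet BCs)."""
    dx = L / N
    r = alpha / dx**2
    x = np.linspace(dx, L - dx, N - 1)          # interior points
    # tridiagonal A_n: -2r on the diagonal, r on the off-diagonals
    A = r * (np.diag(-2.0 * np.ones(N - 1))
             + np.diag(np.ones(N - 2), 1)
             + np.diag(np.ones(N - 2), -1))
    # indicator vectors for the input and disturbance intervals
    B1 = ((x >= x_in[0]) & (x <= x_in[1])).astype(float).reshape(-1, 1)
    B2 = ((x >= x_w[0]) & (x <= x_w[1])).astype(float).reshape(-1, 1)
    # averaging row over the mesh points covered by the sensor
    sensor = (x >= x_s - d_s) & (x <= x_s + d_s)
    a = max(int(sensor.sum()), 1)               # number of sensor points
    C = (sensor.astype(float) / a).reshape(1, -1)
    return A, B1, B2, C
```

With N = 16, for example, the sensor interval [0.495, 0.505] contains the single interior point x = 0.5, so Cn has one non-zero entry equal to 1.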

Since the temperature is fixed at both ends, zn , An , B1n , B2n and Cn matrices can be

reduced to (N − 1) × 1, (N − 1) × (N − 1), (N − 1) × 1, (N − 1) × 1 and 1 × (N − 1)

matrices for the interior points

 
zn = [z1(t), z2(t), . . . , zN−1(t)]ᵀ ,      (72)

with An the (N − 1) × (N − 1) tridiagonal matrix

       ⎡ −2r    r     0   · · ·    0  ⎤
       ⎢   r   −2r    r   · · ·    0  ⎥
  An = ⎢   ·     ·     ·     ·     ·  ⎥ ,
       ⎢   0   · · ·   r   −2r     r  ⎥
       ⎣   0   · · ·   0     r   −2r  ⎦

B1n = [0 · · · 0 1 · · · 1 0 · · · 0]ᵀ ,   B2n = [0 · · · 0 1 · · · 1 0 · · · 0]ᵀ ,

Cn = [0 · · · 0 1/a · · · 1/a 0 · · · 0] .      (73)

To design a continuous-time Kalman filter, the Riccati equation associated with the

minimum-variance estimation problem is solved with (73). The Riccati equation as-

sociated with the observer (65) and (19) is

AΠ̂ + Π̂A∗ − Π̂C∗V⁻¹CΠ̂ + B2QB2∗ = 0      (74)

and its finite-dimensional approximation is

AnΠ̂n + Π̂nAn∗ − Π̂nCn∗V⁻¹CnΠ̂n + B2nQB2n∗ = 0      (75)

where Q and V are the variances of w(t) and v(t), respectively. The Kalman gain can

be found by solving (74) and using (21). MATLAB is used to solve the algebraic

Riccati equation (75). The following figures show the observer gain, the true state and

the estimated state for different numbers of mesh points. The physical properties of

the system are given in Table 1, and the properties of the input function are given in Table 2.

It is assumed that the measurements are taken every 1 second, i.e. T = 1 s.
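The Kalman gain can also be computed with SciPy, mirroring the call care(A',C',W,R) used in the MATLAB code of Appendix A: the solver handles the control ARE, so the dual (transposed) system is passed in. This is a minimal sketch; the helper name kalman_gain is an assumption, and V is taken to be a scalar since there is a single measurement.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_gain(A, B2, C, Q, V):
    """Solve the filtering ARE  A P + P A' - P C' V^{-1} C P + B2 Q B2' = 0
    and return the Kalman gain  L = P C' V^{-1}."""
    # solve_continuous_are solves the *control* ARE
    #   a' X + X a - X b r^{-1} b' X + q = 0,
    # so pass the dual system (A', C') to obtain the filtering solution
    P = solve_continuous_are(A.T, C.T, (B2 @ B2.T) * Q, np.atleast_2d(V))
    return P @ C.T / V
```

For a stable diagonal test system, the gain can be checked against the scalar ARE solution by hand.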

Table 1: Physical properties of the system

    α (m²/s)      0.001
    L (m)         1
    ds (m)        0.005
    xs (m)        0.5
    [x1, x2]      [0.125, 0.25]
    [x3, x4]      [0.25, 0.75]
    Q             5
    V             10

Table 2: Initial conditions and input function

    Input function                       10 sin(t)
    Initial condition (C)                z0(x) = 400
    Initial condition of observer (C)    0

Figure 1: Observer gain for different numbers of grid points (N = 4, 8, 16, ..., 512).

Figure 2: Snapshot of the true state and estimated state at t = 50 s: (a) N = 16; (b) N = 128.

Since the noise on the state of the system (57) is distributed over [0.25, 0.75], the

gain in this interval is expected to be larger than in other parts of the domain. As

more grid points are used, the estimated state becomes closer to the true state. Hence,

the observer gain approaches zero in the regions with no random noise as N increases.

Figure 3 illustrates that the estimated state converges to the true state within 100 seconds.

6.4 Continuous-time Fourier method

In this section, the estimator (65) is designed using the eigenfunctions of the system

(57). The solution to the infinite-dimensional system (57), with no input and no

random noise, can be expressed as a sum of eigenfunctions sin(nπx/L), that is

z(x, t) = ∑ₙ₌₁^∞ kn(t) sin(nπx/L).      (76)

The new states are the Fourier sine coefficients of z(t), and the dynamical system

(57) can be written in this new state-space representation. Consider the N-th order

approximation, in which only the first N coefficients are retained. This approximation

reduces the infinite-dimensional system to a finite-dimensional one. The finite-dimensional

approximation can be written as

żn (t) = An zn + B1n u(t) + B2n w(t), (77)

with output

y(t) = Cn zn(kT) + v(kT),   t ∈ [kT, (k + 1)T),      (78)

where

zn(t) = [k1(t), k2(t), . . . , kN(t)]ᵀ ,

An = diag( −απ²/L², −4απ²/L², −9απ²/L², . . . , −αN²π²/L² ),      (79)

B1n = [b11, b12, . . . , b1N]ᵀ ,   B2n = [b21, b22, . . . , b2N]ᵀ ,

Cn = [c1 c2 · · · cN] ,      (80)

where

b1n = (2/L) ∫₀ᴸ b1(x) sin(nπx/L) dx,

b2n = (2/L) ∫₀ᴸ b2(x) sin(nπx/L) dx,      (81)

cn = (2/L) ∫₀ᴸ c(x) sin(nπx/L) dx.
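The coefficients in (81) are plain sine-series integrals and can be approximated by quadrature. The sketch below uses Python/NumPy with a trapezoidal rule (the paper's computations are in MATLAB); the helper names sine_coeffs and modal_A are assumptions, and modal_A assembles the diagonal modal matrix assuming the eigenvalues −αn²π²/L² of A = α ∂²/∂x² with Dirichlet boundary conditions.

```python
import numpy as np

def sine_coeffs(f, N, L=1.0, m=2000):
    """First N sine coefficients (2/L) * integral_0^L f(x) sin(n pi x / L) dx,
    computed with the trapezoidal rule on m+1 equally spaced points."""
    x = np.linspace(0.0, L, m + 1)
    dx = L / m
    n = np.arange(1, N + 1).reshape(-1, 1)
    vals = f(x) * np.sin(n * np.pi * x / L)        # shape N x (m+1)
    # trapezoidal weights: half weight at the two endpoints
    integral = dx * (vals[:, 1:-1].sum(axis=1)
                     + 0.5 * (vals[:, 0] + vals[:, -1]))
    return (2.0 / L) * integral

def modal_A(N, L=1.0, alpha=0.001):
    """Diagonal modal dynamics of the heat equation (cf. (79))."""
    n = np.arange(1, N + 1)
    return np.diag(-alpha * (n * np.pi / L) ** 2)
```

As a sanity check, the coefficients of f(x) = sin(πx) on [0, 1] are (1, 0, 0, ...), which the quadrature reproduces to high accuracy.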

Figures 3 and 4 show b1(x), b2(x) and c(x) along with their Fourier approximations.

Figure 3: b1(x) and b2(x) along with their Fourier approximations, N = 80: (a) b1(x) and b2(x); (b) their approximations using the first 80 modes.

Figure 4: c(x) along with its Fourier approximation, N = 80: (a) c(x); (b) its approximation using the first 80 modes.

To design a Kalman filter, the Riccati equation (75) is solved using (79), (80) and

(81) in MATLAB. The following figures show the Kalman gain for the continuous-

time plant and snapshots of the true state and the estimated state. Physical properties and

initial conditions are the same as in Tables 1 and 2.

Figure 5: Observer gain for different numbers of eigenfunctions (N = 6, 11, ..., 91).

Figure 6: True solution and observer for different numbers of eigenfunctions: (a) true state and estimated state at t = 50 s, N = 25; (b) at t = 50 s, N = 50.

The convergence of the Kalman filter can be explained in two steps. First, the

finite-dimensional approximations of the Kalman filter gains, Ln, stabilize the

original infinite-dimensional estimator. Second, a finite number of sensors is used

to measure the temperature, so the output space is finite-dimensional. Hence, L is

compact and C is a bounded operator in (65), and the hypotheses of Corollary 5.2 are

satisfied. Therefore, the sampled-data finite-dimensional observer state converges to

the continuous-time infinite-dimensional state.

6.5 Transient response of the sampled-data observer

The construction of τ∗ in Theorem 5.1 is not discussed in [10]. Different sampling

periods are used in this section to estimate this parameter for the sampled-data ob-

server (65). Intuitively, the estimated state is expected to converge to the true state

faster as the sampling period becomes smaller. Finite difference and Fourier methods are

used to find the estimated state at x = 0.25 m for T = 1 s, T = 10 s and T = 20 s.

The following figures illustrate the transient response of the observer for different

sampling periods.

Figure 7: Transient response at x = 0.25 m for different sampling periods (finite difference method, N = 16): (a) T = 1 s; (b) T = 10 s; (c) T = 20 s.

Figure 8: Transient response at x = 0.25 m for different sampling periods (Fourier method, N = 10): (a) T = 1 s; (b) T = 10 s; (c) T = 20 s.

Figures 7 and 8 confirm that small sampling periods result in high convergence

rates. Convergence occurs within 50 seconds when T = 1 s is used as the sampling

period. When T = 10 s or T = 20 s, the convergence rate becomes smaller: Figures 7

and 8 show that convergence then occurs within 150 and 300 seconds, respectively.

Other sampling periods (T = 30 s, T = 45 s and T = 50 s) were also used to find

the upper bound for the sampling period. It was observed that the estimated state

converges to the true state in all cases. One explanation for this observation is that

the upper bound for the sampling period depends on the dynamical system

under investigation. Hence, choosing a sampling period might not be an issue when

designing the observer (65) for a dynamical system with simple dynamics similar

to (57).

6.6 Discrete-time observer

Practical implementation of observers for continuous-time systems is typically per-

formed in discrete time. To design a discrete-time observer for the dynamical system

(57), An, B1n, B2n and Cn in (67), (68), (77) and (78) are replaced by new matri-

ces Ad, B1d, B2d and Cd [9]. These new matrices are the transition matrices of the state,

disturbance, noise and measurement between sampling times. In addition, it is

necessary to find the power spectral densities of the process and measurement noise

for the discrete-time system. The material of this section can be found in [9].

The solution to (67) and (77) can be written as

z(t) = exp(An(t − t0)) z(t0) + ∫ₜ₀ᵗ exp(An(t − τ)) B1n u(τ) dτ + ∫ₜ₀ᵗ exp(An(t − τ)) B2n w(τ) dτ.      (82)

Suppose u(t) and the measurement are sampled every T seconds. To describe the

state propagation between samples, set t0 = kT , t = (k + 1)T where k ≥ 0. Define

the sampled state as zk := z(kT ). Then, (82) becomes

zk+1 = exp(An T) zk + ∫ₖₜ^((k+1)T) exp(An[(k + 1)T − τ]) B1n u(τ) dτ + ∫ₖₜ^((k+1)T) exp(An[(k + 1)T − τ]) B2n w(τ) dτ.      (83)

Assuming the control input u(t) is reconstructed from the discrete control sequence

uk by a zero-order hold, u(τ) takes the constant value u(kT) = uk over the

integration interval. Define

wk = ∫ₖₜ^((k+1)T) exp(An[(k + 1)T − τ]) B2n w(τ) dτ.      (84)

Equation (83) becomes

zk+1 = exp(An T) zk + ( ∫ₖₜ^((k+1)T) exp(An[(k + 1)T − τ]) B1n dτ ) uk + wk.      (85)

With the change of variables σ = (k + 1)T − τ, Equation (85) can be written as

zk+1 = exp(An T) zk + ( ∫₀ᵀ exp(An σ) B1n dσ ) uk + wk.      (86)

The measurement equation, (68), has no dynamics. Hence, (68) and (78) can be

written as

yk = Cn zk + vk . (87)

The state-space representation of (61) for the discrete-time plant can be written as

zk+1 = Ad zk + B1d uk + wk ,
(88)
yk = Cd zk + vk ,

where

Ad = exp(An T),

B1d = ∫₀ᵀ exp(An τ) B1n dτ,

Cd = Cn,      (89)

wk = ∫ₖₜ^((k+1)T) exp(An[(k + 1)T − τ]) B2n w(τ) dτ,

vk = v(kT).

Due to sampling, the variances of the random noise must be redefined for the discrete-

time system. It is shown in [9] that the variances of the measurement and process noise

are
Vd = V / T,

Qd = Q T + O(T²),      (90)

where T is the sampling period, and V and Q are the variances of the random noise

defined in (57) and (60). The derivation of the Kalman filter for the discrete-time plant

(88) is discussed in [9]. It is shown that the steady-state Kalman filter for the discrete-

time system (88) is


ẑ(k + 1|k) = Ad (I − Kd Cd) ẑ(k|k − 1) + Ad Kd yk + B1d uk,

ẑ(k + 1|k + 1) = (I − Kd Cd) ẑ(k + 1|k) + Kd yk+1,      (91)

where Kd is the steady-state discrete-time Kalman gain, ẑ(k + 1|k) is the prediction

of the next state using all previous measurements but excluding the current mea-

surement, and ẑ(k + 1|k + 1) is the corrected estimate using the current measurement

[9].
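One iteration of the recursion (91) can be written out directly, assuming the steady-state gain Kd has already been computed (the appendix obtains it with MATLAB's lqed). This is a minimal sketch; the function name dkf_step and its calling convention are assumptions.

```python
import numpy as np

def dkf_step(z_pred, y_k, y_k1, u_k, Ad, B1d, Cd, Kd):
    """One step of the steady-state discrete-time Kalman filter (91):
    propagate the one-step prediction, then correct it with the
    newest measurement y_{k+1}."""
    n = Ad.shape[0]
    # prediction z_hat(k+1|k)
    z_next = (Ad @ (np.eye(n) - Kd @ Cd) @ z_pred
              + Ad @ Kd @ y_k + B1d @ u_k)
    # correction z_hat(k+1|k+1)
    z_corr = (np.eye(n) - Kd @ Cd) @ z_next + Kd @ y_k1
    return z_next, z_corr
```

A scalar example (Ad = 0.5, Cd = 1, Kd = 0.2) can be traced by hand to verify both equations.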

The following figures show snapshots of the estimate and the true solution for the

discrete-time system associated with (57). The finite difference and Fourier methods

are used to design the observer. The data in Tables 1 and 2 are used.

Figure 9: Finite difference, discrete-time Kalman filter: true state and estimated state at t = 100 s for (a) N = 16 and (b) N = 32.

Figure 10: Finite difference, transient response at x = 0.25 m, N = 32.

Figure 11: Fourier approximation, discrete-time Kalman filter: true state and estimated state at (a) t = 10 s with N = 25 modes and (b) t = 50 s with N = 50 modes.

Figure 12: Fourier method, transient response at x = 0.25 m, N = 25 modes.

7 Conclusion
State estimation of dynamical systems in the presence of random noise is crucial in many

applications [6]. The state of many dynamical systems evolves in an infinite-

dimensional space. Since it is not possible to measure all the states of an infinite-

dimensional system, it is necessary to design an observer. A finite-dimensional approx-

imation of an infinite-dimensional system is necessary for numerical computations.

The conditions under which the finite-dimensional approximation of an observer

converges to the one that estimates the infinite-dimensional system were presented.

In addition, the measurements are often taken discretely in time. The conditions

under which the sampled-data infinite-dimensional observer estimates the state of the

infinite-dimensional system were also presented.

Finite difference and Fourier methods were used to design a Kalman filter for the

one-dimensional heat equation. Continuous and discrete-time Kalman filters were

designed numerically in MATLAB. It was shown that the observer gain converges to

the observer gain of the infinite-dimensional estimator. Also, the finite-dimensional

observer estimates the infinite-dimensional system for both continuous and discrete-

time systems.

Appendix A
The following MATLAB code is used in Section 6 to design Kalman filters for the
continuous-time and discrete-time systems.

function[]=Continious_time_Finite_difference_gain(L,k_max,alpha,x_3,
x_4,x_s,d_s,R,Q)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

% This code plots the estimator gain for the continuous-time plant using a
% finite difference method.

% L: length of medium
% k_max: maximum number of grid points
% alpha: thermal diffucivity
% R: variance of the noise on masurements
% Q: variance of the noise on the states
% x_1: Start of the interval of input
% x_2: End of the interval of the input
% x_s: Sensor location
% d_s: Half-width of the sensor
%------------------------------------------------------------------

%Clear screen and close all windows


clc;
format long;
close all;

%Start grid size for-loop

for k=2:k_max

% Number of points in our grid


N=2ˆk;

% Grid size in space


dx=L/N;

x=[0:dx:L];

%Create A, B_2, and C matrices for the finite-dimensional


%approximation of the system, These matrices are needed to solve ARE
A=make_A(alpha,N,dx);
B=make_B(x,N,x_3,x_4);
C=make_C(x,N,x_s,d_s);

%Find the covariance matrix


W=B*B’*Q;

%Solve continuous-time ARE


[p,l,g]=care(A’,C’,W,R);

% Find gain
M=p*C’*Rˆ(-1);

%Plot estimator gain vs. length of the medium

plot(x,[0;M;0],’LineWidth’,1.4,’color’,[rand(1,1) rand(1,1) rand(1,1)]);


xlabel(’x (m)’);
ylabel(’Estimator Gain’);
title(’Estimator Gain for different Grid sizes’);
axis([0 1 0 1])
pause(1);

end
end
function[]=Continious_time_Fourier_gain(L,dx,n_max, alpha,R,Q,
x_1,x_2,x_s,d_s)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

% This code solves the steady-state, continuous-time ARE
% and finds the Kalman gain for different numbers of modes

% L: Length of the medium

% n_max: Maximum number of modes for simulation
% alpha: Thermal diffusivity constant
% R: Variance of the measurements noise
% Q: Variance of process noise
% x_1: Start point of noise interval
% x_2: End point of noise interval
% x_s: Sensor location
% d_s: Sensor half-width

%--------------------------------------------------------------

% Close all windows and clear screen


clc;
format long;
close all;

%Start mode loop

for n=1:n_max

% Number of modes
N=n;

% Grid size in space


x=(0:dx:L);
%Create A, B_2 and C matrices to solve ARE.
%Use Fourier series approximation
A=make_A_Fourier(alpha,N,L);
B=make_B_Fourier(N,L,x_1,x_2);
C=make_C_Fourier(N,L,x_s,d_s);

%Create covariance matrix


W=B*B’*Q;

%Solve continuous-time ARE


[p,l,g]=care(A’,C’,W,R);

% Find gain

M=p*C’*Rˆ(-1);

% Create table of space coefficients for Fourier series

S=Make_space(L,x,N);

%Plot gain in space

M_final=M’*S;
plot(x,M_final’,’LineWidth’,1.4,’color’,[rand(1,1) rand(1,1) rand(1,1)]);
ylim([0,1]);
xlabel(’x (m)’);
ylabel(’Estimator Gain’);
title(’Estimator Gain for different number of modes (Fourier)’);
pause(1);

end
end
function[]=Continious_time_observer_FInite_difference(L,k_max,dt,t_final,
alpha, T_initial,R,Q,x_1,x_2,x_3,x_4,x_s,d_s,u_max,y_lim_min,y_lim_max)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

% This code implements the Kalman filter design for the finite-dimensional
% approximation of the system using a finite difference method.

% L: Length of the medium

% k_max: Maximum number of points on the grid
% dt: Sampling period of the sensor
% t_final: Final time of simulation (simulation duration)
% alpha: Thermal diffusivity constant
% T_initial: Initial condition of the system
% R: Variance of the noise on the measurements
% Q: Variance of the noise on the states
% u_max= Amplitude of the input function
% y_lim_in: Minimum range of y axis

% y_max: Maximum range of y axis
% x_1: Start point of input
% x_2: End point of input interval
% x_3: Start point of noise interval
% x_4: End point of noise interval
% x_s: Sensor location
% d_s: Sensor half-width
%---------------------------------------------------------------------

%Define global variables

global A;
global C;
global B;
global u;
global Y_noise;
global i;
global M;
global B_input;

%Close all windows and clear the screen

clc;
format long;
close all;

%Start for loop for different grid sizes

for k=7:k_max

% Number of points in our grid

N=2ˆk;
dt_temp=dt;

% Grid size in space

dx=L/N;

x=[0:dx:L];

%Number of time steps

t_step=t_final/dt;

%Observer initial condition


T_initial_vector=0*ones(N-1,1);

%Exact initial condition


T_initial_exact=T_initial*ones(N-1,1);

%Create matrices for the system

B_input=make_B_inout(x,N,x_1,x_2);
B=make_B(x,N,x_3,x_4);

W=B*B’*Q;

%Create matrix A for the finite-dimensional approximation

A=make_A(alpha,N,dx);

%Create matrix C for y=C*x. This indicates the location of the sensor.
%Assumed sensor is located in the middle of the rod.

C=make_C(x,N,x_s,d_s);

%Solve ARE
[p,l,g]=care(A’,C’,W,R);

% Find gain
M=p*C’*Rˆ(-1);

%Plot initial condition


h1=subplot(2,1,1);

plot(x,[0;T_initial_exact;0],’LineWidth’,1.3,’color’,’b’)
hold on
plot(x,[0;T_initial_vector;0],’LineWidth’,1.3,’color’,’r’)
set(h1, ’YLim’, [0 y_lim_max])
xlabel(h1, ’x (m)’);
ylabel(h1, ’Temperature(c)’);
title(h1, ’Initial conditions’);

% Plot initial condition of the observer


h2=subplot(2,1,2);
plot(x,[0;T_initial_exact;0],’LineWidth’,1.3,’color’,’b’)
hold on
plot(x,[0;T_initial_vector;0],’LineWidth’,1.3,’color’,’r’)
set(h2, ’YLim’, [y_lim_min y_lim_max])
xlabel(h2, ’x (m)’);
ylabel(h2, ’Temperature (C)’);
title(h2, ’True and observer temperature’);
t=0;
indd=find(x==0.25);
T_ess(1,1)=T_initial_vector(indd,1);
T_exx(1,1)=T_initial_exact(indd,1);
sys=ss(A,[B_input],C,0);%,B%
t_span=[0:dt:t_final];
dt_1=dt;

%Simulate the system response


u=u_max*sin(t_span);
Y=lsim(sys,[u’],t_span’,T_initial_exact’);
Y_noise=Y+sqrt(R)*randn(1,t_step+1)’;

% Move forward in time and find true solution and observer

for i=2:t_step+1

[time,X]=ode45(@solve_ode,[t,dt_1],T_initial_vector’);
[time_2,X_exact]=ode45(@exact_sol,[t,dt_1],T_initial_exact’);
size_X_e=size(X_exact);
T_initial_exact=X_exact(size_X_e(1,1),:);
size_X=size(X);
T_initial_vector=X(size_X(1,1),:);

% Plot exact solution

h2=subplot(2,1,2);
plot(x,[0,T_initial_exact,0],’LineWidth’,1.3,’color’,’b’)
hold on;

%plot observer

plot(x,[0,T_initial_vector,0],’LineWidth’,1.3,’color’,’r’)
set(h2, ’YLim’, [y_lim_min y_lim_max])
xlabel(h2, ’x (m)’);
ylabel(h2, ’Temperature (C)’);
title(h2, ’True and observer temperature’)
T_es=[0,T_initial_vector,0];
T_ex=[0,T_initial_exact,0];

t=t+dt;
dt_1=t+dt;

T_ess(1,i)=T_es(1,indd);
T_exx(1,i)=T_ex(1,indd);
pause(0.01)
delete(h2);
end

% Plot transient solution at x=0.25m


figure
tempp=[0:dt:t_final];
plot(tempp,T_ess,’r’,’LineWidth’,1.3);
hold on
plot(tempp,T_exx,’b’,’LineWidth’,1.3);
xlabel(’t(s)’);
ylabel(’Temperature(c)’);
title(’True and observer temperature at x=0.25’);
keyboard;
end

%Continuous-time observer dynamics
function [dx_dt]=solve_ode(t,x)
dx_dt=(A-M*C)*x+B_input*u(1,i-1)+M*Y_noise(i-1,1);
return
end

% True (exact) system dynamics


function [dx_dt]=exact_sol(t,x)
dx_dt=A*x+B_input*u(1,i-1);
return
end

end
function[]=Continious_time_observer_Fourier(L,dx,n_max,dt,t_final,alpha,
T_initial,R,Q,x_1,x_2,x_3,x_4,x_s,d_s,u_max,y_lim_min,y_lim_max)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

% This code implements the Kalman filter design for the finite-dimensional
% approximation of the system using the Fourier method.

% L: Length of the medium

% dx: Space grid size
% n_max: Maximum number of modes
% dt: Sampling period of the sensor
% t_final: Final time of simulation (simulation duration)
% alpha: Thermal diffusivity constant
% T_initial: Initial condition of the system
% R: Variance of the noise on the measurements
% Q: Variance of the noise on the states
% u_max= Amplitude of the input function
% y_lim_in: Minimum range of y axis
% y_max: Maximum range of y axis
% x_1: Start point of input
% x_2: End point of input interval
% x_3: Start point of noise interval
% x_4: End point of noise interval
% x_s: Sensor location

% d_s: Sensor half-width
%---------------------------------------------------------------------

% Define Global variables

global A;
global C;
global Y_noise;
global i;
global u;
global M;
global B_input;

% Clear screen and close all windows


clc;
close all;

%Start mode-loop, i.e increase the number of modes starting from 20


for n=20:n_max

% Number of points in our grid

N=n;

% Space grids

x=[0:dx:L];

% Make a copy of original variance for later use

Q_temp=Q;
R_temp=R;

%Number of time steps

t_step=t_final/dt;

%Initial temperature of the observer

T_initial_vector=make_initial(N,L,0);

%Exact initial temperature

T_initial_exact=make_initial(N,L,T_initial);

%Create system matrices using Fourier series

A=make_A_Fourier(alpha,N,L);
B_input=make_B_inout_Fouries(N,L,x_1,x_2);
B=make_B_Fourier(N,L,x_3,x_4);
C=make_C_Fourier(N,L,x_s,d_s);
W=B*B’*Q;
S=Make_space(L,x,N);

% Simulate the system

T_initial_sim=T_initial_exact;
sys=ss(A,[B_input],C,0);%,B%
t_span=(0:dt:t_final);
u=u_max*sin(t_span);
Y=lsim(sys,[u’],t_span’,T_initial_sim’);
Y_noise=Y+sqrt(R_temp)*randn(1,t_step+1)’;

% Covariance of continious-plant with discrete measurements

%Solve ARE
[p,l,g]=care(A’,C’,W,R);

% Find gain
M=p*C’*Rˆ(-1);

%Plot initial condition of the system

h1=subplot(2,1,1);

plot(x,T_initial_exact’*S,’LineWidth’,1.3,’color’,’b’)
hold on
plot(x,T_initial_vector’*S,’LineWidth’,1.3,’color’,’r’)
set(h1, ’YLim’, [0 y_lim_max])
xlabel(h1, ’x (m)’);
ylabel(h1, ’Estimator Gain (M)’);
title(h1, ’Estimator gain vs length of the medium using Fourier
approximations’);
pause(1)
hold on

%Plot initial condition of the observer


h2=subplot(2,1,2);
plot(x,T_initial_exact’*S,’LineWidth’,1.3,’color’,’b’)
hold on
plot(x,T_initial_vector’*S,’LineWidth’,1.3,’color’,’r’)
set(h2, ’YLim’, [y_lim_min y_lim_max])
xlabel(h2, ’x (m)’);
ylabel(h2, ’Temperature (C)’);
title(h2, ’True and observer temperature’);
indd=find(x==0.25);
T_e_temp=T_initial_exact’*S;
T_ex(1,1)=T_e_temp(1,indd);
T_e_temp=T_initial_vector’*S;
T_es(1,1)=T_e_temp(1,indd);
%Initialize the time
t=0;
clear x_hat;
dt_1=dt;

% Move forward in time to find the observer and the true solution

for i=2:t_step+1
[time,X]=ode45(@solve_ode,[t,dt_1],T_initial_vector’);
[time_2,X_exact]=ode45(@exact_sol,[t,dt_1],T_initial_exact’);
size_X_e=size(X_exact);
T_initial_exact=X_exact(size_X_e(1,1),:);
T_e_temp_1=T_initial_exact*S;
T_ex(1,i)=T_e_temp_1(1,indd);

size_X=size(X);

T_initial_vector=X(size_X(1,1),:);

%Plot true solution

h2=subplot(2,1,2);
plot(x,T_initial_exact*S,’LineWidth’,1.3,’color’,’b’)
hold on

%Plot observer

plot(x,T_initial_vector*S,’r’,’LineWidth’,1.3);
set(h2, ’YLim’, [y_lim_min y_lim_max])
xlabel(h2, ’x (m)’);
ylabel(h2, ’Temperature (C)’);
title(h2, ’True and observer temperature’);
t=t+dt;
dt_1=dt+t;
T_e_temp_2=T_initial_vector*S;
T_es(1,i)=T_e_temp_2(1,indd);
pause(0.1)
delete(h2);
end

%Plot transient solution at x=0.25m


figure;
t_vector=[0:dt:t_final];
plot(t_vector,T_ex,’b’,’LineWidth’,1.3);
hold on
plot(t_vector,T_es,’r’,’LineWidth’,1.3);
keyboard;
end

%Continuous-time observer dynamics


function [dx_dt]=solve_ode(t,x)
dx_dt=(A-M*C)*x+B_input*u(1,i-1)+M*Y_noise(i-1,1);
return
end

%True (exact) system dynamics

function [dx_dt]=exact_sol(t,x)

dx_dt=A*x+B_input*u(1,i-1);
return
end

end
function[]=Discrete_time_Estimator_WD(L,k_max,dt,t_final,
alpha, T_initial,R,Q,x_1,x_2,x_3,x_4,x_s,d_s,u_max,y_lim_min,y_lim_max)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

%In this code we design an observer for the continuous-time plant with
% discrete measurements for different grid sizes using a finite difference
% method.

% L: Length of the medium

% k_max: Maximum number of mesh points
% dt: Sampling period of the sensor
% t_final: Final time of simulation (simulation duration)
% alpha: Thermal diffusivity constant
% T_initial: Initial condition of the system
% R: Variance of the noise on the measurements
% Q: Variance of the noise on the states
% u_max= Amplitude of the input function
% y_lim_in: Minimum range of y axis
% y_max: Maximum range of y axis
% x_1: Start point of noise interval
% x_2: End point of noise interval
% x_3: Start point of input interval
% x_4: End point of input interval
% x_s: Sensor location
% d_s: Sensor half-width
% u_max: Amplitude of input function
% y_lim_min: Minimum of the Y axis
% y_lim_max: Maximum of the Y axis
%--------------------------------------------------------------------------

% Define global variables

global A_d;

global C_d;
global Y_noise;
global i;
global u;
global M;
global B_input;

% Close all windows and clear screen


clc;
format long;
close all;

%Start for loop for different grid sizes

for k=5:k_max

% Number of points on the grid


N=2ˆk;

% Grid size in space

dx=L/N;
x=[0:dx:L];

%Number of time steps


t_step=t_final/dt;

%Initial temperature of the rod for the estimator design.
%Since we do not know what it is, it is set to zero.
% This is a mismatched initial condition

T_initial_vector=0*ones(N-1,1);

%Exact initial temperature;


T_initial_exact=T_initial*ones(N-1,1);

%Create A, B_1, B_2 and C matrices to design the estimator

A=make_A(alpha,N,dx);
B=make_B(x,N,x_1,x_2);
B_input=make_B_inout(x,N,x_3,x_4);
C=make_C(x,N,x_s,d_s);
D=0;

%Transform the continuous-time plant to the discrete-time plant


sysc=ss(A,B_input,C,D);
sysd=c2d(sysc,dt,’zoh’);
[A_d,B_d,C_d,D]=ssdata(sysd);
[Kfd2,Pd2]=lqed(A,B,C,Q,R,dt);

%Keep copy of original variances for later use

Q_temp=Q;
R_temp=R;

% Find gain of Kalman filter


M=Kfd2;

% Update the covariance matrix and variance of measurement


% and process noise

W=B*B’*Q*dt;
R=R/dt;
Q=Q*dt;

% Plot the exact initial condition

h1=subplot(2,1,1);
plot(x,[0;T_initial_exact;0],'LineWidth',1.3,'color','b')
hold on
plot(x,[0;T_initial_vector;0],'LineWidth',1.3,'color','r')
set(h1, 'YLim', [0 y_lim_max])
xlabel(h1, 'x (m)');
ylabel(h1, 'Temperature');
title(h1, 'Initial condition for original system and observer');

% Plot observer initial temperature

h2=subplot(2,1,2);
plot(x,[0;T_initial_exact;0],'LineWidth',1.3,'color','b')
hold on
plot(x,[0;T_initial_vector;0],'LineWidth',1.3,'color','r')
set(h2, 'YLim', [y_lim_min y_lim_max])
xlabel(h2, 'x (m)');
ylabel(h2, 'Temperature (C)');
title(h2, 'Original system and observer');

% Initialize the time

t=0;

% Simulate the original system to find noisy measurements

sys=ss(A,B_input,C,0);
t_span=[0:dt:t_final];
u=u_max*sin(t_span);
Y=lsim(sys,u',t_span',T_initial_exact');
Y_noise=Y+sqrt(R_temp)*randn(1,t_step+1)';
indd=find(x==0.25);
T_es(1,1)=T_initial_vector(indd,1);
T_ex(1,1)=T_initial_exact(indd,1);

% Preallocate storage for the observer states

x_final=zeros(t_step+1,N-1);
dt_1=dt;

% Step forward in time and apply the steady-state Kalman filter

for i=2:t_step+1

% Predict the states

x_hat=A_d*(eye(N-1)-M*C_d)*T_initial_vector+A_d*M*Y_noise(i-1,1)+...
    B_d*u(1,i-1);

%Correct the states

x_final(i,:)=(eye(N-1)-M*C_d)*x_hat+M*Y_noise(i,1);

% Plot the true solution and observer

[time_2,X_exact]=ode45(@exact_sol,[t,dt_1],T_initial_exact');
size_X_e=size(X_exact);
T_initial_exact=X_exact(size_X_e(1,1),:);
h3=subplot(2,1,2);
plot(x,[0,T_initial_exact,0],'LineWidth',1.3,'color','b')
hold on;

plot(x,[0,x_final(i,:),0],'LineWidth',1.3,'color','r');
T_initial_vector=x_final(i,:)';
set(h3, 'YLim', [y_lim_min y_lim_max])
xlabel(h3, 'x (m)');
ylabel(h3, 'Temperature (C)');
title(h3, 'Original system and observer');

% Make a copy of the solution for the transient response

T_es_temp=[0,x_final(i,:),0];
T_ex_temp=[0,T_initial_exact,0];
T_es(1,i)=T_es_temp(1,indd);
T_ex(1,i)=T_ex_temp(1,indd);
t=t+dt;
dt_1=t+dt;
pause(0.1)
delete(h3);
end

% Plot transient response at x=0.25m

figure;
t_vector=[0:dt:t_final];
plot(t_vector,T_es,'r','LineWidth',1.3);
hold on
plot(t_vector,T_ex,'b','LineWidth',1.3);
end

% Function to find the true solution of the system

function [dx_dt]=exact_sol(t,x)
dx_dt=A*x+B_input*u(1,i-1);
end
end
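The update inside the time loop above is the steady-state discrete-time Kalman filter written in predict/correct form. As a language-neutral illustration (not the thesis code itself), a minimal NumPy sketch of one step follows; here `A_d`, `B_d`, `C_d` and the gain `M` are placeholders for the quantities that `c2d` and `lqed` return, with a scalar input and a scalar measurement assumed.

```python
import numpy as np

def kalman_step(x_prev, y_prev, y_curr, u_prev, A_d, B_d, C_d, M):
    """One predict/correct step of the steady-state discrete Kalman filter.

    x_prev : (n,1) corrected state estimate at time k-1
    y_prev, y_curr : scalar measurements at times k-1 and k
    u_prev : scalar input at time k-1
    A_d, B_d, C_d : discrete-time plant matrices; M : (n,1) filter gain
    """
    I = np.eye(A_d.shape[0])
    # Predict: propagate the previous corrected estimate through the plant
    x_pred = A_d @ (I - M @ C_d) @ x_prev + A_d @ M * y_prev + B_d * u_prev
    # Correct: blend the prediction with the newest measurement
    return (I - M @ C_d) @ x_pred + M * y_curr
```

Iterating this map over the measurement sequence reproduces the roles of `x_hat` and `x_final(i,:)` in the MATLAB loop.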
function []=Discrete_time_Estimator_Fourier_WD(L,dx,n_max,dt,t_final,...
    alpha,T_initial,R,Q,x_1,x_2,x_3,x_4,x_s,d_s,u_max,y_lim_min,y_lim_max)
% Amir Issaei
% SN: 20488100
% Master of Applied Mathematics, University of Waterloo, Spring 2014

% This code implements the Kalman filter design for the finite-dimensional
% approximation of the system using the Fourier method.

% L: Length of the medium
% dx: Space grid size
% n_max: Maximum number of modes
% dt: Sampling period of the sensor
% t_final: Final time of simulation (simulation duration)
% alpha: Thermal diffusivity constant
% T_initial: Initial condition of the system
% R: Variance of the noise on the measurements
% Q: Variance of the noise on the states
% u_max: Amplitude of the input function
% x_1: Start point of input interval
% x_2: End point of input interval
% x_3: Start point of noise interval
% x_4: End point of noise interval
% x_s: Sensor location
% d_s: Sensor half-width
% y_lim_min: Minimum of the y axis
% y_lim_max: Maximum of the y axis
%---------------------------------------------------------------------

% Define Global variables

global A_d;
global C_d;

global Y_noise;
global i;
global u;
global M;
global B_input;

% Clear screen and close all windows


clc;
close all;

% Start the mode loop, i.e. increase the number of modes starting from 20

for n=20:n_max

% Number of modes

N=n;

% Space grids

x=[0:dx:L];

% Make a copy of original variance for later use

Q_temp=Q;
R_temp=R;

%Number of time steps

t_step=t_final/dt;

%Initial condition of the observer


T_initial_vector=make_initial(N,L,0);

%Exact initial temperature

T_initial_exact=make_initial(N,L,T_initial);

% Create system matrices using Fourier series

A=make_A_Fourier(alpha,N,L);
B_input=make_B_inout_Fouries(N,L,x_1,x_2);
B=make_B_Fourier(N,L,x_3,x_4);
C=make_C_Fourier(N,L,x_s,d_s);

% Simulate the system to find discrete sensor measurements

T_initial_sim=T_initial_exact;
sys=ss(A,B_input,C,0);
t_span=[0:dt:t_final];
u=u_max*sin(t_span);
Y=lsim(sys,u',t_span',T_initial_sim');
Y_noise=Y+sqrt(R_temp)*randn(1,t_step+1)';

% Covariance of the continuous-time plant with discrete measurements

R=R/dt;
Q=Q*dt;

% Equivalent matrices for the continuous-time plant with discrete
% measurements

D=0;
sysc=ss(A,B_input,C,D);
sysd=c2d(sysc,dt,'zoh');
[A_d,B_d,C_d,D]=ssdata(sysd);

% Find steady-state Kalman gain

[Kfd2,Pd2]=lqed(A,B,C,Q_temp,R_temp,dt);
S=Make_space(L,x,N);
M_p=Kfd2'*S;
M=Kfd2;

% Plot the exact initial condition

h1=subplot(2,1,1);
plot(x,T_initial_exact'*S,'LineWidth',1.3,'color','b')
hold on
plot(x,T_initial_vector'*S,'LineWidth',1.3,'color','r')
set(h1, 'YLim', [0 y_lim_max])
xlabel(h1, 'x (m)');
ylabel(h1, 'Temperature');
title(h1, 'Initial condition for original system and observer');
pause(1)
hold on

% Plot the observer initial condition

h2=subplot(2,1,2);
plot(x,T_initial_exact'*S,'LineWidth',1.3,'color','b')
hold on
plot(x,T_initial_vector'*S,'LineWidth',1,'color','r')
set(h2, 'YLim', [y_lim_min y_lim_max])
xlabel(h2, 'x (m)');
ylabel(h2, 'Temperature (C)');
title(h2, 'Original system and observer');
indd=find(x==0.25);
T_e_temp=T_initial_exact'*S;
T_ex(1,1)=T_e_temp(1,indd);
T_e_temp=T_initial_vector'*S;
T_es(1,1)=T_e_temp(1,indd);

% Initialize the time

t=0;
clear x_hat;
x_final=zeros(t_step+1,N);

% Move forward in time and find the observer and true solution

dt_1=dt;
for i=2:t_step+1

x_hat=A_d*(eye(N)-M*C_d)*T_initial_vector+A_d*M*Y_noise(i-1,1)+...
    B_d*u(1,i-1);
x_final(i,:)=(eye(N)-M*C_d)*x_hat+M*Y_noise(i,1);
[time_2,X_exact]=ode45(@exact_sol,[t,dt_1],T_initial_exact');
size_X_e=size(X_exact);
T_initial_exact=X_exact(size_X_e(1,1),:);

T_e_temp_1=T_initial_exact*S;
T_ex(1,i)=T_e_temp_1(1,indd);

T_initial_exact_temp=T_initial_exact;
h2=subplot(2,1,2);
plot(x,x_final(i,:)*S,'LineWidth',1.3,'color','r');
hold on
plot(x,T_initial_exact_temp*S,'LineWidth',1.3,'color','b')
T_initial_vector=x_final(i,:)';
set(h2, 'YLim', [y_lim_min y_lim_max])
xlabel(h2, 'x (m)');
ylabel(h2, 'Temperature (C)');
title(h2, 'Estimator');

t=t+dt;
dt_1=dt+t;

T_e_temp_2=T_initial_vector'*S;
T_es(1,i)=T_e_temp_2(1,indd);
pause(0.1)
delete(h2);
end

% Plot transient solution at x=0.25m

figure;
t_vector=[0:dt:t_final];
plot(t_vector,T_ex,'b','LineWidth',1.3);
hold on
plot(t_vector,T_es,'r','LineWidth',1.3);
end

% Find the exact solution of the system

function [dx_dt]=exact_sol(t,x)
dx_dt=A*x+B_input*u(1,i-1);
end

end
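Both scripts obtain the discrete-time plant matrices from MATLAB's `c2d(sysc,dt,'zoh')`. Under a zero-order hold, A_d = e^{A dt} and B_d = (integral from 0 to dt of e^{As} ds) B, and both can be computed with a single exponential of an augmented matrix. The following SciPy sketch illustrates this standard construction; it is an illustration of what `c2d` computes, not the code used above.

```python
import numpy as np
from scipy.linalg import expm

def c2d_zoh(A, B, dt):
    """Zero-order-hold discretization of x' = A x + B u.

    Returns (A_d, B_d) with A_d = e^{A dt} and
    B_d = (integral_0^dt e^{A s} ds) B, obtained from the
    exponential of the augmented matrix [[A, B], [0, 0]] * dt.
    """
    n, m = B.shape
    aug = np.zeros((n + m, n + m))
    aug[:n, :n] = A
    aug[:n, n:] = B
    # The top block row of expm(aug*dt) stacks A_d and B_d side by side.
    E = expm(aug * dt)
    return E[:n, :n], E[:n, n:]
```

For a scalar system with A = a (a nonzero) and B = b this reduces to the familiar formulas A_d = e^{a dt} and B_d = (e^{a dt} - 1) b / a.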

