Mode    Frequency (Hz)
1       21.7
2       135.44
3       213.31
4       378.96
5       651.06
6       743.54
7       829.31
8       1123.7
9       1231.7
10      1654.7

Table 3-1: Natural frequencies of the cantilever beam calculated with ANSYS™ software.
4. Implemented Controllers
From the modeling and the setup of the system it follows that the system is classified as a Single Input Single Output (SISO) system. That means that each controller will receive only one input, process it, and provide only one control signal. This signal will be sent through the FPGA to the output module, then to the piezo amplifier and, finally, to the actuator. The actuator will apply force to the beam and thus change the way it oscillates. The input and the output are filtered using a Butterworth filter implemented in the FPGA with a cutoff frequency of 100 Hz (details can be found in Appendix D).
The implementation of the controllers in this project can be divided into two groups: controllers that were compiled entirely in the FPGA, and the two controllers that were implemented mostly in the CompactRIO, with the exception of filtering.
The simplest way of implementing a continuous-time controller in discrete time is to design it in continuous time without any regard for sampling, and then to discretize it using the Z-transformation. This method works as long as the required sampling rate does not exceed the hardware limitations, which was not a problem in this project.
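As a brief illustration of this design route, the sketch below discretizes a hypothetical continuous-time controller with the bilinear (Tustin) form of the Z-transformation using SciPy; the transfer function coefficients and the 100 µs sampling time are placeholders chosen for the example, not the exact values used in this project.

```python
# Minimal sketch: discretizing a hypothetical continuous-time controller
# C(s) = (0.01 s + 20) / (0.001 s + 1) with the bilinear (Tustin) mapping.
# The coefficients and sampling time are illustrative, not the project's values.
import numpy as np
from scipy.signal import cont2discrete

num = [0.01, 20.0]      # numerator coefficients of C(s)
den = [0.001, 1.0]      # denominator coefficients of C(s)
Ts = 100e-6             # assumed sampling time of 100 us

num_d, den_d, _ = cont2discrete((num, den), Ts, method='bilinear')
print("discrete numerator:  ", np.squeeze(num_d))
print("discrete denominator:", den_d)
```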
For such an approach, the highest possible sampling rate makes a discrete-time controller comparable to a continuous-time controller, but of course there is no need to waste computing resources. If the sampling rate is too high, the difference between two consecutive samples becomes indistinguishable, and the controller does nothing useful with the extra data; the extreme case of this is oversampling. Therefore, it is common practice to use a sampling rate that is sufficiently high, rather than the highest possible one, in order to save resources.
However, this approach does not apply to this project. Unlike a function generator, which can excite the beam at a desired frequency simply by adjusting a knob, the hardware setup has no button or setting to change the sampling rate of the controllers, for the following reason: the controllers implemented in this project are software structures running in either the FPGA or the CompactRIO. The sampling time for such an implementation is the total time needed to process the input and to produce the output, which depends on the complexity of the program and the computing power of the hardware.
If the controller is implemented entirely in the FPGA, the time to execute the program, together with the time needed to receive signals from the input module and to send a signal to the output module, determines the sampling time of the controller, which was 100 µs in this project.
For the controllers that were implemented in the CompactRIO it is slightly different. The input data entering the FPGA is filtered there and passed on to the CompactRIO for processing, and the CompactRIO executes the control algorithm. Since the FPGA is faster than the CompactRIO's processor, there will be new data in the FPGA ready to be sent to the CompactRIO before it finishes processing the current data. To avoid data overflow and to prevent the loss of data, buffers are generally used to store the extra data until it can be processed. However, in this project this approach was not taken, because the extra data was not useful to the controller due to the slow nature of the plant.
According to the Nyquist-Shannon sampling theorem, to successfully sample the data the sampling rate must be at least twice the frequency of the signal. Since the project focused on damping the vibrations at the natural frequency of the beam, which was 19.2 Hz, the minimum required sampling rate was 38.4 Hz, or 38.4 samples per second, equivalent to a sampling time of about 26 ms. The equivalent sampling time achieved with the CompactRIO was 300 µs, which is more than enough.
Therefore, the extra data that could be stored in a buffer would be useless. Instead of storing the data, a special command was given to the FPGA to wait until the CompactRIO finishes executing the program before reading a new input from the sensor. The command was implemented using the LabVIEW™ Wait(x) function. The sampling time of 300 µs is the minimum time required to finish executing the main program in the CompactRIO and to send and receive signals from the other modules. If the FPGA were told to wait longer, the equivalent sampling time would be greater, producing a smaller sampling rate, which is acceptable for this plant.
Naturally, one might think that this allows saving resources, as is generally suggested in the literature. However, in this case there is no benefit to that, since neither the hardware nor the software is doing anything useful during that time. Having a lower sampling rate would not save any memory or increase the performance of the controller. For the given hardware setup, the only benefit of having a plant with a relatively low frequency is that a longer and more complicated algorithm can be implemented. Instead of having a program that runs for only 300 µs, one could write a more sophisticated control algorithm that runs for up to 26 ms.
4.1 PID Controller
The PID controller is one of the most common controllers used in closed-loop systems.
The controller is defined by the following equation [VUK-02]:
m(t) = K_P e(t) + K_I ∫₀ᵗ e(τ) dτ + K_D de(t)/dt (eq. 4-1)
where K_P, K_I, and K_D are constant coefficients, e(t) is the input error signal, which is the difference between the desired value and the output of the controlled system (plant), and m(t) is the output of the controller, which is fed to the controlled system. The signal m(t) is often called the control signal.
Figure 4-1: Block diagram of the PID controller in Simulink™.
The transfer function in the frequency domain:

G(s) = K_P + K_I / s + K_D s (eq. 4-2)
Having a PID controller in a system does not guarantee stability. The controller must be tuned to achieve stability and the desired performance, or the best possible performance. Tuning a PID is a heuristic process: first, the control loop is simulated with different coefficients, and then it is tested on the plant until the results are satisfactory.
The PID was implemented entirely in the FPGA with the following algorithm, and the sampling time was 100 µs. The Z-transformation gave the following discrete-time equations [VUK-02]:
u_P(k) = K_P [e(k) − e(k−1)] (eq. 4-3)

u_D(k) = K_D [e(k) − 2 e(k−1) + e(k−2)] / T (eq. 4-4)

u_I(k) = K_I T [e(k) + e(k−1)] / 2 (eq. 4-5)

u(k) = u(k−1) + u_P(k) + u_I(k) + u_D(k) (eq. 4-6)

u(k) = u(k−1) + K_P [e(k) − e(k−1)] + K_I T [e(k) + e(k−1)] / 2 + K_D [e(k) − 2 e(k−1) + e(k−2)] / T (eq. 4-7)

where the sampling time is T = 100 µs, Y is the sensor reading from which the error e(k) is formed, and u(k) is the control signal.
The controller relies only on the feedback and the set point value to calculate the control signal with the help of this algorithm. No additional functions or considerations are necessary, which makes this controller very simple to implement.
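As an illustration, a minimal sketch of such a velocity-form discrete PID update is given below; it assumes the reconstruction of eq. 4-3 to 4-7 above and uses the gains reported later in section 5.1, so it is an example rather than the project's LabVIEW code.

```python
class VelocityPID:
    """Velocity-form discrete PID; one step() call per sampling period T."""
    def __init__(self, Kp, Ki, Kd, T):
        self.Kp, self.Ki, self.Kd, self.T = Kp, Ki, Kd, T
        self.u = 0.0    # previous control signal u(k-1)
        self.e1 = 0.0   # previous error e(k-1)
        self.e2 = 0.0   # error two samples back, e(k-2)

    def step(self, e):
        # incremental update: proportional, integral (trapezoidal) and derivative parts
        du = (self.Kp * (e - self.e1)
              + self.Ki * self.T * (e + self.e1) / 2.0
              + self.Kd * (e - 2.0 * self.e1 + self.e2) / self.T)
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u

# gains reported in section 5.1, sampling time 100 us
pid = VelocityPID(Kp=20.0, Ki=0.0, Kd=0.0117, T=100e-6)
u = pid.step(e=0.01)    # one sample with an example error value
```

One call to step() corresponds to one pass of the FPGA loop, i.e. one sampling period.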
4.2 Linear Quadratic Regulator (LQR)
4.2.1 State Space Model
State Space representation is the foundation of modern control engineering and is widely used, because it is easily digitized and computed with software, higher order and nonlinear systems are easier to handle, and the system's internal dynamics (or states) are more observable compared to classical control theory with transfer functions.
The system is modeled with the following fundamental equations:

ẋ(t) = A x(t) + B u(t) (eq. 4-8)

y(t) = C x(t) + D u(t) (eq. 4-9)

where the vectors x, ẋ, and u are of size n, the same as the system's order, and the A and B matrices are n-by-n matrices. y can be a vector of a different order p, or a scalar, depending on how many outputs the system has. C can be a p-by-n matrix, or an n-element row vector if y is a scalar [OGA-97].
The matrix A is called the System Matrix or State Matrix, B is the Control Matrix, C is the Output Matrix, and D is the Direct Feedthrough Matrix, or Direct Feedforward Matrix, used in cases where there is a direct dependence between the input and the output; x is the State Vector, u is the Control Vector or Input Vector, and y is the Output Vector. The names are self-explanatory; they show what information about the model each matrix contains.
The name State Space comes from the fact that the internal states of the system can be observed. For example, if the plant is represented with a transfer function, only one equation serves as a model. This equation can be of first, second, or even higher order. The plant may need position control, for example, but the information about velocity and acceleration is hidden inside that equation and cannot be seen; the output is only in the form of the position. This may be enough for many applications, but sometimes it is important to know the acceleration and the velocity and to control them, for example for a car's braking system or an elevator's acceleration. Instead of having one n-th order equation, in State Space there are n first-order differential equations that show the dependencies between the states. To further understand the state space description, consider Example 3-2 from K. Ogata's Modern Control Engineering [OGA-73]. It shows how a simple mechanical spring-mass-damper system can be modeled in state space.
Figure 4-2: Mass-spring-damper system.
m ÿ(t) + b ẏ(t) + k y(t) = u(t) (eq. 4-10)

x₁(t) = y(t) (eq. 4-11)

x₂(t) = ẏ(t) (eq. 4-12)
Or

ẋ₁ = x₂ (eq. 4-13)

ẋ₂ = −(k/m) x₁ − (b/m) x₂ + (1/m) u (eq. 4-14)
The two state equations relate the position to the velocity, and the velocity to the acceleration:

[ẋ₁; ẋ₂] = [0 1; −k/m −b/m] [x₁; x₂] + [0; 1/m] u (eq. 4-15)
They are transferred into the corresponding matrices to get the following:

A = [0 1; −k/m −b/m],  B = [0; 1/m],  C = (1 0),  D = 0 (eq. 4-16)
A transfer function would only provide us with the position signal. Of course it is possible to calculate the velocity based on that signal, but it is not as easy and convenient, especially when the system's order increases. With the State Space method that information is easily accessible and easier to calculate, which will be shown later and implemented in this project with the Observer.
Since the system used for this project is a Single Input - Single Output (SISO) system, the transfer function generated previously can be directly and easily converted into the matrices A, B, C, and D manually, describing the system either in Controllability Form, Observability Form, or some other form. These forms are also known as Control Canonical Form and Observer Canonical Form, among other names. The conversion can also easily be done with LabVIEW™ or by hand, and the results are practically identical.
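For reference, the sketch below performs the same conversion with SciPy's tf2ss; the transfer function coefficients are illustrative placeholders consistent in form with eq. 4-17, not the exact values identified for the beam.

```python
# Sketch: converting a transfer function to state space with SciPy.
# G(s) = 2.9 / (s^2 + 0.04 s + 158.1) is a hypothetical second-order plant,
# not the exact transfer function identified for the beam.
from scipy.signal import tf2ss

num = [2.9]
den = [1.0, 0.04, 158.1]
A, B, C, D = tf2ss(num, den)
print("A =", A)                 # companion (control canonical) form
print("B =", B.ravel())
print("C =", C.ravel(), "D =", D.ravel())
```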
The plant was modeled in State Space with the following matrices:
A = [0 1; −158.11 −0.04],  D = 0 (eq. 4-17)

B = [13.15; 0],  C = (0.22 0) (eq. 4-18)
Figure 4-3: System model in State Space representation in Simulink™.
Once the plant (without the controller) is modeled in State Space, the matrices A, B, and C are found, and the decision regarding the poles of the system is made, Full State Feedback control will place the system's poles as desired via the feedback gain matrix K.
Pole Placement, or Full State Feedback control, is the basic closed-loop control method using the State Space representation. Pole Placement control is defined by the following equations [OGA-97]:

u(t) = −K x(t) (eq. 4-16)

ẋ(t) = (A − BK) x(t) (eq. 4-17)
A typical control loop of such a controller can be shown graphically in the following manner:
Figure 4-4: Pole Placement Controller in State Space representation.
Like the PID controller, this is also a feedback controller; however, there is a major difference. Implementing this controller guarantees stability, because the chosen poles lie to the left of the imaginary axis (i.e. they have negative real parts), and as is known from the basics of control theory, that ensures the stability of the system.
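A small sketch of this gain computation is shown below, assuming the plant matrices as reconstructed in eq. 4-17 and 4-18 and an illustrative pair of stable poles; it is an example of the pole placement step, not the project's tuned design.

```python
# Sketch of the pole placement step: compute K so that A - B*K has the chosen poles.
# Plant matrices follow the reconstruction in eq. 4-17/4-18; the poles are illustrative.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-158.11, -0.04]])
B = np.array([[13.15], [0.0]])
desired_poles = [-20.0 + 10.0j, -20.0 - 10.0j]   # stable, left of the imaginary axis

K = place_poles(A, B, desired_poles).gain_matrix
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```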
The word regulator in the name LQR means that the set point, or the desired value of the
output, is zero. Thus, if the set point value, commonly denoted by w, is zero, the control loop
slightly changes:
Figure 4-5: Implementation of the Pole Placement controller in Simulink™.
The hardware used in this project can measure only one state: the position. When all of the states of the system cannot be measured directly with instruments, an Observer must be incorporated, because the state vector must be of the same order as the system matrix in order for the matrix multiplication to be possible. Thus, the Observer allows us to use all of the benefits of State Space control without having a sensor for each state.
The Observer is an online simulation of the plant. It receives the same control input as the plant, and the real output is compared with the simulated one to correct the simulation. Since the controller does not have all the states from sensors, the observer must be accurate, because the states it calculates will be used to control the plant. In other words, real-time position feedback is used to correct the simulation of the plant, so that the simulated states can be used to control the plant. In Pole Placement control without the observer, the controller would use a sensor per state to control the plant. For the sample spring-mass system considered before, that would mean that a position sensor and a velocity sensor must be integrated into the hardware setup.
The Observer model is based on the plant model; therefore, the A, B, and C matrices used are the same. The states, however, are different and are labeled with a tilde above the letter, for instance x̃(t) and ỹ(t), and the feedback gain matrix for the observer is labeled K_e. When the order of the Observer is the same as that of the plant, i.e. every state is observed, the controller is called a Full State Observer [OGA-97].
When Full State Observer is implemented in the feedback loop, the control law changes to:
u(t) = −K x̃(t) (eq. 4-20)
Combining the two, an equation that describes the system dynamics is obtained:

d/dt [x; x − x̃] = [A − BK  BK; 0  A − K_e C] [x; x − x̃] (eq. 4-21)
The detailed steps on how to derive the Observer equations from the beginning can be found in
the Appendix B.
The observer design can be done independently of the pole placement design, because the closed-loop poles of the observer design and of the pole placement design are not coupled in any way. After each is designed separately, they can be combined to form an Observer State Feedback system. Usually, the Observer is designed to be at least five times faster than the plant, because it is not a hardware structure like the plant, but a software implementation, which tends to be slower [OGA-97]. In this project the plant is relatively slow and the speed of the software was high enough, so the observer was designed with the same dynamics as the plant model.
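To make the observer's role concrete, here is a minimal sketch of a discrete-time full state observer update; it assumes the reconstructed plant matrices, a simple forward Euler discretization, and an illustrative observer gain K_e, none of which are the project's exact values.

```python
# Minimal sketch of a discrete-time full state observer (forward Euler integration).
# Plant matrices follow eq. 4-17/4-18; the observer gain Ke is an illustrative value.
import numpy as np

T = 300e-6                                      # sampling time, 300 us
A = np.array([[0.0, 1.0], [-158.11, -0.04]])
B = np.array([[13.15], [0.0]])
C = np.array([[0.22, 0.0]])
Ke = np.array([[50.0], [800.0]])                # assumed observer gain

x_hat = np.zeros((2, 1))                        # observed states: position and velocity

def observer_step(u, y):
    """Advance the observer by one sample, given control u and measured output y."""
    global x_hat
    x_hat_dot = A @ x_hat + B * u + Ke * (y - C @ x_hat)
    x_hat = x_hat + T * x_hat_dot               # forward Euler step
    return x_hat
```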
4.2.2 LQR Method
LQR is a controller based on the Pole Placement technique, also known as Full State Feedback,
described in detail in the previous section. As it was shown before, the system can be made
stable by introducing feedback control.
The control signal was defined earlier by the control law:

u(t) = −K x(t) (eq. 4-22)
The K matrix acts as a compensator that makes sure the input signal u will bring the system to stability as time passes, similar to the PID controller. Instead of changing the K_I, K_P, and K_D coefficients, the poles of the overall system are changed via the K matrix, and it is certain that the system will be stable even before the simulation, because stable poles have been chosen, i.e. poles to the left of the imaginary axis. This is an advantage over the PID, as is the fact that the states and their dynamics are observable.
There is another similarity between the Pole Placement controller and the PID controller: as these two controllers are tuned, one cannot be sure that the resultant performance is satisfactory unless it is simulated, and there is no scientific procedure for the tuning. Where should the poles be?
What is the difference in performance as the poles move in the region between −1000 and −3? Only simulation can provide the answer.
There are other limitations and problems when selecting the poles freely, according to Dr. Jake Abbott from the University of Utah. In the example of the spring-mass-damper system used before, with Pole Placement it is possible to get a stable output: the position will be what it was set to be. Placing poles at arbitrary desired locations, however, does not give any insight into the behavior of the states. In other words, the position will be stable, but it is unknown how the system will achieve that result. There is no logical method to make sure that the internal dynamics are acceptable. For example, the accelerations could be too high for mechanical systems, such as an elevator or a car.
Another issue is the controller effort, or the value of the controller's output signal. For instance, suppose the poles were chosen and the simulated system is very fast and stable, but the controller would have to send 1000 V to a motor that can only receive 10 V. Pole placement and PID are not able to address this issue analytically or systematically.
Those issues are addressed in a very systematic and mathematical manner in LQR. In fact, LQR is not a different controller: it is still the same Pole Placement method, but the LQR method provides a way of knowing and controlling what will happen with the states as stability is achieved. For example, as the system is reaching the set position, it is certain that it will be stable, and thanks to LQR the acceleration and velocity stay within the allowed limits. LQR also makes the tuning process easier.
Before introducing the details of the LQR method, it is helpful to summarize:
1) LQR is not a different controller; it is Pole Placement with a smart and intentional choice of the K matrix.
2) Why LQR? Pole Placement ensures stability, and LQR ensures that the system's internal states and the controller effort are within the desired bounds, thus making the system feasible in practice.
3) LQR is a method for finding an optimized matrix K that gives the desired performance.
In the Pole Placement approach, one must first choose the stable poles, then find the matrix K, and finally the performance is evaluated using simulations and tests. The goal of LQR is to find a K matrix that gives the desired performance.
The LQR method can be broken down into steps:
1) Choose the matrices Q and R
2) Form the cost function J
3) Find the matrix P by solving the algebraic Riccati equation
4) Find the matrix K
The first two steps of the LQR method are closely related. Using a scalar cost function, LQR penalizes, or weighs, the system's states and inputs with the help of carefully chosen weighting matrices Q and R. It is possible to make the acceleration smaller with respect to the velocity or position, for instance, by the choice of the corresponding diagonal element of the Q matrix. Similarly, by the choice of the R matrix it is possible to weigh the inputs, to prevent the controller from giving 100 V to the motor, for example. Thus, there is a certain form of control over the system's behavior prior to simulation, and it is possible to find a system that meets the expectations faster; instead of guessing blindly, there is a mathematical way to tune. The single scalar J, found by means of the cost function, reflects the value of all of the states and inputs. Being a regulator, LQR has a set point value of zero and wants to drive the states to zero, and it therefore minimizes the cost function to achieve that [ABB-12].
Cost function:

J = ∫₀^∞ (x* Q x + u* R u) dt (eq. 4-23)

where x* is the conjugate transpose of the vector x, and Q and R are positive-definite matrices (to satisfy the condition that J has a minimum).
Looking at the dimensions of the matrices and vectors in equation 4-23, it is clear that J is a scalar [OGA-97], [ABB-12]:

x*(1-by-n) Q(n-by-n) x(n-by-1) + u*(1-by-p) R(p-by-p) u(p-by-1) = {1-by-1} (eq. 4-24)
Once Q and R are chosen, the third step is to find the matrix P. Matrix P must be calculated by solving the following algebraic Riccati equation:

Aᵀ P + P A − P B R⁻¹ Bᵀ P + Q = 0 (eq. 4-25)
Matrix P is just an intermediate calculation step, and its meaning is not of interest here; the goal is simply to find the optimized matrix K.
Finally, having matrix P allows us to calculate the optimal feedback gain matrix K:
K = R⁻¹ Bᵀ P (eq. 4-26)
And the control law of LQR becomes:
u(t) = −K x(t) = −R⁻¹ Bᵀ P x(t) (eq. 4-27)
Since the Observer design and the pole placement design for the plant can be carried out independently, the LQR method can be applied to optimize the K matrix for the pole placement design and the K_e matrix for the Observer design.
Since the system here is of second order, the Riccati equation was solved by hand and verified using freely available online tools, such as solvers and Matlab™ scripts.
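The sketch below runs through steps 1 to 4 with SciPy's Riccati solver, assuming the plant matrices as reconstructed in eq. 4-17 and 4-18 and illustrative weighting matrices Q and R (not the values tuned in this project).

```python
# Sketch of the LQR steps: choose Q and R, solve the Riccati equation for P,
# and compute K = R^-1 B^T P. Plant matrices follow eq. 4-17/4-18; Q and R are
# illustrative weights, not the values tuned in this project.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-158.11, -0.04]])
B = np.array([[13.15], [0.0]])
Q = np.diag([100.0, 1.0])     # weight the position state more heavily than velocity
R = np.array([[0.1]])         # weight on the controller effort

P = solve_continuous_are(A, B, Q, R)    # algebraic Riccati equation (eq. 4-25)
K = np.linalg.inv(R) @ B.T @ P          # optimal gain (eq. 4-26)
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

Raising the first diagonal entry of Q penalizes position deviations more heavily, while raising R reduces the controller effort; this is the systematic tuning handle described above.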
LQR was implemented entirely in the CompactRIO™, using the FPGA only as input and output ports and to implement a Butterworth filter for the input and output. It was not possible to implement LQR in the FPGA, and since it was implemented in the CompactRIO, the sampling time was 300 µs. Higher sampling rates produce disconnection problems with the CompactRIO™ because of data overflow.
The continuous-time equations were converted into discrete ones using the Z-transformation; the resulting difference equations (eq. 4-28 to 4-30) update the observer states and the integrated variable once per sample and compute the control signal as u(k) = −K x̃(k), where the sampling time is T = 300 µs, the index int stands for the integrated variable, the scalar gain equals 5, and the feedback gain vector is

K = (954.37  48400.00) (eq. 4-31)
To summarize, LQR is an optimization method for the Pole Placement controller. For this SISO system, the controller only receives the position feedback from the optical sensor. Since Pole Placement requires all states to be fed back (hence the other name, Full State Feedback Control), the Observer was implemented to simulate all the states and to provide the missing information. The controller itself is based on the State Space model of the plant, complemented with the position feedback coming from the optical position sensor and an observer to estimate the velocity and the position. The simulated position is corrected online by the Observer using the data from the position sensor. The position and velocity are fed back to the controller, and the controller sends a control signal to the plant, which completes the loop.
4.3 Adaptive Control: Direct MRAC with Unnormalized Adaptive Laws
Adaptive control is used mainly when the plant parameters change significantly or are unknown.
In such situations, a regular controller would have a degraded performance or would become
unstable. There are two commonly used adjustment algorithms:
1. The MIT rule based on the Gradient
2. The Lyapunov Stability Theory, specifically, the Second Method of Lyapunov.
The adaptive controller that was implemented in this work is based on Model Reference Adaptive Control (MRAC) and is called Direct MRAC with Unnormalized Adaptive Laws, where Lyapunov's Second Method is used for the adaptation of the controller parameters.
Figure 4-7: Typical block diagram of the direct MRAC control algorithm [FEI-04].
The output of the model and the output of the plant are compared, and the error is fed to the Adjustment Mechanism block. Based on the error, the controller signal, and the plant output, the adjustment is made. The idea of the reference model is very similar to the Observer used in LQR: the Observer is an online simulation of the plant that compensates for missing sensor signals, whereas here the reference model is a simulation of the plant that provides the ideal performance. The difference is that instead of a state space model a transfer function is used, and its output shows what the position should be. If the plant does not behave ideally, the adaptive law will modify the gains until that performance is reached.
The reference model is usually a modified version of the closed-loop transfer function of the plant with the desired poles and the controller. For example, the plant transfer function is determined, a PI controller is tuned, and the overall system is stable. For that overall system a closed-loop transfer function is found, and the closed-loop behavior of this system is exactly what MRAC must deliver: the closed-loop transfer function of the overall system is the reference model. This transfer function can then be modified, for example by increasing the damping ratio, and the adaptive controller forces the plant to behave like that model.
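A small sketch of this construction is given below: it forms the closed-loop transfer function of a hypothetical plant under a hypothetical PI controller with plain polynomial algebra, and that closed loop would then serve as the reference model; all coefficients are illustrative, not the ones used in this project.

```python
# Sketch: building a reference model as the closed-loop transfer function of a
# hypothetical plant G(s) = 2.9/(s^2 + 0.04 s + 158.1) under a hypothetical PI
# controller C(s) = (5 s + 50)/s with unity feedback. All coefficients are illustrative.
import numpy as np

G_num, G_den = [2.9], [1.0, 0.04, 158.1]
C_num, C_den = [5.0, 50.0], [1.0, 0.0]

L_num = np.polymul(C_num, G_num)        # open loop L(s) = C(s) G(s)
L_den = np.polymul(C_den, G_den)

Gm_num = L_num                          # closed loop Gm(s) = L / (1 + L)
Gm_den = np.polyadd(L_den, L_num)
print("reference model numerator:  ", Gm_num)
print("reference model denominator:", Gm_den)
```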
4.3.1 MRAC and Lyapunov's Second Method
The Second Method is based on the energy principle of stable systems: the total energy is positive and continuously decreasing. In Lyapunov's method this principle is generalized, and instead of the energy a generalized scalar function V(x,t), often called the Lyapunov function, is used. The Second Method of Lyapunov does not only show stability: it is also used to find a suitable adaptive law. The basics of the Second Method of Lyapunov are included in Appendix B.
The main challenge in using Lyapunov's Second Method is not the complexity of the theory, but finding a Lyapunov function V(x,t) that exists. There are no clear algorithms or directions on how to find it; it may become a trial-and-error process. Sometimes it is possible to use well-known functions as a starting point in the search. If no such function is found, nothing can be said about the stability. Sometimes Lyapunov-like functions are used instead, because they are easier to find, and instead of the Lyapunov stability theory several lemmas are used to show stability. Those lemmas resemble Lyapunov's stability theory, but are less restrictive.
Before introducing the control law and the adaptive law, it is helpful to describe the system and
the assumptions in detail:
Given a Single Input - Single Output (SISO) LTI plant [FEI-04]:

ẋ_p = A_p x_p + B_p u_p (eq. 4-32)

y_p = C_pᵀ x_p (eq. 4-33)

The transfer function of the plant is given by:

y_p = G_p(s) u_p (eq. 4-34)

with

G_p(s) = k_p Z_p(s) / R_p(s) (eq. 4-35)

The relative degree of the plant is n* = 2.

The reference model is:

ẋ_m = A_m x_m + B_m r (eq. 4-36)

y_m = C_mᵀ x_m (eq. 4-37)

y_m = W_m(s) r (eq. 4-38)

The transfer functions of the reference model and of the plant are given by:

W_m(s) = k_m Z_m(s) / R_m(s) (eq. 4-39)

G_p(s) = k_p Z_p(s) / R_p(s) (eq. 4-40)
A plant with relative degree n* > 2 can be treated in a similar way. We assume that the plant and the reference model satisfy the assumptions listed in Appendix C.
Now that the plant and the reference model are described, the control law and the adaptation law can be introduced. This MRAC design for a SISO system has its own control law, described by the following equation:
u_p = θ₁ᵀ [α(s)/Λ(s)] u_p + θ₂ᵀ [α(s)/Λ(s)] y_p + θ₃ y_p + θ₄ r (eq. 4-41)

where α(s) = [s^(n−2), s^(n−3), …, s, 1]ᵀ for n ≥ 2, θ₃ and θ₄ ∈ R¹, θ₁ and θ₂ ∈ R^(n−1), and Λ(s) is an arbitrary monic Hurwitz polynomial of degree n−1 that contains Z_m(s) as a factor [FEI-04].
In state space, Ioannou and Sun [IOA-96-p359] describe the control law with the following state space equations:
ω̇₁ = F ω₁ + g u_p (eq. 4-42)

ω̇₂ = F ω₂ + g y_p (eq. 4-43)

u_p = θᵀ ω (eq. 4-44)

e₁ = y_p − y_m (eq. 4-45)

where ω₁(0) = 0, ω₂(0) = 0, ω₁, ω₂ ∈ R^(n−1), and

θ = [θ₁ᵀ, θ₂ᵀ, θ₃, θ₄]ᵀ (eq. 4-46)

ω = [ω₁ᵀ, ω₂ᵀ, y_p, r]ᵀ (eq. 4-47)
F = [−λ_(n−2)  −λ_(n−3)  ⋯  −λ₁  −λ₀;
     1  0  ⋯  0  0;
     0  1  ⋯  0  0;
     0  0  ⋯  1  0] (eq. 4-48)

g = (1, 0, …, 0)ᵀ (eq. 4-49)

Λ(s) = s^(n−1) + λ_(n−2) s^(n−2) + ⋯ + λ₁ s + λ₀ = det(sI − F) (eq. 4-50)
F and g are a state space realization of α(s)/Λ(s); therefore,

α(s)/Λ(s) = (sI − F)⁻¹ g [IOA-96-p338].
u_f = [1/(s + p₀)] u_p,   φ = [1/(s + p₀)] ω (eq. 4-51) [FEI-04]

where 1/(s + p₀) is a PT1 element (K = 1, T = 1/p₀), and the adaptive law is

θ̇ = −Γ e₁ φ sgn(k_p/k_m) (eq. 4-52) [IOA-96-p359]
where e₁ is the error, r is the input signal, and Γ is the adaptive gain matrix. The derivation of this adaptive law is included in Appendix C.
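To show the shape of such a gradient-type update in discrete time, here is a minimal sketch that integrates θ̇ = −Γ e₁ φ with a rectangular (Riemann) rule; the adaptive gain matrix follows the diagonal values quoted later in eq. 4-62, while the signal vector passed in is a placeholder, so this is an illustration rather than the project's implementation.

```python
# Minimal sketch of the gradient-type parameter update theta_dot = -Gamma * e1 * phi,
# integrated with a rectangular (Riemann) rule. Gamma uses the diagonal values quoted
# in eq. 4-62; the filtered signal vector phi is a placeholder argument.
import numpy as np

T = 300e-6                              # sampling time, 300 us
Gamma = np.diag([5.0, 2.0, 20.0])       # adaptive gain matrix (cf. eq. 4-62)
theta = np.zeros(3)                     # adapted controller parameters

def adapt_step(e1, phi):
    """Update the parameter vector theta from the error e1 and signal vector phi."""
    global theta
    theta_dot = -Gamma @ phi * e1
    theta = theta + T * theta_dot       # one rectangular integration step
    return theta
```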
Figure 4-9: MRAC Control Scheme with Unnormalized Adaptive Laws.
It is useful to mention key points of the adaptive control implemented:
1. The adaptive controller adjusts its parameters based on the changes in the plant or the environment, expressed in the signals entering the adaptive law block. The adaptive law describes how the controller changes, and it is therefore the key element of the controller. This controller only adapts its gains based on the feedback signals u_p, y_p, e₁, and r and the above-mentioned adaptive law; it does not switch to an alternative control algorithm depending on the situation.
2. The adaptive law used (eq. 4-52) was derived from a Lyapunov-like function. This particular law was used because of the similarities between this project and the project discussed by Fei and Song [FEI-04]. Their simulation showed promising results, and since deriving a proper Lyapunov function is a heuristic process, the same function was implemented to save time.
3. The controller itself is a mere sum of the signals used multiplied by the corresponding gains, as can be seen in the control law (eq. 4-41). This is almost exactly the same as the PID controller, where each part is multiplied by a gain and then added together to produce the control signal.
4. The main difference from the PID is the presence of multiple feedbacks, including internal ones. Those feedbacks graphically represent the internal workings of the algorithm, and within the scope of this project they do not have a practical meaning for the implementation or use. The gains were tuned heuristically, starting with the values used by Fei and Song [FEI-04].
5. Implementing the control law (eq. 4-41) in state space makes the controller look more complicated, because that single algebraic equation is broken down into four first-order state space equations. The extra blocks arising from those equations, together with the extra dependencies caused by the multiple feedbacks, may seem confusing, but they all add up to the solutions of the state space equations in the s-domain. If one solved the four state space equations using the Laplace transformation, one would get the same blocks as seen in the control diagram, and if that diagram is converted back into an algebraic equation, one obtains the aforementioned equation 4-41.
6. The controller has internal first-order filters (PT1 elements) for each signal, which add complexity to the diagram but do not change the way the controller functions.
4.3.2 Implementation of the MRAC Control Scheme
Since the set point for the position is zero, as before, this controller is also a regulator, in the same sense as LQR. Since in this case r = 0 and y_m = 0, it follows that e₁ = y_p − y_m = y_p, and the block diagram and the equations containing e₁ change slightly:
Figure 4-8: Implemented MRAC Control Scheme with unnormalized Adaptive Laws.
These equations represent the state space description of the control law with the gains used in this project:

ω̇₁ = −2 ω₁ + u_p (eq. 4-53)

ω̇₂ = −2 ω₂ + y_p (eq. 4-54)

They are represented in the diagram by the (sI − F)⁻¹ g blocks, which are the solutions to those equations in the s-domain.
For a regulator, the adaptive law takes a different form:

θ̇ = −Γ e₁ φ = −Γ y_p φ (eq. 4-55)
These equations represent the signal vector φ:

φ̇₀ = −p₀ φ₀ + ω₁ (eq. 4-56)

φ̇₁ = −p₀ φ₁ + ω₂ (eq. 4-57)

φ̇₂ = −p₀ φ₂ + y_p (eq. 4-58)

φ = [φ₀, φ₁, φ₂]ᵀ (eq. 4-59)

where:

p₀ = 1 (eq. 4-60)

ω = [ω₁, ω₂, y_p]ᵀ (eq. 4-61)

Γ = [5 0 0; 0 2 0; 0 0 20] (eq. 4-62)
Laplace transformation:

Φ₀(s) = [1 / ((s + 2)(s + 1))] U_p(s) (eq. 4-63)

Φ₁(s) = [1 / ((s + 2)(s + 1))] Y_p(s) (eq. 4-64)

Φ₂(s) = [1 / (s + 1)] Y_p(s) (eq. 4-65)

Ω₁(s) = [1 / (s + 2)] U_p(s) (eq. 4-66)

Ω₂(s) = [1 / (s + 2)] Y_p(s) (eq. 4-67)
Using the bilinear (Tustin) substitution s = (2/T)(1 − z⁻¹)/(1 + z⁻¹), the z-domain transfer functions become:

Φ₀(z)/U_p(z) = T² (z⁻² + 2 z⁻¹ + 1) / [z⁻² (2T² − 6T + 4) + z⁻¹ (4T² − 8) + (2T² + 6T + 4)] (eq. 4-68)

Φ₁(z)/Y_p(z) = T² (z⁻² + 2 z⁻¹ + 1) / [z⁻² (2T² − 6T + 4) + z⁻¹ (4T² − 8) + (2T² + 6T + 4)] (eq. 4-69)

Φ₂(z)/Y_p(z) = T (z⁻¹ + 1) / [z⁻¹ (T − 2) + (T + 2)] (eq. 4-70)

Ω₁(z)/U_p(z) = T (z⁻¹ + 1) / [z⁻¹ (2T − 2) + (2T + 2)] (eq. 4-71)

Ω₂(z)/Y_p(z) = T (z⁻¹ + 1) / [z⁻¹ (2T − 2) + (2T + 2)] (eq. 4-72)

where T is the sampling time, equal to 300 µs.
Using the difference equations obtained after the Z-transformation:

φ₀(k) = [T² (u_p(k) + 2 u_p(k−1) + u_p(k−2)) − φ₀(k−1)(4T² − 8) − φ₀(k−2)(2T² − 6T + 4)] / (2T² + 6T + 4) (eq. 4-73)

φ₁(k) = [T² (y_p(k) + 2 y_p(k−1) + y_p(k−2)) − φ₁(k−1)(4T² − 8) − φ₁(k−2)(2T² − 6T + 4)] / (2T² + 6T + 4) (eq. 4-74)

φ₂(k) = [T (y_p(k) + y_p(k−1)) − φ₂(k−1)(T − 2)] / (T + 2) (eq. 4-75)

ω₁(k) = [T (u_p(k) + u_p(k−1)) − ω₁(k−1)(2T − 2)] / (2T + 2) (eq. 4-76)

ω₂(k) = [T (y_p(k) + y_p(k−1)) − ω₂(k−1)(2T − 2)] / (2T + 2) (eq. 4-77)
Finally:

θ̇₀ = −Γ(0,0) e₁ φ₀ (eq. 4-78)

θ̇₁ = −Γ(1,1) e₁ φ₁ (eq. 4-79)

θ̇₂ = −Γ(2,2) e₁ φ₂ (eq. 4-80)

θ₀(k) = θ₀(k−1) + (T/2) [θ̇₀(k) + θ̇₀(k−1)] (eq. 4-81)

and, analogously for θ₁ and θ₂, the control signal is:

u_p = θ₁ ω₁ + θ₂ ω₂ + θ₀ y_p − [Γ(0,0) φ₀² + Γ(1,1) φ₁² + Γ(2,2) φ₂²] e₁ (eq. 4-82)
The MRAC code was compiled on the CompactRIO™, using the FPGA only as input and output ports and to filter the signals. The controller relies on the set value, three feedback signals (the control signal, the error signal, and the position signal), and the adaptive law to calculate the control signal.
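Each of the internal PT1 filter states above follows the same update pattern; the sketch below shows that pattern for a single filter 1/(s + a), using the Tustin-discretized difference equation and a pole of a = 2 as suggested by eq. 4-76 and 4-77, purely as an illustration of the structure rather than the deployed code.

```python
# Sketch of one internal PT1 filter state updated with the Tustin-discretized
# difference equation w(k) = ((2 - a*T) w(k-1) + T*(u(k) + u(k-1))) / (2 + a*T),
# assuming a filter pole a = 2 as in eq. 4-76 and 4-77. Illustrative only.
T = 300e-6      # sampling time, 300 us
a = 2.0         # pole of the filter 1/(s + a)

w_prev = 0.0
u_prev = 0.0

def pt1_step(u):
    """Advance the filter 1/(s + a) by one sample of its input u."""
    global w_prev, u_prev
    w = ((2.0 - a * T) * w_prev + T * (u + u_prev)) / (2.0 + a * T)
    w_prev, u_prev = w, u
    return w
```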
5. Results
The objective of this project was to actively damp vibrations in the beam and to compare which of the three controllers, PID, LQR, or MRAC, gives the best result. The most important frequency is the natural frequency, because there the vibrations have the most energy and the largest amplitude. The controllers were therefore tested primarily on damping vibrations at 19.7 Hz.
The two graphs below show the excitation signal in green and the position of the beam in red before any control is applied.
Figure 5-1: Sinusoidal excitation signal with the frequency of 19.7 Hz and Amplitude 2V.
Figure 5-2: Oscillations of the beam without damping under a sinusoidal excitation.
Figure 5-3: Oscillations of the beam without damping under an impulse excitation.
5.1 PID Controller.
The gains of the PID controller were K_P = 20, K_I = 0, and K_D = 0.0117.
Figure 5-4: System response to a sine wave with the frequency of 19.7 Hz and amplitude 1 V.
Disturbance cancellation was 90% with the settling time of 0.7 seconds.
Figure 5-5: Impulse response of the system.
Settling time of impulse response was 1.6 seconds.
5.2 LQR Controller
The graph below shows the response of the system to the excitation by the function generator in real time. The input signal is a sine wave with a frequency of 19.7 Hz and an amplitude of 1 V. At time t = 1 s the controller is turned on, and the vibrations begin to diminish.
Figure 5-6: Response of the LQR Controller to the sinusoidal excitation in real time.
For the sinusoidal signal the settling time was 1.8 seconds and the disturbance cancellation was 90.5 %.
The next graph shows the impulse response in real time to two distinct disturbances; the beam was disturbed by hand.
Figure 5-7: Response of the LQR Controller to the impulse signal in real time.
Settling time of the impulse response was 0.7 seconds.
5.3 MRAC
The graph below shows the response of the system to the excitation by the function generator in real time. The input signal is a sine wave with a frequency of 19.7 Hz and an amplitude of 1 V.
Figure 5-8: Response of the MRAC Controller to the sinusoidal signal in real time.
Disturbance cancellation was 92% and the settling time was 1.1 seconds.
Figure 5-9: Impulse response of the MRAC controller in real time.
As for the impulse response, settling time was about 0.95 seconds.
Controller                        PID     LQR     MRAC
Disturbance cancellation (%)      90      90.5    92
Settling time, sine wave (s)      0.7     1.8     1.1
Settling time, impulse (s)        1.6     0.7     0.95

Table 5-1: Summary of the results.
6. Conclusion
This study showed how Direct MRAC compares with the well-known non-adaptive controllers, LQR and PID, in cancelling the vibration induced in the flexible cantilever beam when it is excited at its natural frequency with a sinusoidal signal, and when it is subjected to an impulse signal. No single controller was found to be entirely advantageous over its counterparts.
The non-adaptive controllers were tuned to the natural frequency of the beam. Unlike many other studies, the results here are not simulated but tested on a physical system. From the results it follows that MRAC had slightly better disturbance cancellation than the other two, but a longer settling time. The PID controller settled faster than the others when excited by a sinusoidal signal, and LQR had the fastest impulse response.
As stated before, the low frequency of the plant allows more time for executing a program, so larger and more complex algorithms can be implemented with this hardware setup. Future work should focus on even more sophisticated adaptive methods to take full advantage of this hardware setup.
Appendix A
Main features and characteristics of the equipment and software used:
Products from National Instruments, Inc.
LabView version 12.
The CompactRIO NI 9074.
400 MHz industrial real-time processor for control, data logging and analysis.
2M gate, 8-slot FPGA chassis for custom I/O timing, control and processing.
Two 10/100 BASE-T Ethernet ports; RS232 serial port for connection to peripherals.
−20 to 55 °C operating temperature range.
19 to 30 VDC power supply input.
Analog Output Module NI 9263:
4 analog output channels, 100 kS/s, 16-bit resolution.
Analog Input Module NI 9205:
32 analog input channels, ±200 mV to ±10 V, 16-bit, 250 kS/s.
Laser Triangulation Sensor OptoNCDT 1302 from Micro Epsilon Messtechnik GmbH & Co.
KG.
Measuring rate: 750Hz
Measuring range: 20 mm
Analogue output: 4 to 20 mA (30 to 50 mm)
Products from PI Inc.:
P-876.A11 DuraAct™ patch transducer:
Dimensions: 61 mm x 35 mm x 0.4 mm
Minimal bending radius: 12 mm
Piezoceramic height: 100 µm
Electrical capacitance: 150 nF ± 20%
Relative lateral contraction: 1.6 µm/V
Minimum lateral contraction: 400 µm/V
E-413.D2 Piezo Amplifier:
Peak output power: 50 W
Gain: 50
Output voltage range: -100 to 400 V
Adjustable offset of the output voltage
Appendix B
Pole Placement and Observer
x̃̇(t) = A x̃(t) + B u(t) + K_e [y(t) − C x̃(t)]

e(t) = x(t) − x̃(t)

ė(t) = ẋ(t) − x̃̇(t) = (A − K_e C) e(t) (eq. B-4)
When Full State Observer is implemented in the feedback loop, the control law changes to:
u(t) = −K x̃(t)

x̃̇(t) = (A − K_e C) x̃(t) + B u(t) + K_e y(t) (eq. B-8)
Combining the two, an equation that describes the system dynamics is obtained:

d/dt [x; x − x̃] = [A − BK  BK; 0  A − K_e C] [x; x − x̃] (eq. B-9)
With the following characteristic equation:

|sI − (A − BK)| · |sI − (A − K_e C)| = 0 (eq. B-10)

or

|sI − A + BK| · |sI − A + K_e C| = 0 (eq. B-11)
Lyapunov's Second Method
If, for some system defined by the equation

ẋ(t) = f(x, t) (eq. B-12)

the unique solution for the given initial conditions x₀ and t₀ is φ(t; x₀, t₀), with

φ(t₀; x₀, t₀) = x₀ (eq. B-13)
where the equilibrium state is defined as f(x_e, t) = 0 for all t, and there exists a scalar function V(x, t) with continuous first partial derivatives such that V(x, t) is positive definite and its derivative V̇(x, t) is negative definite (note: a common mistake is forgetting the chain rule of differentiation), then the equilibrium state at the origin is uniformly asymptotically stable.
In addition, if V(x,t) → ∞ as ||x|| → ∞, then the equilibrium state at the origin is uniformly asymptotically stable in the large [OGA-97-p897-p902].
For Linear Time Invariant (LTI) systems, Lyapunov's function can be represented with vectors and matrices by the following two equations:

V(x) = x* P x (eq. B-15)

V̇(x) = −x* Q x (eq. B-16)

where the stability criterion is that P and Q are positive-definite matrices [OGA-97-p908].
Ogata recommends specifying a positive definite matrix Q first, and then checking whether P is positive definite from

A* P + P A = −Q (eq. B-18)
where A* is the Hermitian (conjugate) transpose of A.
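This check is easy to carry out numerically; the sketch below does it with SciPy for the plant matrix as reconstructed in eq. 4-17, with Q chosen as the identity matrix purely as an example.

```python
# Sketch of the procedure above: pick a positive definite Q, solve A*P + PA = -Q
# for P, and check that P is positive definite. A follows the reconstruction in
# eq. 4-17; Q = I is chosen purely as an example.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-158.11, -0.04]])
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)     # solves A^T P + P A = -Q
print("P =", P)
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
```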
Ioannou and Sun use a slightly different form of these equations. Since a few other references use that notation as well, it is listed here too:

Aᵀ P + P A = −q qᵀ (eq. B-19)

where q is a vector that must be specified, similarly to the matrix Q in Lyapunov's function.
The Kalman-Yakubovich-Popov (KYP) lemma extends this definition for a stable and controllable system by introducing another equation in addition to the one above:

P B − C = q √(2d) (eq. B-20)

If both equations are true, then the transfer function of the system is strictly positive real (SPR):

G(s) = d + Cᵀ (sI − A)⁻¹ B (eq. B-21)

It is also useful to introduce the Meyer-Kalman-Yakubovich (MKY) lemma. Given a stable matrix A, vectors B and C, and a scalar d ≥ 0: if G(s) = d + Cᵀ (sI − A)⁻¹ B is SPR, then there exist a matrix P = Pᵀ > 0 and a vector q such that

Aᵀ P + P A = −q qᵀ (eq. B-22)

P B − C = q √(2d) (eq. B-23)
The above-mentioned lemmas are used to show the stability of systems, similarly to the Second Method of Lyapunov, but in cases where Lyapunov-like functions are used.
Appendix C
Theoretical Background on Direct MRAC with Unnormalized Adaptive Gains
The plant was described with the following assumptions [FEI-04]:
P1. Z_p(s) is a monic Hurwitz polynomial whose degree m_p is known.
P2. The relative degree n* = n_p − m_p of G_p(s) and the sign of the high frequency gain k_p are known.
Reference model assumptions [FEI-04]:
M1. Z_m(s) and R_m(s) are monic Hurwitz polynomials of degree q_m and p_m, respectively, where p_m ≤ n_p.
M2. The relative degree n_m* of W_m(s) is the same as that of G_p(s), i.e. n_m* = n*.
A Hurwitz polynomial is a polynomial whose coefficients are positive real numbers and whose zeros are located in the left half-plane of the complex plane, that is, the real part of every zero is negative.
A monic polynomial is a polynomial whose term of highest degree has a coefficient of 1 (e.g. s² − 3s + 1).
The definitions and derivations of the mathematical equations of the MRAC controller used in the project were taken from the paper by Fei and Song [FEI-04]:
Through some mathematical substitution and derivation, we can get the error equations as follows:

ė = A_c e + B̄_c ρ* (u_p − θ*ᵀ ω) (eq. C-1)

e₁ = y_p − y_m = C_cᵀ e = W_m(s) ρ* (u_p − θ*ᵀ ω) (eq. C-2)
where e is the state error of the plant, augmented with the filter states ω₁ and ω₂, with respect to the reference model, and A_c and B_c are the corresponding closed-loop matrices formed from A_p, B_p, C_p, F, g, and the desired parameter vector θ* (eq. C-3, C-4).
Using the identity (s + p₀)(s + p₀)⁻¹ = 1 for some p₀ > 0, we get

ė = A_c e + B̄_c ρ* (s + p₀)(u_f − θ*ᵀ φ) (eq. C-5)

e₁ = C_cᵀ e = W_m(s)(s + p₀) ρ* (u_f − θ*ᵀ φ) (eq. C-6)
with the filtered signals

u_f = [1/(s + p₀)] u_p,   φ = [1/(s + p₀)] ω (eq. C-7)
We choose p₀ so that W_m(s)(s + p₀) is strictly positive real (eq. C-8).
Since we have

e₁ = W_m(s)(s + p₀) ρ* (u_f − θ*ᵀ φ) (eq. C-9)

and substituting the filtered control law u_f = θᵀ φ into e₁, A_c, and B_c, we obtain the error equations:

ė = A_c e + B̄_c ρ* (s + p₀) θ̃ᵀ φ (eq. C-10)

e₁ = C_cᵀ e = W_m(s)(s + p₀) ρ* θ̃ᵀ φ (eq. C-11)

where

θ̃ = θ − θ* (eq. C-12)
The equation for e₁ can be transformed into the desired form:

ė = A_c e + B̄_c1 ρ* θ̃ᵀ φ (eq. C-13)

e₁ = C_cᵀ e (eq. C-14)

where

B̄_c1 = A_c B̄_c + B̄_c p₀ (eq. C-15)

and the feedthrough term is

d = 0 (eq. C-16)

due to n* = 2.
The Lyapunov-like function considered is

V(e, θ̃) = eᵀ P_c e / 2 + |ρ*| θ̃ᵀ Γ⁻¹ θ̃ / 2 (eq. C-17)

where Γ = Γᵀ > 0 and P_c = P_cᵀ > 0 satisfies the MKY lemma:

A_cᵀ P_c + P_c A_c = −q qᵀ − ν L (eq. C-18)

P_c B̄_c1 = C_c (eq. C-19)

where q is a vector, ν > 0, and L = Lᵀ > 0. The derivative of V along the error dynamics is

V̇ = −eᵀ q qᵀ e / 2 − ν eᵀ L e / 2 + |ρ*| θ̃ᵀ Γ⁻¹ θ̇ + e₁ ρ* θ̃ᵀ φ (eq. C-20)

Since e₁ = C_cᵀ e and ρ* = |ρ*| sgn(ρ*), by choosing

θ̇ = −Γ e₁ φ sgn(ρ*) (eq. C-21)

we get

V̇ ≤ −ν eᵀ L e / 2 ≤ 0 (eq. C-22)
Appendix D
Butterworth Filter
The Butterworth filter is a type of signal processing filter designed to have as flat a frequency response as possible in the passband.
Gain of the low-pass filter:

G(ω) = 1 / √(1 + (ω/ω_c)^(2N)) (eq. D-1)

where:
1. ω is the angular frequency in radians per second and ω_c is the cutoff frequency.
2. N is the number of poles (the filter order).
The gain G(ω) of an n-th order Butterworth low-pass filter is given in terms of the transfer function H(s) as:

G²(ω) = |H(jω)|² = G₀² / (1 + (ω/ω_c)^(2n)) (eq. D-2)

where n is the order of the filter, ω_c is the cutoff frequency, and G₀ is the DC gain.
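As a quick illustration of such a filter, the sketch below designs a digital low-pass Butterworth filter with the 100 Hz cutoff mentioned in section 4, assuming a second-order filter and a 10 kHz sampling rate (the 100 µs FPGA loop); the order and sampling rate are assumptions for the example, not a description of the exact FPGA implementation.

```python
# Sketch: a second-order digital Butterworth low-pass filter with a 100 Hz cutoff,
# assuming a 10 kHz sampling rate (the 100 us FPGA loop). Order and sampling rate
# are assumptions for the example, not the exact FPGA implementation.
import numpy as np
from scipy.signal import butter, freqz

fs = 10_000.0       # assumed sampling rate in Hz
fc = 100.0          # cutoff frequency in Hz
b, a = butter(N=2, Wn=fc, btype='low', fs=fs)

w, h = freqz(b, a, worN=[fc], fs=fs)
print("gain at 100 Hz:", np.abs(h)[0])     # should be about 1/sqrt(2), i.e. -3 dB
```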
Appendix E

θ̇₀ = −Γ(0,0) e₁ φ₀ (eq. E-1)

θ̇₁ = −Γ(1,1) e₁ φ₁ (eq. E-2)

θ̇₂ = −Γ(2,2) e₁ φ₂ (eq. E-3)

θ₀(k) = θ₀(k−1) + (T/2) [θ̇₀(k) + θ̇₀(k−1)] (eq. E-4)

u_p = θ₁ ω₁ + θ₂ ω₂ + θ₀ y_p − [Γ(0,0) φ₀² + Γ(1,1) φ₁² + Γ(2,2) φ₂²] e₁ (eq. E-5)
These five equations were implemented in five subVIs, similarly to the state space equations of LQR. However, instead of vector multiplication, the vectors were decomposed into elements to reduce the required computing power.
After obtaining the derivatives θ̇ᵢ, one must integrate them to find the adaptive gains. Riemann integration was used to accomplish that. Once the gains are updated, the control law adds all the signals to calculate the control signal U_p in the final equation.
Figure E-6: Main subVI.
Appendix F

LabVIEW System Identification Toolkit

The following text is based on the LabVIEW manual for the System Identification Toolkit, which documents the General Linear Polynomial Model of a transfer function. This tool and this specific method were used to obtain the plant transfer function in this project.
The General Linear Polynomial Model is defined by the following equation:

y(k) = G(q⁻¹, θ) u(k) + H(q⁻¹, θ) e(k) (eq. F-1)
u(k) and y(k) are the input and the output of the system, respectively.
e(k) is zero-mean white noise, or the disturbance of the system.
G(q⁻¹, θ) is the transfer function of the deterministic part of the system.
H(q⁻¹, θ) is the transfer function of the stochastic part of the system.
The deterministic transfer function specifies the relationship between the output and the input signal, while the stochastic transfer function specifies how the output is affected by the disturbance.
The term q⁻¹ is the backward shift operator, which is defined by the following equation:

q⁻¹ u(k) = u(k − 1) (eq. F-2)

d defines the number of delay samples between the input and the output.
G(q⁻¹, θ) and H(q⁻¹, θ) are rational polynomial functions defined by the following equations:

G(q⁻¹, θ) = B(q, θ) / [A(q, θ) F(q, θ)] (eq. F-3)

H(q⁻¹, θ) = C(q, θ) / [A(q, θ) D(q, θ)] (eq. F-4)
The following equations define A(q), B(q), C(q), D(q), and F(q):

A(q) = 1 + a₁ q⁻¹ + a₂ q⁻² + ⋯ + a_na q^(−na) (eq. F-5)

B(q) = b₀ + b₁ q⁻¹ + ⋯ + b_(nb−1) q^(−(nb−1)) (eq. F-6)

C(q) = 1 + c₁ q⁻¹ + c₂ q⁻² + ⋯ + c_nc q^(−nc) (eq. F-7)

D(q) = 1 + d₁ q⁻¹ + d₂ q⁻² + ⋯ + d_nd q^(−nd) (eq. F-8)

F(q) = 1 + f₁ q⁻¹ + f₂ q⁻² + ⋯ + f_nf q^(−nf) (eq. F-9)

where na, nb, nc, nd, and nf are the orders of the corresponding polynomials.