ELET442 - Artificial Neural Networks (ANNs)
o X1, X2, …, Xn (inputs to the neuron)
o Y (output of the neuron)
o wi (adjustable weights)
o Bi (given bias value)
Most Commonly Used Types of Activation Functions
(Figure: neuron model; inputs X1…X4 feed the summing junction ∑, producing the net input Yin.)
Solution 1
Processing ➔ Yin = (3)(0.15) + (2)(−0.1) + (0)(0.8) + (−2)(−0.75) = 1.75

3. For Hyperbolic Tangent Function:

Y = f(1.75) = (e^1.75 − e^−1.75) / (e^1.75 + e^−1.75) = 0.9414
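The weighted-sum-then-activation step can be sketched in code; the inputs and weights below are the ones from the worked example.

```python
import math

# Inputs and weights from the worked example
inputs = [3, 2, 0, -2]
weights = [0.15, -0.1, 0.8, -0.75]

# Net input: weighted sum of the inputs (no bias term in this example)
yin = sum(x * w for x, w in zip(inputs, weights))

# Hyperbolic tangent activation
y = math.tanh(yin)

print(round(yin, 2), round(y, 4))  # 1.75 0.9414
```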
Example 3: Find the output of the given ANN, if the hidden-layer neuron has a logistic sigmoid as its activation function and the output neuron has a binary activation function.
(Figure: network diagram; the values 0.45, −0.25, 0.6, 0.85, and 0.015 are the signals and weights used in the calculation below, leading to the output Yout.)
Yin_H1 = −0.5
Yo_H1 = f(Yin_H1) = 1 / (1 + e^0.5) = 0.3775
Yout_in = (0.3775)(0.45) + (0.6)(−0.25) + (0.015)(0.85) = 0.033
Since the output neuron has a binary activation and Yout_in = 0.033 ≥ 0, the final output is Yout = 1.
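A minimal sketch of this forward pass, assuming the hidden neuron's net input is −0.5 (the value the logistic sigmoid maps to 0.3775) and taking the remaining signals and weights from the calculation above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_step(x):
    # Binary activation: output 1 when the net input is non-negative
    return 1 if x >= 0 else 0

# Assumed net input of the hidden neuron; the logistic sigmoid maps -0.5
# to 0.3775, the value that appears in the output-layer sum
yin_h1 = -0.5
yo_h1 = sigmoid(yin_h1)

# Output neuron: weighted sum of the three incoming signals from the figure
yout_in = yo_h1 * 0.45 + 0.6 * (-0.25) + 0.015 * 0.85
yout = binary_step(yout_in)

print(round(yout_in, 3), yout)  # 0.033 1
```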
✓ When outputs are directed back as inputs to nodes in the same or a preceding layer, a feedback network is formed.
TYPES OF ARTIFICIAL NEURAL NETWORKS
Feedback and Recurrent Networks
• How many input and output neurons? Ans: 4 input and 2 output neurons.
• How many hidden layers does this network have? Ans: 3 layers.
LEARNING & TRAINING OF ANNs
Once a network has been structured for a particular application, it is ready for
training. At the beginning, the initial weights are chosen randomly and then
the training or learning begins. There are two approaches in training;
Supervised and Unsupervised.
In all of the neural paradigms, the application of an ANN involves two phases:
(1) Learning & Training phase
(2) Recall phase
▪ In the learning & training phase (usually offline), the ANN is trained until it has learned its task (through the adaptation of its weights).
▪ In the recall phase (usually online), the ANN is used to solve tasks similar to those it was trained on.
✓ An ANN solves a task when its weights are adapted through a learning
phase.
✓ They are given training patterns and their weights are adjusted
iteratively until the output error function is minimized.
BP is used to update the weights to minimize the error function.
Back-propagation Training Algorithm (BP)
Ok = f(x) = 1 / (1 + e^−x)
δk = Ok (1 − Ok) (tk − Ok)
Δw(jk) = α Oj δk
w(jk)(t+1) = w(jk)(t) + Δw(jk)
Δw(ij) = α xi δj
✓ Small values of the learning rate do not correct the current error as quickly, but taking small steps in correcting errors gives a good chance of converging to the best minimum.
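The output-layer update equations can be sketched for a single weight; the numeric values in the example call below are illustrative, not taken from the slides.

```python
def bp_output_update(o_k, o_j, target, w_jk, alpha):
    """One delta-rule update of an output-layer weight w(jk)."""
    delta_k = o_k * (1 - o_k) * (target - o_k)  # error term of output neuron k
    dw = alpha * o_j * delta_k                  # weight change Δw(jk)
    return w_jk + dw, delta_k

# Illustrative values (not taken from the slides)
new_w, delta = bp_output_update(o_k=0.6, o_j=0.5, target=1.0, w_jk=0.4, alpha=0.6)
print(round(delta, 4), round(new_w, 4))  # 0.096 0.4288
```

The factor o_k(1 − o_k) is the derivative of the logistic sigmoid, which is why this form of δk applies only when the output neuron uses that activation.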
Flow diagram of the ANN with back propagation algorithm
Summary of Back-propagation Training Algorithm
https://www.youtube.com/watch?v=WZDMNM36PsM
Example 4:
Learning example for AND gate using Delta Rule:
(Figure: 2-2-1 network for this example, with inputs X1 and X2, hidden neurons H1 and H2, and output neuron O1. Given: step activation listed for the input stage, learning rate α = 0.6, target output = 1.0, weight X2 to H2 = 0.9; the remaining input and weight values shown in the figure are 0.3, 0.8, 0.6, 0.7, 0.4, 0.9, and 0.5.)
1. Calculate the outputs of all neurons in the hidden layer:
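This hidden-layer step can be sketched as follows; since the figure's exact weight placement is not recoverable from the extracted text, the inputs and weights below are assumptions for illustration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed values: the figure's exact weight placement is not recoverable
# from the extracted text, so these inputs and weights are illustrative.
x = [0.8, 0.5]                      # inputs X1, X2
w_hidden = [[0.3, 0.9],             # weights into H1
            [0.7, 0.9]]             # weights into H2

# Step 1: net input and sigmoid output of each hidden neuron
for j, w in enumerate(w_hidden, start=1):
    net = sum(xi * wi for xi, wi in zip(x, w))
    print(f"H{j}: net = {net:.2f}, out = {sigmoid(net):.4f}")
```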
There are three popular architectures for prediction and control that have been implemented using neural networks:
1) Inverse Modelling and Control
2) Model Predictive Control
3) Model Reference Adaptive Control
Artificial Neural Networks in Control
There are typically two steps involved when using neural networks for
control:
✓ System Identification Stage
✓ Control Design Implementation
▪ In the system identification stage, we train a neural network to approximate the model of the plant that we want to control.
▪ In the control design stage, we use the trained neural network model to
design (or fine-tune) the controller of the system.
▪ In each of the three control architectures stated earlier, the system identification stage is identical.
▪ The control design stage, however, is different for each architecture.
1. Inverse Modelling and Control
There are two basic design approaches for inverse control:
1) Generalized Training (off-line):
• In this architecture, the input signal (u) is applied to the system input.
• The output signal (y) is obtained at the system output and forwarded to the proposed neural network model, which produces a signal (uN).
• The difference between the incoming signal (u) and the neural model output (uN) is the error (eN = u − uN), which is used for neural network learning to identify the inverse model of the system.
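The generalized-training loop can be sketched as follows; as assumptions for illustration, a single adaptive gain stands in for the neural network and the plant is the toy system y = 2u, whose exact inverse is u = 0.5y.

```python
import random

random.seed(0)

# Toy stand-ins (assumptions): the "neural network" is a single adaptive
# gain w, and the plant is y = 2*u, whose exact inverse is u = 0.5*y.
def plant(u):
    return 2.0 * u

w = 0.1          # inverse-model parameter, initialized small
lr = 0.05        # learning rate

for _ in range(200):
    u = random.uniform(-1, 1)    # excitation signal applied to the plant
    y = plant(u)                 # plant output, forwarded to the inverse model
    u_n = w * y                  # inverse model's reconstruction of u
    e_n = u - u_n                # training error eN = u - uN
    w += lr * e_n * y            # LMS-style update toward the inverse model

print(round(w, 2))  # ≈ 0.5, the inverse of the plant gain
```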
2) Specialized Training (on-line):
• Now the already-trained inverse neural model is implemented in the system and used as a controller for the process.
• The neural controller is then fine-tuned and adjusted online by utilizing the error (ec).
• The error (ec) is obtained as the difference between the desired signal (yr) and the actual system output (y): ec = yr − y.
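The on-line stage can be sketched similarly; the initial controller gain, learning rate, and toy plant y = 2u below are assumptions for illustration.

```python
# Specialized (on-line) fine-tuning sketch: the inverse model acts as the
# controller, and its single gain w is adjusted from ec = yr - y.
# The plant y = 2*u is an assumed toy system (ideal controller gain: 0.5).
w = 0.45          # controller gain from off-line training, slightly off
lr = 0.02         # on-line adaptation rate

for k in range(300):
    yr = 1.0                  # desired output (constant setpoint here)
    u = w * yr                # controller computes the plant input
    y = 2.0 * u               # actual plant output
    ec = yr - y               # tracking error ec = yr - y
    w += lr * ec * yr         # adjust the controller to reduce ec

print(round(w, 3))  # settles at 0.5, so y matches yr
```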
2. The Neural Network Predictive Controller
i. System Identification (off-line) –
• The first stage of model predictive control is to train a neural network to
represent the forward dynamics of the plant.
• The prediction error between the plant output and the neural network output
(e = yp - ym ) is used as the neural network training signal.
• The neural network plant model uses previous plant outputs to predict future values
of the plant output.
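The identification stage can be sketched with a linear one-step-ahead model standing in for the neural network; the plant dynamics below are an assumed toy system, and the model is trained on the prediction error e = yp − ym.

```python
import random

random.seed(1)

# Identification-stage sketch: fit a one-step-ahead model
# ym(k+1) = a*yp(k) + b*u(k), where the true (assumed) plant is
# yp(k+1) = 0.8*yp(k) + 0.4*u(k). A linear model stands in for the
# neural network; the training signal is the prediction error e = yp - ym.
a, b = 0.0, 0.0
lr = 0.2
yp = 0.0

for _ in range(1000):
    u = random.uniform(-1, 1)
    yp_next = 0.8 * yp + 0.4 * u      # plant response
    ym = a * yp + b * u               # model's prediction of yp_next
    e = yp_next - ym                  # prediction error e = yp - ym
    a += lr * e * yp                  # gradient step on each parameter
    b += lr * e * u
    yp = yp_next

print(round(a, 2), round(b, 2))  # approaches the true values 0.8 and 0.4
```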
(Figure: Model Reference Adaptive Control block diagram; the reference input drives both a reference model and the neural-controller/system loop, and the error e between the reference-model output and the system output drives the learning algorithm.)
Here, the neural controller is trained and fine-tuned by the learning algorithm based on minimizing the error between the actual output of the system and the output of the known reference model of the controlled process, so that the closed-loop system behaves like the model (desired rise time, overshoot, etc.).
Example of Using the NN Predictive Controller
This example shows the application of the NN predictive controller to a catalytic continuous stirred-tank reactor process (file name: predcstr).
Graph showing the plant output and the reference signal after the neural network has been trained and has learned its task.