Artificial Neural Networks (ch7)
An Artificial Neural Network (ANN) is an attempt to make a computer model of the brain. The main objective is to develop a system that performs various computational tasks faster than traditional systems. These tasks include pattern recognition and classification, approximation, optimization, and data clustering.
Every neuron is connected to other neurons through connection links. Each connection link is associated with a weight that carries information about the input signal. This is the most useful information for neurons to solve a particular problem, because the weight usually excites or inhibits the signal being communicated. Each neuron has an internal state, called an activation signal. Output signals, produced by combining the input signals with the activation rule, may be sent to other units.
1949 − Donald Hebb’s book, The Organization of Behavior, put forth the idea that the connection between two neurons is strengthened each time one repeatedly activates the other.
1958 − Rosenblatt invented the Perceptron, a learning method for the McCulloch-Pitts neuron model.
1988 − Kosko developed the Bidirectional Associative Memory (BAM) and also introduced the concept of fuzzy logic in ANNs.
The historical review shows that significant progress has been made in this field. Neural-network-based chips are emerging and applications to complex problems are being developed. Today is clearly a period of transition for neural network technology.
Model of Artificial Neural Network
The following diagram represents the general model of ANN followed by its
processing.
For the above general model of artificial neural network, the net input can be
calculated as follows −
y_in = x1.w1 + x2.w2 + x3.w3 + … + xm.wm
Y = F(y_in)
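As a concrete illustration, the net input and output above can be sketched in a few lines of Python; the input values, weights, and step activation below are illustrative assumptions, not part of the model itself:

```python
# Sketch of the general ANN model above. The inputs, weights, and the
# step activation F are illustrative assumptions.
def net_input(x, w):
    # y_in = x1.w1 + x2.w2 + ... + xm.wm
    return sum(xi * wi for xi, wi in zip(x, w))

def F(y_in):
    # Example activation function: binary step (an assumed choice).
    return 1 if y_in >= 0 else 0

x = [1.0, 0.5, -1.0]    # input signals x1..x3
w = [0.4, 0.3, 0.2]     # connection weights w1..w3
y_in = net_input(x, w)  # 0.4 + 0.15 - 0.2 = 0.35
Y = F(y_in)             # step(0.35) = 1
```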
Network Topology
A network topology is the arrangement of a network along with its nodes and
connecting lines. According to the topology, ANN can be classified as the following
kinds −
Feedforward Network
It is a non-recurrent network in which the processing units/nodes are arranged in layers, and all the nodes in a layer are connected to the nodes of the previous layer. The connections carry different weights. There is no feedback loop, which means the signal can flow in only one direction, from input to output. It may be divided into two types − single-layer and multilayer feedforward networks.
Supervised Learning
As the name suggests, this type of learning is done under the supervision of a teacher. The learning process depends on the teacher, who supplies the desired output for every training input.
During the training of ANN under supervised learning, the input vector is presented
to the network, which will give an output vector. This output vector is compared
with the desired output vector. An error signal is generated if there is a difference between the actual output and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
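The supervised loop described above (present an input, compare the actual output with the desired output, adjust the weights by the error) can be sketched with a simple delta-rule update for one linear neuron; the learning rate and training data below are illustrative assumptions:

```python
# Hedged sketch of supervised learning with the delta rule for one
# linear neuron: w <- w + lr * error * x. Data and rate are assumed.
def train_step(w, x, desired, lr=0.1):
    actual = sum(wi * xi for wi, xi in zip(w, x))  # network output
    error = desired - actual                        # error signal
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

# Repeated weight adjustments shrink the error, as the text describes.
w = [0.0, 0.0]
for _ in range(100):
    w = train_step(w, [1.0, 2.0], desired=3.0)
out = w[0] * 1.0 + w[1] * 2.0  # approaches the desired value 3.0
```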
Unsupervised Learning
As the name suggests, this type of learning is done without the supervision of a teacher. The learning process proceeds independently, with no teacher involved.
During the training of ANN under unsupervised learning, the input vectors of
similar type are combined to form clusters. When a new input pattern is applied,
then the neural network gives an output response indicating the class to which the
input pattern belongs.
There is no feedback from the environment about what the desired output should be or whether it is correct. Hence, in this type of learning, the network itself must discover the patterns and features in the input data, and the relation between the input data and the output.
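A minimal sketch of this idea is competitive learning: similar input vectors pull a shared prototype vector toward them, and a new pattern is answered with the index (class) of its nearest prototype. The prototype count, learning rate, and data below are illustrative assumptions:

```python
# Hedged sketch of unsupervised clustering via competitive learning.
# Prototype count, learning rate, and data points are assumed.
def nearest(protos, x):
    # Index of the prototype closest to x (squared Euclidean distance).
    dists = [sum((p - v) ** 2 for p, v in zip(proto, x)) for proto in protos]
    return dists.index(min(dists))

def train(protos, data, lr=0.5, epochs=10):
    for _ in range(epochs):
        for x in data:
            i = nearest(protos, x)  # winning prototype
            # Move the winner toward the input vector.
            protos[i] = [p + lr * (v - p) for p, v in zip(protos[i], x)]
    return protos

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
protos = train([[0.2, 0.2], [0.8, 0.8]], data)
cls = nearest(protos, [0.95, 1.05])  # class of a new input pattern
```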
Reinforcement Learning
As the name suggests, this type of learning is used to reinforce or strengthen the network based on critic information. The learning process is similar to supervised learning, but much less information is available.
During the training of network under reinforcement learning, the network receives
some feedback from the environment. This makes it somewhat similar to supervised
learning. However, the feedback obtained here is evaluative rather than instructive, which means there is no teacher as in supervised learning. After receiving the feedback, the network adjusts its weights so as to obtain better critic information in the future.
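The evaluative nature of the feedback can be sketched with a toy hill-climbing loop: the environment returns only a scalar critic score, never the correct answer, and the network keeps a weight adjustment only when the score improves. The reward function and perturbation scheme below are illustrative assumptions:

```python
# Toy sketch of reinforcement learning with evaluative feedback only.
# The reward function and perturbation scheme are assumed for illustration.
import random

def reward(w):
    # Environment's critic: a scalar score, higher is better (peak at 2.0).
    return -(w - 2.0) ** 2

random.seed(0)
w = 0.0
best = reward(w)
for _ in range(500):
    candidate = w + random.uniform(-0.1, 0.1)  # try a small adjustment
    r = reward(candidate)                      # evaluative feedback only
    if r > best:                               # keep it if the critic improves
        w, best = candidate, r
```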
PERCEPTRON
• A perceptron can learn any linearly separable function, given enough training.
• The weighted sum of the inputs is applied to the activation function, which
produces an output equal to +1 if its input is positive and -1 if it is negative.
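The two points above can be sketched as a perceptron with a bipolar sign activation, trained here on the logical AND function (a linearly separable example chosen for illustration):

```python
# Hedged sketch of a perceptron: weighted sum through a sign activation
# (+1 if positive, -1 otherwise), trained on bipolar AND (assumed example).
def predict(w, b, x):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

def train(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = predict(w, b, x)
            if y != t:  # adjust weights only on a misclassification
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

# AND with bipolar inputs/targets: +1 only when both inputs are +1.
samples = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = train(samples)
```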
Hopfield Network
Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists
of a single layer which contains one or more fully connected recurrent neurons. The
Hopfield network is commonly used for auto-association and optimization tasks.
This model consists of neurons with one inverting and one non-inverting
output.
The output of each neuron is fed as input to all the other neurons, but not back to itself.
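These properties can be sketched as a small Hopfield network for auto-association: weights are formed by the Hebbian outer-product rule with a zero diagonal (no self-connection), and a noisy pattern is recalled by repeated sign updates. The stored pattern below is an illustrative assumption:

```python
# Hedged sketch of a Hopfield network for auto-association.
# The stored pattern and noise are assumed for illustration.
def train(patterns):
    n = len(patterns[0])
    # Hebbian outer-product rule; w[i][i] = 0, so no self-connection.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):  # each neuron updates from the others
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, 1, 1, -1, 1, -1]   # one bit flipped
recovered = recall(w, noisy)   # settles back to the stored pattern
```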