Artificial Neural Networks (ch7)


Neural networks are parallel computing devices that are, in essence, an attempt to build a computer model of the brain. The main objective is to develop a system that performs various computational tasks faster than traditional systems. These tasks include pattern recognition and classification, approximation, optimization, and data clustering.

What is an Artificial Neural Network?


An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. ANNs are also known as “artificial neural systems,” “parallel distributed processing systems,” or “connectionist systems.” An ANN comprises a large collection of units that are interconnected in some pattern to allow communication between them. These units, also referred to as nodes or neurons, are simple processors that operate in parallel.

Every neuron is connected to other neurons through connection links. Each connection link is associated with a weight that carries information about the input signal. This is the most useful information for a neuron when solving a particular problem, because the weight typically excites or inhibits the signal being communicated. Each neuron has an internal state, called an activation signal. Output signals, which are produced by combining the input signals with the activation rule, may be sent to other units.

A Brief History of ANN


The history of ANN can be divided into the following three eras −

ANN during 1940s to 1960s


Some key developments of this era are as follows −
 1943 − The concept of neural networks is generally considered to have started with the work of physiologist Warren McCulloch and mathematician Walter Pitts, who in 1943 modeled a simple neural network using electrical circuits in order to describe how neurons in the brain might work.

 1949 − Donald Hebb’s book, The Organization of Behavior, put forth the idea that repeated activation of one neuron by another increases the strength of the connection between them each time they are used.

 1956 − An associative memory network was introduced by Taylor.

 1958 − Rosenblatt invented the Perceptron, a learning method for the McCulloch-Pitts neuron model.

 1960 − Bernard Widrow and Marcian Hoff developed models called “ADALINE” and “MADALINE.”

ANN during 1960s to 1980s


Some key developments of this era are as follows −

 1961 − Rosenblatt proposed a “backpropagation” scheme for multilayer networks, although his attempt was unsuccessful.

 1964 − Taylor constructed a winner-take-all circuit with inhibition among output units.

 1969 − Minsky and Papert published Perceptrons, demonstrating the limitations of single-layer perceptrons.

 1971 − Kohonen developed associative memories.

 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive Resonance Theory.
ANN from the 1980s till Present

Some key developments of this era are as follows −

 1982 − The major development was Hopfield’s energy-based approach.

 1985 − The Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.

 1986 − Rumelhart, Hinton, and Williams introduced the Generalised Delta Rule.

 1988 − Kosko developed the Bidirectional Associative Memory (BAM) and also introduced the concept of fuzzy logic in ANN.

The historical review shows that significant progress has been made in this field.
Neural network based chips are emerging and applications to complex problems are
being developed. Surely, today is a period of transition for neural network
technology.
Model of Artificial Neural Network
The following diagram represents the general model of ANN followed by its
processing.

For the above general model of artificial neural network, the net input can be
calculated as follows −

y_in = x1.w1 + x2.w2 + x3.w3 + … + xm.wm

i.e., net input y_in = ∑(i = 1 to m) xi.wi

The output can be calculated by applying the activation function over the net input.

Y = F(y_in)

Output = function (net input calculated)
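The net-input and activation formulas above can be sketched in a few lines of Python; the input values, weights, and the binary step activation below are illustrative assumptions, not details from the text:

```python
# Net input: y_in = sum of xi * wi over all m inputs
def net_input(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# A binary step activation F, chosen here purely for illustration
def activation(y_in, threshold=0.0):
    return 1 if y_in >= threshold else 0

x = [1, 0, 1]          # example input vector (x1, x2, x3)
w = [0.5, -0.3, 0.8]   # example weights (w1, w2, w3)

y_in = net_input(x, w)   # 1*0.5 + 0*(-0.3) + 1*0.8 = 1.3
Y = activation(y_in)     # F(y_in) = 1, since 1.3 >= 0
```

Any other activation function (sigmoid, sign, etc.) could be substituted for the step function without changing the net-input computation.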

Network Topology
A network topology is the arrangement of a network along with its nodes and
connecting lines. According to the topology, ANN can be classified as the following
kinds −
Feedforward Network
It is a non-recurrent network with processing units/nodes arranged in layers, where every node in a layer is connected to the nodes of the previous layer. These connections carry different weights. There is no feedback loop, which means the signal can flow in only one direction, from input to output. It may be divided into the following two types −

 Single layer feedforward network − A feedforward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.

 Multilayer feedforward network − A feedforward ANN having more than one weighted layer. As this network has one or more layers between the input and output layers, these are called hidden layers.
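A multilayer feedforward pass is simply the net-input/activation step repeated layer by layer. The following sketch assumes a hypothetical 2-input, 2-hidden, 1-output network; the weights and the sigmoid activation are illustrative choices, not values from the text:

```python
import math

# Sigmoid activation, a common (illustrative) choice
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One layer: every node takes a weighted sum of all inputs from
# the previous layer, then applies the activation function.
def layer(inputs, weights):
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
            for row in weights]

hidden_weights = [[0.4, 0.6], [0.7, -0.2]]  # one row per hidden node
output_weights = [[0.5, 0.9]]               # one row per output node

x = [1.0, 0.5]                 # example input vector
h = layer(x, hidden_weights)   # hidden-layer activations
y = layer(h, output_weights)   # final output (a single value here)
```

Because there is no feedback path, the signal flows strictly from `x` through `h` to `y`.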
Feedback Network
As the name suggests, a feedback network has feedback paths, which means the
signal can flow in both directions using loops. This makes it a non-linear dynamic
system, which changes continuously until it reaches a state of equilibrium. It may
be divided into the following types −

 Recurrent networks − They are feedback networks with closed loops. Following are the two types of recurrent networks.

 Fully recurrent network − It is the simplest neural network architecture because all nodes are connected to all other nodes and each node works as both input and output.

 Jordan network − It is a closed-loop network in which the output goes back to the input as feedback, as shown in the following diagram.
Adjustments of Weights or Learning
Learning, in an artificial neural network, is the method of modifying the weights of the connections between the neurons of a specified network. Learning in ANN can be classified into three categories, namely supervised learning, unsupervised learning, and reinforcement learning.

Supervised Learning
As the name suggests, this type of learning is done under the supervision of a teacher. This learning process is dependent on the teacher's feedback.

During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
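The error-correction loop described above can be sketched as follows. The learning rate, the single linear unit, and the training pair are illustrative assumptions, not details given in the text:

```python
# Supervised weight adjustment: compare the actual output with the
# desired output and nudge each weight by a fraction of the error.
def train_step(w, x, desired, lr=0.1):
    actual = sum(wi * xi for wi, xi in zip(w, x))  # network output
    error = desired - actual                       # error signal
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

# Hypothetical training pair: input vector and desired output
w = [0.0, 0.0]
for _ in range(100):
    w = train_step(w, [1.0, 2.0], desired=3.0)

# After repeated adjustments the actual output approaches the desired one
actual = sum(wi * xi for wi, xi in zip(w, [1.0, 2.0]))
```

Each pass shrinks the error signal, so the actual output converges toward the desired output, just as the text describes.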
Unsupervised Learning
As the name suggests, this type of learning is done without the supervision of a teacher. This learning process is independent of any external feedback.

During the training of ANN under unsupervised learning, the input vectors of
similar type are combined to form clusters. When a new input pattern is applied,
then the neural network gives an output response indicating the class to which the
input pattern belongs.

There is no feedback from the environment as to what the desired output should be or whether it is correct. Hence, in this type of learning, the network itself must discover the patterns and features in the input data, and the relation between the input data and the output.
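The clustering behaviour described above can be sketched with a simple competitive (winner-take-all) update; the two cluster units, the learning rate, and the 2-D inputs below are made-up illustrations, not part of the original text:

```python
# Competitive learning: each input is assigned to the nearest cluster
# unit, whose weight vector is then pulled toward that input.
def nearest(units, x):
    return min(range(len(units)),
               key=lambda i: sum((u - xi) ** 2
                                 for u, xi in zip(units[i], x)))

def update(units, x, lr=0.5):
    j = nearest(units, x)  # the winning unit
    units[j] = [u + lr * (xi - u) for u, xi in zip(units[j], x)]
    return j

# Two cluster units with hypothetical initial weight vectors
units = [[0.0, 0.0], [10.0, 10.0]]
data = [[0.5, 0.2], [9.5, 9.8], [0.1, 0.4], [10.2, 9.9]]

for x in data * 10:   # repeated presentation of the inputs
    update(units, x)

# A new input pattern is classified by the unit it activates
cls = nearest(units, [0.3, 0.3])
```

No desired outputs are ever supplied; the units drift toward the natural clusters in the data on their own.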
Reinforcement Learning
As the name suggests, this type of learning is used to reinforce or strengthen the network based on critic information. This learning process is similar to supervised learning, but we may have much less information.

During the training of a network under reinforcement learning, the network receives some feedback from the environment. This makes it somewhat similar to supervised learning. However, the feedback obtained here is evaluative, not instructive, which means there is no teacher as in supervised learning. After receiving the feedback, the network adjusts its weights to obtain better critic information in the future.
PERCEPTRON

• A Perceptron is the simplest kind of feedforward neural network, invented by Frank Rosenblatt.

• A perceptron can learn any linearly separable function, given enough training.

• The model consists of a linear combiner followed by an activation function.

• The weighted sum of the inputs is applied to the activation function, which produces an output equal to +1 if its input is positive and -1 if it is negative.
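A perceptron of this kind can be sketched in a few lines; the AND-style training set, learning rate, and epoch count below are illustrative assumptions:

```python
# Sign activation: +1 if the weighted sum (plus bias) is positive, else -1
def predict(w, b, x):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else -1

# Perceptron learning rule: adjust weights only on misclassified examples
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            if predict(w, b, x) != target:
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

# Linearly separable example: logical AND with bipolar (+1/-1) values
samples = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = train(samples)
```

Because AND is linearly separable, the learned weights classify all four patterns correctly; a non-separable function such as XOR would never converge under this rule.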

Hopfield Network

Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists
of a single layer which contains one or more fully connected recurrent neurons. The
Hopfield network is commonly used for auto-association and optimization tasks.

Discrete Hopfield Network


A discrete Hopfield network operates in a discrete-time fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature. The network has symmetric weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about discrete Hopfield
network −

 This model consists of neurons with one inverting and one non-inverting
output.

 The output of each neuron should be the input of other neurons but not the
input of self.

 Weight/connection strength is represented by wij.

 Connections can be excitatory as well as inhibitory. A connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.

 Weights should be symmetrical, i.e. wij = wji.


The outputs from Y1 going to Y2, Yi and Yn have the weights w12, w1i and w1n respectively. Similarly, the other arcs have their own weights.
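The symmetric, zero-diagonal weight structure above can be illustrated with a tiny discrete Hopfield network. The Hebbian outer-product storage rule and the two bipolar patterns below are illustrative assumptions for the sketch:

```python
# Store bipolar patterns with the Hebbian outer-product rule:
# wij = sum over patterns of pi * pj, with wii forced to 0.
def store(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

# Recall: repeatedly update each unit from the sign of its net input
def recall(w, state, steps=5):
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            net = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if net >= 0 else -1
    return s

patterns = [[1, 1, -1, -1], [-1, -1, 1, 1]]
w = store(patterns)  # symmetric, with zero diagonal

# A noisy version of the first pattern settles back to the stored one
recovered = recall(w, [1, -1, -1, -1])
```

This auto-associative recall, where a corrupted input converges to the nearest stored pattern, is exactly the behaviour the text attributes to Hopfield networks.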

Continuous Hopfield Network


In comparison with the discrete Hopfield network, the continuous network treats time as a continuous variable. It is also used in auto-association and optimization problems such as the travelling salesman problem.
