CHP 9
Neural Network
Biological Neural Network to Artificial Neural Network
• The nervous system is constituted of a special type of cell, called
neuron or nerve cell, which has special structures allowing it to
receive or send signals to other neurons.
The only difference is that the biological neuron is replicated in an
electronic or digital form in the artificial neuron.
Structure of an Artificial Neural Network
A threshold activation function (or simply activation function, also called a squashing function)
produces an output signal only when the incoming input signal exceeds a specific threshold value.
Activation Function
• Activation functions introduce non-linearities into the network.
- A linear activation function can produce only linear decision boundaries.
- A non-linear activation function allows the network to approximate
arbitrarily complex functions.
Types of Activation Function
• Step Function:
- Gives 1 as output if the input is either 0 or positive.
- If the input is negative, the step function gives 0 as output.
• Threshold Function:
- Almost like the step function, with the only difference being the fact
that θ is used as a threshold value instead of 0.
• ReLU (Rectified Linear Unit) function:
- The most widely used activation function in convolutional neural
networks and deep learning.
- f(x) = 0 when x < 0, and f(x) = x when x ≥ 0.
• Sigmoid function:
- Historically the most commonly used activation function in neural
networks.
- Defined as f(x) = 1 / (1 + e^(−x)), it squashes any input into the
range (0, 1).
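As a minimal sketch of the four functions above (NumPy-based; the function names are illustrative, not from the text):

import numpy as np

def step(x):
    # Step function: 1 if the input is 0 or positive, else 0
    return np.where(x >= 0, 1, 0)

def threshold(x, theta):
    # Threshold function: like step, but compared against theta instead of 0
    return np.where(x >= theta, 1, 0)

def relu(x):
    # ReLU: 0 for x < 0, x for x >= 0
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(step(np.array([-2.0, 0.0, 3.0])))      # [0 1 1]
print(threshold(np.array([0.5, 1.5]), 1.0))  # [0 1]
print(relu(np.array([-2.0, 3.0])))           # [0. 3.]
print(sigmoid(np.array([0.0])))              # [0.5]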
Situations:
Situation 1 – It is not raining nor is it sunny.
Situation 2 – It is not raining, but it is sunny.
Situation 3 – It is raining, and it is not sunny.
Situation 4 – Wow, it is so strange! It is raining as well as it is sunny.
• To analyse the situations using the McCulloch–Pitts neural model, we
can consider the input signals as follows:
x1 → Is it raining?
x2 → Is it sunny?
• So, the value of both x1 and x2 can be either 0 or 1. We can set the
value of both weights w1 and w2 to 1 and the threshold value of the
activation function to 1.
• Formally, we can write: y_sum = w1·x1 + w2·x2 = x1 + x2, and
y_out = f(y_sum) = 1 if y_sum ≥ 1, otherwise 0.
• This single unit computes the logical OR of the two inputs; such a
neuron cannot represent functions that are not linearly separable
(e.g., XOR). This was exactly the point driven home by Minsky and
Papert in their work (1969). A sketch follows.
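A minimal sketch of this McCulloch–Pitts analysis (the function name mp_neuron is illustrative), showing that with both weights set to 1 and a threshold of 1 the neuron fires in situations 2–4, i.e., it computes logical OR:

def mp_neuron(x1, x2, w1=1, w2=1, theta=1):
    # McCulloch–Pitts neuron: fires (outputs 1) when the weighted
    # sum of the inputs reaches the threshold theta
    y_sum = w1 * x1 + w2 * x2
    return 1 if y_sum >= theta else 0

# The four situations as (is it raining?, is it sunny?)
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", mp_neuron(x1, x2))
# (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, (1, 1) -> 1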
Major Highlights of the Multi-Layer Perceptron (MLP):
- The neural network contains one or more intermediate layers between
the input and the output nodes, which are hidden from both input and
output nodes.
- Each neuron in the network includes a non-linear activation function
that is differentiable.
- The neurons in each layer are connected with some or all of the
neurons in the previous layer.
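A minimal forward-pass sketch of such a multi-layer network (the layer sizes, names, and the choice of sigmoid as the differentiable non-linearity are illustrative assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    # Hidden layer: non-linear, differentiable activation,
    # hidden from both the input and the output nodes
    h = sigmoid(W1 @ x + b1)
    # Output layer, fully connected to the hidden layer
    return sigmoid(W2 @ h + b2)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # 2 inputs -> 3 hidden neurons
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # 3 hidden -> 1 output
print(mlp_forward(np.array([1.0, 0.0]), W1, b1, W2, b2))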
ADALINE (Adaptive Linear Neural Element) Network Model
• An early single-layer ANN developed by Prof. Bernard
Widrow of Stanford University.
• It has only one output neuron.
• The output value can be +1 or −1.
• The activation function is such that if the weighted sum of the inputs
is positive or 0, the output is +1; otherwise it is −1.
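A minimal sketch of the ADALINE output rule described above (the weights and inputs here are made-up values for illustration):

import numpy as np

def adaline_output(x, w, b=0.0):
    # ADALINE: output +1 if the weighted sum is positive or 0, else -1
    y_sum = np.dot(w, x) + b
    return 1 if y_sum >= 0 else -1

x = np.array([0.5, -0.2])    # illustrative inputs
w = np.array([1.0, 1.0])     # illustrative weights
print(adaline_output(x, w))  # weighted sum 0.3 >= 0, so the output is +1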