Module 1
19EID331
Artificial Neural Networks
By
Venkata Kranthi B
Department of EECE
GITAM School of Technology (GST)
Bengaluru-561203
Email: [email protected]
• Nonlinearity
• Input–Output Mapping
• Adaptivity
• Evidential Response
• Contextual Information
• Fault Tolerance
• VLSI Implementability
• Uniformity of Analysis and Design
• Neurobiological Analogy
A neural network works in a way similar to how the human nervous system functions. There are several types of neural networks; each implementation is based on the set of parameters and mathematical operations required to determine the output.
• The Feedforward Neural Network (FNN) is the purest form of ANN, in which data travels in only one direction.
• Data flows only in the forward direction; that is why it is known as the Feedforward Neural Network.
• The data enters through the input nodes and exits from the output nodes.
• The nodes are not connected cyclically. It doesn't need to have a hidden
layer.
• An FNN does not need to have multiple layers; it may have just a single layer.
• It has a forward-propagating wave of computation, achieved by using a classifying activation function.
• Unlike recurrent networks, an FNN has no feedback connections, so signals never propagate backwards through the network.
• In an FNN, each node computes the sum of the products of its inputs and weights, and the result is fed forward to the output (see the sketch after this list).
• Technologies such as face recognition and computer vision use FNNs.
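A minimal sketch of this forward pass, assuming a single-layer network with a step (classifying) activation; the weights, bias, and input below are illustrative, not from the slides:

import numpy as np

def step(z):
    # Classifying activation: 1 where the weighted sum is positive, else 0
    return (z > 0).astype(float)

def fnn_forward(x, W, b):
    # Sum of the products of inputs and weights, fed straight to the output
    return step(W @ x + b)

# Illustrative single-layer network: 3 inputs, 2 output nodes
W = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.8, -0.5]])
b = np.array([0.0, -0.1])
x = np.array([1.0, 2.0, 3.0])
print(fnn_forward(x, W, b))  # data flows input -> output only, never backwards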
• A Radial Basis Function Neural Network (RBFNN) computes the distance of a point from a centre; points near the centre are treated as similar, giving a smooth response.
• There are two layers in the RBF Neural Network.
• In the inner layer, the features are combined with the radial basis function.
• These features produce an output that is then used by the output layer.
• Distance measures other than the Euclidean distance can also be used (see the sketch after this list).
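A minimal sketch of the two-layer RBFNN described above, assuming a Gaussian radial basis function and Euclidean distance; the centres, width, and weights are illustrative:

import numpy as np

def gaussian_rbf(x, centre, width=1.0):
    # Activation depends on the distance of the point from the centre
    dist = np.linalg.norm(x - centre)  # Euclidean; other measures can be swapped in
    return np.exp(-dist**2 / (2 * width**2))

def rbfnn_forward(x, centres, weights, width=1.0):
    # Inner layer: features combined with the radial basis function
    hidden = np.array([gaussian_rbf(x, c, width) for c in centres])
    # Output layer: weighted sum of the hidden activations
    return weights @ hidden

centres = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
weights = np.array([0.7, -0.3])
print(rbfnn_forward(np.array([0.5, 0.5]), centres, weights))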
• In most networks, inputs and outputs are independent of each other; but in some cases, such as predicting the next word of a sentence, the output must depend on the previous words.
• RNN is famous for its primary and most important feature, i.e., the hidden state. The hidden state remembers information about a sequence.
• An RNN has a memory that stores the results of previous calculations.
• An RNN uses the same parameters on each input, performing the same task on all the hidden layers or data to produce the output.
• Unlike other neural networks, an RNN has lower parameter complexity, since the same parameters are shared across all steps (see the sketch after this list).
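A minimal sketch of a single recurrent step, showing how the hidden state is carried forward and the same parameters are reused at every input; shapes and values are illustrative:

import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # The same parameters Wx, Wh, b are reused at every time step
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 3))  # input-to-hidden weights
Wh = rng.normal(size=(4, 4))  # hidden-to-hidden weights: the network's memory
b = np.zeros(4)

h = np.zeros(4)  # initial hidden state
for x_t in rng.normal(size=(5, 3)):  # a sequence of 5 input vectors
    h = rnn_step(x_t, h, Wx, Wh, b)  # the hidden state remembers the sequence
print(h)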
• A ‘teacher’, though available, does not present the expected answer but only indicates whether the computed output is correct or incorrect.
• The information provided helps the network in its learning process.
• A reward is given for a correct answer and a penalty for a wrong one (see the sketch below).
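A minimal sketch of this reward/penalty idea, assuming a single linear threshold unit; the update rule, weights, and inputs are illustrative assumptions, not a method from the slides:

import numpy as np

def predict(w, x):
    # Single linear unit with a threshold output
    return 1 if w @ x > 0 else 0

def reinforce(w, x, output, reward, lr=0.1):
    # A reward (+1) strengthens whatever produced the output; a penalty (-1)
    # weakens it. The expected answer itself is never revealed to the learner.
    direction = 1.0 if output == 1 else -1.0
    return w + lr * reward * direction * x

w = np.array([0.1, -0.1, 0.05])
x = np.array([1.0, -0.5, 2.0])
out = predict(w, x)
correct = 1                            # known only to the 'teacher'
reward = 1 if out == correct else -1   # teacher says only correct/incorrect
w = reinforce(w, x, out, reward)
print(out, w)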
• Stands for Rectified Linear Unit. It is the most widely used activation function, chiefly implemented in the hidden layers of a neural network.
• Equation :- A(x) = max(0, x). It gives an output of x if x is positive and 0 otherwise.
• Value Range :- [0, inf)
• Nature :- non-linear, which means we can easily backpropagate the
errors and have multiple layers of neurons being activated by the
ReLU function.
• Uses :- ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. At any given time only a few neurons are activated, which makes the network sparse, efficient, and easy to compute.
• In simple words, ReLU learns much faster than the sigmoid and tanh functions (see the sketch below).
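A small sketch of A(x) = max(0, x), comparing ReLU with sigmoid and tanh on the same inputs; the input values are illustrative:

import numpy as np

def relu(x):
    # A(x) = max(0, x): passes positive values through, zeroes out the rest
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))               # [0.  0.  0.  1.5 3. ] -> only a few neurons fire
print(1 / (1 + np.exp(-x)))  # sigmoid needs an exponential per element
print(np.tanh(x))            # tanh does too; ReLU is just a comparison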
• MSE loss is used for regression tasks. As the name suggests, this loss is calculated by taking the mean of the squared differences between the actual (target) and predicted values.
• For example, suppose we have a neural network that takes house data and predicts the house price. In this case, you can use the MSE loss. Basically, whenever the output is a real number, you should use this loss function (see the sketch below).
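A minimal sketch of the MSE computation; the target and predicted house prices below are made up for illustration:

import numpy as np

def mse_loss(y_true, y_pred):
    # Mean of the squared differences between actual and predicted values
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([250.0, 300.0, 180.0])  # actual prices (illustrative)
y_pred = np.array([245.0, 310.0, 200.0])  # the network's predictions
print(mse_loss(y_true, y_pred))           # (25 + 100 + 400) / 3 = 175.0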