(IJCST-V6I4P17) :P T V Lakshmi
I. INTRODUCTION
Back propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Standard back propagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back propagation refers to the manner in which the gradient is computed for a nonlinear multilayer network.

Neural networks are typically arranged in layers. Each layer in a layered network is an array of processing elements, or neurons. Information flows through the network layer by layer: each element receives an input signal, manipulates it, and propagates it to the next layer. The hidden layer receives the weighted sum of the incoming signals sent by the input units (Eq. 1) and processes it by means of an activation function. The activation functions most commonly used are the saturation, sigmoid (Eq. 4) and hyperbolic tangent (Eq. 5) functions. The hidden units in turn send an output signal towards the neurons in the next layer. This adjacent layer could be either another hidden layer of arranged processing elements or the output layer. The units in the output layer receive the weighted sum of incoming signals and process it using an activation function. Information is propagated forwards until the network output is reached.
\mathrm{SUM} = \sum_{i=1}^{n} x_i w_i \qquad [1]

y = f\!\left( \sum_{i=0}^{n} x_i w_i \right) \qquad [2]

f(x) = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} x_i w_i \ge 0 \\ 0, & \text{if } \sum_{i=1}^{n} x_i w_i < 0 \end{cases} \qquad [3]

Sigmoid function:

f(x) = \frac{1}{1 + e^{-x}} \qquad [4]

Hyperbolic tangent function:

f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \qquad [5]
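The weighted sum (Eq. 1) and the activation functions (Eqs. 3 to 5) can be sketched in Python as follows; the input and weight values below are illustrative only:

```python
import math

def weighted_sum(x, w):
    # Eq. 1: SUM = sum over i of x_i * w_i
    return sum(xi * wi for xi, wi in zip(x, w))

def step(s):
    # Eq. 3: threshold activation, 1 if the weighted sum is non-negative
    return 1 if s >= 0 else 0

def sigmoid(s):
    # Eq. 4: logistic sigmoid, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-s))

def tanh(s):
    # Eq. 5: hyperbolic tangent, output in (-1, 1)
    return math.tanh(s)

x = [0.5, -1.0, 0.25]          # example inputs (illustrative values)
w = [0.4, 0.3, 0.8]            # example weights (illustrative values)
s = weighted_sum(x, w)         # 0.2 - 0.3 + 0.2 = 0.1
print(step(s), round(sigmoid(s), 4), round(tanh(s), 4))  # -> 1 0.525 0.0997
```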
[Figure: a two-input, two-hidden-unit, two-output feedforward network. The input layer units X1 and X2 connect to the hidden layer units H1 and H2 through weights w1 to w4; H1 and H2 connect to the output layer units Y1 and Y2 through weights w5 to w8.]

H1 = x1*w1 + x2*w2 + b1
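The forward pass through the network in the figure can be sketched as below, extending the H1 formula to the other units with sigmoid activations. All numeric values for the inputs, the weights w1 to w8 and the biases b1, b2 are hypothetical, chosen only to illustrate the computation:

```python
import math

def sigmoid(s):
    # Logistic sigmoid activation (Eq. 4)
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical values, not taken from the paper.
x1, x2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b1, b2 = 0.35, 0.60                        # hidden- and output-layer biases

# Hidden layer: weighted sum (Eq. 1) followed by the activation function.
h1 = sigmoid(x1 * w1 + x2 * w2 + b1)       # H1 in the figure
h2 = sigmoid(x1 * w3 + x2 * w4 + b1)       # H2 in the figure
# Output layer: same pattern, with the hidden activations as inputs.
y1 = sigmoid(h1 * w5 + h2 * w6 + b2)
y2 = sigmoid(h1 * w7 + h2 * w8 + b2)
print(round(y1, 4), round(y2, 4))          # -> 0.7514 0.7729
```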
particular appropriate places. Similarly, the values of w1, w2, w3 and w4 are updated in the same way. This process of propagating the error backwards and updating the weights is called backpropagation.
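The backward pass described above can be sketched as follows, assuming a squared-error loss and sigmoid activations; the reduced x -> H1 -> Y1 path, the learning rate and all numeric values are hypothetical, chosen only to illustrate the chain-rule updates:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical values; lr is the learning rate.
x1, x2, target = 0.05, 0.10, 1.0
w1, w2, b1 = 0.15, 0.20, 0.35   # input -> hidden weights feeding H1
w5, b2 = 0.40, 0.60             # hidden -> output weight from H1
lr = 0.5

# Forward pass along the reduced path.
h1 = sigmoid(x1 * w1 + x2 * w2 + b1)
y1 = sigmoid(h1 * w5 + b2)

# Backward pass: chain rule for the error E = 0.5 * (target - y1)**2.
delta_out = (y1 - target) * y1 * (1 - y1)   # dE/d(net input of Y1)
delta_h1 = delta_out * w5 * h1 * (1 - h1)   # dE/d(net input of H1)

# Gradient-descent updates: weights move against the gradient.
w5 -= lr * delta_out * h1
w1 -= lr * delta_h1 * x1
w2 -= lr * delta_h1 * x2
```

The bias updates are omitted for brevity; they follow the same pattern with an input of 1.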
IV. CONCLUSION
Thus the artificial neural network involves two passes. In the forward pass, the input signals propagate from the network input to the output. In the reverse pass, the calculated error signals propagate backwards through the network, where they are used to adjust the weights. The calculation of the output is carried out layer by layer in the forward direction; the weighted output of one layer becomes the input of the next layer. In the reverse pass, the weights of the output neuron layer are adjusted first, since the target value of each output neuron is available to guide the adjustment of the associated weights.
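The two passes can be combined into a training loop. The sketch below uses a tiny one-input, one-hidden-unit, one-output network; the initial weights, learning rate and iteration count are illustrative assumptions, not values from the paper:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical 1-1-1 network: one input, one hidden unit, one output.
x, target, lr = 1.0, 0.0, 1.0
w_ih, w_ho = 0.8, 0.9            # input->hidden and hidden->output weights

for _ in range(200):
    # Forward pass: outputs computed layer by layer.
    h = sigmoid(x * w_ih)
    y = sigmoid(h * w_ho)
    # Reverse pass: the output-layer weight is adjusted first,
    # then the error signal is propagated back to the hidden layer.
    delta_o = (y - target) * y * (1 - y)
    delta_h = delta_o * w_ho * h * (1 - h)
    w_ho -= lr * delta_o * h
    w_ih -= lr * delta_h * x

print(round(y, 4))   # y moves towards the target as training proceeds
```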