Unit 1
From Deep Learning Using Python and TensorFlow by Venkata Reddy Konasani
A neuron decides whether to fire by "summing up" its inputs. This happens because all the inputs are electrical voltages, and they all merge at a single point. So, if the total input (after being amplified or reduced as it passes through the synapses) is greater than some particular value, the cell fires an event. Again, this happens through a complex chemical reaction.
Once the cell fires, it must restore its chemical equilibrium (and prepare
for future signals). So the neuron remains inactive for a small amount of
time (called the refractory period). Thus, at any given time, several neurons
may remain inactive while several others are active.
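The summing, thresholding, and refractory behaviour described above can be sketched in code. This is a toy illustration, not a biophysical model; the function name, threshold, and refractory length are illustrative assumptions.

```python
# Toy sketch of the summing/threshold/refractory behaviour described above.
# The constants (threshold=2.0, refractory_steps=2) are illustrative assumptions.

def simulate_neuron(input_trains, threshold=2.0, refractory_steps=2):
    """Sum the inputs at each time step; fire when the sum reaches the
    threshold, then stay inactive for `refractory_steps` steps."""
    fired = []
    cooldown = 0
    for inputs in input_trains:          # one list of input voltages per step
        total = sum(inputs)              # all inputs merge at a single point
        if cooldown > 0:                 # still restoring chemical equilibrium
            fired.append(0)
            cooldown -= 1
        elif total >= threshold:         # total input exceeds the threshold
            fired.append(1)
            cooldown = refractory_steps  # enter the refractory period
        else:
            fired.append(0)
    return fired

print(simulate_neuron([[1, 1], [1, 1], [1, 1], [0, 0], [1, 1]]))
# [1, 0, 0, 0, 1]  -> fires, then stays silent during the refractory period
```

Note how the second and third steps produce no spike even though the input is strong enough: the neuron is inside its refractory period.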
ANN: Artificial Neural Network
● 1943 − The concept of neural networks is generally considered to have
started with the work of the physiologist Warren McCulloch and the
mathematician Walter Pitts, who in 1943 modeled a simple neural network
using electrical circuits in order to describe how neurons in the brain
might work.
● 1949 − Donald Hebb's book, The Organization of Behavior, put forth the
idea that repeated activation of one neuron by another strengthens the
connection between them each time they are used.
● 1956 − An associative memory network was introduced by Taylor.
● 1958 − A learning method for the McCulloch-Pitts neuron model, named the
Perceptron, was invented by Rosenblatt.
● 1961 − Rosenblatt proposed a "backpropagation" scheme for multilayer
networks, though his attempt was unsuccessful.
● 1964 − Taylor constructed a winner-take-all circuit with inhibitions
among output units.
● 1969 − Multilayer perceptron (MLP) was invented by Minsky and Papert.
● 1971 − Kohonen developed Associative memories.
● 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive Resonance Theory (ART).
● 1982 − The major development was Hopfield’s Energy approach.
● 1985 − Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.
● 1986 − Rumelhart, Hinton, and Williams introduced Generalised Delta Rule.
● 1988 − Kosko developed Binary Associative Memory (BAM) and also gave the concept of Fuzzy Logic in ANN.
McCulloch-Pitts model
The McCulloch-Pitts neuron can be divided into two parts: the first part, g, takes the inputs and performs an
aggregation, and based on the aggregated value the second part, f, makes a decision.
Let's suppose that I want to predict my own decision, whether to watch a random football game or not on
TV. The inputs are all boolean i.e., {0,1} and my output variable is also boolean {1: Will watch it, 0: Won't
watch it}.
● So, x_1 could be isPremierLeagueOn (I like Premier League more)
● x_3 could be isNotHome (Can’t watch it when I’m running errands. Can
I?)
and so on.
These inputs can be either excitatory or inhibitory. Inhibitory inputs are those
that have maximum effect on the decision making irrespective of other inputs, i.e.,
if an inhibitory input is 1, the neuron will not fire no matter what the other inputs are.
Mathematically, this is what is happening:

g(x) = x_1 + x_2 + ... + x_n
y = f(g(x)) = 1 if g(x) >= theta, else 0

We can see that g(x) is just doing a sum of the inputs, a simple aggregation. And theta here is called the thresholding parameter. For example, if I always watch the game
when the sum turns out to be 2 or more, then theta is 2. This is called Thresholding Logic.
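The thresholding logic above can be sketched for the football example. This is a minimal sketch; the input names (the third input, isNotHome, is treated as inhibitory) and the way inhibition is passed in are illustrative assumptions.

```python
# Minimal sketch of McCulloch-Pitts thresholding logic for the football
# example. The specific inputs and the inhibitory flag list are assumptions.

def mp_neuron(inputs, inhibitory, theta):
    """Fire (return 1) when no inhibitory input is on and the plain sum
    of the inputs g(x) reaches the thresholding parameter theta."""
    if any(x == 1 for x, inh in zip(inputs, inhibitory) if inh):
        return 0                      # an inhibitory input vetoes the decision
    g = sum(inputs)                   # g(x): simple aggregation of the inputs
    return 1 if g >= theta else 0     # f: thresholding with parameter theta

# x_1 = isPremierLeagueOn, x_2 = some other excitatory input,
# x_3 = isNotHome (inhibitory: if I'm not home, I can't watch at all)
print(mp_neuron([1, 1, 0], [False, False, True], theta=2))  # 1: will watch
print(mp_neuron([1, 1, 1], [False, False, True], theta=2))  # 0: not home, veto
```

Note that the inhibitory input overrides the sum entirely: even though the aggregate is 3 in the second call, the neuron does not fire.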
Question: How do biological neurons compare with artificial neurons?
Biological Neurons
Major components: Axons, Dendrites, Synapses
Artificial Neurons
Major Components: Nodes, Inputs, Outputs, Weights, Bias
Biological Neurons working: Information from other neurons, in the form of
electrical impulses, enters the dendrites at connection points called synapses.
The information flows from the dendrites to the cell where it is processed. The
output signal, a train of impulses, is then sent down the axon to the synapse of
other neurons.
ANN working: The arrangement and connections of the neurons make up the
network, which has three layers. The first layer is called the input layer and is the
only layer exposed to external signals. The input layer transmits signals to the
neurons of the next layer, called the hidden layer, which extracts relevant
features from the input and passes them on to the output layer.
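The three-layer arrangement described above can be sketched as a single forward pass. This is a minimal sketch, not a trained network; the layer sizes, random weights, and tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): 3 input signals, 4 hidden nodes, 2 outputs.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden -> output

def forward(x):
    """Pass an external signal through the input, hidden, and output layers."""
    hidden = np.tanh(x @ W1 + b1)   # hidden layer transforms the input signals
    return hidden @ W2 + b2         # output layer produces the final signal

x = np.array([0.5, -1.0, 2.0])      # signal arriving at the input layer
print(forward(x).shape)             # one value per output node
```

Only the input layer "sees" the external signal `x`; everything downstream works on the transformed values produced by the previous layer.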
Different types of Neural Network Architectures
Perceptron
Types of Perceptron models
Single Layer Perceptron model
This is one of the easiest types of artificial neural networks (ANNs). A single-layer perceptron model consists of a feed-forward network and also includes a threshold
transfer function inside the model. The main objective of the single-layer perceptron model is to analyze linearly separable objects with binary outcomes.
In a single-layer perceptron model, the algorithm has no prior recorded data, so it begins with randomly allocated weight parameters. It then
sums up all the weighted inputs. If this total is more than a pre-determined threshold value, the model gets activated and shows the
output value as +1.
If the outcome matches the desired (pre-determined) value, the performance of the model is considered satisfactory, and the weights are left unchanged.
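The single-layer perceptron described above can be sketched as follows. The weights start randomly allocated, as the text notes; the toy AND-gate dataset, learning rate, and epoch count are illustrative assumptions.

```python
import random

random.seed(0)

# Weights begin randomly allocated (no prior recorded data), per the text.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def predict(x):
    """Threshold transfer function: activate (1) if the weighted sum clears 0."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total >= 0 else 0

# Perceptron learning rule on a linearly separable toy problem (AND gate).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, target in data:
        error = target - predict(x)   # zero error => weights left unchanged
        for i in range(2):
            weights[i] += 0.1 * error * x[i]
        bias += 0.1 * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -> the AND gate is learned
```

When the prediction already matches the target, `error` is zero and the weight update does nothing, which is exactly the "weights left unchanged" condition in the text.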
Multilayer Perceptron model
The multi-layer perceptron model is trained with the Backpropagation algorithm, which executes
in two stages as follows:
● Forward Stage: Activation functions start from the input layer in the forward stage and
terminate on the output layer.
● Backward Stage: In the backward stage, weight and bias values are modified as per the
model's requirement. The error between the actual and the desired output is propagated
backward, starting at the output layer and ending at the input layer.
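The two stages can be sketched on a tiny network. This is a minimal sketch trained on the XOR problem; the 2-4-1 architecture, sigmoid activation, random seed, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-4-1 multilayer perceptron; all hyperparameters are assumptions.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

losses = []
for _ in range(5000):
    # Forward stage: activations start at the input layer and
    # terminate on the output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward stage: the error between actual and desired output travels
    # from the output layer back toward the input layer, modifying the
    # weight and bias values along the way.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")  # error shrinks over training
```

XOR is the classic problem a single-layer perceptron cannot solve, which is why a hidden layer and the backward stage are needed here.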
Define Deep Learning
Think???
Difference between Deep Learning and Machine Learning?