ML Trends
All content following this page was uploaded by Hamed Daei Kasmaei on 12 July 2020.
Use cases:
Some applications of reinforcement learning algorithms include
computer-played board games (Chess, Go), robotic hands,
and self-driving cars.
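To make the idea concrete, here is a minimal tabular Q-learning sketch on a toy corridor environment (states 0 to 4, with a reward of 1 for reaching the last state). The environment, reward, and hyperparameters are illustrative assumptions, not from any of the applications above; board games and robotics use far larger state spaces and function approximation.

```python
import random

# Toy corridor: 5 states, actions move left (-1) or right (+1);
# reaching state 4 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, [-1, +1]
alpha, gamma = 0.5, 0.9                       # learning rate, discount
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                          # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)            # random exploration (off-policy)
        nxt, r = step(s, a)
        # Q-learning update: move Q toward reward + discounted best next value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = nxt

# Greedy policy after training: always move right toward the reward
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Because Q-learning is off-policy, even purely random exploration is enough here for the learned greedy policy to move right in every state.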
What are Convolutional Neural Networks?
A Convolutional Neural Network (ConvNet/CNN) is a Deep
Learning algorithm that can take an input image, assign
importance (learnable weights and biases) to various
aspects/objects in the image, and differentiate one from
the other. The pre-processing required in a ConvNet is
much lower than in other classification algorithms:
while in primitive methods filters are hand-engineered,
with enough training, ConvNets learn these
filters/characteristics on their own.
The architecture of a ConvNet is analogous to the
connectivity pattern of neurons in the human brain and was
inspired by the organization of the visual cortex. Individual
neurons respond to stimuli only in a restricted region of the
visual field known as the receptive field; a collection of such
fields overlap to cover the entire visual area.
[Figure: a CNN sequence to classify handwritten digits]
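The core operation described above, sliding a learnable filter over an image so each output value sees only a small receptive field, can be sketched in a few lines. The 3x3 vertical-edge kernel below is hand-picked purely for illustration; a trained ConvNet would learn such filters from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of one channel with one filter."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):                  # each output value is computed from a
        for j in range(ow):              # small kh x kw "receptive field" only
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Image with a sharp vertical edge: left half 0, right half 1
img = np.zeros((5, 6))
img[:, 3:] = 1.0

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])         # classic vertical-edge detector
fmap = conv2d(img, sobel_x)
print(fmap)                              # responds only where the edge lies
```

The resulting feature map is zero over flat regions and large exactly where the vertical edge sits, which is the "importance assignment" the text describes.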
Applications of Convolutional Neural Networks
Decoding facial recognition
Analyzing documents
Historic and environmental collections
Understanding climate
Grey areas (better image resolution than what a human being
sees)
Advertising
Brain cancer detection and many other issues in healthcare
Recurrent neural networks
A recurrent neural network (RNN) is a type of artificial neural
network commonly used in speech recognition and natural
language processing (NLP). RNNs are designed to recognize the
sequential characteristics of data and use those patterns to
predict the next likely scenario.
An RNN converts independent activations into dependent ones
by sharing the same weights and biases across all layers,
which reduces the number of parameters and lets the network
memorize previous outputs by feeding each output as input to
the next hidden layer.
These layers can therefore be joined into a single recurrent
layer in which the weights and biases of all hidden layers
are the same.
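The weight-sharing and memory idea above can be sketched as a minimal RNN forward pass: the same weight matrices are reused at every time step, and each hidden state feeds into the next. The layer sizes and random inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 4, 5

# ONE set of weights and biases, shared by every time step
Wx = rng.normal(0, 0.1, (hidden_size, input_size))   # input  -> hidden
Wh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (recurrent)
b = np.zeros(hidden_size)

def rnn_forward(xs):
    """Run the single recurrent layer over a sequence of input vectors."""
    h = np.zeros(hidden_size)                # initial hidden state
    states = []
    for x in xs:                             # same Wx, Wh, b at each step
        h = np.tanh(Wx @ x + Wh @ h + b)     # h_t depends on x_t AND h_{t-1}
        states.append(h)
    return states

xs = rng.normal(size=(seq_len, input_size))
states = rnn_forward(xs)
print(len(states), states[-1].shape)
```

Because the previous hidden state is an input to the current step, the final state carries information from the whole sequence, while the parameter count stays fixed no matter how long the sequence is.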