
DEEP LEARNING

Presented by:
Anuj Trehan (003)
Deepak Dhingra (008)
Divyanshu Sharma (011)
WHAT IS DEEP LEARNING?
• Deep Learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making.

• Deep learning is a subset of machine learning in Artificial Intelligence whose networks are capable of learning representations from data.

• Deep learning can be applied to all types of learning tasks, i.e. supervised, unsupervised, and reinforcement learning.

• Deep learning takes its inspiration from the most fundamental part of the human brain, “THE NEURON”. The most basic part of a deep learning system is the “ARTIFICIAL NEURON”.
WHY USE DEEP LEARNING?
• Deep learning has matched, and in many tasks outperformed, human-level performance.

• The availability of huge amounts of data and of computational power is the key to training deep learning algorithms, and at this time we have both.

• Once trained, models can be exported to almost any type of platform: computer systems, smartphones, IoT devices, etc.
THE MAJOR REASON
• Deep networks have also outperformed traditional learning algorithms in many tasks because they learn the representations in data better.

• Moreover, their performance keeps increasing with the amount of data delivered to them, whereas the performance of traditional methods plateaus.
HOW IT ALL STARTED AND EVOLVED
• 1957: The perceptron sets the foundation for deep neural networks
• 1960s-70s: Backpropagation emerges
• 1980s: Convolution emerges
• 1990s: Unsupervised deep learning
• 1990s-2000s: Supervised deep learning
• 2006-present: Modern deep learning
BASIC ARTIFICIAL NEURON
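Since this slide is a diagram, here is a minimal sketch of an artificial neuron in Python with NumPy: it computes a weighted sum of its inputs plus a bias, then applies a nonlinear activation (a sigmoid, chosen here for illustration; the weight and input values are made up).

import numpy as np

def sigmoid(z):
    # Squash the pre-activation into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(x, w, b):
    # Weighted sum of the inputs plus a bias, then a nonlinear activation.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # three inputs
w = np.array([0.8, 0.2, -0.4])   # made-up weights
b = 0.1                          # bias
print(artificial_neuron(x, w, b))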
MAPPING PATTERNS IN DATA WITH DEEP LEARNING
• So far we have seen that deep neural networks can work efficiently with labeled data, i.e. on supervised learning tasks.
• But they are not restricted to supervised learning: they are also very good at unsupervised learning tasks like data compression, pattern recognition, etc.
• Many models have been developed in these areas, but the prominent ones are Restricted Boltzmann Machines, Autoencoders, and, most recently, Generative Adversarial Nets.
RESTRICTED BOLTZMANN MACHINES
• Restricted Boltzmann Machines are shallow neural networks that have only two layers, a visible layer and a hidden layer. They are used to find patterns in data by reconstructing the input.
• We say that they are "restricted" because neurons within the same layer are not connected. RBMs were first created by Paul Smolensky in 1986 and were further developed by Geoffrey Hinton in 2002.
• RBMs are useful in many applications like dimensionality reduction, feature extraction, and collaborative filtering, just to name a few.
RESTRICTED BOLTZMANN MACHINES
• So let's say that we provide an image as input to an RBM. The pixels are processed by the input layer, which is also known as the visible layer. The learning process consists of several forward and backward passes, in which the RBM tries to reconstruct the input data.
• The weights of the neural net are adjusted so that the RBM can find the relationships among input features and determine which features are relevant. After training is complete, the net is able to reconstruct the input based on what it learned (a sketch of this procedure follows below).
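Below is a minimal sketch, in Python with NumPy, of the forward/backward reconstruction procedure just described, using one step of contrastive divergence (CD-1). The layer sizes, learning rate, and random stand-in data are illustrative assumptions, not values from the slides.

import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, lr = 784, 64, 0.01   # illustrative sizes
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)                 # visible-layer bias
b_h = np.zeros(n_hidden)                  # hidden-layer bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0):
    """One forward/backward reconstruction pass with a weight update."""
    global W, b_v, b_h
    # Forward pass: visible layer -> hidden activation probabilities.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden units
    # Backward pass: hidden layer -> reconstruction of the input.
    v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Nudge the weights so reconstructions match the data better.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)
    return v1

v = rng.random(n_visible)   # stand-in "image" of pixel intensities
recon = cd1_step(v)         # the RBM's reconstruction of the input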
AUTOENCODERS
• Because of their shallowness, RBMs sometimes cannot encode and learn more advanced features (like human faces, backgrounds, etc.), so a bigger class of models was derived from them, known as autoencoders, which can be viewed as RBM-like networks with more than one hidden layer.
• An autoencoder is a type of artificial neural network used to learn efficient data encodings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. Recently, the autoencoder concept has become more widely used for learning generative models of data.
AUTOENCODERS
• There are various types of autoencoders; the most used ones are stacked autoencoders, convolutional autoencoders, sparse autoencoders, and variational autoencoders.
• The training process for autoencoders is similar to that of RBMs, but here we compute a reconstruction loss and backpropagate to minimize the loss generated in each epoch (see the sketch below).
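Below is a minimal sketch of that training loop in Python using PyTorch (the framework is our assumption; the slides name none). The layer sizes, learning rate, MSE reconstruction loss, and random stand-in batch are all illustrative.

import torch
import torch.nn as nn

# Encoder compresses a 784-dim input to a 32-dim code; decoder reconstructs it.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),     # bottleneck: the learned encoding
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid()  # reconstruction of the input
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)  # stand-in batch; real training data would go here
for epoch in range(10):
    reconstruction = model(x)
    loss = loss_fn(reconstruction, x)  # loss against the input itself
    optimizer.zero_grad()
    loss.backward()                    # backpropagate to minimize the loss
    optimizer.step()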
GENERATIVE ADVERSARIAL NEURAL NETWORKS
• GANs were introduced in a 2014 paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio.
• Generative adversarial networks (GANs) are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. This technique can generate photographs that look at least superficially authentic to human observers, having many realistic characteristics (though in tests people can often still tell real images from generated ones).
GENERATIVE ADVERSARIAL NEURAL NETWORKS
• A generative adversarial net is a combination of two neural networks, one called the generator and the other called the discriminator.
• The job of the generator is to produce images/data that look very much like those in the dataset.
• The job of the discriminator is to distinguish the images/data coming from the generator from those in the dataset.
• Together they are tied in a minimax game: the discriminator tries to lower the probability that images/data coming from the generator are classified as real,
• while the generator tries to minimize the probability that the discriminator will identify the generated data as fake.
• This game goes on until the discriminator is finally unable to differentiate between real and fake data (a sketch of this training loop follows below).
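Below is a minimal sketch of that adversarial training loop in Python with PyTorch (again an assumed framework; the architectures, sizes, and optimizer settings are illustrative). The discriminator is trained to score real samples as 1 and generated ones as 0, while the generator is trained to make the discriminator score its outputs as 1.

import torch
import torch.nn as nn

# Generator: maps 16-dim noise to a fake 784-dim sample.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Sigmoid())
# Discriminator: maps a 784-dim sample to P(sample is real).
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)  # stand-in for a batch of real data
ones = torch.ones(32, 1)
zeros = torch.zeros(32, 1)

for step in range(100):
    # 1) Train the discriminator: push D(real) toward 1 and D(fake) toward 0.
    fake = G(torch.randn(32, 16)).detach()  # detach so G is not updated here
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator: push D(G(z)) toward 1 (fool the discriminator).
    fake = G(torch.randn(32, 16))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()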
Thank you!
