NN


1. Perceptron
A single-layer neural network is another name for the perceptron model. This neural network has only two layers: input and output. There are no hidden layers in this form of neural network. It receives an input and computes the weighted sum of the inputs for each node. Following that, it employs an activation function (often a sigmoid function) to produce the output.
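
To make this concrete, here is a minimal sketch of a single perceptron in NumPy; the input, weights, bias, and the use of NumPy are illustrative assumptions rather than details from the text.

import numpy as np

def perceptron(x, w, b):
    # Weighted sum of the inputs followed by a sigmoid activation.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative input, weights, and bias (assumed values).
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
b = 0.1
print(perceptron(x, w, b))  # a value between 0 and 1
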
2. Feed Forward
A feed-forward neural network is a type of artificial neural network in which cycles between the nodes never occur. All of the perceptrons in this neural network are organised in layers, with the input layer receiving input and the output layer producing output. The hidden layers are called hidden because they are disconnected from the outside world. Every perceptron in one layer of a feed-forward neural network is linked to every node in the following layer; as a result, every node is fully connected. Another thing to note is that nodes in the same layer are not connected to one another. The feed-forward network is free of back-loops. Therefore, in order to reduce prediction error, we typically use the backpropagation algorithm to update the weights.
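
A minimal sketch of a fully connected feed-forward network in PyTorch, assuming PyTorch is available; the layer sizes, data, and learning rate are illustrative. Repeating the hidden-layer pattern with more layers gives the deep feed-forward network described in item 4 below.

import torch
import torch.nn as nn

# Fully connected feed-forward network: input -> hidden -> output, no cycles.
model = nn.Sequential(
    nn.Linear(10, 32),   # input layer -> hidden layer (sizes are illustrative)
    nn.ReLU(),
    nn.Linear(32, 1),    # hidden layer -> output layer
    nn.Sigmoid(),
)

x = torch.randn(4, 10)           # a batch of 4 illustrative inputs
y = torch.rand(4, 1).round()     # illustrative binary targets
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One backpropagation step to reduce prediction error.
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
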
3. Radial Basis Network
In order to approximate functions, radial basis function networks are frequently employed. They vary from other neural networks because they learn more quickly and can approximate a very wide range of functions. RBNs use a radial basis function as the activation function, which is the main difference between RBNs and feed-forward networks. A logistic (sigmoid) function, which determines whether the response is yes or no, returns a value between 0 and 1; the problem is that such a function is a poor fit when the target values are continuous. Radial basis functions instead measure how far the produced output diverges from the desired result, which makes them quite helpful when dealing with continuous values. RBNs typically exhibit FF network behaviour with varied degrees of complexity.
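
A minimal sketch of a radial basis layer in NumPy; the centres, width parameter, output weights, and input are assumed for illustration. Each hidden unit responds according to the distance between the input and its centre.

import numpy as np

def rbf_layer(x, centers, gamma=1.0):
    # Gaussian radial basis activations: exp(-gamma * ||x - c||^2) per centre.
    dists = np.linalg.norm(centers - x, axis=1)
    return np.exp(-gamma * dists ** 2)

centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])  # illustrative centres
w = np.array([0.3, -0.8, 1.1])                            # illustrative output weights
x = np.array([0.9, 1.1])

hidden = rbf_layer(x, centers)
output = np.dot(w, hidden)   # linear combination of the RBF activations
print(hidden, output)
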
4. Deep Feed-forward
A feed-forward network that employs multiple hidden layers is referred to as a deep feed-forward network. Overfitting is the fundamental issue with employing only one hidden layer; consequently, by incorporating additional hidden layers, we may (though not always) lessen overfitting and increase generalisation.
5. Recurrent Neural Network
Recurrent neural networks (RNNs) are a kind of feed-forward (FF) network. In this type, every neuron in the hidden layers receives an input with a particular time delay. We employ this kind of neural network when we need to access historical data during current iterations; for instance, we need to know the words that came before in order to predict the next word in a sentence. RNNs can process inputs of varying lengths and share weights across time steps. The model's computations take historical data into consideration, and the model size does not increase with the size of the input. However, this neural network has a problem with its sluggish processing speed. Also, it is unable to take into account any upcoming input.
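
A minimal sketch of a recurrent layer in PyTorch; the sequence length, feature size, and hidden size are illustrative assumptions. The same weights are reused at every time step, which is how the network carries historical information forward.

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)  # illustrative sizes
x = torch.randn(2, 5, 8)       # batch of 2 sequences, 5 time steps, 8 features each

output, h_n = rnn(x)           # output: hidden state at every step, h_n: final state
print(output.shape)            # torch.Size([2, 5, 16])
print(h_n.shape)               # torch.Size([1, 2, 16])
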
6. Long / Short Term Memory (LSTM):
A memory cell is introduced by LSTM networks. With memory gaps, they can still process data. As we saw above, time delay is a factor that RNNs can take into account. However, when an RNN fails because a large amount of relevant data lies far back in the sequence and we still want to extract it, LSTMs are the better option. Additionally, unlike LSTMs, RNNs are not capable of recalling data from a long time ago.
7. Gated Recurrent Unit (GRU):
Gated Recurrent Units are a variant of LSTMs, since they both have comparable designs and yield results that are just as good. Only three gates are present in GRUs, and an internal cell state is not preserved.
a. Update Gate: Chooses how much historical information to transmit to the future.
b. Reset Gate: Selects how much prior knowledge should be forgotten.
c. Current Memory Gate: A subpart of the reset gate.
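
For comparison, the sketch below shows LSTM and GRU counterparts of the recurrent layer above, again in PyTorch with illustrative sizes; the LSTM maintains a separate cell state while the GRU does not.

import torch
import torch.nn as nn

x = torch.randn(2, 5, 8)  # batch of 2 sequences, 5 steps, 8 features (illustrative)

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
out_lstm, (h_n, c_n) = lstm(x)   # the LSTM returns a hidden state and a cell state

gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
out_gru, h_gru = gru(x)          # the GRU keeps no separate internal cell state

print(out_lstm.shape, c_n.shape, out_gru.shape)
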
8. Auto Encoder (AE):
An autoencoder neural network is an unsupervised machine learning algorithm. The number of hidden cells in an autoencoder is less than the number of input cells, while the number of input cells and the number of output cells are equal. We train an AE network to reproduce an output identical to the input it is fed, which forces AEs to identify patterns in the data and generalise them. We employ autoencoders to obtain a more compact representation of the input, and from the compressed data we can recreate the original data. The procedure is rather straightforward because the AE only demands that the result match the input.
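
A minimal autoencoder sketch in PyTorch; the 784-dimensional input and 32-dimensional bottleneck are assumed for illustration. The hidden layer is smaller than the input, and the network is trained to reproduce its own input.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())    # compress (sizes illustrative)
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid()) # reconstruct

x = torch.rand(16, 784)                  # illustrative batch of inputs in [0, 1]
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

code = encoder(x)                        # compact representation
recon = decoder(code)
loss = loss_fn(recon, x)                 # the output is compared against the input itself
optimizer.zero_grad()
loss.backward()
optimizer.step()
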
9. Variational Autoencoder (VAE):
A variational autoencoder (VAE) uses a probabilistic technique to describe observations. Rather than encoding each input as a single point, it describes the probability distribution of the latent representation.
10. Denoising Autoencoder (DAE):
Because the input additionally contains random noise, the network of this autoencoder cannot simply copy the input to the output. In DAEs, we train the network to filter out the noise and reconstruct the useful data, so the output is a cleaner version of the noisy input. In this situation, the objective forces the hidden layer to acquire more robust features.
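
A denoising variant only changes the training step of the sketch above: noise is added to the input while the reconstruction is still compared against the clean input. The noise level and sizes here are assumed.

import torch
import torch.nn as nn

autoencoder = nn.Sequential(                 # same shape of model as above (illustrative)
    nn.Linear(784, 32), nn.ReLU(),
    nn.Linear(32, 784), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(16, 784)                      # clean inputs (illustrative)
noisy_x = x + 0.2 * torch.randn_like(x)      # corrupt the input with random noise
recon = autoencoder(noisy_x)                 # the network sees only the noisy version
loss = nn.MSELoss()(recon, x)                # ...but must reproduce the clean input
optimizer.zero_grad()
loss.backward()
optimizer.step()
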
11. Sparse Autoencoder (SAE):
In sparse autoencoder networks, we build the loss function by penalising the hidden layer activations, so that only a small number of nodes are activated for each sample fed into the network. This strategy is based on the premise that, for instance, if someone declares themselves an expert in subjects A, B, C, and D, they may actually be more of a generalist in those fields; however, it is reasonable to expect deeper insights if they indicate that they are dedicated to subject D alone.
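
A sketch of the sparsity idea under the same assumptions as the autoencoder above: an L1 penalty on the hidden activations is added to the reconstruction loss so that only a few hidden nodes fire for each sample; the penalty weight is an arbitrary illustrative value.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
x = torch.rand(16, 784)                        # illustrative inputs

hidden = encoder(x)
recon = decoder(hidden)
sparsity_penalty = 1e-3 * hidden.abs().mean()  # L1 penalty keeps most activations near zero
loss = nn.MSELoss()(recon, x) + sparsity_penalty
loss.backward()
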
12. Markov Chain (MC):
A mathematical system called a Markov chain experiences state changes according to particular probability criteria. The only factors influencing the likelihood of changing states are the current state and the passage of time.
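
A minimal Markov chain sketch in NumPy with an assumed two-state transition matrix; the next state is sampled using only the probabilities attached to the current state.

import numpy as np

# Assumed transition matrix: P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

rng = np.random.default_rng(0)
state = 0
trajectory = [state]
for _ in range(10):
    state = rng.choice(2, p=P[state])   # depends only on the current state
    trajectory.append(state)
print(trajectory)
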
13. Hopfield Network (HN):
Every neuron in a Hopfield neural network has a direct connection to every other neuron. A neuron in this network is either ON or OFF. Receiving input from other neurons allows neurons to alter their state. Hopfield networks (HNs) are typically used to store patterns and memories. After being trained on a set of patterns, the network can recognise a pattern even if it is slightly deformed or unfinished. When we give it incomplete data, it can recognise the full pattern and deliver the best guess.
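
A minimal Hopfield network sketch in NumPy: one pattern is stored with a Hebbian outer-product rule and then recovered from a partially corrupted copy by repeated threshold updates. The pattern and the number of update sweeps are illustrative assumptions.

import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # stored pattern (+1 = ON, -1 = OFF)

# Hebbian storage: every neuron is connected to every other neuron.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

probe = pattern.copy()
probe[:3] = -probe[:3]                              # corrupt part of the pattern

state = probe.astype(float)
for _ in range(5):                                  # a few asynchronous update sweeps
    for i in range(len(state)):
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

print(np.array_equal(state, pattern.astype(float))) # usually recovers the stored pattern
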
14. Boltzmann Machine (BM):
Using a Boltzmann machine network, one can infer information about previously unobserved data by learning a probability distribution from an initial dataset. In BMs, input nodes and hidden nodes have different states; as soon as all of our hidden nodes change their state, our input nodes become output nodes. For illustration: imagine that we work at a nuclear power plant where safety must always come first. Our responsibility is to make sure that every component of the power plant is safe to use. Each component will have a state, denoted for simplicity by a boolean value (1 for usable and 0 for unusable). There will be some components, though, whose states we won't be able to measure consistently. Furthermore, we lack information that would allow us to predict when the power plant will explode if one of those hidden components stops working. In that instance, we create a model that keeps track of when the component's state changes, so we will be informed when it does and can check on that component and guarantee the safety of the power plant.
15. Restricted Boltzmann Machine (RBM):
An RBM is an alternative to the BM. In this paradigm, neurons in the input layer and the hidden layer may have symmetric connections between them. One thing to note is that there are no internal connections within each layer; in contrast, Boltzmann machines may have internal connections inside the hidden layer. Because RBMs are restricted in this way, the model can be trained efficiently.
16. Deep Belief Network (DBN):
Deep Belief Networks contain multiple hidden layers. DBNs are first trained with an unsupervised method, and the layers of a DBN then serve as feature detectors. After the unsupervised training, we can use supervised approaches to teach the model to categorise. DBNs are a hybrid of Restricted Boltzmann Machines (RBMs) and Autoencoders (AEs).
17. Deep Convolutional Network (DCN):
Convolutional neural networks are mostly used for image classification, image clustering, and object recognition. DCNs allow for the unsupervised creation of hierarchical image representations. When far more sophisticated features are needed, DCNs are used, allowing the task to be performed with more precision.
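
A minimal convolutional classifier sketch in PyTorch; the channel counts, the 1x28x28 input size, and the ten output classes are assumed for illustration. Stacked convolution and pooling layers build the hierarchical image representation described above.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level features (edges, blobs)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # 10 illustrative classes
)

x = torch.randn(8, 1, 28, 28)   # batch of 8 single-channel 28x28 images (illustrative)
print(model(x).shape)           # torch.Size([8, 10])
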
18. Deconvolutional Neural Networks (DN):
Convolutional neural networks (CNNs) that work in reverse are known as deconvolutional networks. Although the nature of a DN is similar to that of a CNN, its use in AI is considerably different. Deconvolutional networks help recover features or signals that were previously deemed useful but were lost, for example when a signal has been mixed up with other signals during convolution. A deconvolutional network can take a vector and turn it into an image.
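
A minimal sketch of the vector-to-image direction using transposed convolutions in PyTorch; the latent size and output resolution are illustrative assumptions.

import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(64, 32 * 7 * 7),                    # illustrative latent vector -> feature map
    nn.ReLU(),
    nn.Unflatten(1, (32, 7, 7)),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
    nn.Sigmoid(),
)

z = torch.randn(4, 64)          # batch of 4 illustrative vectors
print(decoder(z).shape)         # torch.Size([4, 1, 28, 28])
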
19. Deep Convolutional Inverse Graphics Network (DC-IGN):
Deep Convolutional Inverse Graphics Networks (DC-IGN) are designed to connect graphics representations to images. For more advanced picture processing, they employ components such as lighting, object position, texture, and other characteristics of image design. A DC-IGN processes input and output across multiple levels: it encodes with several convolution layers that use max pooling, and then decodes with unpooling in the following layers.
20. Generative Adversarial Network (GAN):
Given training data, GANs learn to produce new data with the same statistics as the training data. For example, if we train a GAN model on images, the trained model can create new photographs that appear authentic to the human eye. A GAN pairs a generator with a discriminator that learns to tell actual data from synthetic data; as the generator learns to fool the discriminator, it produces increasingly authentic outcomes.
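
A minimal GAN sketch in PyTorch with assumed sizes and stand-in data: a generator maps random noise to synthetic samples, a discriminator is trained to separate real from synthetic, and the generator is trained to fool it.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0            # stand-in "real" data (illustrative)
noise = torch.randn(64, 16)

# Discriminator step: real samples labelled 1, generated samples labelled 0.
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
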
21. Liquid State Machine (LSM):
A Liquid State Machine (LSM) is one type of spiking neural network. An LSM is made up of a large number of neurons. Each node receives input from an external source as well as from other nodes, and this input can change over time. It is worth noting that the nodes in an LSM are connected to each other at random. Threshold levels replace activation functions in LSMs: only when a neuron reaches a certain threshold level does it broadcast its output.
22. Extreme Learning Machine (ELM):
ELMs assign the hidden-node weights at random and then calculate the output weights analytically. As a result, these algorithms are much faster than standard neural network training methods. Furthermore, the randomly assigned weights in extreme learning machine networks are generally never updated; ELMs learn the output weights in a single step.
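
A minimal ELM sketch in NumPy with illustrative data: the input-to-hidden weights are random and never updated, and the output weights are computed in a single analytic step via a pseudo-inverse.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))            # illustrative inputs
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]          # illustrative regression target

W_in = rng.standard_normal((5, 50))          # random hidden weights, never trained
b = rng.standard_normal(50)
H = np.tanh(X @ W_in + b)                    # hidden layer activations

W_out = np.linalg.pinv(H) @ y                # output weights solved analytically in one step
pred = H @ W_out
print(np.mean((pred - y) ** 2))              # small training error
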
23. Echo State Network (ESN):
The Echo State Network (ESN) is a recurrent neural network subtype in which each input node receives a non-linear signal. The hidden nodes in an ESN are only sparsely connected, and their connections and weights are assigned at random. The final output weights are the part of an ESN that is trainable and can be changed.
24. Deep Residual Network (DRN):
Deep neural networks with many layers can be tough to train and take much time during the training phase; they may also lead to the degradation of results. Deep Residual Networks (DRNs) prevent that degradation even though they have many layers. With DRNs, some parts of the input pass directly to the next layer through skip connections. As a result, these networks can be quite deep (they may contain around 300 layers).
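
A minimal residual block sketch in PyTorch with illustrative sizes: the block's input is added back onto its output, so part of the input passes straight through to the next layer.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection: input added to the output

# Stacking many such blocks keeps training stable even for very deep networks.
model = nn.Sequential(*[ResidualBlock(64) for _ in range(10)])
x = torch.randn(4, 64)
print(model(x).shape)   # torch.Size([4, 64])
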
25. Kohonen Networks (KN):
A Kohonen network is an unsupervised algorithm. The Kohonen network, also known as a self-organizing map, is highly effective when we have data distributed over many dimensions and want it reduced to just one or two dimensions, so it can be thought of as a dimensionality-reduction method. Kohonen networks are used to visualise high-dimensional data, and they use competitive learning rather than error-correction learning.
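
A minimal self-organizing map sketch in NumPy, assuming a 1-D grid of ten units and illustrative 3-D data: at each step the best-matching unit wins the competition and, together with its neighbours, is pulled toward the sample, mapping the data onto one dimension.

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 3))                  # illustrative 3-D data points
units = rng.random((10, 3))                  # 1-D grid of 10 map units

lr, radius = 0.5, 2.0
for x in data:
    winner = np.argmin(np.linalg.norm(units - x, axis=1))   # best-matching unit (competition)
    for j in range(len(units)):
        # Neighbouring units on the 1-D grid move toward the sample as well.
        influence = np.exp(-((j - winner) ** 2) / (2 * radius ** 2))
        units[j] += lr * influence * (x - units[j])
    lr *= 0.995                                              # slowly decay the learning rate

print(units)    # each 3-D point can now be summarised by its nearest unit's index (0..9)
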
26. Support Vector Machines (SVM):
The Support Vector Machine neural network is a hybrid technique combining a support vector machine with a neural network. It always attempts to categorise fresh cases into one of two categories: Yes or No (1 or 0). SVMs are commonly used for binary classification, although they are not usually referred to as neural networks.
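
A minimal binary-classification sketch using scikit-learn's SVC, assuming scikit-learn is available; the two-cluster toy data and labels are purely illustrative.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])  # two illustrative clusters
y = np.array([0] * 50 + [1] * 50)                                      # binary labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.0, 0.0], [3.0, 3.0]]))   # expected: [0 1]
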
27. Neural Turing Machine (NTM):
The controller in this neural network interacts with the outside world through input and output vectors. It also interacts with a memory matrix to perform selective read and write (R/W) operations. A Turing machine is said to be computationally equivalent to a modern computer. As a result, by interacting with external memory, NTMs augment the capabilities of regular neural networks.
