ANN


Artificial Neural Network (ANN):

1. Introduction to ANN:

The human brain has billions of interconnected neurons that process information.
An ANN is a simplified attempt to mimic this process for learning and making
predictions.
An ANN consists of input nodes, hidden layers with hidden nodes, and output nodes.
It is trained on a portion of the given data, and its predictions are compared to
the actual data. Errors in the predictions guide adjustments to the network.
2. Mathematical Neuron Operations:

In ANNs, each neuron performs three key operations: weight function, net input
function, and transfer function.
These operations are used to process data. Let's break down the math:
Inputs (x1, x2, ..., xn) are multiplied by initial weights (w1, w2, ..., wn), and
the results are summed up: Σ(wi * xi).
A bias (bi) is added to this sum: bi + Σ(wi * xi).
The net input is then processed through a transfer function (activation function)
to produce the output (y).
This can be represented as: y = f(bi + Σ(wi * xi)), where f represents the transfer
function.
Example: Imagine you have a simple model to predict if it's going to rain or not
(output) based on temperature and humidity (inputs). The model calculates a
weighted sum of these inputs, adds a bias, and applies a function to determine the
prediction.
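The rain example above can be sketched as a single artificial neuron. The weights, bias, and input values below are illustrative assumptions, not values from the text; in a real network they would be learned during training.

```python
# A minimal sketch of one artificial neuron: weighted sum, bias, activation.
# All numeric values here are hypothetical; real weights are learned from data.

def neuron(inputs, weights, bias):
    """Compute bi + sum(wi * xi), then apply a step activation."""
    net_input = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_input >= 0 else 0  # 1 = rain, 0 = no rain

# Inputs: normalised temperature and humidity (made-up values).
prediction = neuron(inputs=[0.7, 0.9], weights=[0.5, 0.8], bias=-1.0)
```

The step activation here is a stand-in for the smoother transfer functions discussed in the next section.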
3. Transfer Functions:

Transfer functions (activation functions) define how a neuron processes its input
to produce an output. Two common ones are the log-sigmoid and hyperbolic tangent
sigmoid functions.
These functions have specific ranges (e.g., log-sigmoid outputs values between 0
and 1).
They're popular because their derivatives are easy to compute, which is essential
for adjusting weights during training.
So, the equation becomes: y = sigmoid(bi + Σ(wi * xi)) or y = tanh(bi + Σ(wi * xi)).
Example: Think of the transfer function as a filter that decides whether to "fire"
(output 1) or not (output 0) based on the input.
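The two transfer functions named above can be written directly from their standard definitions. The comments note the ranges and the simple derivative formulas that make them convenient for training.

```python
import math

def log_sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def tanh_sigmoid(z):
    # Squashes any real number into the range (-1, 1).
    return math.tanh(z)

# Their derivatives are cheap to compute from the outputs themselves,
# which is why they are popular for weight adjustment during training:
#   d/dz log_sigmoid(z) = log_sigmoid(z) * (1 - log_sigmoid(z))
#   d/dz tanh(z)        = 1 - tanh(z)**2
```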
4. Error Minimization and Training:

After the ANN generates an output, it's compared to the actual output to calculate
an error.
The goal is to minimize this error by adjusting the weights (wi) in the direction
that reduces the error. This process is called back-propagation.
This explanation is for a single-layer ANN. To handle more complex patterns,
multilayer neural networks with hidden layers are used.
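One back-propagation step for a single sigmoid neuron can be sketched as below. The squared-error loss and learning rate are standard assumptions added for illustration; the text does not specify a particular loss function.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, bias, inputs, target, lr=0.1):
    """One gradient-descent update for a single sigmoid neuron,
    minimising the squared error (y - target)**2 / 2."""
    net = bias + sum(w * x for w, x in zip(weights, inputs))
    y = sigmoid(net)
    # Chain rule: dE/dwi = (y - target) * y * (1 - y) * xi
    delta = (y - target) * y * (1.0 - y)
    new_weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * delta
    return new_weights, new_bias
```

Repeating this step moves the weights in the direction that reduces the error, which is the adjustment process the text describes.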
5. Multilayer Neural Networks:

Multilayer neural networks have one or more hidden layers between the input and
output layers.
Each hidden layer has a predetermined number of neurons.
The process in a hidden unit is similar to that in the output layer: weighted input
sum, bias addition, and application of the activation function.
Example: Consider a multilayer network that predicts whether a given image contains
a cat. The hidden layers process features of the image (like edges and textures),
and the output layer makes the final prediction.
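A forward pass through a multilayer network repeats the same neuron computation layer by layer. The sketch below assumes a hypothetical 2-input network with 3 hidden neurons and 1 output neuron; all weights and biases are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    """Apply one layer: each row of `weights` belongs to one neuron."""
    return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

# Hidden layer: 2 inputs -> 3 hidden neurons (hypothetical parameters).
hidden = layer_forward([0.5, 0.8],
                       weights=[[0.1, 0.4], [-0.2, 0.3], [0.5, -0.1]],
                       biases=[0.0, 0.1, -0.1])

# Output layer: the 3 hidden outputs become the inputs, as in section 6.
output = layer_forward(hidden,
                       weights=[[0.3, -0.5, 0.8]],
                       biases=[0.05])
```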
6. Final Forecast:

The final output of the hidden layer (Gj) becomes the input for the output layer,
where a similar process occurs to make the forecast.
In simple terms, ANNs are a way to model and mimic the learning process in the
human brain using mathematical operations. They are used to make predictions based
on data by adjusting the strengths of connections between nodes (neurons). The
process involves calculating weighted sums, applying functions to the sums, and
minimizing errors through training. Multilayer networks with hidden layers help
handle complex patterns, and the final output is the forecast.

User
Do this for me with an example: The model calculates a weighted sum of these
inputs, adds a bias, and applies a function to determine the prediction.
ChatGPT
Sure, let's break down the process of how a neural network calculates a prediction
using a simple example:

Example: Predicting Whether You'll Pass an Exam

Imagine you have a neural network that's trying to predict whether a student will
pass an exam based on two factors: the number of hours the student studied (input
1) and the number of practice tests they took (input 2).

Calculating the Weighted Sum:

Your neural network starts by taking the inputs (hours studied and practice tests)
and assigning a weight to each input. These weights represent the importance of
each input.
Let's say the weights for hours studied and practice tests are 0.6 and 0.4,
respectively.
Now, the network calculates a weighted sum of the inputs: Weighted Sum = (0.6 *
Hours Studied) + (0.4 * Practice Tests).
Adding a Bias:

To make the prediction more flexible and account for factors that the network might
not learn from the inputs alone, a bias term (like a constant) is added.
Let's say the bias is 0.2.
So, the network adds the bias to the weighted sum: Total Input = Weighted Sum +
Bias.
Applying the Activation Function:

The total input is then passed through an activation function. This function
determines whether the prediction should be a "yes" (1) or "no" (0) in this
example.
In this case, let's use a simple step function where if the total input is greater
than or equal to 0, the prediction is "yes" (1); otherwise, it's "no" (0).
The activation function in this case might be: Prediction = 1 if Total Input >= 0,
else Prediction = 0.
Making the Prediction:

Based on the activation function's output, the neural network makes a prediction.
If the prediction is 1, it means the network believes the student will pass the
exam. If the prediction is 0, it means the network predicts the student will not
pass.
So, the neural network calculates a prediction by first calculating a weighted sum
of the inputs, adding a bias, and then applying an activation function to determine
the final prediction. In this simple example, it's deciding whether a student will
pass an exam based on the hours studied and practice tests taken.
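The exam example walked through above translates directly into code, using the weights (0.6 and 0.4), bias (0.2), and step activation stated in the text.

```python
def predict_pass(hours_studied, practice_tests):
    """Exam example from the text: weighted sum, bias, step activation.
    Weights 0.6 and 0.4 and bias 0.2 are the illustrative values given."""
    weighted_sum = 0.6 * hours_studied + 0.4 * practice_tests
    total_input = weighted_sum + 0.2
    return 1 if total_input >= 0 else 0  # 1 = pass, 0 = fail
```

Note that with these illustrative values any non-negative inputs produce "pass", since the bias alone pushes the total input above zero; in practice the weights and bias would be learned from data rather than chosen by hand.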
