Intro To Neural Networking - Levi Bahlmann ICS4U ISP B
Applied examples:
- Spam vs. legitimate emails (Spam Filtering)
- High risk vs. low risk (Credit Scoring)
- Good guy vs. bad guy (Fraud Detection)
1943: Warren McCulloch and Walter Pitts published “A Logical Calculus of the Ideas
Immanent in Nervous Activity”.
This research looked at how the brain produces complex patterns and showed that those
patterns could be simplified down to a binary logic structure using only true/false
connections.
1982: John Hopfield presented the paper that introduced the Hopfield network, a form of
recurrent neural network. Around the same time, the concept of backpropagation
resurfaced, and many researchers began to realize the potential of neural networks.
Most recent: Nowadays, neural networks are being built for very specific purposes.
IBM's Deep Blue pushed what we thought was possible for computers handling
complex calculations and went on to defeat world chess champion Garry Kasparov
in 1997. Neural networks are also used to help discover new medicines, identify
financial market trends, and perform massive scientific calculations.
Neural networks are born in ignorance, like a child learning about the world
from scratch. As a result, they won't process information perfectly the first
time around. The network doesn't know what weights to apply to the inputs to
produce the correct output, so it makes a guess, measures the error, and
adjusts until the desired output is reached.
Basic pseudocode:
    guess = input * weight
    error = ground truth - guess
    adjustment = error * (weight's contribution to error)
Neural networks are corrective feedback loops. They reward weights and
biases that support the desired outcome and punish ones that lead to error.
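To make that loop concrete, here is a minimal sketch in Python. The input, target, starting weight, and learning rate are illustrative values chosen for this example, not numbers from any of the sources:

    # One input, one weight: the simplest corrective feedback loop.
    input_value = 2.0      # the input
    ground_truth = 10.0    # the output we want the network to learn
    weight = 0.5           # the network starts out "in ignorance"
    learning_rate = 0.1    # how big each adjustment is

    for step in range(20):
        guess = input_value * weight          # input * weight = guess
        error = ground_truth - guess          # ground truth - guess = error
        adjustment = error * input_value      # error * the weight's contribution
        weight += learning_rate * adjustment  # reward or punish the weight
        print(step, round(guess, 3), round(error, 3))

Run this and the guess climbs toward 10.0 as the weight settles near 5.0, which is exactly the guess-error-adjust cycle described above.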
Feed Forward (NN) and Backpropagation
A feed-forward NN is one of the simplest types of processing. Information travels
in one direction, from the input nodes through any hidden layers, until it reaches
an output. There are often many hidden layers that aid in functionality, but these
hidden layers do not affect the direction of the information.
Backpropagation is an algorithm used to calculate error by working backwards from
the output nodes to the input nodes. It is responsible for finding the weights and
biases that need to be adjusted so that future inputs produce the desired output
with minimal error.
To oversimplify it, backpropagation goes back and checks the variables (weights and
biases) of each connection between nodes and adjusts them until the desired outcome
is reached. This additional algorithm is necessary in feed-forward networks because,
as discussed earlier, information in FFNNs only travels in one direction, so the
network needs a separate mechanism to find errors and adjust weights.
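As a concrete (and again simplified) illustration of the two ideas working together, here is a short Python/NumPy sketch that trains a tiny feed-forward network on the XOR problem with backpropagation. The layer sizes, learning rate, and sigmoid activation are assumptions made for this example, not details from the sources:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output
    lr = 0.5

    for step in range(10000):
        # Feed forward: information moves one way, input -> hidden -> output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backpropagation: work backwards from the output error, assigning
        # each connection its share of the blame.
        err = y - out
        d_out = err * out * (1 - out)        # error signal at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back to the hidden layer

        # Adjust weights and biases in the direction that reduces error.
        W2 += lr * (h.T @ d_out); b2 += lr * d_out.sum(axis=0, keepdims=True)
        W1 += lr * (X.T @ d_h);   b1 += lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # should approach [[0], [1], [1], [0]] for most seeds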
Recurrent (NN)
Recurrent NNs are more complex. Instead of only passing information forward,
this network takes its output and feeds it back into the network, storing the
results of earlier processing as a kind of memory that is reused when
processing future inputs. (Recurrent networks are still trained with
backpropagation, using a variant called backpropagation through time that
unrolls the feedback loop.)
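To show this feedback in code, here is a minimal sketch of a recurrent network's forward pass in Python/NumPy; training is omitted. The sizes, the tanh activation, and the random weights are illustrative assumptions for this example:

    import numpy as np

    rng = np.random.default_rng(0)
    input_size, hidden_size = 3, 4
    Wx = rng.normal(size=(hidden_size, input_size))   # input -> hidden
    Wh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden: the feedback loop
    b = np.zeros(hidden_size)

    h = np.zeros(hidden_size)                    # hidden state: the network's "memory"
    sequence = rng.normal(size=(5, input_size))  # a toy 5-step input sequence

    for x in sequence:
        # The previous hidden state is fed back in alongside the new input,
        # so earlier steps shape how later steps are processed.
        h = np.tanh(Wx @ x + Wh @ h + b)
        print(h.round(2))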
Neural Networks in Finance and Business
Neural networks have found a broad market in financial operations, with uses in
areas like fraud detection, risk assessment, marketing research, and (to an
extent) stock market prediction.
When built properly, these networks can detect subtle non-linear
interdependencies and data discrepancies that humans simply cannot. Using
neural networks to analyze price data and pinpoint trade opportunities has
proven especially useful.
Some studies report models that predict the direction of stock prices correctly
50-60% of the time, and a few claim accuracy of around 70%, though such figures
vary widely with the market and time period tested.
Disadvantages:
- Reliance on local hardware - Some networks still rely on local hardware that
requires maintenance. (I personally see this as a job opportunity for those
whose jobs might be taken by neural networks as their capabilities expand.)
- Complexity - Algorithms for specific purposes take a lot of time and money to
develop.
- Error detection - Finding errors can be difficult with a self-learning
algorithm that isn't transparent enough.
- Vague outputs - Outputs are often a range rather than specific, actualized
values.
WORK CITED:
- A Beginner's Guide to Neural Networks and Deep Learning
- What are Neural Networks? - IBM
- What is a Neural Network? - Investopedia
- The Hopfield Network
- https://youtu.be/oPhxf2fXHkQ
- Recurrent NN Image
- Stock Market Prediction Image