Intro To Neural Networking - Levi Bahlmann ICS4U ISP B


Intro to Neural Networks

What is Neural Networking (NN)?


Neural Networking (NN) is a form of programming that mimics the behavior of a
brain by attempting to recognize underlying relationships within sets of data. These
systems are often noted for mimicking the structure of neurons and synapses found in
the brain. The goal is to create a system that can adapt to changing inputs, much like
how mammalian brains constantly adapt to the world around them. These networks aim
to generate the best result possible without changing the output criteria. Neural
networking is at the root of artificial intelligence, and it is also gaining popularity in
trading systems.

Applied examples:
- Spam email vs Good emails
- High risk vs. Low risk (Credit Scores)
- Good guy vs. Bad guy (Fraud Detection)

Deep learning works by clustering and classifying information, mapping inputs to
outputs. A deep network is often known as a "universal approximator" because these
algorithms can approximate an unknown function between any input and any output,
assuming the two are related either by correlation or causation.
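
To make that input-to-output mapping concrete, here is a minimal sketch of a
one-hidden-layer network acting as a function approximator. It is written in Python
with NumPy (my choice, not something the sources prescribe), and the layer sizes
and names are illustrative; the weights are random placeholders rather than learned
values.

import numpy as np

# One-hidden-layer network: input -> hidden (non-linear) -> output.
# Untrained: the weights below are random placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), np.zeros((8, 1))  # input -> 8 hidden units
W2, b2 = rng.normal(size=(1, 8)), np.zeros((1, 1))  # hidden -> 1 output

def approximate(x):
    """Map a 1-D input to a 1-D output through one hidden layer."""
    hidden = np.tanh(W1 @ x + b1)  # non-linearity lets the net bend its fit
    return W2 @ hidden + b2        # linear read-out of the hidden layer

print(approximate(np.array([[0.5]])))  # arbitrary output until trained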

NOTE: Neural networks with several processing layers are known as deep
networks and are useful for algorithms that allow for more complex machine
learning.

Supervised Learning: Humans must transfer their knowledge to a dataset in
order for the neural network to learn the correlation between given labels and
data. This type of machine learning helps algorithms learn to classify data
according to those labels.
Unsupervised Learning: Learning without labels. Deep learning does not
require labels and can instead cluster data based on similarities, as the sketch
below illustrates.
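
Here is a small contrast of the two, using scikit-learn (my choice of library; the
data and numbers are invented for illustration): the supervised model learns from
human-provided labels, while the unsupervised one groups the same points with
no labels at all.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: two blobs of 2-D points, 20 around (0, 0) and 20 around (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

# Supervised: humans supply labels (0 or 1); the model learns labels vs. data.
y = np.array([0] * 20 + [1] * 20)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.5, 5.0]]))  # classifies using the taught labels

# Unsupervised: no labels; points are clustered purely by similarity.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters[:5], clusters[-5:])  # group ids discovered from the data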
History
The biggest leaps in neural networks have been made within the last 100 years:

1943: Warren McCulloch and Walter Pitts published "A Logical Calculus of the Ideas
Immanent in Nervous Activity”.
This research looked at how the brain produced complex patterns and how those
patterns could be simplified down to a binary logic structure by simply using true/false
connections.

1958: Frank Rosenblatt was credited with the development of the perceptron.
Rosenblatt's research added to the credibility of McCulloch's and Pitts's work, and he
furthered it by demonstrating how neural networks could detect images or make
inferences based on inputs.

1982: John Hopfield presented the paper that introduced the Hopfield net, on the
subject of recurrent neural networks. Around the same time, the concept of
backpropagation resurfaced, and many researchers started realizing the potential of
neural networks.

Most recent: Nowadays neural networks are being created for very specific purposes.
Deep Blue by IBM pushed what we thought was possible for computers handling
complex calculations and proceeded to dominate the chess world as a result. Neural
networks are also used to help machines discover new medicines, identify financial
market trends, and perform massive scientific calculations.

Neural networks are born in ignorance, like a child learning about the world
from scratch. As a result, processing information isn't going to be perfect the first
time around. The network doesn't know what variables to apply to the inputs to
make the correct output guess, so it makes a guess, finds the error, and adjusts
until the desired output is reached.

Basic pseudocode:
input * weight = guess → ground truth - guess = error
→ error * weight's contribution to error = adjustment

Neural networks are corrective feedback loops. They reward weights and
biases that support the desired outcome and punish ones that lead to error.
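
The pseudocode above can be made runnable. This is a toy sketch with made-up
numbers (one input, one weight, and a learning rate I chose), not any particular
library's training routine; it shows the guess-error-adjust loop converging.

# One input, one weight, adjusted until the guess matches the ground truth.
input_value = 2.0
ground_truth = 10.0       # the desired output
weight = 0.5              # starts in "ignorance"
learning_rate = 0.1       # how strongly each error adjusts the weight

for step in range(20):
    guess = input_value * weight          # input * weight = guess
    error = ground_truth - guess          # ground truth - guess = error
    adjustment = error * input_value      # error * weight's contribution
    weight += learning_rate * adjustment  # reward or punish the weight
    print(f"step {step}: guess = {guess:.3f}, error = {error:.3f}")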
Feed Forward (NN) and Backpropagation
Feed-forward NN is one of the simplest types of processing. Information continues in
one direction, through the input nodes until it reaches an output. There are often many
hidden layers that aid in functionality, but these hidden layers do not affect the direction
of the information.

Backpropagation is an algorithm used to calculate errors by working backwards from
the output nodes to the input nodes. It is responsible for finding the weights and biases
that need to be adjusted for future inputs, creating a desired output with minimal error.

To oversimplify it, backpropagation goes back and checks the variables (weights and
biases) of each connection between nodes and adjusts them until the desired outcome
is reached. This additional algorithm is necessary in feed-forward networks to find
errors and adjust weights because, as discussed earlier, information in FFNNs only
travels in one direction.
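
Below is a compact sketch of both ideas together: a forward pass followed by
backpropagation, written in Python with NumPy and trained on the XOR problem.
The layer sizes, learning rate, and epoch count are my own illustrative choices, not
anything the sources specify.

import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(5000):
    # Feed forward: information moves one way, input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: push the output error backwards to each weight.
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

# Should approach [[0], [1], [1], [0]]; exact values vary with initialization.
print(out.round(3))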

Recurrent (NN)
Recurrent NNs are more complex. Ordinary backpropagation on its own is not
enough here, because this network also loops its output back into itself: the output
is fed back into the network, and error calculations and adjustments are stored as
historical information that is reused in the processing of future inputs. Training that
loop requires an extended form of backpropagation that unrolls the network through
time (backpropagation through time).
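
A minimal sketch of that feedback loop (the names and sizes are invented for
illustration; training via backpropagation through time is omitted):

import numpy as np

rng = np.random.default_rng(7)
W_in = rng.normal(size=(3, 1))   # input -> hidden state
W_rec = rng.normal(size=(3, 3))  # previous state -> hidden state (the loop)

state = np.zeros((3, 1))         # the "historical" information starts empty
for x in [0.1, 0.7, -0.4]:       # a short input sequence
    # Each new state mixes the current input with the remembered past.
    state = np.tanh(W_in * x + W_rec @ state)
    print(state.ravel().round(3))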
Neural Networks in Finance and Business
Neural networks have developed a broad market for financial operations, being
used in areas like fraud detection, risk assessment, marketing research, and
predicting stock markets (to an extent).

When built properly, these networks can easily detect subtle non-linear
interdependencies or data discrepancies that humans simply cannot. Using
neural networks to analyze price data and pinpoint trade opportunities has proven
incredibly useful.

However, when it comes to neural networks predicting stock trades, the results of
the studies conducted vary drastically. This makes sense, because there are many
different models of neural networks that each have their own way of "learning".

Studies have shown that some models can accurately predict stock prices
50-60% of the time, while other models are reported to be accurate 70% of the
time regardless of the circumstances.

Lastly, there’s a handful of research that has concluded a 10% increase in


efficiency is the best you should expect as an investor.
Advantages and Disadvantages
Advantages:
Multitasking - Neural networks can perform multiple tasks at once with minimal
error; they can also work continuously for longer periods of time than humans
can.
Better memory - Neural networks can be programmed to learn from their
outputs and use what they learn in future input processing. As humans we do this
too, but certain information gets forgotten if it’s not used for long periods of time.
Continual expansion to new applications - Neural networks are constantly
being applied to new areas of work, and will almost certainly outperform their
early theoretical versions.

Disadvantages:
Reliance on local hardware - Some networks still rely on local
hardware that will require maintenance. (I personally see this as a
job opportunity for those who might have their jobs taken by neural
networks as their capabilities expand.)
Complex - Algorithms for specific purposes will take lots of time
and money to develop.
Error detection - Error detection can become difficult with a self-learning
algorithm that isn't transparent enough.
Vague outputs - Outputs are often a range rather than specific, actualized
values.
WORKS CITED:
A Beginner's Guide to Neural Networks and Deep Learning
What are Neural Networks? - IBM
What is a Neural Network? - Investopedia
The Hopfield Network
https://youtu.be/oPhxf2fXHkQ
Recurrent NN Image
Stock Market Prediction Image
