Learning Process: CS/CMPE 537 - Neural Networks
Learning

Learning is a process by which the free parameters of a neural network are adapted through a continuing process of stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which the parameter changes take place.

Types of learning rules: error-correction, memory-based, Hebbian, competitive, and Boltzmann.
Learning Process
Learning Algorithms
Learning algorithm: a prescribed set of well-defined rules for the solution of a learning problem.

In the context of synaptic weight updating, the learning algorithm prescribes rules for computing the weight adjustments Δw.

Learning rules: error-correction, memory-based, Boltzmann, Hebbian, competitive.
Learning paradigms: supervised, reinforced, self-organizing (unsupervised).
Error-Correction Learning

The goal of error-correction learning is to minimize a cost function based on the error signal. A common choice is the mean-square error as cost function:

J = E[ (1/2) ||e(n)||² ],  where E is the expectation operator.

Minimizing J with respect to the network parameters leads to the method of gradient descent.

How do we find the expectation of the process? We avoid its computation and instead use the instantaneous value of the sum of squared errors as an approximate error function:

ξ(n) = (1/2) ||e(n)||²

Error-correction learning rule (the delta rule):

Δw_kj(n) = η e_k(n) x_j(n),  where η is the learning rate.

A plot of the error function against the weights is called an error surface. The minimization process seeks the minimum point on this surface through an iterative procedure.
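The delta rule can be sketched in a few lines for a single linear neuron. This is a minimal illustration, not the course's reference implementation; the learning rate and toy data are chosen arbitrarily.

```python
import numpy as np

def delta_rule_step(w, x, d, eta=0.1):
    """One error-correction (delta rule) update for a single linear neuron.

    w : weight vector, x : input vector, d : desired output,
    eta : learning rate (illustrative value).
    """
    y = w @ x                # neuron output (linear activation assumed)
    e = d - y                # error signal e(n) = d(n) - y(n)
    w = w + eta * e * x      # delta rule: dw = eta * e * x
    return w, e

# Train on a toy 2-input example until the error is negligible
rng = np.random.default_rng(0)
w = rng.normal(size=2)
x, d = np.array([1.0, 0.5]), 2.0
for _ in range(200):
    w, e = delta_rule_step(w, x, d)
```

Each step descends the instantaneous error surface, so the error shrinks geometrically for a sufficiently small learning rate.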
Memory-Based Learning

All (or most) past experiences are stored explicitly in memory as correctly classified input-output examples: {(x_i, d_i)}, i = 1, ..., N. Given a test vector x_test, the algorithm retrieves the classification of the stored x_i closest to x_test.

Ingredients:
- Definition of what is "closest", i.e. the local neighborhood
- Learning rule applied to the training examples in the local neighborhood

Examples: nearest neighbor rule, k-nearest neighbor rule, radial-basis function networks.
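A minimal sketch of the k-nearest neighbor rule, assuming Euclidean distance as the definition of "closest"; the data and function name are illustrative.

```python
import numpy as np

def nearest_neighbor_classify(X_train, d_train, x_test, k=1):
    """Classify x_test by majority vote among the k closest stored examples.

    Euclidean distance defines the local neighborhood; k=1 gives the
    plain nearest neighbor rule.
    """
    dists = np.linalg.norm(X_train - x_test, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(d_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy memory of correctly classified examples: two small clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
d = np.array([0, 0, 1, 1])
print(nearest_neighbor_classify(X, d, np.array([0.2, 0.1]), k=3))  # → 0
```

Note that nothing is "learned" at training time beyond storing the examples; all the work happens at classification time.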
Hebbian Learning

Hebb, a neuropsychologist, proposed a model of neural activation in 1949. Its idealization is used as a learning rule in neural network learning.

Hebb's postulate (1949): if the axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change occurs in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Restated as a two-part rule:
1. If two neurons on either side of a synapse are activated simultaneously, then the strength of that synapse is selectively increased.
2. If two neurons on either side of a synapse are activated asynchronously, then that synapse is selectively weakened or eliminated.

A Hebbian synapse is a time-dependent, local, interactive, and correlational mechanism.
General form of the Hebbian rule:

Δw_kj(n) = F[y_k(n), x_j(n)]

where F is a function of the pre-synaptic and post-synaptic activities.

A specific Hebbian rule (the activity product rule):

Δw_kj(n) = η y_k(n) x_j(n),  where η is the learning rate.

Is there a problem with the above rule? Yes: it places no bounds on the weights, so repeated co-activation drives them to grow without limit.
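The unbounded-growth problem of the activity product rule is easy to demonstrate numerically. A minimal sketch, with an arbitrary learning rate and a linear neuron assumed:

```python
import numpy as np

def hebbian_step(w, x, eta=0.05):
    """Activity product rule: dw_j = eta * y * x_j, with y = w . x
    for a linear neuron (illustrative learning rate)."""
    y = w @ x
    return w + eta * y * x

# Repeatedly presenting the same input makes the weights grow without bound
w = np.array([0.1, 0.1])
x = np.array([1.0, 1.0])
norms = []
for _ in range(50):
    w = hebbian_step(w, x)
    norms.append(np.linalg.norm(w))
# norms grows geometrically: the rule has no saturation mechanism
```

This is why bounded variants such as the covariance rule (below in these notes) or explicit normalization are used in practice.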
Covariance rule:

Δw_kj(n) = η E[(y_k(n) − ȳ)(x_j(n) − x̄)]

where η is a proportionality constant and x̄ and ȳ are the respective means of the pre- and post-synaptic activities. After simplification:

Δw_kj(n) = η { E[y_k(n) x_j(n)] − x̄ ȳ }
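The equivalence of the covariance form and its simplification can be checked numerically by estimating both from samples. A sketch with illustrative synthetic data:

```python
import numpy as np

# Covariance form of Hebbian learning: the expected update is proportional
# to cov(y, x), estimated here from samples. eta and the data are illustrative.
rng = np.random.default_rng(1)
eta = 0.1
x = rng.normal(loc=2.0, size=10_000)       # pre-synaptic activity, nonzero mean
y = 0.5 * x + rng.normal(size=10_000)      # correlated post-synaptic activity

dw_cov = eta * np.mean((y - y.mean()) * (x - x.mean()))
dw_simplified = eta * (np.mean(y * x) - x.mean() * y.mean())
print(np.isclose(dw_cov, dw_simplified))  # → True: the two forms agree
```

Unlike the plain activity product rule, the update is zero when pre- and post-synaptic activities are uncorrelated, even if both have large means.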
Competitive Learning

The output neurons of a neural network (or a group of output neurons) compete among themselves to be the one that is active (fired). At any given time, only one neuron in the group is active. This behavior naturally leads to identifying features in the input data (feature detection).

Neurobiological basis: competitive behavior was observed in biological neural systems and modeled in the 1970s. Early self-organizing and topographic-map neural networks were also proposed in the 1970s (e.g. the cognitron by Fukushima).
Basic elements of competitive learning:
- A set of neurons that respond differently to a given set of inputs
- A limit on the "strength" of each neuron
- A mechanism that permits the neurons to compete for the right to respond to a given input, such that only one neuron is active at a time
Standard competitive learning rule:

Δw_ji = η (x_i − w_ji)   if neuron j wins the competition
Δw_ji = 0                otherwise

Each neuron is allotted a fixed amount of synaptic weight, which is distributed among its input nodes:

Σ_i w_ji = 1   for all j
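A winner-take-all step under this rule can be sketched as follows; the winner-selection criterion (smallest Euclidean distance to the input) and the learning rate are illustrative assumptions.

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One winner-take-all update: only the winning neuron's weights move.

    W : (num_neurons, num_inputs) weight matrix; the winner is the neuron
    whose weight vector is closest to x (an assumed criterion).
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += eta * (x - W[winner])   # dw = eta * (x - w) for the winner only
    return W, winner

# Each row initially sums to 1 (the fixed weight budget per neuron)
W = np.array([[0.9, 0.1], [0.1, 0.9]])
x = np.array([1.0, 0.0])
W2, winner = competitive_step(W.copy(), x)
print(winner)  # → 0 (row 0 is closer to x)
```

The winner's weight vector moves toward the input while the losers stay put, so over many presentations each neuron drifts toward a cluster of inputs.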
Boltzmann Learning
Stochastic learning algorithm based on information-theoretic and thermodynamic principles. The state of the network is captured by an energy function E:

E = −(1/2) Σ_k Σ_{j, j≠k} w_kj s_j s_k

where s_j ∈ {−1, +1} is the (binary) state of neuron j.

Learning process: at each step, choose a neuron at random (say k) and flip its state s_k to −s_k with a probability that depends on the energy change ΔE_k the flip would cause, repeating until thermal equilibrium is achieved. A standard choice accepts the flip with probability 1 / (1 + exp(ΔE_k / T)), where T is a pseudo-temperature: energy-lowering flips are likely, and energy-raising flips are occasionally accepted.
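A single stochastic flip step can be sketched as follows. The acceptance formula used here (the logistic/Glauber form) is an assumption, since the slide leaves the exact expression unstated; the weights and temperature are illustrative.

```python
import math, random

def boltzmann_flip(s, W, k, T=1.0, rng=random):
    """Attempt to flip neuron k's state s[k] in {-1, +1}.

    W is a symmetric weight matrix with zero diagonal. The flip is accepted
    with probability 1 / (1 + exp(dE / T)) — an assumed acceptance rule.
    """
    # Energy E = -1/2 sum_{k, j!=k} w_kj s_j s_k; flipping s[k] changes it by
    # dE = 2 * s[k] * sum_{j!=k} w_kj s_j
    field = sum(W[k][j] * s[j] for j in range(len(s)) if j != k)
    dE = 2 * s[k] * field
    if rng.random() < 1.0 / (1.0 + math.exp(dE / T)):
        s[k] = -s[k]
    return s

# Two neurons with positive coupling prefer aligned states (lower energy)
W = [[0.0, 1.0], [1.0, 0.0]]
s = [+1, -1]                       # misaligned start: higher energy
random.seed(0)
for _ in range(100):
    s = boltzmann_flip(s, W, random.randrange(2), T=0.1)
```

At low temperature the network settles into a low-energy state; raising T makes energy-increasing flips more frequent, which is what allows escape from local minima.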
Credit-Assignment Problem
How should credit and blame for a neural network's output be assigned to its internal (free) parameters? This is the credit-assignment problem. The learning system (rule) must distribute credit or blame in such a way that the network evolves toward the correct outcomes. Two aspects arise: temporal credit assignment, when a sequence of actions is responsible for certain outcomes, and structural credit assignment, deciding which internal components' behavior should be changed and by how much.
Supervised Learning

Conceptually, supervised learning involves a teacher who has knowledge of the environment and guides the training of the network. In practice, knowledge of the environment is available in the form of input-output examples. When viewed as an optimization process, supervised learning is a search over an error surface for the parameter setting with minimum error, typically carried out iteratively (e.g. by gradient descent).
Reinforcement Learning

Reinforcement learning is supervised learning in which only limited information about the desired outputs is available. Complete knowledge of the environment is not available; only basic benefit or reward information is. In other words, a critic rather than a teacher guides the learning process. An everyday analogy is training a dog with positive ("good dog", something to eat) and negative ("bad dog", nothing to eat) reinforcement.
Reinforcement learning is the online learning of an input-output mapping through a process of trial and error designed to maximize a scalar performance index called the reinforcement signal.

Types of reinforcement learning:
- Non-associative: selecting one action rather than associating actions with stimuli; the only input received from the environment is the reinforcement information. Examples include genetic algorithms and simulated annealing.
- Associative: associating actions with stimuli, i.e. developing an action-stimuli mapping from the reinforcement information received from the environment. This type is more closely related to neural network learning.
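A minimal sketch of the non-associative case: a two-armed bandit in which the learner observes only a scalar reinforcement signal. The epsilon-greedy selection scheme and all parameter values are illustrative assumptions, not part of the notes.

```python
import random

def bandit_learn(reward_probs, steps=5000, eps=0.1, seed=0):
    """Non-associative reinforcement learning sketch: a two-armed bandit.

    Only a scalar reinforcement (reward 0/1) is observed; the agent keeps a
    running value estimate per action and mostly picks the best-looking one,
    exploring with probability eps.
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)     # value estimate per action
    n = [0] * len(reward_probs)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(q))                  # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)  # exploit
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]     # incremental mean of observed rewards
    return q

q = bandit_learn([0.2, 0.8])
```

No stimulus is involved: the agent simply learns which single action pays best, which is what distinguishes the non-associative case from the associative one.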
CS/CMPE 537 - Neural Networks (Sp 2006-2007) - Asim Karim @ LUMS
Self-Organizing (Unsupervised) Learning

In unsupervised or self-organizing learning there is no external teacher or critic; learning is driven by a task-independent measure of the quality of the representation, and the network's free parameters are optimized with respect to this measure. The network develops an internal representation of the inputs without any task-specific information. Once trained, it can identify features in the input based on the task-independent (or general) criterion.
Learning Tasks
Stationary environment: learning is a one-time procedure in which knowledge of the environment is built in (memorized) and later recalled for use.

Non-stationary environment: adaptation is needed; the network must continually update its free parameters to reflect the changing environment. A changing environment means that the knowledge encoded in the neural network becomes outdated, requiring modification to reflect the new conditions.
Learning can be viewed as a stochastic process. Why stochastic? Because there is an element of randomness: for example, the neural network encoding is not unique for the same (temporal) environment. Also, in general, neural networks represent just one form of knowledge representation; other representation forms are possible.
Regression model:  d = g(x) + ε,  where g(x) is the actual model and ε is a statistical error term.
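The regression view can be illustrated by generating noisy observations d = g(x) + ε and recovering g from data. The linear form of g, the noise level, and the fitting method are all illustrative assumptions.

```python
import numpy as np

# Observations d = g(x) + eps, with g taken to be linear for illustration;
# a least-squares fit approximately recovers the actual model.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
g = lambda t: 2.0 * t + 1.0                       # "actual model" (assumed)
d = g(x) + rng.normal(scale=0.1, size=x.shape)    # noisy observations

slope, intercept = np.polyfit(x, d, deg=1)        # least-squares linear fit
```

The fitted parameters approach the true ones as the number of samples grows, since the error term ε averages out.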