Competitive Learning


COMPETITIVE LEARNING

Learning
• The main property of a neural network is its
ability to learn from its environment, and to
improve its performance through learning.
• Learning may be viewed as a change in
behaviour acquired through practice or experience,
and it lasts for a relatively long time.
• In the case of an artificial neural network, learning
is the process of modifying the network by updating
its weights, biases and any other parameters.
Definition of learning:-
"Learning is a process by which the free parameters of
a neural network are adapted through a process of
stimulation by the environment in which the network is
embedded. The type of learning is determined by the
manner in which the parameter changes take place."
- Mendel and McClaren (1970)
*stimulation - encouragement of something to make it
develop or become more active.
Types of Learning

Learning is of three types:
• Supervised learning
• Unsupervised learning
• Reinforcement learning

Unsupervised Learning
• Learning without a teacher:-
only the inputs are given; no target outputs are provided.
During the training session, the neural network
receives a number of different input patterns, discovers
significant features in these patterns, and learns how to
classify the input data into appropriate categories.
Unsupervised learning tends to follow the neuro-
biological organization of the brain.
Its aim is to learn rapidly, so it can be used in real time.
Competitive Learning
• In competitive learning, neurons compete among
themselves to be activated.
• Only a single output neuron is active at any time.
• The output neuron that wins the "competition" is called
the winner-takes-all neuron.
• The basic idea of competitive learning was introduced in
the early 1970s.
• In the late 1980s, Teuvo Kohonen introduced a special
class of artificial neural networks called self-organizing
feature maps. These maps are based on competitive
learning.
Basic Elements
• A set of neurons that are all the same except for
their synaptic weight distribution, so they respond
differently to a given set of input patterns.
• A mechanism that lets the neurons compete to
respond to a given input.
• The neuron that wins the competition is called the
"winner-takes-all" neuron.
(Figure: an input layer fully connected to a single output layer.)
Competitive learning rule:-
The output signal yk of the winning neuron k is set equal to one; the output
signals of all the neurons that lose the competition are set equal to zero:

yk = 1 if vk > vj for all j, j ≠ k
yk = 0 otherwise

Here
vk represents the combined action of all the feedforward and feedback inputs to
neuron k, and
w(kj) is the synaptic weight connecting input node j to neuron k, with the
normalization constraint Σj w(kj) = 1 for all k.
Competitive Learning Rule:-
∆w(kj) = η (x(j) - w(kj)) if neuron k wins the competition
∆w(kj) = 0 if neuron k loses
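This update can be sketched in a few lines (a minimal sketch assuming NumPy; the array shapes and the learning-rate name lr are illustrative, not from the slides):

```python
import numpy as np

def competitive_update(weights, x, lr=0.1):
    """One step of the winner-takes-all competitive learning rule.

    weights: (m, n) array, one row of synaptic weights per output neuron.
    x:       (n,) input pattern.
    Only the winning neuron's weights move toward the input:
        dw(kj) = lr * (x(j) - w(kj)) if neuron k wins, else 0.
    """
    # The winner is the neuron with the largest combined input v_k = w_k . x.
    v = weights @ x
    k = int(np.argmax(v))
    weights[k] += lr * (x - weights[k])  # losing neurons are left unchanged
    return k, weights
```

Repeated updates pull the winner's weight vector toward the input patterns it wins, which is how the categories form; the slide's constraint Σj w(kj) = 1 would be enforced by normalizing each row after the update.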
Application of competitive learning:-
• Action selection for intelligent agents:-
Winner-takes-all is a concept that has been widely applied in behaviour-
based robotics as a method of action selection for intelligent agents.
Winner-takes-all systems work by connecting modules (task-designated areas)
in such a way that when one action is performed it stops all other actions
from being performed, so only one action occurs at a time.
For example, if the agent's goal were to reduce thirst, the behaviour "drink"
would require the pre-condition of having water available, so the "drink"
module would activate the module in charge of "find water".
In this way the activations organize the behaviours into a sequence, even though
only one action is performed at a time.
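A hypothetical sketch of such winner-takes-all action selection (the module names follow the thirst example above; the data layout is illustrative, not a real robotics API):

```python
def select_action(modules, state):
    """Pick the single module with the highest activation whose
    pre-condition holds; all other modules are suppressed."""
    runnable = [m for m in modules if m["precondition"](state)]
    if not runnable:
        return None
    # Winner-takes-all: only one action is performed at a time.
    return max(runnable, key=lambda m: m["activation"])["name"]

modules = [
    {"name": "drink", "activation": 0.9,
     "precondition": lambda s: s["water_available"]},
    {"name": "find water", "activation": 0.6,
     "precondition": lambda s: True},
]
```

With no water available, "drink" is blocked and "find water" wins; once water is available, "drink" wins, so the pre-conditions sequence the behaviours.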
Maxnet (fixed-weight competitive net)
• MAXNET [Lippmann, 1987] is a specific example
of a neural net based on competition.
• It can be used as a subnet to pick the node
whose input is the largest.
• The m nodes in this subnet are completely
interconnected, with symmetric weights.
• There is no training algorithm for the MAXNET;
the weights are fixed.
• The architecture of MAXNET is as shown in the
figure
Maxnet Architecture
• The activation function for the MAXNET is
f(x) = x if x ≥ 0
f(x) = 0 if x < 0

• The application procedure is as follows:

1) Initialize activations and weights (0 < ε < 1/m):
aj(0) = input to node Aj
wij = 1 if i = j, -ε if i ≠ j
2) Update the activation of each node, j = 1, …, m:
aj(new) = f [aj(old) - ε Σk≠j ak(old)]
3) Save the activations for use in the next iteration:
aj(old) = aj(new), j = 1, …, m
4) If more than one node has a nonzero activation, go back to step 2;
otherwise stop.
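The four steps above can be sketched as follows (a sketch assuming NumPy; the function name maxnet and the iteration cap are illustrative):

```python
import numpy as np

def maxnet(a, eps, max_iter=1000):
    """Apply the MAXNET procedure: mutually inhibit the nodes until at
    most one activation is nonzero; the survivor had the largest input.

    a:   initial activations a_j(0), assumed non-negative.
    eps: inhibition weight, with 0 < eps < 1/m for m nodes.
    """
    a = np.asarray(a, dtype=float)
    for _ in range(max_iter):
        if np.count_nonzero(a) <= 1:
            break  # step 4: at most one survivor, stop
        # Step 2: a_j(new) = f[a_j(old) - eps * sum_{k != j} a_k(old)],
        # with f(x) = max(x, 0); (a.sum() - a) is the sum over the other nodes.
        a = np.maximum(a - eps * (a.sum() - a), 0.0)
    return a
```

For example, maxnet([0.1, 0.9, 0.5], eps=0.2) leaves only the second node nonzero. Note that exactly tied maxima never resolve, since equal activations decay identically; the iteration cap guards against that case.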
• Example 1: Consider the action of a MAXNET with four neurons
and inhibitory weight ε = 0.2. The neurons are given initial activations
(input signals)
a1(0) = 0.2, a2(0) = 0.4, a3(0) = 0.6, a4(0) = 0.8
Solution:-
aj(new) = f [aj(old) - ε Σk≠j ak(old)]

a1(new) = f [a1(old) - 0.2(a2(old) + a3(old) + a4(old))]
a2(new) = f [a2(old) - 0.2(a1(old) + a3(old) + a4(old))]
a3(new) = f [a3(old) - 0.2(a1(old) + a2(old) + a4(old))]
a4(new) = f [a4(old) - 0.2(a1(old) + a2(old) + a3(old))]
• a1(1) = f [a1(0) - 0.2(a2(0) + a3(0) + a4(0))]
= f [0.2 - 0.2(0.4 + 0.6 + 0.8)] = f [-0.16] = 0
• a2(1) = f [a2(0) - 0.2(a1(0) + a3(0) + a4(0))]
= f [0.4 - 0.2(0.2 + 0.6 + 0.8)] = f [0.08] = 0.08
• a3(1) = f [a3(0) - 0.2(a1(0) + a2(0) + a4(0))]
= f [0.6 - 0.2(0.2 + 0.4 + 0.8)] = f [0.32] = 0.32
• a4(1) = f [a4(0) - 0.2(a1(0) + a2(0) + a3(0))]
= f [0.8 - 0.2(0.2 + 0.4 + 0.6)] = f [0.56] = 0.56
• The activations found as the net iterates are
• a1(1) = 0.0, a2(1) = 0.08, a3(1) = 0.32, a4(1) = 0.56
• a1(2) = 0.0, a2(2) = 0.0, a3(2) = 0.192, a4(2) = 0.48
• a1(3) = 0.0, a2(3) = 0.0, a3(3) = 0.096, a4(3) = 0.442
• a1(4) = 0.0, a2(4) = 0.0, a3(4) = 0.008, a4(4) = 0.422
• a1(5) = 0.0, a2(5) = 0.0, a3(5) = 0.0, a4(5) = 0.421
• Only A4 now has a nonzero activation, so the net stops: A4, the node
with the largest initial input, is the winner.
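The iteration table can be double-checked with a short script (a sketch assuming NumPy; values are rounded to three places to match the table above):

```python
import numpy as np

# Check of Example 1: eps = 0.2, initial activations (0.2, 0.4, 0.6, 0.8).
eps = 0.2
a = np.array([0.2, 0.4, 0.6, 0.8])
trace = []
while np.count_nonzero(a) > 1:
    # a_j(new) = f[a_j(old) - eps * sum_{k != j} a_k(old)], f(x) = max(x, 0);
    # (a.sum() - a) is the sum of the other nodes' old activations.
    a = np.maximum(a - eps * (a.sum() - a), 0.0)
    trace.append(np.round(a, 3))

for t, row in enumerate(trace, 1):
    print(t, row)  # one line per iteration of the table above
```

The loop stops after five iterations with only the fourth activation nonzero, confirming the hand computation.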
