Matlab Programs
1 a) Write a MATLAB program for a Hebb net to classify two-dimensional input patterns in bipolar form with given targets.
Hebb Network
The Hebb learning rule is a simple one. Donald Hebb stated in 1949 that in the brain, learning is performed by changes in the synaptic gap. According to the Hebb rule, the weight vector increases in proportion to the product of the input and the learning signal; here the learning signal is equal to the neuron's output. In Hebb learning, if two interconnected neurons are 'on' simultaneously, then the weights associated with these neurons can be increased by the modification made in their synaptic gap (strength). The weight update in the Hebb rule is given by
wi(new) = wi(old) + xi·y
b(new) = b(old) + y
As a result,
w(new) = w(old) + Δw
The Hebb rule can be used for pattern association, pattern categorization, pattern classification and a range of other areas.
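As a quick worked example of one update step (values chosen for illustration): for a bipolar input x = [1, −1] with target y = 1 and zero initial weights, the rule gives w(new) = [0, 0] + [1, −1]·1 = [1, −1] and b(new) = 0 + 1 = 1; presenting the same x with target y = −1 would subtract [1, −1] and cancel the weights back to zero.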
%Hebb net for two 20-element bipolar input patterns
%(X is assumed to hold the two training patterns, one per row;
%its definition was missing from the listing)
W(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
W=W+X(i,1:20)*t(i);
b=b+t(i);
end
disp('Weight matrix');
disp(W);
disp('Bias');
disp(b);
Output:
Weight matrix
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2
Bias
0
1 b) Generate XOR function and AND NOT function using McCulloch-Pitts Neural Network.
McCulloch-Pitts Neural Network
The early model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, I3, ..., Im and one output y. The linear threshold gate simply classifies the set of inputs into two different classes; thus the output y is binary. Such a function can be described mathematically as
y = f(Sum), where Sum = W1·I1 + W2·I2 + ... + Wm·Im.
W1, W2, W3, ..., Wm are weight values normalized in the range of either (0, 1) or (−1, 1) and associated with each input line, Sum is the weighted sum, and θ is a threshold constant. The function f is a linear step function at threshold θ.
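As a worked check of the threshold gate for the AND NOT function, using the weights the program below arrives at (W1 = 1, W2 = −1, θ = 1): the weighted sums for the inputs (0,0), (0,1), (1,0), (1,1) are 0, −1, 1, 0, and thresholding at θ = 1 yields the outputs 0, 0, 1, 0, which is exactly x1 AND NOT x2.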
%McCulloch-Pitts net for ANDNOT function
%(the first lines of this listing were missing; x1 and the initial
%prompts below are restored to match the printed output)
clear;
clc;
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
disp('Enter Weights');
W1=input('Weight W1=');
W2=input('Weight W2=');
disp('Enter Threshold Value');
theta=input('theta=');
con=1;
while con
zin=x1*W1+x2*W2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and threshold value');
W1=input('Weight W1=');
W2=input('Weight W2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(W1);
disp(W2);
disp('Threshold Value');
disp(theta);
Output:
Enter Weights
Weight W1=1
Weight W2=1
Enter Threshold Value
theta=0.1
Output of Net
0 1 1 1
Net is not learning enter another set of weights and threshold value
Weight W1=1
Weight W2=-1
theta=1
Output of Net
0 0 1 0
McCulloch-Pitts Net for ANDNOT function
Weights of neuron
1
-1
Threshold Value
1
Since XOR is not linearly separable, it is realized with two hidden McCulloch-Pitts units, z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1, whose outputs are combined by the output unit y = z1 OR z2. (Only the tail of this listing survived; the feed-forward portion below is restored following the ANDNOT program's structure and the printed output.)
%McCulloch-Pitts net for XOR function
clear;
clc;
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 1 1 0];
disp('Enter the weights');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
disp('Enter the threshold value');
theta=input('theta=');
con=1;
while con
zin1=x1*w11+x2*w21;
zin2=x1*w12+x2*w22;
for i=1:4
if zin1(i)>=theta
y1(i)=1;
else
y1(i)=0;
end
if zin2(i)>=theta
y2(i)=1;
else
y2(i)=0;
end
end
yin=y1*v1+y2*v2;
for i=1:4
if yin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('output of net=');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and threshold value');
w11=input('Weight w11=');
w12=input('Weight w12=');
w21=input('Weight w21=');
w22=input('Weight w22=');
v1=input('Weight v1=');
v2=input('Weight v2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of neuron Z1');
disp(w11);
disp(w21);
disp('Weights of neuron Z2');
disp(w12);
disp(w22);
disp('Weights of neuron ');
disp(v1);
disp(v2);
disp('Threshold value =');
disp(theta);
Output:
Enter the weights
Weight w11=1
Weight w12=-1
Weight w21=-1
Weight w22=1
Weight v1=1
Weight v2=1
Enter the threshold value
theta=1
output of net=
0 1 1 0
McCulloch-Pitts Net for XOR function
Weights of neuron Z1
1
-1
Weights of neuron Z2
-1
1
Weights of neuron
1
1
Threshold value =
1
2) Write a MATLAB program to implement the Back Propagation Network for a given input pattern.
Back Propagation Network
The back propagation learning algorithm is one of the most important developments in neural networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986). Networks associated with the back propagation learning algorithm are also called Back Propagation Networks (BPNs).
Training algorithm:
Step 0: Initialize weights and learning rate (take some small random values).
Step 1: Perform Steps 2-9 while the stopping condition is false.
Step 2: Perform Steps 3-8 for each training pair.
Feed-forward phase (Phase I)
Step 3: Each input unit receives input signal xi and sends it to the hidden units (i = 1 to n).
Step 4: Each hidden unit Zj (j = 1 to p) sums its weighted input signals to calculate the net input:
zinj = v0j + Σi xi vij
Calculate the output of the hidden unit by applying its activation function (binary or bipolar sigmoid) over zinj: zj = f(zinj), and send this output signal to the units of the output layer.
Step 5: Each output unit yk (k = 1 to m) calculates the net input:
yink = w0k + Σj zj wjk
and applies its activation function to compute the output signal yk = f(yink).
Back-propagation of error (Phase II)
Step 6: Each output unit yk (k = 1 to m) receives a target pattern corresponding to the input training pattern and computes the error correction term:
δk = (tk − yk) f′(yink)
The derivative f′(yink) can be calculated as in Section 2.3.3. On the basis of the calculated error correction term, the weight and bias changes are Δwjk = α δk zj and Δw0k = α δk.
Step 7: Each hidden unit Zj (j = 1 to p) sums its delta inputs from the output units:
δinj = Σk δk wjk
The term δinj gets multiplied with the derivative of f(zinj) to calculate the error term:
δj = δinj f′(zinj)
and the corresponding changes are Δvij = α δj xi and Δv0j = α δj.
Weight-update phase (Phase III)
Step 8: Each output and hidden unit updates its weights and bias: wjk(new) = wjk(old) + Δwjk, vij(new) = vij(old) + Δvij.
Step 9: Check the stopping condition (e.g., a fixed number of epochs or the error falling below a target value).
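For readability (a mapping note added here; the variable names are those of the listing below): delk(k) holds δk, delin(j) holds δinj, delj(j) holds δj; dw and dwb are the output-layer changes Δwjk and Δw0k; and dv and dvb are the hidden-layer changes Δvij and Δv0j, with alpha the learning rate α.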
%Back Propagation Network for the single pattern x = [0 1], target t = 1
%(the listing began mid-program; the initialization and feed-forward
%phase below are reconstructed from the initial values printed in the
%output and may not reproduce every printed digit exactly)
clear;
clc;
v=[0.7 -0.4;-0.2 0.3] %input-to-hidden weights
x=[0 1] %input pattern
t=[1] %target
w=[0.5;0.1] %hidden-to-output weights
t1=0
wb=-0.3 %output-unit bias
vb=[0.4 0.6] %hidden-unit biases
alpha=0.25 %learning rate
e=1;
while(e<=3)
e %echo the epoch number
%feed-forward: hidden layer
for j=1:2
zin(j)=vb(j);
for i=1:2
zin(j)=zin(j)+x(i)*v(i,j);
end
z(j)=1/(1+exp(-zin(j))); %binary sigmoid activation
fdz(j)=z(j)*(1-z(j)); %its derivative
end
%feed-forward: output layer
for k=1
temp=0;
for j=1:2
temp=temp+z(j)*w(j,k);
end
yin(k)=wb(k)+temp;
y(k)=1/(1+exp(-yin(k)));
fy(k)=y(k);
end
%back-propagation of error
for k=1
fdy(k)=fy(k)*(1- fy(k));
delk(k)=(t(k)-y(k))*fdy(k);
end
for k=1
for j=1:2
dw(j,k)=alpha*delk(k)*z(j);
end
dwb(k)=alpha*delk(k);
end
for j=1:2
for k=1
delin(j)=delk(k)*w(j,k);
end
delj(j)=delin(j)*fdz(j);
end
for i=1:2
for j=1:2
dv(i,j)=alpha*delj(j)*x(i);
end
dvb(i)=alpha*delj(i);
end
for k=1
for j=1:2
w(j,k)=w(j,k)+dw(j,k);
end
wb(k)=wb(k)+dwb(k);
end
w,wb
for i=1:2
for j=1:2
v(i,j)=v(i,j)+dv(i,j);
end
vb(i)=vb(i)+dvb(i);
end
v,vb
te(e)=e;
e=e+1;
end
Output:
v = 0.7000 -0.4000
-0.2000 0.3000
x=0 1
t=1
w = 0.5000
0.1000
t1 = 0
wb = -0.3000
vb = 0.4000 0.6000
alpha = 0.2500
e =1
w = 0.5167
0.1274
wb = -0.2699
v = 0.7000 -0.4000
-0.1963 0.3002
vb = 0.4037 0.6002
e =2
w = 0.5300
0.1450
wb = -0.2329
v = 0.7000 -0.4000
-0.1919 0.3014
vb = 0.4081 0.6014
e=3
w = 0.5392
0.1563
wb = -0.1978
v = 0.7000 -0.4000
-0.1883 0.3025
vb = 0.4117 0.6025
3) Develop a Kohonen Self-Organizing Feature Map for an image recognition problem.
The KSOM (also called a feature map or Kohonen map) is an unsupervised ANN algorithm (Kohonen et al., 1996). It is usually presented as a two-dimensional grid or map whose units (nodes or neurons) become tuned to different input data patterns. The principal goal of the KSOM is to transform an incoming signal pattern of arbitrary dimension into a two-dimensional discrete map. This mapping roughly preserves the most important topological and metric relationships of the original data elements, implying that not much information is lost during the mapping.
Step 1 − Initialize the weights wij, the learning rate α and the neighborhood topological scheme.
Step 2 − Continue Steps 3−9 while the stopping condition is not true.
Step 3 − For each input vector x, perform Steps 4−6.
Step 4 − For each cluster unit j, calculate the squared Euclidean distance D(j) = Σi (wij − xi)².
Step 5 − Find the winning unit J for which D(J) is minimum.
Step 6 − Calculate the new weight of the winning unit by the following relation:
wiJ(new) = wiJ(old) + α[xi − wiJ(old)]
Step 7 − Update the learning rate α.
Step 8 − Reduce the radius of the topological neighborhood.
Step 9 − Test for the stopping condition.
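A minimal MATLAB sketch of Steps 4−6 for a single input vector (the weight values here are illustrative assumptions, not those used in the program below):
x = [1 1 0 0];                              % one input pattern
w = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];   % assumed 4x2 weight matrix
alpha = 0.6;                                % learning rate
D = sum((w - x').^2);                       % squared distance to each cluster unit
[~, J] = min(D);                            % winning cluster unit
w(:,J) = w(:,J) + alpha*(x' - w(:,J));      % move only the winner toward x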
clear all;
clc;
disp('Kohonen Self organizing feature maps');
disp('The input patterns are');
x=[1 1 0 0; 0 0 0 1; 1 0 0 0;0 0 1 1]
t=1;
alpha(t)=0.6;
e=1;
disp('Since we have 4 input patterns and the number of cluster units to be formed is 2, the weight matrix is');
%(the remainder of the listing was missing; the training loop below is
%a minimal reconstruction, assuming random initial weights and a
%winner-take-all update, and will not reproduce the printed numbers)
w=rand(4,2)
while(e<=3)
disp('Epoch =');
e
for i=1:4
for j=1:2
D(j)=sum((w(:,j)-x(i,:)').^2) %squared Euclidean distance
end
[dmin,J]=min(D); %winning cluster unit
J
disp('Weight updation');
w(:,J)=w(:,J)+alpha(t)*(x(i,:)'-w(:,J))
end
alpha(t+1)=0.5*alpha(t) %halve the learning rate each epoch
t=t+1;
e=e+1;
end
Output:
x=1 1 0 0
0 0 0 1
1 0 0 0
0 0 1 1
Since we have 4 input patterns and the number of cluster units to be formed is 2, the weight matrix is
alpha = 0.6000
Epoch =
e=1
D = 0.2000
D = 0.2000 0.2000
J =1
Weight updation
w=
0.6800 0.8000
0.8400 0.4000
0.2000 0.7000
0.3600 0.3000
D =1.0800 0.2000
D = 1.0800 1.2000
J =1
Weight updation
w=
0.2720 0.8000
0.3360 0.4000
0.0800 0.7000
0.7440 0.3000
D = 0.4320 1.2000
D = 0.4320 1.2000
J=1
Weight updation
w =0.7088 0.8000
0.1344 0.4000
0.0320 0.7000
0.2976 0.3000
D = -0.8272 1.2000
D =-0.8272 0.2000
J= 1
Weight updation
w = 0.2835 0.8000
0.0538 0.4000
0.6128 0.7000
0.7190 0.3000
ans = 0.3000
Epoch =
e= 2
D = -0.3309 0.2000
D = -0.3309 0.2000
J =1
Weight updation
w = 0.4985 0.8000
0.3376 0.4000
0.4290 0.7000
0.5033 0.3000
D = 0.7684 0.2000
D = 0.7684 1.2000
J=1
Weight updation
w = 0.3489 0.8000
0.2363 0.4000
0.3003 0.7000
0.6523 0.3000
D = 0.5379 1.2000
D =0.5379 1.2000
J=1
Weight updation
w = 0.5442 0.8000
0.1654 0.4000
0.2102 0.7000
0.4566 0.3000
D = -0.6235 1.2000
D = -0.6235 0.2000
J =1
Weight updation
w = 0.3810 0.8000
0.1158 0.4000
0.4471 0.7000
0.6196 0.3000
ans = 0.1500
Epoch =
e =3
D = -0.4364 0.2000
D = -0.4364 0.2000
J=1
Weight updation
w = 0.4738 0.8000
0.2484 0.4000
0.3801 0.7000
0.5267 0.3000
D = 0.6290 0.2000
D = 0.6290 1.2000
J =1
Weight updation
w = 0.4028 0.8000
0.2112 0.4000
0.3231 0.7000
0.5977 0.3000
D = 0.5347 1.2000
D = 0.5347 1.2000
J= 1
Weight updation
w = 0.4923 0.8000
0.1795 0.4000
0.2746 0.7000
0.5080 0.3000
D = -0.5455 1.2000
D = -0.5455 0.2000
J=1
Weight updation
w = 0.4185 0.8000
0.1526 0.4000
0.3834 0.7000
0.5818 0.3000
ans = 0.0750
4) Write a MATLAB program to implement a Discrete Hopfield Network and test the input pattern.
%discrete hopfield
clear all;
clc;
disp('Discrete Hopfield Network');
theta=0;
X=[1 -1 -1 -1;-1 1 1 -1;-1 -1 -1 1];
%calculating Weight Matrix
W=X'*X
%calculating Energy
k=1;
while(k<=3)
temp=0;
for i=1:4
for j=1:4
temp=temp+(X(k,i)*W(i,j)*X(k,j));
end
end
E(k)=(-0.5)*temp;
k=k+1;
end
%Energy function for the 3 samples
E
%Test for the given pattern x1=[-1 1 -1 -1]
disp('Given input pattern for testing');
x1=[-1 1 -1 -1]
temp=0;
for i=1:4
for j=1:4
temp=temp+(x1(i)*W(i,j)*x1(j));
end
end
SE=(-0.5)*temp
disp('By Synchronous Updation method');
disp('The net input calculated is');
yin=x1*W
for i=1:4
if(yin(i)>theta)
y(i)=1;
elseif(yin(i)==theta)
y(i)=yin(i);
else
y(i)=-1;
end
end
disp('The output calculated for net input is');
y
temp=0;
for i=1:4
for j=1:4
temp=temp+(y(i)*W(i,j)*y(j));
end
end
SE=(-0.5)*temp
n=0;
for i=1:3
if (SE==E(i))
n=0;
k=i;
else
n=n+1;
end
end
if(n==3)
disp('Pattern is not associated with any input pattern');
else
disp('The test pattern');
x1
disp('is associated with');
X(k,:)
end
Output:
Discrete Hopfield Network
W=
3 -1 -1 -1
-1 3 3 -1
-1 3 3 -1
-1 -1 -1 3
E=
-10 -12 -10
Given input pattern for testing
x1 =
-1 1 -1 -1
SE =
-2
By Synchronous Updation method
The net input calculated is
yin =
-2 2 2 -2
The output calculated for net input is
y=
-1 1 1 -1
SE =
-12
The test pattern
x1 =
-1 1 -1 -1
is associated with
ans =
-1 1 1 -1
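As a hand check of the energy computation E = (−0.5)·x·W·xᵀ used above: for the stored pattern [−1 1 1 −1], x·W = [−4 8 8 −4], so x·W·xᵀ = 4 + 8 + 8 + 4 = 24 and E = −12, which matches both the second entry of E and the energy SE of the converged test output; this is why the test pattern is reported as associated with [−1 1 1 −1].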
5) Develop a simple Ant Colony Optimization program in MATLAB to find the optimum path.
%Ant Colony Optimization: the colony starts near the nest and searches
%for the food source at (5,5); pheromone is the reciprocal of the error
%(the initialization was missing from the listing; the values below are
%assumptions)
clear;
clc;
total_ants=100; %number of ants
startingptxxx=0; %nest position
startingptyyy=0;
prev_error=inf;
times=0;
figure;
hold on;
while times<100 %number of iterations (assumed)
times=times+1;
for i=1:1:total_ants
startingptx(i)=startingptxxx+rand*0.5; %each ant randomly takes any
startingpty(i)=startingptyyy+rand*0.5; %position near its starting point
end
for i=1:1:total_ants
dist(i)=sqrt((5-startingptx(i))^2+(5-startingpty(i))^2);
e(i)=dist(i); %greater the distance, greater the error
end
for i=1:1:total_ants
pheromone(i)=1/e(i);
end
bestpath=find(pheromone==max(pheromone));
if e(bestpath)<prev_error
startingptxxx=startingptx(bestpath);
startingptyyy=startingpty(bestpath);
plot(startingptxxx, startingptyyy, 'ro', 'LineWidth',4)
hold on;
savex(times)=startingptxxx;
savey(times)=startingptyyy;
prev_error=e(bestpath); %remember the improved error (the listing assigned to 'error', leaving prev_error stale)
end
end
plot(savex, savey, 'r', 'LineWidth',2)
Output:
antcolonyoptimization
bestpath
bestpath = 77
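Note that the pheromone rule above is simply the reciprocal of the distance-based error, so an ant that lands closer to the food at (5,5) deposits more pheromone; for instance, an ant at distance 2 contributes pheromone 0.5 while one at distance 0.5 contributes 2, and the colony's starting point drifts toward the strongest deposit each iteration.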
6) Implement the Artificial Bee Colony (ABC) optimization algorithm in MATLAB to minimize the Sphere cost function.
% Main.m
abc;
%RouletteWheelSelection
function i=RouletteWheelSelection(P)
r=rand;
C=cumsum(P);
i=find(r<=C,1,'first');
end
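RouletteWheelSelection returns an index drawn with probability proportional to the entries of P. A quick usage sketch (illustrative only, assuming P sums to 1):
% Draw 1000 indices from the probabilities [0.2 0.5 0.3]; the empirical
% frequencies should come out close to [0.2 0.5 0.3].
idx=arrayfun(@(k) RouletteWheelSelection([0.2 0.5 0.3]), 1:1000);
histcounts(idx, 0.5:1:3.5)/1000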
% ABC.m
clc;
clear;
close all;
%% Problem Definition
CostFunction=@(x) Sphere(x); % Cost Function
nVar=5; % Number of Decision Variables
VarSize=[1 nVar]; % Decision Variables Matrix Size
VarMin=-10; % Decision Variables Lower Bound
VarMax= 10; % Decision Variables Upper Bound
%% ABC Settings
MaxIt=200; % Maximum Number of Iterations
nPop=100; % Population Size (Colony Size)
nOnlooker=nPop; % Number of Onlooker Bees
L=round(0.6*nVar*nPop); % Abandonment Limit Parameter (Trial Limit)
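% (note: with nVar=5 and nPop=100 as set above, L = round(0.6*5*100) = 300,
% i.e. a food source is abandoned after 300 unsuccessful improvement trials)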
a=1; % Acceleration Coefficient Upper Bound
%% Initialization
% Empty Bee Structure
empty_bee.Position=[];
empty_bee.Cost=[];
% Abandonment Counter
C=zeros(nPop,1);
%(the population initialization was missing from the listing; restored
%below following the structure the rest of the code assumes)
% Initialize Population Array
pop=repmat(empty_bee,nPop,1);
% Initialize Best Solution Ever Found
BestSol.Cost=inf;
% Create Initial Population
for i=1:nPop
pop(i).Position=unifrnd(VarMin,VarMax,VarSize);
pop(i).Cost=CostFunction(pop(i).Position);
if pop(i).Cost<=BestSol.Cost
BestSol=pop(i);
end
end
% Array to Hold Best Cost Values
BestCost=zeros(MaxIt,1);
%% ABC Main Loop
for it=1:MaxIt
% Recruited Bees
for i=1:nPop
% Choose k randomly, not equal to i
K=[1:i-1 i+1:nPop];
k=K(randi([1 numel(K)]));
% Define Acceleration Coefficient
phi=a*unifrnd(-1,+1,VarSize);
% New Bee Position (missing from the listing; standard ABC move)
newbee.Position=pop(i).Position+phi.*(pop(i).Position-pop(k).Position);
% Evaluation
newbee.Cost=CostFunction(newbee.Position);
% Comparison
if newbee.Cost<=pop(i).Cost
pop(i)=newbee;
else
C(i)=C(i)+1;
end
end
% Calculate Fitness Values and Selection Probabilities
F=zeros(nPop,1);
MeanCost=mean([pop.Cost]);
for i=1:nPop
F(i)=exp(-pop(i).Cost/MeanCost); % convert cost to fitness
end
P=F/sum(F);
% Onlooker Bees
for m=1:nOnlooker
% Select Source Site via roulette wheel
i=RouletteWheelSelection(P);
% Choose k randomly, not equal to i
K=[1:i-1 i+1:nPop];
k=K(randi([1 numel(K)]));
% Define Acceleration Coefficient
phi=a*unifrnd(-1,+1,VarSize);
% New Bee Position
newbee.Position=pop(i).Position+phi.*(pop(i).Position-pop(k).Position);
% Evaluation
newbee.Cost=CostFunction(newbee.Position);
% Comparison
if newbee.Cost<=pop(i).Cost
pop(i)=newbee;
else
C(i)=C(i)+1;
end
end
% Scout Bees
for i=1:nPop
if C(i)>=L
pop(i).Position=unifrnd(VarMin,VarMax,VarSize);
pop(i).Cost=CostFunction(pop(i).Position);
C(i)=0;
end
end
% Update Best Solution Ever Found (restored; required by the Results section)
for i=1:nPop
if pop(i).Cost<=BestSol.Cost
BestSol=pop(i);
end
end
% Store Best Cost Ever Found
BestCost(it)=BestSol.Cost;
% Display Iteration Information
disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it))]);
end
%% Results
figure;
%plot(BestCost,'LineWidth',2);
semilogy(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
% Sphere.m
function z=Sphere(x)
z=sum(x.^2);
end
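As a quick sanity check of the cost function: Sphere([1 2 3]) returns 1 + 4 + 9 = 14, and the global minimum is z = 0 at the origin, so a successful ABC run should drive the Best Cost curve toward zero on the semilogy plot.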
Output: (per-iteration best-cost log and the semilogy convergence plot)
7) Implement Particle Swarm Optimization (PSO) with mutation in MATLAB to minimize a model-based cost function.
% PSO.m
clc;
clear;
close all;
%% Problem Definition
model=CreateModel(); % model defined in an external file (not shown)
%(the following definitions were missing from the listing; the names
%and values below are assumptions)
CostFunction=@(x) MyCost(x,model); % cost function (MyCost assumed, not shown)
nVar=model.n; % number of decision variables (assumed field)
VarSize=[1 nVar]; % decision variables matrix size
VarMin=0; % lower bound (assumed)
VarMax=1; % upper bound (assumed)
%% PSO Parameters
% Constriction Coefficients
% phi1=2.05;
% phi2=2.05;
% phi=phi1+phi2;
% chi=2/(phi-2+sqrt(phi^2-4*phi));
% w=chi; % Inertia Weight
% wdamp=1; % Inertia Weight Damping Ratio
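% (worked check of the commented-out values: with phi1=phi2=2.05, phi=4.1,
% chi = 2/(4.1-2+sqrt(4.1^2-4*4.1)) = 2/(2.1+sqrt(0.41)) ~= 0.7298, the
% standard Clerc-Kennedy constriction coefficient)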
%(the parameter values below were missing from the listing and are
%assumptions)
MaxIt=200; % Maximum Number of Iterations
nPop=100; % Population (Swarm) Size
w=1; % Inertia Weight
wdamp=0.99; % Inertia Weight Damping Ratio
c1=1.5; % Personal Learning Coefficient
c2=2.0; % Global Learning Coefficient
mu=0.1; % Mutation Rate (used by Mutate below; Mutate defined externally, not shown)
% Velocity Limits
VelMax=0.1*(VarMax-VarMin);
VelMin=-VelMax;
%% Initialization
empty_particle.Position=[];
empty_particle.Cost=[];
empty_particle.Sol=[];
empty_particle.Velocity=[];
empty_particle.Best.Position=[];
empty_particle.Best.Cost=[];
empty_particle.Best.Sol=[];
particle=repmat(empty_particle,nPop,1);
BestSol.Cost=inf;
for i=1:nPop
% Initialize Position
particle(i).Position=unifrnd(VarMin,VarMax,VarSize);
% Initialize Velocity
particle(i).Velocity=zeros(VarSize);
% Evaluation
[particle(i).Cost, particle(i).Sol]=CostFunction(particle(i).Position);
% Update Personal Best
particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;
particle(i).Best.Sol=particle(i).Sol;
% Update Global Best (the if-test was missing from the listing)
if particle(i).Best.Cost<BestSol.Cost
BestSol=particle(i).Best;
end
end
BestCost=zeros(MaxIt,1);
for it=1:MaxIt
for i=1:nPop
% Update Velocity
particle(i).Velocity = w*particle(i).Velocity ...
+c1*rand(VarSize).*(particle(i).Best.Position-particle(i).Position) ...
+c2*rand(VarSize).*(BestSol.Position-particle(i).Position);
% Apply Velocity Limits (VelMax/VelMin were otherwise unused in the listing)
particle(i).Velocity=max(particle(i).Velocity,VelMin);
particle(i).Velocity=min(particle(i).Velocity,VelMax);
% Update Position
particle(i).Position = particle(i).Position + particle(i).Velocity;
% Evaluation
[particle(i).Cost, particle(i).Sol] = CostFunction(particle(i).Position);
% Mutation
for k=1:2
NewParticle=particle(i);
NewParticle.Position=Mutate(particle(i).Position, mu);
[NewParticle.Cost, NewParticle.Sol]=CostFunction(NewParticle.Position);
if NewParticle.Cost<=particle(i).Cost || rand < 0.1
particle(i)=NewParticle;
end
end
% Update Personal Best (the if-tests were missing from the listing)
if particle(i).Cost<particle(i).Best.Cost
particle(i).Best.Position=particle(i).Position;
particle(i).Best.Cost=particle(i).Cost;
particle(i).Best.Sol=particle(i).Sol;
% Update Global Best
if particle(i).Best.Cost<BestSol.Cost
BestSol=particle(i).Best;
end
end
end
BestCost(it)=BestSol.Cost;
w=w*wdamp;
end
%% Results
figure;
plot(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
Output: (best-cost convergence plot of the PSO run)