ML Assignment 3
Instructions
1. This assignment contains 1 Question. Attempt all questions.
2. This assignment must be completed within the assigned time; submissions after the due date will not be accepted.
3. Students of the course will download the assignment and submit their solution through the CMS portal; submissions made by any other means will not be accepted.
4. Please ensure that no part of your assignment is copied from any other source without acknowledgement of the source and proper referencing (IEEE).
5. Please note that copy-pasting is a serious form of academic dishonesty known as plagiarism, and penalties apply to anyone found guilty of such offences.
6. You may use lecture notes, books, and other sources, but you must refer to and cite them properly; a reference list must be given at the end of the assignment.
7. The assignment should be submitted as a single PDF file. To do this, first photograph all handwritten pages and merge them using a smartphone app (or, from a PC/laptop, place all images in a Word file and save it as PDF), including the assignment paper at the start of the submission.
8. The assignment can be compressed or broken into two parts if the file size is larger than the upload limit.
9. The font size should be 12 and Times New Roman should be used. All figures and illustrations should be
THIS TABLE IS FOR OFFICIAL USE. DO NOT WRITE ANYTHING ON IT
Questions        Q-1
CLOs             CLO-3
Maximum Marks    5
Marks Obtained
Total Score
Faculty of Computing and Information Technology (FCIT), Department of Computing, Indus University, Karachi
Introduction
A neural network is a machine learning process that teaches computers to process data in a way that mimics the human brain. It is a type of artificial intelligence that uses interconnected nodes, or neurons, arranged in a layered structure. In any neural network, data flows from an input layer through one or more hidden layers to an output layer.
Problem Statement
Objectives
- Implement a neural network with two layers for a classification problem.
- Implement forward propagation using matrix multiplication.
- Perform backward propagation.
Steps to be followed
Step-1: Import Libraries
→ Import sklearn, matplotlib, w3_unittest
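A minimal import cell for this step might look like the following; numpy is included because the later steps rely on it, and w3_unittest is assumed to be the unit-test module shipped with the lab.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from sklearn.datasets import make_blobs
import w3_unittest  # assumption: grader/unit-test module provided with the lab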
Step-2: Load and Preprocess Dataset (1.0)

m = 2000
samples, labels = make_blobs(n_samples=m,
                             centers=([2.5, 3], [6.7, 7.9], [2.1, 7.9], [7.4, 2.8]),
                             cluster_std=1.1,
                             random_state=0)
labels[(labels == 0) | (labels == 1)] = 1
labels[(labels == 2) | (labels == 3)] = 0
X = np.transpose(samples)
Y = labels.reshape((1, m))
plt.scatter(X[0, :], X[1, :], c=Y, cmap=colors.ListedColormap(['blue', 'red']));

Print all required values.
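As a sketch of what "print all required values" could cover, the usual checks at this point are the shapes of the data arrays and the number of samples:

print("Shape of X:", X.shape)              # expected (2, m): two features per column
print("Shape of Y:", Y.shape)              # expected (1, m): one label per column
print("Number of training examples m:", m)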
Step-3: Defining the Neural Network Structure (0.5)

n_x: the size of the input layer
n_h: the size of the hidden layer (set it equal to 2 for now)
n_y: the size of the output layer
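Two helpers named in the nn_model code below, layer_sizes and initialize_parameters, follow from these sizes. The sketch here is an assumption (small random weights, zero biases); only the names and signatures come from the handout.

def layer_sizes(X, Y):
    n_x = X.shape[0]   # size of the input layer (number of features)
    n_h = 2            # size of the hidden layer (fixed to 2 for now)
    n_y = Y.shape[0]   # size of the output layer
    return (n_x, n_h, n_y)

def initialize_parameters(n_x, n_h, n_y):
    # Assumption: small random weights, zero biases
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}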
Step-4: Perform Forward Propagation (0.5)

W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
Z1 = np.matmul(W1, X) + b1
A1 = sigmoid(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
assert A2.shape == (n_y, X.shape[1])
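The training loop below calls forward_propagation(X, parameters) and expects it to return both A2 and a cache of intermediate values for backpropagation. A possible wrapper around the lines above (the exact cache contents are an assumption) is:

def forward_propagation(X, parameters):
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    # Linear step followed by the sigmoid activation for both layers
    Z1 = np.matmul(W1, X) + b1
    A1 = sigmoid(Z1)
    Z2 = np.matmul(W2, A1) + b2
    A2 = sigmoid(Z2)
    # Cache the activations needed during backward propagation
    cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2}
    return A2, cache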
Step-5: Perform Backward Propagation (1.0)
def backward_propagation(parameters, cache, X, Y):
    m = X.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    A1 = cache["A1"]
    A2 = cache["A2"]
    # Gradients of the cost with respect to the output-layer parameters
    dZ2 = A2 - Y
    dW2 = 1/m * np.dot(dZ2, A1.T)
    db2 = 1/m * np.sum(dZ2, axis=1, keepdims=True)
    # Backpropagate through the sigmoid of the hidden layer
    dZ1 = np.dot(W2.T, dZ2) * A1 * (1 - A1)
    dW1 = 1/m * np.dot(dZ1, X.T)
    db1 = 1/m * np.sum(dZ1, axis=1, keepdims=True)
    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}
    return grads

grads = backward_propagation(parameters, cache, X, Y)

Print required values.
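nn_model below also calls compute_cost(A2, Y), which is not listed in the handout. A plausible sketch, assuming the standard binary cross-entropy loss for a sigmoid output, is:

def compute_cost(A2, Y):
    m = Y.shape[1]
    # Assumption: binary cross-entropy averaged over the m examples
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    cost = -1/m * np.sum(logprobs)
    return float(cost)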
def nn_model(X, Y, n_h, num_iterations=10, learning_rate=1.2, print_cost=False):
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]
    parameters = initialize_parameters(n_x, n_h, n_y)
    for i in range(0, num_iterations):
        A2, cache = forward_propagation(X, parameters)
        cost = compute_cost(A2, Y)
        grads = backward_propagation(parameters, cache, X, Y)
        # Gradient-descent update (not shown in the handout, but needed so the
        # computed gradients and learning_rate actually change the parameters)
        parameters["W1"] = parameters["W1"] - learning_rate * grads["dW1"]
        parameters["b1"] = parameters["b1"] - learning_rate * grads["db1"]
        parameters["W2"] = parameters["W2"] - learning_rate * grads["dW2"]
        parameters["b2"] = parameters["b2"] - learning_rate * grads["db2"]
        if print_cost:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters

Print W1, W2, b1, and b2 after setting num_iterations=100 and learning_rate=1.2.
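A usage sketch for the required run, with variable names following the earlier steps:

parameters = nn_model(X, Y, n_h=2, num_iterations=100, learning_rate=1.2, print_cost=True)
print("W1:", parameters["W1"])
print("b1:", parameters["b1"])
print("W2:", parameters["W2"])
print("b2:", parameters["b2"])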
Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from matplotlib import colors
# Number of samples
m = 2000
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
# Forward pass (Step-4): linear step plus sigmoid activation for both layers
Z1 = np.dot(W1, X) + b1
A1 = sigmoid(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
# Backward pass (these lines form the body of backward_propagation from Step-5)
dZ2 = A2 - Y
dW2 = 1/m * np.dot(dZ2, A1.T)
db2 = 1/m * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = np.dot(W2.T, dZ2) * A1 * (1 - A1)
dW1 = 1/m * np.dot(dZ1, X.T)
db1 = 1/m * np.sum(dZ1, axis=1, keepdims=True)
grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
return grads
# Calculate gradients
grads = backward_propagation(parameters, cache, X, Y)
print("dW1:", grads["dW1"])
print("db1:", grads["db1"])
print("dW2:", grads["dW2"])
print("db2:", grads["db2"])
def nn_model(X, Y, n_h, num_iterations=10, learning_rate=1.2, print_cost=False):
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]
    parameters = initialize_parameters(n_x, n_h, n_y)
    for i in range(num_iterations):
        A2, cache = forward_propagation(X, parameters)
        cost = compute_cost(A2, Y)
        grads = backward_propagation(parameters, cache, X, Y)
        # Gradient-descent update: apply the gradients with the given learning rate
        parameters["W1"] = parameters["W1"] - learning_rate * grads["dW1"]
        parameters["b1"] = parameters["b1"] - learning_rate * grads["db1"]
        parameters["W2"] = parameters["W2"] - learning_rate * grads["dW2"]
        parameters["b2"] = parameters["b2"] - learning_rate * grads["db2"]
        if print_cost and i % 10 == 0:
            print(f"Cost after iteration {i}: {cost}")
    return parameters