Soft Computing Lab Manual


An Autonomous Institution

Approved by AICTE, Affiliated to Anna University, Chennai.


ISO 9001:2015 Certified Institution, Accredited by NBA (BME, CSE, ECE, EEE, IT & MECH), Accredited by NAAC.
#42, Avadi-Vel Tech Road, Avadi, Chennai- 600062, Tamil Nadu, India.

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

191ITV18-SOFT COMPUTING [LAB INTEGRATED]

NAME :

REGISTER NO :

ROLL NO :

PROGRAMME : B.TECH – ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

YEAR/SEMESTER :

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

VISION

• To promote centre of excellence through effectual Teaching and Learning, imparting the
contemporary knowledge centric education through innovative research in multidisciplinary
fields.
MISSION

• To impart quality technical skills through practicing, knowledge updating in recent technology
and produce professionals with multidisciplinary and leadership skills.
• To promote innovative thinking for the design and development of software products of varying
complexity with intelligence to fulfil global standards and demands.
• To inculcate professional ethics among the graduates and to adapt to changing technologies
through lifelong learning.

CERTIFICATE

Name: ………………….…………………………….………….….………………………….…

Year: ……………… Semester: ……... Department: B. Tech - Artificial Intelligence & Data Science

University Register No:

College Roll No:

Certified that this is the bonafide record of work done by the above student in 191ITV18-SOFT
COMPUTING [LAB INTEGRATED] during the academic year 2024-2025.

Signature of Head of the Department Signature of Course In charge

Submitted for the University Practical Examination held on ………………... at VEL TECH MULTI
TECH Dr. Rangarajan Dr. Sagunthala ENGINEERING COLLEGE, #42, AVADI – VEL TECH
ROAD, AVADI, CHENNAI - 600062.

Signature of Examiners

Internal Examiner:…………………… , External Examiner:………………

Date:………
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
POs Programme Outcomes (POs)

PO1 Engineering Knowledge: Apply knowledge of mathematics, science, engineering fundamentals and an
Engineering Specialization to the solution of complex engineering problems.

PO2
Problem Analysis: Identify, formulate, review research literature and analyse complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

PO3 Design / Development of solutions: Design solutions for complex engineering problems and design system
components or processes that meet specified needs with appropriate consideration for public health and safety,
cultural, societal, and environmental considerations.

PO4 Conduct Investigations of Complex Problems: Use research-based knowledge and research methods including
design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid
conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools
including prediction and modelling to complex engineering activities with an understanding of the limitations.
PO6 The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety,
legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

PO7 Environment and sustainability: Understand the impact of the professional engineering solutions in societal and
environmental contexts, and demonstrate the knowledge of, and need for sustainable development.

PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering
practice.

PO9 Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and
in multidisciplinary settings.

PO10 Communication: Communicate effectively on complex engineering activities with the engineering community and
with society at large, such as, being able to comprehend and write effective reports and design documentation, make
effective presentations, and give and receive clear instructions.

PO11 Project Management and Finance: Demonstrate knowledge and understanding of the engineering and management
principles and apply these to one’s own work, as a member and leader in a team, to manage projects and in
multidisciplinary environments.

PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and lifelong
learning in the broadest context of technological change.
COURSE OBJECTIVES

The student should be made to:


• Understand the ideas of fuzzy sets, fuzzy logic and the use of heuristics based on human experience.
• Acquire the mathematical background for carrying out the optimization associated with neural
network learning.
• Explore various evolutionary algorithms.
• Study neural networks to form appropriate rules for an inference system.
• Learn the importance of intelligent behaviour of programs based on soft computing.

COURSE OUTCOMES
At the end of the course, the student should be able to

CO1 Learn the fundamentals of fuzzy logic operators and inference mechanisms.
CO2 Illustrate the mechanism of neural network architecture for AI applications such as classification
and clustering.
CO3 Apply the functionality of Genetic Algorithms in Optimization problems.
CO4 Implement hybrid techniques involving Neural networks and Fuzzy logic.
CO5 Apply soft computing techniques in real world applications.



LIST OF EXPERIMENTS

S.NO DATE NAME OF EXPERIMENT CO PAGE NO MARKS SIGN

1 Implementation of fuzzy controller CO1

2 Programming exercise on classification with a discrete perceptron CO2

3 Implementation of XOR with back propagation algorithm CO2

4 Implementation of self-organizing maps for a specific application CO2

5 Programming exercise on maximizing a function using genetic algorithm CO3

6 Implementation of two input sine function CO4

7 Implementation of three input nonlinear function CO5
EX NO:1
IMPLEMENTATION OF FUZZY CONTROLLER
DATE:

AIM:

To implement a fuzzy controller, i.e., a system that makes decisions based on fuzzy logic rules and
membership functions.

ALGORITHM:

Step 1: Define input and output variables to control and make decision.

Step 2: For each input and output variable, create membership functions that define their linguistic range.

Step 3: Define rules that connect combinations of inputs’ membership functions to outputs’ membership
functions.

Step 4: Convert crisp inputs (real-world values) into fuzzy sets based on the defined membership functions.

Step 5: Use the rules to infer the appropriate output membership functions based on the fuzzified inputs.

Step 6: Convert the fuzzy output back to a crisp value for the actual control action.
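
Before the full controller program, the following is a minimal sketch of Steps 4 and 6 in isolation, using skfuzzy primitives directly (the universe and input values here are illustrative):

import numpy as np
import skfuzzy as fuzz

x = np.arange(0, 101, 1)
near = fuzz.trimf(x, [0, 0, 50])  # triangular membership function

# Step 4 (fuzzification): degree to which the crisp input 30 is 'near'
mu_near = fuzz.interp_membership(x, near, 30)
print("membership of 30 in 'near':", mu_near)  # 0.4

# Step 6 (defuzzification): the centroid of a fuzzy set gives a crisp value
crisp = fuzz.defuzz(x, near, 'centroid')
print("centroid of 'near':", crisp)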

PROGRAM:

import numpy as np

import skfuzzy as fuzz

from skfuzzy import control as ctrl

# Antecedent variables (inputs)

distance = ctrl.Antecedent(np.arange(0, 101, 1), 'distance')

speed = ctrl.Antecedent(np.arange(0, 101, 1), 'speed')

# Consequent variable (output)

acceleration = ctrl.Consequent(np.arange(0, 101, 1), 'acceleration')

# Membership functions for distance

distance['near'] = fuzz.trimf(distance.universe, [0, 0, 50])

distance['medium'] = fuzz.trimf(distance.universe, [0, 50, 100])

distance['far'] = fuzz.trimf(distance.universe, [50, 100, 100])

# Membership functions for speed

speed['slow'] = fuzz.trimf(speed.universe, [0, 0, 50])

speed['medium'] = fuzz.trimf(speed.universe, [0, 50, 100])

speed['fast'] = fuzz.trimf(speed.universe, [50, 100, 100])

# Membership functions for acceleration

acceleration['decelerate'] = fuzz.trimf(acceleration.universe, [0, 0, 50])

acceleration['maintain'] = fuzz.trimf(acceleration.universe, [0, 50, 100])

acceleration['accelerate'] = fuzz.trimf(acceleration.universe, [50, 100, 100])

# Rules for the fuzzy logic

rule1 = ctrl.Rule(distance['near'] | speed['slow'], acceleration['decelerate'])

rule2 = ctrl.Rule(distance['medium'] | speed['medium'], acceleration['maintain'])

rule3 = ctrl.Rule(distance['far'] | speed['fast'], acceleration['accelerate'])

# Control system

acceleration_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])

acceleration_simulation = ctrl.ControlSystemSimulation(acceleration_ctrl)

# Pass inputs to the controller and compute the output

acceleration_simulation.input['distance'] = 70 # Distance to the obstacle

acceleration_simulation.input['speed'] = 30 # Current speed

acceleration_simulation.compute()

# Display the computed acceleration

print("Computed Acceleration:", acceleration_simulation.output['acceleration'])

# Visualize the membership functions (optional)

distance.view()

speed.view()

acceleration.view()

OUTPUT:

Figures 1-3: membership function plots for distance, speed and acceleration.

Computed Acceleration: 49.999999999999986

INFERENCE:

RESULT:

EX NO:2
PROGRAMMING EXERCISE ON CLASSIFICATION WITH A
DATE: DISCRETE PERCEPTRON

AIM:

To develop a Python program that implements a discrete perceptron for binary classification.

ALGORITHM:

Step 1: Initialize weights (w) and bias (b) randomly or to zero.

Step 2: Iterate through the training dataset for a fixed number of epochs.

Step 3: Input the features (x) of the data point to the perceptron.

• Calculate the weighted sum of inputs: $\text{weighted\_sum} = \sum_{i=1}^{n} (w_i \times x_i) + b$, where $n$ is the number of features.

• Apply the step function (discrete activation): $\text{output} = \begin{cases} 1 & \text{if weighted\_sum} \geq \text{threshold} \\ 0 & \text{otherwise} \end{cases}$

• Update weights and bias.

Step 4: If the output doesn't match the expected label:

• Adjust weights: $w_i = w_i + \text{learning\_rate} \times (\text{expected} - \text{output}) \times x_i$ for all $i$ features.

• Adjust bias: $b = b + \text{learning\_rate} \times (\text{expected} - \text{output})$

Step 5: Repeat Until Convergence or Maximum Epochs Reached.
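
For example, with learning rate 0.1, input $x = [1, 1]$, expected label 1 and predicted output 0, each weight is adjusted by $0.1 \times (1 - 0) \times 1 = 0.1$ and the bias by $0.1 \times (1 - 0) = 0.1$.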

PROGRAM:

class DiscretePerceptron:

def __init__(self, input_size):

self.weights = [0] * input_size

self.bias = 0

def predict(self, inputs):

activation = self.bias

for i in range(len(inputs)):

activation += self.weights[i] * inputs[i]

return 1 if activation >= 0 else 0

def train(self, training_inputs, labels, epochs=10, learning_rate=1):

for epoch in range(epochs):

for inputs, label in zip(training_inputs, labels):

prediction = self.predict(inputs)

for i in range(len(self.weights)):

self.weights[i] += learning_rate * (label - prediction) * inputs[i]

self.bias += learning_rate * (label - prediction)

print(f"Epoch {epoch + 1}/{epochs} – Accuracy: {self.evaluate(training_inputs, labels)}")

def evaluate(self, inputs, labels):

correct = 0

for i in range(len(inputs)):

prediction = self.predict(inputs[i])

if prediction == labels[i]:

correct += 1

return correct / len(inputs)

# Training data for AND gate

training_inputs = [

[0, 0],

[0, 1],

[1, 0],

[1, 1]
]

labels = [0, 0, 0, 1]

# Creating a Discrete Perceptron and training it on AND gate data

perceptron = DiscretePerceptron(input_size=2)

perceptron.train(training_inputs, labels, epochs=10, learning_rate=0.1)

# Testing the trained model

test_inputs = [

[0, 0],

[0, 1],

[1, 0],

[1, 1]
]

print("\nTesting the model:")

for i, test_input in enumerate(test_inputs):

prediction = perceptron.predict(test_input)

print(f"Input: {test_input} Predicted Output: {prediction}")

OUTPUT:

Epoch 1/10 – Accuracy: 0.25

Epoch 2/10 – Accuracy: 0.5

Epoch 3/10 – Accuracy: 1.0

Epoch 4/10 – Accuracy: 1.0

Epoch 5/10 – Accuracy: 1.0

Epoch 6/10 – Accuracy: 1.0

Epoch 7/10 – Accuracy: 1.0

Epoch 8/10 – Accuracy: 1.0

Epoch 9/10 – Accuracy: 1.0

Epoch 10/10 – Accuracy: 1.0

Testing the model:

Input: [0, 0] Predicted Output: 0

Input: [0, 1] Predicted Output: 0

Input: [1, 0] Predicted Output: 0

Input: [1, 1] Predicted Output: 1

INFERENCE:

RESULT:

EX NO:3
IMPLEMENTATION OF XOR WITH BACK PROPAGATION
DATE: ALGORITHM

AIM:

The goal is to create a neural network capable of learning and predicting the XOR function’s outputs based on
given inputs.

ALGORITHM:

Step 1. Randomly initialize weights and biases for the connections between layers.

Step 2. Define the XOR truth table dataset containing input-output pairs.

Step 3. Input the XOR data values to the neural network.

• Compute the outputs for each input through forward propagation:

• Calculate the weighted sum of inputs and apply activation function for hidden layer(s) and output layer.

Step 4. Compute the error between the predicted and target outputs.

Step 5. Adjust weights and biases using backpropagation to minimize the error.
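
Note that the sigmoid $\sigma(z) = 1/(1 + e^{-z})$ satisfies $\sigma'(z) = \sigma(z)\,(1 - \sigma(z))$, so the derivative can be evaluated from the activated output alone; this is why sigmoid_derivative in the program below takes the already-activated value $x$ and returns $x(1 - x)$.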

PROGRAM:

import numpy as np

class XORNeuralNetwork:

def __init__(self):

# Initialize weights and biases for the network

self.input_size = 2

self.hidden_size = 4

self.output_size = 1

self.hidden_weights = np.random.randn(self.input_size, self.hidden_size)

self.hidden_bias = np.zeros((1, self.hidden_size))

self.output_weights = np.random.randn(self.hidden_size, self.output_size)

self.output_bias = np.zeros((1, self.output_size))

def sigmoid(self, x):

return 1 / (1 + np.exp(-x))

def sigmoid_derivative(self, x):

return x * (1 - x)

def forward_propagation(self, inputs):

# Forward pass through the network

self.hidden_layer_activation = np.dot(inputs, self.hidden_weights) + self.hidden_bias

self.hidden_layer_output = self.sigmoid(self.hidden_layer_activation)

self.output_layer_activation = np.dot(self.hidden_layer_output, self.output_weights) + self.output_bias

self.predicted_output = self.sigmoid(self.output_layer_activation)

return self.predicted_output

def backward_propagation(self, inputs, targets, learning_rate):

# Backpropagation to update weights and biases

error = targets - self.predicted_output

output_delta = error * self.sigmoid_derivative(self.predicted_output)

hidden_layer_error = output_delta.dot(self.output_weights.T)

hidden_layer_delta = hidden_layer_error * self.sigmoid_derivative(self.hidden_layer_output)

self.output_weights += self.hidden_layer_output.T.dot(output_delta) * learning_rate

self.output_bias += np.sum(output_delta, axis=0, keepdims=True) * learning_rate

self.hidden_weights += inputs.T.dot(hidden_layer_delta) * learning_rate

self.hidden_bias += np.sum(hidden_layer_delta, axis=0, keepdims=True) * learning_rate

def train(self, training_inputs, training_outputs, epochs, learning_rate):

for epoch in range(epochs):

self.forward_propagation(training_inputs)

self.backward_propagation(training_inputs, training_outputs, learning_rate)

def predict(self, inputs):

return self.forward_propagation(inputs)

# Training data for XOR gate

XOR_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

XOR_outputs = np.array([[0], [1], [1], [0]])

# Creating XORNeuralNetwork instance and training the network

xor_nn = XORNeuralNetwork()

xor_nn.train(XOR_inputs, XOR_outputs, epochs=10000, learning_rate=0.1)

# Testing the trained model

print("Predictions after training:")

for i in range(len(XOR_inputs)):

prediction = xor_nn.predict(XOR_inputs[i])

print(f"Input: {XOR_inputs[i]} Predicted Output: {prediction}")

OUTPUT:

Predictions after training:

Input: [0 0] Predicted Output: [[0.04112069]]

Input: [0 1] Predicted Output: [[0.95660477]]

Input: [1 0] Predicted Output: [[0.95843628]]

Input: [1 1] Predicted Output: [[0.03766493]]

INFERENCE

RESULT:

EX NO:4
IMPLEMENTATION OF SELF ORGANIZING MAPS FOR A
DATE: SPECIFIC APPLICATION

AIM:

The objective is to create a SOM-based model that effectively clusters and represents complex data in a lower
dimensional space, providing insights and visualization of the data’s underlying structure.

ALGORITHM:

Step 1. Initialize weights for each node in the grid with random values or small random samples from the
dataset.

Step 2. Define the learning rate (α) and the neighborhood radius (σ).

Step 3. For each input vector, find the node in the SOM grid whose weights are closest (most similar) to the
input vector.

Step 4. Compute the Euclidean distance or another similarity measure to identify the Best-Matching Unit (BMU).

Step 5. Adjust the weights of the BMU and its neighboring nodes based on the input vector and learning rate.

Step 6. Decrease the learning rate (α) and neighborhood radius (σ) over time to gradually refine the map.

Step 7. Iterate through the dataset for a defined number of epochs, updating the SOM weights based on the input vectors.
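
As an illustration of Steps 3 and 4, here is a minimal NumPy sketch of locating the BMU for one input vector (the array names and sizes are illustrative, not part of MiniSom):

import numpy as np

weights = np.random.rand(10, 10, 3)  # 10 x 10 grid of RGB weight vectors
sample = np.array([0.2, 0.5, 0.7])   # one input vector

# Euclidean distance from the sample to every node, then the argmin
distances = np.linalg.norm(weights - sample, axis=2)
bmu = np.unravel_index(np.argmin(distances), distances.shape)
print("BMU grid position:", bmu)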

PROGRAM:

import numpy as np

import matplotlib.pyplot as plt

from minisom import MiniSom

from PIL import Image

# Load an image and convert it to a NumPy array

image = Image.open(r"C:\Users\FOUZIYA BANU\Downloads\New folder\eiffeltower.jpg")  # Replace with your image path

image = image.resize((100, 100)) # Resize for faster processing

data = np.array(image)

data = data.reshape(-1, 3) # Reshape to an (N, 3) array of RGB pixel values

# Define SOM parameters

width = 10

height = 10

input_len = data.shape[1]

sigma = 1.0

learning_rate = 0.5

iterations = 10000

# Initialize SOM

som = MiniSom(width, height, input_len, sigma=sigma, learning_rate=learning_rate)

som.random_weights_init(data)

print("Training SOM…")

som.train_random(data, iterations)

# Map input pixels to their winning neurons; win_map returns a dict keyed
# by neuron coordinates, whose values are the samples mapped to that neuron
mapped = som.win_map(data)

# Create a new image based on the SOM's clusters: each mapped neuron
# contributes the mean colour of the pixels assigned to it
mapped_image = np.zeros((width * height, 3))

for i, samples in enumerate(mapped.values()):
    mapped_image[i] = np.mean(samples, axis=0)

mapped_image = mapped_image.reshape(width, height, 3).astype(np.uint8)

# Display the original and mapped images

fig, ax = plt.subplots(1, 2)

ax[0].imshow(image)

ax[0].set_title('Original Image')

ax[0].axis('off')

ax[1].imshow(mapped_image)

ax[1].set_title('SOM Mapped Image')

ax[1].axis('off')

plt.show()

OUTPUT:

INFERENCE:

RESULT:

EX NO:5
PROGRAMMING EXERCISE ON MAXIMIZING A FUNCTION
DATE: USING GENETIC ALGORITHM

AIM:

The objective is to create an evolutionary optimization technique capable of finding the global maximum of a
predefined function by evolving a population of potential solutions.

ALGORITHM:

Step 1. Initialize a random population of candidate solutions within the search range.

Step 2. Define a fitness function that evaluates the fitness (objective value) of each individual based on the given
function to be maximized.

Step 3. Evaluate the fitness of each individual in the population using the defined fitness function.

Step 4. Select individuals from the population for reproduction (the mating pool) based on their fitness.

Step 5. Perform crossover or recombination between selected individuals to create offspring.

Step 6. Apply mutation to some of the offspring individuals with a low probability to introduce diversity.

Step 7. Update the population and repeat from Step 3 until the stopping criterion is met.

PROGRAM:

import random

def fitness_function(x):

return x**2 + 6*x + 5

# Genetic Algorithm parameters

population_size = 100

mutation_rate = 0.1

num_generations = 100

# Define the range for x values

min_x = -10

max_x = 10

# Function to create an initial population

def create_initial_population(population_size):

return [random.uniform(min_x, max_x) for _ in range(population_size)]

# Function to calculate fitness scores for the population

def calculate_fitness(population):

return [fitness_function(x) for x in population]

# Function for tournament selection

def tournament_selection(population, fitness_scores):

selected = []

for _ in range(len(population)):

idx1, idx2 = random.sample(range(len(population)), 2)

if fitness_scores[idx1] > fitness_scores[idx2]:

selected.append(population[idx1])

else:

selected.append(population[idx2])

return selected

# Function for single-point crossover

def crossover(parent1, parent2):

# This function assumes parent1 and parent2 are single values,

# so crossover is not applicable. For single values, return them as they are.

return parent1, parent2
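
# Note: with real-valued, single-gene individuals there is no bit string to
# split, so this "crossover" is an identity operation; a common alternative
# (an option, not part of this exercise) is arithmetic/blend crossover,
# e.g. child = a * parent1 + (1 - a) * parent2 for some a in [0, 1].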

# Function for mutation

def mutate(individual):

if random.random() < mutation_rate:

return random.uniform(min_x, max_x)

return individual

# Main genetic algorithm

population = create_initial_population(population_size)

for generation in range(num_generations):

fitness_scores = calculate_fitness(population)

# Select parents

selected_parents = tournament_selection(population, fitness_scores)

# Perform crossover

new_population = []

for i in range(0, len(selected_parents), 2):

if i + 1 < len(selected_parents):

child1, child2 = crossover(selected_parents[i], selected_parents[i + 1])

new_population.extend([child1, child2])

# Mutate

population = [mutate(individual) for individual in new_population]

# Find the best individual in the final population

fitness_scores = calculate_fitness(population)

best_individual_idx = fitness_scores.index(max(fitness_scores))

best_individual = population[best_individual_idx]

# Output results

print(f"The value of x that maximizes the function is: {best_individual}")

print(f"The maximum value of the function is: {fitness_function(best_individual)}")

OUTPUT:

The value of x that maximizes the function is: 9.997310646215844

The maximum value of the function is: 164.93008403423573
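
As a quick check: $f(x) = x^2 + 6x + 5$ is increasing for $x > -3$, so on the range $[-10, 10]$ the true maximum lies at the boundary $x = 10$, where $f(10) = 100 + 60 + 5 = 165$; the GA's best individual $x \approx 9.997$ with $f \approx 164.93$ is consistent with this.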

INFERENCE:

RESULT:

EX NO:6
IMPLEMENTATION OF TWO INPUT SINE FUNCTION
DATE:

AIM: The objective is to create a neural network model that can learn and predict the sine function based on
two input variables.

ALGORITHM:

Step 1. Initialize Neural Network Weights and Biases.

Step 2. Split Dataset into Training and Validation Sets .

Step 3. Shuffle and iterate over the training dataset in batches.

Step 4. Calculate the loss/error between predicted and actual outputs.

Step 5. Backpropagate the error to update weights using optimization algorithms such as gradient descent or Adam.

Step 6. Validate the model’s performance on the validation set to monitor for overfitting.

Step 7. Stop training based on convergence criteria.
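
The program printed below evaluates math.sin directly. As a complement, here is a minimal sketch of the training loop the algorithm above describes, under the assumption that the target is $z = \sin(x + y)$ (the dataset, network size and hyperparameters are illustrative, not prescribed by the exercise):

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 input pairs (x, y); target z = sin(x + y), rescaled
# to [0, 1] so a sigmoid output unit can represent it
X = rng.uniform(-np.pi / 2, np.pi / 2, size=(200, 2))
T = (np.sin(X[:, :1] + X[:, 1:]) + 1) / 2

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Small 2-8-1 network trained with plain batch gradient descent
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
lr = 0.5

for epoch in range(20000):
    H = sigmoid(X @ W1 + b1)        # hidden activations
    Y = sigmoid(H @ W2 + b2)        # predictions
    dY = (T - Y) * Y * (1 - Y)      # output delta
    dH = (dY @ W2.T) * H * (1 - H)  # hidden delta
    W2 += lr * H.T @ dY / len(X)
    b2 += lr * dY.mean(axis=0, keepdims=True)
    W1 += lr * X.T @ dH / len(X)
    b1 += lr * dH.mean(axis=0, keepdims=True)

# Map predictions back from [0, 1] to [-1, 1] and report the fit
pred = 2 * sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - 1
print("mean absolute error:", np.abs(pred - np.sin(X[:, :1] + X[:, 1:])).mean())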

PROGRAM:

import math

# Function to calculate the sine of two inputs

def sine_of_two_inputs(x, y):

sin_x = math.sin(x)

sin_y = math.sin(y)

return sin_x, sin_y

# Taking input from the user

try:

x = float(input("Enter the first number: "))

y = float(input("Enter the second number: "))

# Calculate the sine of both inputs

sin_x, sin_y = sine_of_two_inputs(x, y)

# Output the results

print(f"Sine of {x} is: {sin_x}")

print(f"Sine of {y} is: {sin_y}")

except ValueError:

print("Please enter valid numbers.")

OUTPUT:

Enter the first number: 23

Enter the second number: 34

Sine of 23.0 is: -0.8462204041751706

Sine of 34.0 is: 0.5290826861200238

INFERENCE:

RESULT:

EX NO:7
IMPLEMENTATION OF THREE INPUT NON-LINEAR FUNCTION
DATE:

AIM: The objective is to create a neural network model that can learn and predict a non-linear function based
on three input variables.

ALGORITHM:

Step 1. Initialize weights and biases in the neural network (random initialization or predefined values).

Step 2. Split the generated dataset into training and validation sets for model evaluation.

Step 3. Shuffle and iterate over the training dataset in batches.

Step 4. Forward propagate input through the network to get predictions.

Step 5. Calculate the loss/error between predicted and actual outputs.

Step 6. Backpropagate the error to update weights using optimization algorithms such as gradient descent or Adam.

Step 7. Validate the model’s performance on the validation set to monitor for overfitting.

Step 8. Terminate training based on convergence criteria.

PROGRAM:

# Function to calculate the non-linear function

def non_linear_function(x, y, z):

result = x**2 + y**3 - z**4

return result

# Taking input from the user

try:

x = float(input("Enter the first number (x): "))

y = float(input("Enter the second number (y): "))

z = float(input("Enter the third number (z): "))

# Calculate the non-linear function

result = non_linear_function(x, y, z)

# Output the result

print(f"Result of the non-linear function for inputs ({x}, {y}, {z}) is: {result}")

except ValueError:

print("Please enter valid numbers.")

OUTPUT:

Enter the first number (x): 12

Enter the second number (y): 23

Enter the third number (z): 34

Result of the non-linear function for inputs (12.0, 23.0, 34.0) is: -1324025.0
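
As a quick check: $12^2 + 23^3 - 34^4 = 144 + 12167 - 1336336 = -1324025$, which matches the program output.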

INFERENCE:

RESULT:
