Soft Computing Lab Manual
NAME :
REGISTER NO :
ROLL NO :
YEAR/SEMESTER :
VISION
• To promote a centre of excellence through effective teaching and learning, imparting contemporary, knowledge-centric education through innovative research in multidisciplinary fields.
MISSION
• To impart quality technical skills through practice and continual updating of knowledge in recent technologies, and to produce professionals with multidisciplinary and leadership skills.
• To promote innovative thinking for the design and development of intelligent software products of varying complexity that meet global standards and demands.
• To inculcate professional ethics among graduates and to adapt to changing technologies through lifelong learning.
An Autonomous Institution
Approved by AICTE, Affiliated to Anna University, Chennai.
ISO 9001:2015 Certified Institution, accredited by NBA (BME, CSE, ECE, EEE, IT & MECH), Accredited by NAAC.
#42, Avadi-Vel Tech Road, Avadi, Chennai- 600062, Tamil Nadu, India.
CERTIFICATE
Name: ………………….…………………………….………….….………………………….…
Year: ……………… Semester: ……... Department: B. Tech - Artificial Intelligence & Data Science
Certified that this is the bonafide record of work done by the above student in the 191ITVI8-SOFT
COMPUTING [LAB INTEGRATED] during the academic year 2024-2025.
Submitted for the University Practical Examination held on ………………... at VEL TECH MULTI
TECH Dr.Rangarajan Dr.Sagunthala ENGINEERING COLLEGE, #42,AVADI – VEL TECH
ROAD, AVADI,CHENNAI- 600062.
Signature of Examiners
Date:………
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
POs Programme Outcomes (POs)
PO1 Engineering Knowledge: Apply knowledge of mathematics, science, engineering fundamentals and an
Engineering Specialization to the solution of complex engineering problems.
PO2 Problem Analysis: Identify, formulate, review research literature and analyse complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3 Design / Development of solutions: Design solutions for complex engineering problems and design system
components or processes that meet specified needs with appropriate consideration for public health and safety,
cultural, societal, and environmental considerations.
PO4 Conduct Investigations of Complex Problems: Use research-based knowledge and research methods including
design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid
conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools
including prediction and modelling to complex engineering activities with an understanding of the limitations.
PO6 The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety,
legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
PO7 Environment and sustainability: Understand the impact of the professional engineering solutions in societal and
environmental contexts, and demonstrate the knowledge of, and need for sustainable development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering
practice.
PO9 Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and
in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with the engineering community and
with society at large, such as, being able to comprehend and write effective reports and design documentation, make
effective presentations, and give and receive clear instructions.
PO11 Project Management and Finance: Demonstrate knowledge and understanding of the engineering and management
principles and apply these to one’s own work, as a member and leader in a team, to manage projects and in
multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and lifelong
learning in the broadest context of technological change.
COURSE OBJECTIVES
COURSE OUTCOMES
At the end of the course, the student should be able to
CO1 Learn the fundamentals of fuzzy logic operators and inference mechanisms.
CO2 Illustrate the mechanism of neural network architecture for AI applications such as classification
and clustering.
CO3 Apply the functionality of Genetic Algorithms in Optimization problems.
CO4 Implement hybrid techniques involving Neural networks and Fuzzy logic.
CO5 Apply soft computing techniques in real world applications.
LIST OF EXPERIMENTS
1. Implementation of fuzzy controller (CO1)
2. Programming exercise on classification with a discrete perceptron (CO2)
3. Implementation of XOR with back propagation algorithm (CO2)
4. Implementation of self-organizing maps for a specific application (CO2)
5. Programming exercise on maximizing a function using genetic algorithm (CO3)
6. Implementation of two input sine function (CO4)
7. Implementation of three input nonlinear function (CO5)
EX NO:1
IMPLEMENTATION OF FUZZY CONTROLLER
DATE:
AIM:
To implement a fuzzy controller that makes decisions based on fuzzy logic rules and membership functions.
ALGORITHM:
Step 1: Define the input and output variables used for control and decision making.
Step 2: For each input and output variable, create membership functions that define their linguistic range.
Step 3: Define rules that connect combinations of inputs’ membership functions to outputs’ membership
functions.
Step 4: Convert crisp inputs (real-world values) into fuzzy sets based on the defined membership functions.
Step 5: Use the rules to infer the appropriate output membership functions based on the fuzzified inputs.
Step 6: Convert the fuzzy output back to a crisp value for the actual control action.
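For example, the triangular membership function defined by the points [0, 0, 50] used in the program below assigns a crisp value of 20 the membership degree $(50 - 20)/50 = 0.6$ in the corresponding fuzzy set; this is how Step 4 converts real-world readings into fuzzy values.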
PROGRAM:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

acceleration['decelerate'] = fuzz.trimf(acceleration.universe, [0, 0, 50])
# Control system
acceleration_simulation = ctrl.ControlSystemSimulation(acceleration_ctrl)
acceleration_simulation.compute()
distance.view()
speed.view()
acceleration.view()
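Only a fragment of the listing is preserved above; the variable definitions and the rule base are missing. A minimal self-contained sketch using scikit-fuzzy is given below; the 0–100 universes, the three-term membership sets, the rule base and the test inputs (distance 30, speed 60) are illustrative assumptions, not the original code.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Fuzzy variables on an assumed 0-100 universe
distance = ctrl.Antecedent(np.arange(0, 101, 1), 'distance')
speed = ctrl.Antecedent(np.arange(0, 101, 1), 'speed')
acceleration = ctrl.Consequent(np.arange(0, 101, 1), 'acceleration')

# Membership functions (Step 2)
distance.automf(3)   # 'poor', 'average', 'good'
speed.automf(3)
acceleration['decelerate'] = fuzz.trimf(acceleration.universe, [0, 0, 50])
acceleration['maintain'] = fuzz.trimf(acceleration.universe, [25, 50, 75])
acceleration['accelerate'] = fuzz.trimf(acceleration.universe, [50, 100, 100])

# Illustrative rule base (Step 3)
rule1 = ctrl.Rule(distance['poor'] | speed['good'], acceleration['decelerate'])
rule2 = ctrl.Rule(distance['average'], acceleration['maintain'])
rule3 = ctrl.Rule(distance['good'] & speed['poor'], acceleration['accelerate'])

# Control system: fuzzification, inference and defuzzification (Steps 4-6)
acceleration_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
acceleration_simulation = ctrl.ControlSystemSimulation(acceleration_ctrl)
acceleration_simulation.input['distance'] = 30
acceleration_simulation.input['speed'] = 60
acceleration_simulation.compute()
print("Crisp acceleration:", acceleration_simulation.output['acceleration'])

distance.view()
speed.view()
acceleration.view()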
OUTPUT:
Figures 1–3: membership function plots for the distance, speed and acceleration variables.
INFERENCE:
RESULT:
EX NO:2
PROGRAMMING EXERCISE ON CLASSIFICATION WITH A DISCRETE PERCEPTRON
DATE:
AIM:
To implement a discrete perceptron that learns a binary classification task (here, the logical AND function) using the step activation function and the perceptron learning rule.
ALGORITHM:
Step 1: Initialize the weights and bias to zero (or small random values) and choose a learning rate.
Step 2: Iterate through the training dataset for a fixed number of epochs.
Step 3: Input the features (x) of the data point to the perceptron.
• Calculate the weighted sum of inputs: $\text{weighted\_sum} = \sum_{i=1}^{n} (w_i \times x_i) + b$.
• Apply the step function (discrete activation): $\text{output} = \begin{cases} 1 & \text{if } \text{weighted\_sum} \ge 0 \\ 0 & \text{otherwise} \end{cases}$
• Adjust the weights: $w_i = w_i + \text{learning\_rate} \times (\text{expected} - \text{output}) \times x_i$ for all $i$ features.
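For example, starting from $w = (0, 0)$, $b = 0$ with an assumed learning rate of 0.1, the training point $x = (0, 1)$ with expected label 0 gives weighted_sum $= 0$, so the step function outputs 1; the error is $0 - 1 = -1$, and the update leaves $w_1$ unchanged, sets $w_2 = 0 + 0.1 \times (-1) \times 1 = -0.1$, and lowers the bias to $-0.1$.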
PROGRAM:
class DiscretePerceptron:
    def __init__(self, input_size, learning_rate=0.1):   # learning rate assumed
        self.weights = [0.0] * input_size
        self.bias = 0
        self.learning_rate = learning_rate

    def predict(self, inputs):
        # Weighted sum of the inputs plus the bias, followed by the step activation
        activation = self.bias
        for i in range(len(inputs)):
            activation += self.weights[i] * inputs[i]
        return 1 if activation >= 0 else 0

    def train(self, training_inputs, labels, epochs=10):   # epoch count assumed
        for _ in range(epochs):
            for inputs, expected in zip(training_inputs, labels):
                prediction = self.predict(inputs)
                error = expected - prediction
                # Perceptron learning rule: w_i = w_i + learning_rate * error * x_i
                for i in range(len(self.weights)):
                    self.weights[i] += self.learning_rate * error * inputs[i]
                self.bias += self.learning_rate * error

    def evaluate(self, inputs, labels):
        correct = 0
        for i in range(len(inputs)):
            prediction = self.predict(inputs[i])
            if prediction == labels[i]:
                correct += 1
        return correct / len(inputs)

training_inputs = [
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
]
labels = [0, 0, 0, 1]   # logical AND

perceptron = DiscretePerceptron(input_size=2)
perceptron.train(training_inputs, labels)
print("Training accuracy:", perceptron.evaluate(training_inputs, labels))

test_inputs = [
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
]
for test_input in test_inputs:
    prediction = perceptron.predict(test_input)
    print(test_input, "->", prediction)
OUTPUT:
INFERENCE:
RESULT:
EX NO:3
IMPLEMENTATION OF XOR WITH BACK PROPAGATION ALGORITHM
DATE:
AIM:
The goal is to create a neural network capable of learning and predicting the XOR function’s outputs based on
given inputs.
ALGORITHM:
Step 1. Initialize the network weights and biases with random values.
Step 2. Define the XOR truth table dataset containing input-output pairs.
Step 3. Perform forward propagation:
• Calculate the weighted sum of inputs and apply the activation function for the hidden layer(s) and output layer.
Step 4. Compute the error between the predicted and expected outputs and back propagate it to update the weights and biases.
Step 5. Repeat for a fixed number of epochs, then use the trained network to predict the XOR outputs.
PROGRAM:
import numpy as np
class XORNeuralNetwork:
def __init__(self):
self.input_size = 2
self.hidden_size = 4
self.output_size = 1
return 1 / (1 + np.exp(-x))
return x * (1 - x)
self.hidden_layer_output = self.sigmoid(self.hidden_layer_activation)
self.predicted_output = self.sigmoid(self.output_layer_activation)
return self.predicted_output
hidden_layer_error = output_delta.dot(self.output_weights.T)
self.forward_propagation(training_inputs)
return self.forward_propagation(inputs)
Xor_nn = XORNeuralNetwork()
for i in range(len(XOR_inputs)):
prediction = Xor_nn.predict(XOR_inputs[i])
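Only scattered lines of the class survive above. A complete minimal sketch consistent with those fragments (input size 2, hidden size 4, sigmoid activations) is given below; the uniform weight initialization, the learning rate of 0.1 and the 10,000 training epochs are assumed values, not the original code.

import numpy as np

class XORNeuralNetwork:
    def __init__(self):
        self.input_size = 2
        self.hidden_size = 4
        self.output_size = 1
        # Random weight initialization (Step 1)
        self.hidden_weights = np.random.uniform(size=(self.input_size, self.hidden_size))
        self.hidden_bias = np.random.uniform(size=(1, self.hidden_size))
        self.output_weights = np.random.uniform(size=(self.hidden_size, self.output_size))
        self.output_bias = np.random.uniform(size=(1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # Derivative expressed in terms of the sigmoid output
        return x * (1 - x)

    def forward_propagation(self, inputs):
        self.hidden_layer_activation = np.dot(inputs, self.hidden_weights) + self.hidden_bias
        self.hidden_layer_output = self.sigmoid(self.hidden_layer_activation)
        self.output_layer_activation = np.dot(self.hidden_layer_output, self.output_weights) + self.output_bias
        self.predicted_output = self.sigmoid(self.output_layer_activation)
        return self.predicted_output

    def backward_propagation(self, inputs, expected, learning_rate):
        output_error = expected - self.predicted_output
        output_delta = output_error * self.sigmoid_derivative(self.predicted_output)
        hidden_layer_error = output_delta.dot(self.output_weights.T)
        hidden_delta = hidden_layer_error * self.sigmoid_derivative(self.hidden_layer_output)
        # Gradient-descent weight updates
        self.output_weights += self.hidden_layer_output.T.dot(output_delta) * learning_rate
        self.output_bias += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
        self.hidden_weights += inputs.T.dot(hidden_delta) * learning_rate
        self.hidden_bias += np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate

    def train(self, training_inputs, training_outputs, epochs=10000, learning_rate=0.1):
        for _ in range(epochs):
            self.forward_propagation(training_inputs)
            self.backward_propagation(training_inputs, training_outputs, learning_rate)

    def predict(self, inputs):
        return self.forward_propagation(inputs)

XOR_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
XOR_outputs = np.array([[0], [1], [1], [0]])

Xor_nn = XORNeuralNetwork()
Xor_nn.train(XOR_inputs, XOR_outputs)
for i in range(len(XOR_inputs)):
    prediction = Xor_nn.predict(XOR_inputs[i])
    print(XOR_inputs[i], "->", round(float(prediction[0][0])))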
OUTPUT:
INFERENCE:
RESULT:
EX NO:4
IMPLEMENTATION OF SELF-ORGANIZING MAPS FOR A SPECIFIC APPLICATION
DATE:
AIM:
The objective is to create a SOM-based model that effectively clusters and represents complex data in a lower
dimensional space, providing insights and visualization of the data’s underlying structure.
ALGORITHM:
Step 1. Initialize weights for each node in the grid with random values or small random samples from the
dataset.
Step 3. For each input vector, find the node in the SOM grid whose weights are closest (most similar) to the
input vector.
Step 4. Compute the Euclidean distance or another similarity measure to identify the Best-Matching Unit (BMU).
Step 5. Adjust the weights of the BMU and its neighboring nodes based on the input vector and learning rate.
Step 6. Decrease the learning rate (α) and the neighborhood radius (r) over time to gradually refine the map.
Step 7. Iterate through the dataset for a defined number of epochs, updating the SOM weights based on input
vectors
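For reference, the update in Steps 5 and 6 is the standard SOM learning rule (stated here for clarity; it is not spelled out in the original text): $w_j(t+1) = w_j(t) + \alpha(t)\, h_{j,b}(t)\,\big(x(t) - w_j(t)\big)$, where $b$ is the BMU, $h_{j,b}(t) = \exp\!\big(-d^2(j,b)/2\sigma(t)^2\big)$ is the neighborhood function, and $\alpha(t)$ and $\sigma(t)$ decay over time.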
PROGRAM:
import numpy as np
data = np.array(image)
width = 10
height = 10
input_len = data.shape[1]
sigma = 1.0
learning_rate = 0.5
iterations = 10000
# Initialize SOM
som.random_weights_init(data)
print("Training SOM…")
som.train_random(data, iterations)
# Get the SOM’s weights and map input data to their closest neurons
mapped = som.win_map(data)
for i, x in enumerate(mapped):
fig, ax = plt.subplots(1, 2)
ax[0].imshow(image)
ax[0].set_title('Original Image')
ax[0].axis('off')
ax[1].imshow(mapped_image)
ax[1].axis('off')
plt.show()
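The listing above omits the image loading, the MiniSom construction and the code that builds mapped_image. A self-contained sketch of the same idea, colour quantization of an image with a 10×10 SOM, is shown below; the use of Pillow, the placeholder file name sample.jpg and the reconstruction via som.quantization are assumptions rather than the original code.

import numpy as np
import matplotlib.pyplot as plt
from minisom import MiniSom
from PIL import Image

# Load the image and flatten it into a list of RGB pixels in [0, 1]
image = np.asarray(Image.open('sample.jpg'), dtype=float) / 255.0   # hypothetical file name
pixels = image.reshape(-1, 3)

# SOM parameters from the fragment above (input_len is 3 for RGB pixels)
width, height, input_len = 10, 10, 3
sigma, learning_rate, iterations = 1.0, 0.5, 10000

som = MiniSom(width, height, input_len, sigma=sigma, learning_rate=learning_rate)
som.random_weights_init(pixels)
print("Training SOM...")
som.train_random(pixels, iterations)

# Replace every pixel by the weight vector of its best-matching unit
quantized = som.quantization(pixels)
mapped_image = quantized.reshape(image.shape)

fig, ax = plt.subplots(1, 2)
ax[0].imshow(image)
ax[0].set_title('Original Image')
ax[0].axis('off')
ax[1].imshow(mapped_image)
ax[1].set_title('SOM Quantized Image')
ax[1].axis('off')
plt.show()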
OUTPUT:
INFERENCE:
RESULT:
EX NO:5
PROGRAMMING EXERCISE ON MAXIMIZING A FUNCTION USING GENETIC ALGORITHM
DATE:
AIM:
The objective is to create an evolutionary optimization technique capable of finding the global maximum of a
predefined function by evolving a population of potential solutions.
ALGORITHM:
Step 1. Randomly generate an initial population of candidate solutions within the search range [min_x, max_x].
Step 2. Define a fitness function that evaluates the fitness (objective value) of each individual based on the given
function to be maximized.
Step 3. Evaluate the fitness of each individual in the population using the defined fitness function.
Step 4. Select individuals from the population for reproduction (mating pool) based on their fitness.
Step 5. Apply crossover to pairs of selected parents to produce offspring.
Step 6. Apply mutation to some of the offspring individuals with a low probability to introduce diversity.
Step 7. Repeat the evaluation, selection, crossover and mutation steps for a fixed number of generations and report the best individual found.
PROGRAM:
def fitness_function(x):
population_size = 100
mutation_rate = 0.1
num_generations = 100
min_x = -10
max_x = 10
# Function to create an initial population
def create_initial_population(population_size):
def calculate_fitness(population):
selected = []
for _ in range(len(population)):
selected.append(population[idx1])
else:
selected.append(population[idx2])
return selected
# so crossover is not applicable. For single values, return them as they are.
def mutate(individual):
return individual
population = create_initial_population(population_size)
for generation in range(num_generations):
fitness_scores = calculate_fitness(population)
# Select parents
# Perform crossover
new_population = []
if i + 1 < len(selected_parents):
new_population.extend([child1, child2])
# Mutate
fitness_scores = calculate_fitness(population)
best_individual_idx = fitness_scores.index(max(fitness_scores))
best_individual = population[best_individual_idx]
# Output results
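The listing above is fragmentary: the fitness function body, the selection indices, the crossover loop and the mutation logic are missing. A minimal working sketch with the same structure is given below; the objective f(x) = -x² + 5x + 10, the binary tournament selection and the Gaussian mutation are illustrative assumptions (the comment preserved above indicates that crossover simply returns single-valued individuals unchanged).

import random

# Illustrative objective to maximize (the original function is not preserved)
def fitness_function(x):
    return -x ** 2 + 5 * x + 10

population_size = 100
mutation_rate = 0.1
num_generations = 100
min_x = -10
max_x = 10

def create_initial_population(population_size):
    return [random.uniform(min_x, max_x) for _ in range(population_size)]

def calculate_fitness(population):
    return [fitness_function(x) for x in population]

def select_parents(population, fitness_scores):
    # Binary tournament selection, as suggested by the fragment above
    selected = []
    for _ in range(len(population)):
        idx1, idx2 = random.randrange(len(population)), random.randrange(len(population))
        if fitness_scores[idx1] > fitness_scores[idx2]:
            selected.append(population[idx1])
        else:
            selected.append(population[idx2])
    return selected

# Individuals are single real values, so crossover returns them as they are
def crossover(parent1, parent2):
    return parent1, parent2

def mutate(individual):
    if random.random() < mutation_rate:
        individual += random.gauss(0, 1)                 # assumed Gaussian perturbation
        individual = max(min_x, min(max_x, individual))  # keep inside the search range
    return individual

population = create_initial_population(population_size)
for generation in range(num_generations):
    fitness_scores = calculate_fitness(population)
    selected_parents = select_parents(population, fitness_scores)
    new_population = []
    for i in range(0, len(selected_parents), 2):
        if i + 1 < len(selected_parents):
            child1, child2 = crossover(selected_parents[i], selected_parents[i + 1])
            new_population.extend([child1, child2])
        else:
            new_population.append(selected_parents[i])
    population = [mutate(ind) for ind in new_population]

fitness_scores = calculate_fitness(population)
best_individual_idx = fitness_scores.index(max(fitness_scores))
best_individual = population[best_individual_idx]
print("Best x:", best_individual, "f(x):", fitness_function(best_individual))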
OUTPUT:
INFERENCE:
RESULT:
EX NO:6
IMPLEMENTATION OF TWO INPUT SINE FUNCTION
DATE:
AIM: The objective is to create a neural network model that can learn and predict the sine function based on
two input variables
ALGORITHM:
Step 1. Initialize weights and biases in the neural network (random initialization or predefined values).
Step 2. Generate a dataset of two-input samples with their sine outputs and split it into training and validation sets.
Step 3. Perform forward propagation to obtain predicted outputs.
Step 4. Calculate loss/error between predicted and actual outputs.
Step 5. Back propagate the error to update weights using optimization algorithms like gradient descent or Adam.
Step 6. Validate the model’s performance on the validation set to monitor for overfitting.
PROGRAM:
import math

try:
    # Read the two inputs (prompt text assumed)
    x = float(input("Enter the first input x: "))
    y = float(input("Enter the second input y: "))
    sin_x = math.sin(x)
    sin_y = math.sin(y)
    # Output the results
    print(f"sin({x}) = {sin_x}, sin({y}) = {sin_y}")
except ValueError:
    print("Please enter valid numbers.")
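The program above evaluates the sine of each input directly rather than training a model. A brief sketch of the neural-network approach described in the algorithm, assuming scikit-learn's MLPRegressor and the illustrative target z = sin(x)·sin(y), could look like this:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Generate a dataset for the assumed target z = sin(x) * sin(y)
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(2000, 2))
z = np.sin(X[:, 0]) * np.sin(X[:, 1])

X_train, X_val, z_train, z_val = train_test_split(X, z, test_size=0.2, random_state=0)

# Two hidden layers trained with the Adam optimizer (Step 5)
model = MLPRegressor(hidden_layer_sizes=(32, 32), activation='tanh',
                     solver='adam', max_iter=2000, random_state=0)
model.fit(X_train, z_train)

print("Validation R^2:", model.score(X_val, z_val))        # Step 6: check for overfitting
print("Prediction at (1.0, 2.0):", model.predict([[1.0, 2.0]]))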
OUTPUT:
INFERENCE:
RESULT:
EX NO:7
IMPLEMENTATION OF THREE INPUT NON-LINEAR FUNCTION
DATE:
AIM: The objective is to create a model that can learn and predict a non-linear function based on three input variables.
ALGORITHM:
Step 1.Initialize weights and biases in the neural network (random initialization or predefined values).
Step 2. Split the generated dataset into training and validation sets for model evaluation.
Step 5. Calculate loss/error between predicted and actual outputs.
Step 6. Backpropagate the error to update weights using optimization algorithms like gradient descent or Adam.
Step 7. Validate the model’s performance on the validation set to monitor for overfitting.
PROGRAM:
def non_linear_function(x, y, z):
    # Non-linear combination x**2 + y**3 - z**4 (consistent with the sample output below)
    result = x ** 2 + y ** 3 - z ** 4
    return result

try:
    x, y, z = (float(v) for v in input("Enter x, y, z: ").split())
    result = non_linear_function(x, y, z)
    print(f"Result of the non-linear function for inputs ({x}, {y}, {z}) is: {result}")
except ValueError:
    print("Please enter valid numbers.")
OUTPUT:
Result of the non-linear function for inputs (12.0, 23.0, 34.0) is: -1324025.0
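This value is consistent with the combination used in the program above: $12^2 + 23^3 - 34^4 = 144 + 12167 - 1336336 = -1324025$.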
INFERENCE:
RESULT: