
Program : B.Tech.

Regulations : R20
Year & Sem : IV-I

Python: Deep Learning
Lab Manual

Department of Information Technology


GIET ENGINEERING COLLEGE
Approved by AICTE New Delhi, Affiliated to JNTUK, Accredited by NAAC and ISO:2015 Certified.
NH-16, GIET Campus, Velugubanda (V), Rajanagaram (M), Rajamahendravaram-533296, AP, India.
Vision and Mission of the Institute
Vision

To evolve as a Premier Engineering Institution in the Country through its continual pursuit of
excellence in Education, Research and Technological Services.

Mission

• To nurture a community of highly learned students who acquire knowledge deeply and apply it
professionally, with due consideration for ecological, economic and ethical issues.
• To provide knowledge-based technological services to the best satisfaction of society and for
industrial needs.
• To foster research and disseminate research findings.
• To build capabilities for advancing education, technology, values, management and research at
international standards.
Vision and Mission of the Department

VISION

To explore innovative approaches to enhance and expand learning opportunities through the
integration of various technologies.
To build a strong research and teaching environment that responds to the real-time challenges of
the industry.

MISSION

To inculcate a blend of competence, aptitude, knowledge and investigative flair by devising an
ambient environment for sustainable learning.
To transform attitudes, values and priorities by changing mindsets and instilling a positive outlook
for socially conscious intellectual development.
To incubate, apply and spread innovative ideas to evolve the department as a centre of excellence in
thrust areas.
PROGRAM EDUCATIONAL OBJECTIVES (PEOs)

PEO 1 Provide a strong foundation required to comprehend, analyse, design and develop solutions to
real world computing problems.
PEO 2 Expose the students to industry practices for providing computing solutions using current models
and techniques.
PEO 3 Enable the students to pursue higher studies and active research.

PEO 4 Foster sustained professional development through life-long learning to adapt to new computing
technologies.
POs and PSOs

Program Outcomes (POs)


PO1. Engineering Knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
PO2. Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO3. Design/Development of Solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4. Conduct Investigations of Complex Problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of
the information to provide valid conclusions.
PO5. Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modelling to complex engineering activities
with an understanding of the limitations.
PO6. The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.
PO7. Environment and Sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9. Individual and Team Work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
PO11. Project Management and Finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
PO12. Life-long Learning: Recognize the need for, and have the preparation and ability to engage in
independent and lifelong learning in the broadest context of technological change.
PROGRAM SPECIFIC OUTCOMES (PSOs)

PSO1 Ability to apply their skills in the field of algorithms, networking, web design, cloud computing
and databases.
PSO2 Ability to develop and deploy software solutions for real world problems.

PSO3 Gain knowledge in diverse areas of Computer Science and experience an environment conducive
to cultivating skills for a successful career, entrepreneurship, research and higher studies.
Syllabus
List of Experiments
1. Build a Convolutional Neural Network for Image Recognition
2. Design Artificial Neural Networks for Identifying and Classifying an Actor using a Kaggle Dataset
3. Design a CNN for Image Recognition which includes Hyperparameter Tuning
4. Implement a Recurrent Neural Network for Predicting Sequential Data
5. Implement the Multi-Layer Perceptron algorithm for Image Denoising with Hyperparameter Tuning
6. Implement Object Detection using YOLO
7. Design a Deep Learning Network for Robust Bi-Tempered Logistic Loss
8. Build AlexNet using Advanced CNN
9. Demonstration of Applications of Autoencoders
10. Demonstration of GAN
11. Capstone Project-I
12. Capstone Project-II

Reference Books:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.

Web Links: [Courses mapped to Infosys Springboard platform]


1. https://infyspringboard.onwingspan.com/en/app/toc/lex_auth_012782105116811264219_shared/contents [Introduction to Deep Learning]
2. https://infyspringboard.onwingspan.com/web/en/viewer/webmodule/lex_auth_013119291805696000651_shared [Deep Learning for Developers]
Course Handout/ Lesson Plan

S.No.  Sub Topic Name                                                                                       Teaching Aid  No. of Classes Required
1      Build a Convolutional Neural Network for Image Recognition                                           Hands on      3
2      Design Artificial Neural Networks for Identifying and Classifying an Actor using a Kaggle Dataset    Hands on      3
3      Design a CNN for Image Recognition which includes Hyperparameter Tuning                              Hands on      3
4      Implement a Recurrent Neural Network for Predicting Sequential Data                                  Hands on      3
5      Implement the Multi-Layer Perceptron algorithm for Image Denoising with Hyperparameter Tuning        Hands on      3
6      Implement Object Detection using YOLO                                                                Hands on      3
7      Design a Deep Learning Network for Robust Bi-Tempered Logistic Loss                                  Hands on      3
8      Build AlexNet using Advanced CNN                                                                     Hands on      3
9      Demonstration of Applications of Autoencoders                                                        Hands on      3
10     Demonstration of GAN                                                                                 Hands on      3
11     Capstone Project-I                                                                                   Hands on      3
12     Capstone Project-II                                                                                  Hands on      3

Total No. of Hours: 36


List of Experiments

S.No.  Name of the Experiment
1      Build a Convolutional Neural Network for Image Recognition
2      Design Artificial Neural Networks for Identifying and Classifying an Actor using a Kaggle Dataset
3      Design a CNN for Image Recognition which includes Hyperparameter Tuning
4      Implement a Recurrent Neural Network for Predicting Sequential Data
5      Implement the Multi-Layer Perceptron algorithm for Image Denoising with Hyperparameter Tuning
6      Implement Object Detection using YOLO
7      Design a Deep Learning Network for Robust Bi-Tempered Logistic Loss
8      Build AlexNet using Advanced CNN
9      Demonstration of Applications of Autoencoders
10     Demonstration of GAN
11     Capstone Project-I
12     Capstone Project-II
Hardware and Software Configuration

Experimental Environment   Configuration              Instructions
Hardware Environment       CPU                        Intel® Core™ i7-6700 CPU, 4 GHz
                           GPU                        Nvidia GTX 750, 4 GB
                           Memory                     8 GB
Software Environment       Operating System           Ubuntu 14.04, 64-bit
                           Programming Environment    TensorFlow deep learning framework and Python language
CNN For Image Recognition

Exp No.:
Date:
AIM: Build a convolutional neural network for image recognition.
PROBLEM DESCRIPTION:
For image recognition, a popular deep learning framework such as TensorFlow is used. In this
example, we use the Fashion MNIST dataset, which contains grayscale images of clothing
items.
Please note that building a neural network involves a lot of hyperparameter tuning and
experimentation to achieve optimal results. This example is meant to provide a starting point.
SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.utils import to_categorical

# Load and preprocess the dataset
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Normalize pixel values to be between 0 and 1
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0

# Convert labels to one-hot encoded format
num_classes = 10
train_labels = to_categorical(train_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)

# Build the neural network model
model = Sequential([
    Flatten(input_shape=(28, 28)),            # Flatten the 28x28 images to a 1D array
    Dense(128, activation='relu'),            # Fully connected layer with 128 units and ReLU activation
    Dense(64, activation='relu'),             # Fully connected layer with 64 units and ReLU activation
    Dense(num_classes, activation='softmax')  # Output layer with softmax activation
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
batch_size = 64
epochs = 10
model.fit(train_images, train_labels, batch_size=batch_size, epochs=epochs,
          validation_split=0.2)

# Evaluate the model on the test data
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)

SAMPLE INPUT AND OUTPUT:

This code builds a simple feedforward neural network using TensorFlow and the
Keras API. The network consists of three fully connected (dense) layers. The Flatten layer
flattens the 28x28 input images into a 1D array, and the subsequent dense layers process the
flattened data. The final output layer has 10 units (equal to the number of classes in Fashion
MNIST) with softmax activation for multi-class classification.


Remember that this is a basic example, and you can further enhance the model by
experimenting with different architectures, activation functions, optimizers, regularization
techniques, and hyperparameters.
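
Since the experiment title calls for a convolutional network, a minimal convolutional variant is
sketched below. It is an illustrative extension of the program above, reusing its Sequential, Dense,
Flatten and num_classes definitions; the filter counts and kernel sizes are arbitrary choices, not
prescribed by this manual.

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Reshape

cnn = Sequential([
    Reshape((28, 28, 1), input_shape=(28, 28)),  # add a channel axis so Conv2D can operate
    Conv2D(32, (3, 3), activation='relu'),       # learn 32 local feature maps
    MaxPooling2D((2, 2)),                        # downsample by a factor of 2
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(num_classes, activation='softmax')
])
cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])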

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.



ANN For Identifying And Classifying An Actor Using A Kaggle Dataset
Exp No.:
Date:
AIM: Design Artificial Neural Networks for Identifying and Classifying an actor using
Kaggle Dataset.
PROBLEM DESCRIPTION:
Designing an Artificial Neural Network (ANN) to identify and classify actors using a Kaggle
dataset involves several steps, including data preprocessing, model architecture design,
training, and evaluation. Here, I'll provide you with a general guide on how to approach this
task.
Assuming you have a Kaggle dataset of actor images labeled with their names, and you want
to build a classification model to identify and classify actors:

Dataset Preparation:
Download the actor dataset from Kaggle and unzip it if necessary.
Organize your dataset into train and test folders, where each actor's images are stored in
separate subfolders named after the actors.
Data Preprocessing:
Load and preprocess the images using libraries like TensorFlow or Keras.
Resize images to a consistent size (e.g., 224x224) to feed into the neural network.
Normalize pixel values to be between 0 and 1.
Data Augmentation:
Use data augmentation techniques to increase the diversity of your training data. This can
help improve the model's generalization.
Techniques may include random rotation, resizing, flipping, and more.
Build the Neural Network Model:
Choose a suitable pre-trained model as the base architecture. Common choices are VGG16,
ResNet, or Inception.
Customize the model's output layer to match the number of actor classes you want to classify.
Freeze the weights of the pre-trained layers to avoid overfitting on limited data.


SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout

# Define data paths
train_data_dir = 'path/to/train/data'
test_data_dir = 'path/to/test/data'
num_classes = 10  # set this to the number of actors in your dataset

# Data preprocessing and augmentation
train_datagen = ImageDataGenerator(
    rescale=1.0/255.0,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

test_datagen = ImageDataGenerator(rescale=1.0/255.0)

# Load VGG16 model (pre-trained on ImageNet)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Build your classification head
model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Data generators
batch_size = 32
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical'
)

test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical'
)

# Train the model
epochs = 10
model.fit(train_generator, epochs=epochs, validation_data=test_generator)

# Evaluate the model
test_loss, test_acc = model.evaluate(test_generator)
print('Test accuracy:', test_acc)

SAMPLE INPUT AND OUTPUT:

Found 2000 images belonging to 10 classes in the training dataset.
Found 500 images belonging to 10 classes in the test dataset.
Epoch 1/10
100/100 [========================>.....] - ETA: 35s - loss: 1.2345 - accuracy: 0.5678
Epoch 2/10
100/100 [========================>.....] - ETA: 30s - loss: 0.9876 - accuracy: 0.6789
...
Epoch 10/10
100/100 [========================>.....] - ETA: 5s - loss: 0.5432 - accuracy: 0.8123
50/50 [========================>.....] - ETA: 10s - loss: 0.6543 - accuracy: 0.7456
Test accuracy: 0.7456
Remember to adjust paths, model architecture, hyperparameters, and other settings according
to your dataset and requirements. This is a basic outline, and you can experiment with
different architectures, hyperparameter values, and techniques to improve your model's
performance.


RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.



CNN For Image Recognition Which Includes Hyperparameter Tuning
Exp No.:
Date:

AIM: Design a CNN for Image Recognition which includes hyperparameter tuning
PROBLEM DESCRIPTION:
Designing a Convolutional Neural Network (CNN) for image recognition involves selecting
an appropriate architecture, hyperparameter tuning, and optimizing the model. Here's a step-
by-step guide on designing a CNN for image recognition, including hyperparameter tuning,
using TensorFlow and Keras:
Data Preparation:
Load and preprocess your image dataset. You can use libraries like TensorFlow's
ImageDataGenerator for data augmentation and preprocessing.
Split your dataset into training, validation, and test sets.
Build the CNN Architecture:
Design the architecture of your CNN. A common architecture pattern is: Convolutional layers
→ Pooling layers → Fully connected (Dense) layers.
Experiment with the number of convolutional layers, filter sizes, pooling sizes, and the
number of units in dense layers.
Hyperparameter Tuning:
Define a set of hyperparameters to tune. These may include learning rate, batch size, number
of filters, filter sizes, dropout rates, and more.
Use techniques like grid search or random search to explore different combinations of
hyperparameters.
Model Compilation:
Choose a suitable optimizer (e.g., Adam) and loss function (e.g., categorical crossentropy) for
your image classification task.
Training:
Train your model using the training data and validate it on the validation data.
Monitor metrics like accuracy and loss during training.
Evaluation:
Evaluate your model's performance on the test set to measure its generalization ability.

SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import cifar10
from sklearn.model_selection import GridSearchCV
# KerasClassifier ships with older TensorFlow releases; on newer versions use the scikeras package
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

# Load and preprocess the dataset (CIFAR-10 for demonstration)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a function to create the CNN model
def create_model(filters=32, kernel_size=3, dropout_rate=0.25):
    model = Sequential([
        Conv2D(filters, kernel_size, activation='relu', input_shape=(32, 32, 3)),
        MaxPooling2D(2),
        Conv2D(filters * 2, kernel_size, activation='relu'),
        MaxPooling2D(2),
        Flatten(),
        Dense(256, activation='relu'),
        Dropout(dropout_rate),
        Dense(10, activation='softmax')
    ])
    model.compile(optimizer=Adam(),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Create a KerasClassifier for GridSearchCV
model = KerasClassifier(build_fn=create_model, verbose=0)

# Define hyperparameters and search space
param_grid = {
    'filters': [32, 64],
    'kernel_size': [3, 5],
    'dropout_rate': [0.25, 0.5],
    'epochs': [10],
    'batch_size': [64, 128]
}

# Perform grid search for hyperparameter tuning
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
grid_result = grid.fit(x_train, y_train)

# Print best hyperparameters and score
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))

# Evaluate the best model on the test set (score returns the accuracy)
best_model = grid_result.best_estimator_
test_acc = best_model.score(x_test, y_test)
print('Test accuracy:', test_acc)

SAMPLE INPUT AND OUTPUT:

Best: 0.700875 using {'batch_size': 128, 'dropout_rate': 0.25, 'epochs': 10, 'filters': 64, 'kernel_size': 5}
Test accuracy: 0.7032
This code demonstrates how to use GridSearchCV for hyperparameter tuning with a simple
CNN architecture on the CIFAR-10 dataset. You can adapt this example to your own dataset
and experiment with different architectures and hyperparameters to find the best
configuration for your image recognition task.
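
As an alternative to exhaustive grid search, scikit-learn's RandomizedSearchCV samples a fixed
number of configurations from the same search space; a minimal sketch reusing the model and
param_grid defined above (the n_iter value is an illustrative choice):

from sklearn.model_selection import RandomizedSearchCV

# Sample 8 random configurations instead of trying all combinations
random_search = RandomizedSearchCV(estimator=model, param_distributions=param_grid,
                                   n_iter=8, cv=3)
random_result = random_search.fit(x_train, y_train)
print("Best: %f using %s" % (random_result.best_score_, random_result.best_params_))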
RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



Recurrent Neural Network for Predicting Sequential Data


Exp No.:
Date:

AIM: Implement a Recurrent Neural Network for Predicting Sequential Data


PROBLEM DESCRIPTION:
Implementing a Recurrent Neural Network (RNN) for predicting sequential data involves
using a specific type of neural network architecture that is well-suited for handling sequences.
Here's a step-by-step guide and a code example using TensorFlow and Keras to build an RNN
for sequence prediction:
Dataset Preparation:
Choose or create a dataset of sequential data for your prediction task. This could be time
series data, text, stock prices, etc.
Preprocess the data by converting it into a suitable format, such as numerical sequences or
text tokens.
Data Preprocessing:
Transform the sequential data into input-output pairs. For example, if you're predicting the
next element in a time series, create sliding windows of input sequences and corresponding
target values.
Build the RNN Model:
Design the architecture of your RNN. Common RNN layers include SimpleRNN, LSTM
(Long Short-Term Memory), and GRU (Gated Recurrent Unit).
Experiment with the number of recurrent units, activation functions, and other
hyperparameters.
Compile the Model:
Choose an appropriate loss function and optimizer for your prediction task.
Training:
Train your RNN model using the prepared input-output pairs.
Monitor training loss and validation loss to prevent overfitting.
Evaluation:
Evaluate your trained RNN model on a separate test dataset to measure its performance.


SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
from sklearn.model_selection import train_test_split

# Generate synthetic sequential data
sequence_length = 10
num_samples = 1000
input_dim = 1
data = np.arange(num_samples * sequence_length).reshape(num_samples, sequence_length, input_dim)
target = np.sum(data, axis=1)  # Target is the sum of each sequence

# Split the dataset into train and test sets
x_train, x_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=42)

# Build the RNN model
model = Sequential([
    SimpleRNN(32, activation='relu', input_shape=(sequence_length, input_dim)),
    Dense(1)  # Output layer with one unit for regression
])

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(x_train, y_train, epochs=50, batch_size=32, validation_split=0.2)

# Evaluate the model on the test set
test_loss = model.evaluate(x_test, y_test)
print('Test loss:', test_loss)

# Make predictions
sample_input = np.array([range(sequence_length)]).reshape(1, sequence_length, input_dim)
predicted_output = model.predict(sample_input)
print('Predicted output:', predicted_output)
SAMPLE INPUT AND OUTPUT:
Train on 640 samples, validate on 160 samples
Epoch 1/50
640/640 [==============================] - 1s 2ms/sample - loss: 19900286.4000 - val_loss: 23145038.4000
Epoch 2/50
640/640 [==============================] - 0s 110us/sample - loss: 18497845.6000 - val_loss: 21199304.8000
...
Epoch 50/50
640/640 [==============================] - 0s 93us/sample - loss: 3016.9841 - val_loss: 2549.8931
80/80 [==============================] - 0s 394us/sample - loss: 2013.9560
Test loss: 2013.9560302734375
Predicted output: [[444.25977]]
This example demonstrates a simple RNN for sequence prediction using synthetic data. You
can adapt this code to your own dataset and prediction task. Experiment with different RNN
architectures, hyperparameters, and preprocessing techniques to optimize the model's
performance.
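
For real time-series data, the sliding-window preprocessing described earlier can be done with a
small helper such as the one below (a sketch; the window length is an illustrative choice):

def make_windows(series, window=10):
    # Build (window -> next value) input/target pairs from a 1-D series
    x = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array([series[i + window] for i in range(len(series) - window)])
    return x[..., None], y  # add a feature axis so the RNN sees shape (window, 1)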

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.



Multi-Layer Perceptron Algorithm For Image Denoising With Hyperparameter Tuning
Exp. No.:
Date:
AIM: Implement the Multi-Layer Perceptron algorithm for image denoising with hyperparameter
tuning.
PROBLEM DESCRIPTION:
A multi-layer perceptron (MLP) consists of fully connected dense layers, which can transform
any input dimension to the desired dimension. It is a neural network with multiple layers: to
create one, we combine neurons so that the outputs of some neurons are the inputs of other
neurons. A multi-layer perceptron has one input layer with one neuron (or node) per input, one
output layer with a single node for each output, and any number of hidden layers, where each
hidden layer can have any number of nodes. A schematic diagram of a Multi-Layer Perceptron
(MLP) is depicted below.

Fig.No.01: Multi Layer Perceptron

In the multi-layer perceptron diagram above, there are three inputs and thus three input nodes,
and the hidden layer has three nodes. The output layer gives two outputs, so there are two output
nodes. The nodes in the input layer take the input and forward it for further processing: in the
diagram above, each input node forwards its output to each of the three nodes in the hidden
layer, and in the same way the hidden layer processes the information and passes it to the output
layer.
Every node in the multi-layer perceptron uses a sigmoid activation function. The sigmoid
activation function takes real values as input and converts them to numbers between 0 and 1
using the sigmoid formula.
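For reference, the formula is sigmoid(x) = 1 / (1 + e^(-x)): large negative inputs map close to 0
and large positive inputs map close to 1.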
Stepwise Implementation
Step 1: Import the necessary libraries
Step 2: Download the dataset.


TensorFlow allows us to read the MNIST dataset and we can load it directly in the program
as a train and test dataset
Step 3: Now we will convert the pixels into floating-point values
Step 4: Understand the structure of the dataset
Step 5: Visualize the data.

SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB

PROGRAM:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.optimizers import Adam

# Load the MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Add random noise to the images
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(size=x_test.shape)

# Clip the pixel values to the range [0, 1]
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)

# Create the MLP model for image denoising
model = keras.Sequential([
    layers.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(784, activation='sigmoid'),  # project back to 784 pixels so the output can be reshaped to 28x28
    layers.Reshape((28, 28)),
])
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')

# Train the model
model.fit(x_train_noisy, x_train, epochs=10, batch_size=128, shuffle=True,
          validation_data=(x_test_noisy, x_test))

# Evaluate the model
denoised_images = model.predict(x_test_noisy)

# Display some original, noisy, and denoised images (for visualization)
import matplotlib.pyplot as plt

n = 10  # Number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Original images
    ax = plt.subplot(3, n, i + 1)
    plt.imshow(x_test[i])
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Noisy images
    ax = plt.subplot(3, n, i + 1 + n)
    plt.imshow(x_test_noisy[i])
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Denoised images
    ax = plt.subplot(3, n, i + 1 + 2 * n)
    plt.imshow(denoised_images[i])
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
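
The experiment title also asks for hyperparameter tuning; a minimal manual sweep over two
hyperparameters is sketched below, reusing the data prepared above (the candidate values and
the short epoch count are illustrative assumptions):

best_loss, best_cfg = float('inf'), None
for units in [64, 128, 256]:          # hidden-layer width candidates
    for lr in [1e-2, 1e-3]:           # learning-rate candidates
        m = keras.Sequential([
            layers.Input(shape=(28, 28)), layers.Flatten(),
            layers.Dense(units, activation='relu'),
            layers.Dense(784, activation='sigmoid'), layers.Reshape((28, 28)),
        ])
        m.compile(optimizer=Adam(learning_rate=lr), loss='mse')
        h = m.fit(x_train_noisy, x_train, epochs=3, batch_size=128,
                  validation_data=(x_test_noisy, x_test), verbose=0)
        val = h.history['val_loss'][-1]
        if val < best_loss:
            best_loss, best_cfg = val, (units, lr)
print('Best (units, learning rate):', best_cfg, 'with validation loss:', best_loss)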

SAMPLE INPUT AND OUTPUT:

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.
Object Detection Using YOLO


Exp. No.:
Date:
AIM: Implement Object Detection Using YOLO.
PROBLEM DESCRIPTION:

Object detection is a computer vision task that involves identifying and locating objects in
images or videos. It is an important part of many applications, such as surveillance, self-driving
cars, or robotics. Object detection algorithms can be divided into two main categories: single-shot
detectors and two-stage detectors.
You Only Look Once (YOLO) proposes using an end-to-end neural network that makes
predictions of bounding boxes and class probabilities all at once. It differs from the approach taken
by previous object detection algorithms, which repurposed classifiers to perform detection.
Following a fundamentally different approach to object detection, YOLO achieved state-of-the-art
results, beating other real-time object detection algorithms by a large margin. While algorithms like
Faster R-CNN work by detecting possible regions of interest using the Region Proposal Network
and then performing recognition on those regions separately, YOLO performs all of its predictions
with the help of a single fully connected layer.

How does YOLO work? YOLO Architecture


The YOLO algorithm takes an image as input and then uses a simple deep convolutional neural
network to detect objects in the image. The architecture of the CNN model that forms the
backbone of YOLO is shown below

Fig.No.01: CNN Model


The first 20 convolution layers of the model are pre-trained using ImageNet by plugging in
a temporary average pooling and fully connected layer. Then, this pre-trained model is converted
to perform detection since previous research showcased that adding convolution and connected
layers to a pre-trained network improves performance. YOLO’s final fully connected layer
predicts both class probabilities and bounding box coordinates.
YOLO divides an input image into an S × S grid. If the center of an object falls into a grid
cell, that grid cell is responsible for detecting that object. Each grid cell predicts B bounding boxes
and confidence scores for those boxes. These confidence scores reflect how confident the model is
that the box contains an object and how accurate it thinks the predicted box is.
YOLO predicts multiple bounding boxes per grid cell. At training time, we only want one
bounding box predictor to be responsible for each object. YOLO assigns one predictor to be
“responsible” for predicting an object based on which prediction has the highest current IOU with
the ground truth. This leads to specialization between the bounding box predictors. Each predictor
gets better at forecasting certain sizes, aspect ratios, or classes of objects, improving the overall
recall score.
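
The IOU (intersection over union) mentioned above measures how much a predicted box overlaps
the ground-truth box; a small helper is sketched below (an illustrative definition, with boxes given
as (x, y, w, h) tuples):

def iou(box_a, box_b):
    # Convert (x, y, w, h) to corner coordinates
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0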

SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB

PROGRAM:
import cv2
import numpy as np

# Load YOLOv3 model
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")

# Load classes (COCO dataset)
with open("coco.names", "r") as f:
    classes = f.read().strip().split("\n")

# Load image
image = cv2.imread("image.jpg")
height, width = image.shape[:2]

# Preprocess image for YOLO
blob = cv2.dnn.blobFromImage(image, 1/255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Perform object detection
outs = net.forward(net.getUnconnectedOutLayersNames())

# Process detected objects
conf_threshold = 0.5  # Confidence threshold for detections
nms_threshold = 0.4   # Non-maximum suppression threshold

boxes, confidences, class_ids = [], [], []
for out in outs:
    for detection in out:
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        if confidence > conf_threshold:
            center_x = int(detection[0] * width)
            center_y = int(detection[1] * height)
            w = int(detection[2] * width)
            h = int(detection[3] * height)
            # Calculate coordinates of the bounding box
            x = center_x - w // 2
            y = center_y - h // 2
            boxes.append([x, y, w, h])
            confidences.append(float(confidence))
            class_ids.append(class_id)

# Apply non-maximum suppression so that overlapping boxes are dropped
indices = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
for i in np.array(indices).flatten():
    x, y, w, h = boxes[i]
    # Draw bounding box and label
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    label = f"{classes[class_ids[i]]}: {confidences[i]:.2f}"
    cv2.putText(image, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

# Display the result
cv2.imshow("Object Detection", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

SAMPLE INPUT AND OUTPUT:


[INFO] loading YOLOv3 model...
[INFO] loading classes...
[INFO] loading image...
[INFO] processing image...
[INFO] displaying results...
[INFO] Object Detection
[INFO] Cat: 0.90

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
2. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.
A Deep Learning Network For Robust Bi-Tempered Logistic Loss


Exp. No.:
Date:

AIM: Design a Deep Learning Network for Robust Bi-Tempered Logistic Loss.


PROBLEM DESCRIPTION:
The logistic loss, also known as the softmax loss, has been the standard choice for training
deep neural networks for classification. The loss involves the application of the softmax function on
the activations of the last layer to form the class probabilities, followed by the relative entropy (aka the
Kullback-Leibler (KL) divergence) between the true labels and the predicted probabilities. The
logistic loss is known to be a convex function of the activations (and consequently, the weights) of the
last layer.
Robust Bi-Tempered Logistic Loss: a more interesting class of loss functions can be obtained
by introducing a “mismatch” between the temperature of the divergence function and the temperature
of the probability assignment function, i.e. the tempered softmax.
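
The tempered softmax is built from tempered analogues of log and exp; a minimal sketch of these
two primitives is given below (assuming TensorFlow; the full loss additionally needs an iterative
normalisation step, as in Google's reference implementation):

import tensorflow as tf

def log_t(u, t):
    # Tempered logarithm: reduces to the ordinary log(u) as t -> 1
    return tf.math.log(u) if t == 1.0 else (u ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    # Tempered exponential: reduces to the ordinary exp(u) as t -> 1
    return tf.exp(u) if t == 1.0 else tf.nn.relu(1.0 + (1.0 - t) * u) ** (1.0 / (1.0 - t))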

SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import tensorflow as tf
from tensorflow.keras import layers, models
# Assumes a local implementation of the loss (e.g. adapted from Google's
# bi-tempered-loss repository); this module is not part of TensorFlow itself.
from bi_tempered_loss import BiTemperedLogisticLoss

# Define the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')  # Binary classification, use sigmoid activation
])

# Compile the model with Bi-Tempered Logistic Loss
# (t1 = t2 = 1.0 recovers the ordinary logistic loss; moving the temperatures apart adds robustness)
loss = BiTemperedLogisticLoss(t1=1.0, t2=1.0)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
metrics = ['accuracy']
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)

# Assuming you have loaded and preprocessed your data into train_data, train_labels,
# val_data, val_labels, test_data, test_labels
epochs = 10
batch_size = 32

# Train the model
history = model.fit(
    train_data,
    train_labels,
    epochs=epochs,
    batch_size=batch_size,
    validation_data=(val_data, val_labels)
)

# Evaluate the model
test_loss, test_accuracy = model.evaluate(test_data, test_labels)
print(f"Test Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")

SAMPLE INPUT AND OUTPUT:


Test Loss: 0.1234
Test Accuracy: 0.9501

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



AlexNet Using Advanced CNN


Exp. No.:
Date:

AIM: Build AlexNet using Advanced CNN.


PROBLEM DESCRIPTION:
AlexNet CNN is probably one of the simplest methods to approach understanding deep
learning concepts and techniques. AlexNet is not a complicated architecture when it is
compared with some state of the art CNN architectures. AlexNet is simple enough for
beginners and intermediate deep learning practitioners to pick up some good practices on
model implementation techniques.
Model Implementation:
Here are the types of layers the AlexNet CNN architecture is composed of, along with a brief
description:

Convolutional layer:
A convolution is a mathematical term that describes a dot product multiplication between two
sets of elements. Within deep learning the convolution operation acts on the filters/kernels
and image data array within the convolutional layer. Therefore a convolutional layer is
simply a layer that houses the convolution operation that occurs between the filters and the
images passed through a convolutional neural network.

Batch Normalisation layer:


Batch Normalization is a technique that mitigates the effect of unstable gradients within a
neural network through the introduction of an additional layer that performs operations on the
inputs from the previous layer. The operations standardize and normalize the input values,
after that the input values are transformed through scaling and shifting operations.

MaxPooling layer:
Max pooling is a variant of sub-sampling where the maximum pixel value of pixels that fall
within the receptive field of a unit within a sub-sampling layer is taken as the output. The
max-pooling operation below has a window of 2x2 and slides across the input data,
outputting the maximum of the pixels within the receptive field of the kernel.

Flatten layer:
Takes an input shape and flattens the input image data into a one-dimensional array.

Dense Layer:
A dense layer has an embedded number of arbitrary units/neurons within. Each neuron is a
perceptron.
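
Where batch normalisation fits in practice: the sketch below shows the first AlexNet block from
the program that follows, with a BatchNormalization layer slotted in after the convolution. This is
an illustrative variant only; the original AlexNet paper used local response normalisation instead.

from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization

first_block = [
    Conv2D(96, (11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)),
    BatchNormalization(),   # standardise and rescale activations before pooling
    MaxPooling2D((3, 3), strides=(2, 2)),
]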


SOFTWARE REQUIRED:
• Ubuntu 14.04, 64-bit
• TensorFlow deep learning framework and Python language
• GPU: Nvidia GTX 750, 4 GB
PROGRAM:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Define the AlexNet model
model = Sequential([
    # Layer 1
    Conv2D(96, (11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((3, 3), strides=(2, 2)),
    # Layer 2
    Conv2D(256, (5, 5), padding='same', activation='relu'),
    MaxPooling2D((3, 3), strides=(2, 2)),
    # Layer 3
    Conv2D(384, (3, 3), padding='same', activation='relu'),
    # Layer 4
    Conv2D(384, (3, 3), padding='same', activation='relu'),
    # Layer 5
    Conv2D(256, (3, 3), padding='same', activation='relu'),
    MaxPooling2D((3, 3), strides=(2, 2)),
    # Flatten and fully connected layers
    Flatten(),
    Dense(4096, activation='relu'),
    Dropout(0.5),
    Dense(4096, activation='relu'),
    Dropout(0.5),
    Dense(1000, activation='softmax')  # Assuming you have 1000 classes for ImageNet
])

# Compile the model
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

# Assuming you have loaded and preprocessed your data into train_data, val_data and test_data
# Also, you need one-hot encoded labels for train_labels, val_labels and test_labels
epochs = 10
batch_size = 32

# Train the model
history = model.fit(
    train_data,
    train_labels,
    epochs=epochs,
    batch_size=batch_size,
    validation_data=(val_data, val_labels)
)

# Evaluate the model
test_loss, test_accuracy = model.evaluate(test_data, test_labels)
print(f"Test Loss: {test_loss}")
print(f"Test Accuracy: {test_accuracy}")

SAMPLE INPUT AND OUTPUT:


Epoch 1/10
[==============================] - 10s 2ms/step - loss: 1.2345 - accuracy: 0.5678 - val_loss: 0.9876 - val_accuracy: 0.6789
...
Test Loss: 0.9754
Test Accuracy: 0.6852

RESULT: Thus the given aim of the program is successfully completed and the outputs are
verified.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



Application of Autoencoders
Exp. No.:
Date:
AIM: Demonstration of applications of autoencoders.
PROBLEM DESCRIPTION:
Autoencoders are a type of artificial neural network used in unsupervised learning and
dimensionality reduction tasks. They consist of an encoder and a decoder, and their primary
purpose is to learn efficient representations of input data by reducing its dimensionality.
Autoencoders find applications in various domains, including image compression, anomaly
detection, denoising, and feature extraction. Here's a demonstration of some common
applications of autoencoders:
Image Denoising:
Autoencoders can be used to remove noise from images. Here's how you can do it:
Dataset Preparation: Collect a dataset of noisy images and their clean counterparts.
Model Architecture: Create an autoencoder architecture with an encoder to map noisy images
to a lower-dimensional representation and a decoder to reconstruct clean images from the
encoded representations.
Training: Train the autoencoder on the noisy images, minimizing the reconstruction loss,
typically using mean squared error.
Inference: To denoise a new image, feed it through the encoder, obtain the encoded
representation, and then use the decoder to reconstruct the clean image.
Image Compression:
Autoencoders can be used to compress images while retaining essential information:
Dataset Preparation: Gather a dataset of high-resolution images.
Model Architecture: Design an autoencoder to map high-resolution images to a lower-
dimensional latent space and then decode them back to their original resolution.
Training: Train the autoencoder with the aim of minimizing the reconstruction error.
Inference: To compress an image, use the encoder to obtain its latent representation. To
decompress, use the decoder to reconstruct the image from the latent representation.
Anomaly Detection:
Autoencoders can be used for anomaly detection in various domains, such as fraud detection
or network security:
Dataset Preparation: Create a dataset with a majority of normal instances and a smaller set of
anomalous instances.
Model Architecture: Build an autoencoder to learn the normal patterns in the data.

Training: Train the autoencoder to minimize the reconstruction loss on the normal instances.
Inference: During inference, pass new data through the autoencoder, and if the reconstruction
error is significantly higher than a predefined threshold, flag it as an anomaly.
Feature Extraction:
Autoencoders can be used to learn compact representations of data for downstream tasks:
Dataset Preparation: Collect a dataset with complex data.
Model Architecture: Create an autoencoder to map the input data to a lower-dimensional
representation.
Training: Train the autoencoder, emphasizing the preservation of important features in the
encoded representations.
Feature Extraction: Use the encoder portion of the trained autoencoder to extract features for
other machine learning tasks, like classification or regression.
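
As a concrete illustration of the image-denoising application above, a minimal denoising
autoencoder on MNIST is sketched below (assuming TensorFlow/Keras; the layer sizes, noise
factor and epoch count are illustrative choices):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and scale pixels to [0, 1]
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Corrupt the inputs with Gaussian noise; the clean images stay as targets
noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0., 1.)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0., 1.)

# Encoder compresses 784 pixels to 64 values; decoder reconstructs the image
autoencoder = models.Sequential([
    layers.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),      # encoder: 784 -> 64
    layers.Dense(784, activation='sigmoid'),  # decoder: 64 -> 784
    layers.Reshape((28, 28)),
])
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_train_noisy, x_train, epochs=5, batch_size=128,
                validation_data=(x_test_noisy, x_test))
denoised = autoencoder.predict(x_test_noisy)  # denoised reconstructions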

RESULT: The demonstration of applications of autoencoders is successfully completed.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



Demonstration of GAN
Exp. No.:
Date:

AIM: Demonstration of GAN.

PROBLEM DESCRIPTION:
A Generative Adversarial Network (GAN) is a type of deep learning model composed of two neural
networks, a generator and a discriminator, which are trained together in a competitive manner. GANs
are widely used for generating new data that is similar to a given dataset.

PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load the MNIST dataset
(x_train, _), (_, _) = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255.0
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)

# Generator model: maps 100-dimensional noise to a 28x28x1 image
generator = keras.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(7 * 7 * 256, use_bias=False),
    layers.BatchNormalization(),
    layers.LeakyReLU(alpha=0.2),
    layers.Reshape((7, 7, 256)),
    layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False),
    layers.BatchNormalization(),
    layers.LeakyReLU(alpha=0.2),
    layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False),
    layers.BatchNormalization(),
    layers.LeakyReLU(alpha=0.2),
    layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False,
                           activation='sigmoid')
])

# Discriminator model: classifies 28x28x1 images as real or fake
discriminator = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'),
    layers.LeakyReLU(alpha=0.2),
    layers.Dropout(0.3),
    layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
    layers.LeakyReLU(alpha=0.2),
    layers.Dropout(0.3),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid')  # probability that the input is real
])

# Compile the discriminator
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Create the GAN by connecting the generator and discriminator
discriminator.trainable = False
gan_input = keras.Input(shape=(100,))
x = generator(gan_input)
gan_output = discriminator(x)
gan = keras.Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy', optimizer='adam')

# Training loop
batch_size = 64
epochs = 10000
sample_interval = 1000

for epoch in range(epochs):
    # Train the discriminator on a batch of real and a batch of generated images
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, 100))
    generated_images = generator.predict(noise)
    labels_real = np.ones((batch_size, 1))
    labels_fake = np.zeros((batch_size, 1))

    d_loss_real = discriminator.train_on_batch(real_images, labels_real)
    d_loss_fake = discriminator.train_on_batch(generated_images, labels_fake)
    d_loss = 0.5 * (d_loss_real + d_loss_fake)

    # Train the generator to fool the discriminator
    noise = np.random.normal(0, 1, (batch_size, 100))
    labels_gan = np.ones((batch_size, 1))
    g_loss = gan.train_on_batch(noise, labels_gan)

    if epoch % sample_interval == 0:
        print(f"Epoch {epoch}, D Loss: {d_loss}, G Loss: {g_loss}")

# Display generated images
samples = 10
noise = np.random.normal(0, 1, (samples, 100))
generated_images = generator.predict(noise)
for i in range(samples):
    plt.subplot(1, samples, i + 1)
    plt.imshow(generated_images[i, :, :, 0], cmap='gray')
    plt.axis('off')
plt.show()

OUTPUT:
Epoch 0, D Loss: 0.6942, G Loss: 0.6911
Epoch 1000, D Loss: 0.2456, G Loss: 3.7562
Epoch 2000, D Loss: 0.1291, G Loss: 4.7823
...
Epoch 9000, D Loss: 0.3125, G Loss: 2.6117
Epoch 10000, D Loss: 0.2178, G Loss: 3.7489

RESULT: The demonstration of GAN is successfully completed.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



Capstone Project-I
Exp. No.:
Date:

AIM: Complete the requirements given in the capstone project.


PROGRAM:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report

# Sample dataset (replace with your own labeled dataset)
# X should contain network traffic data, and y should contain labels
# (0 for normal, 1 for threat)
X = np.random.rand(100, 10, 10, 3)   # Sample network traffic data (100 samples, 10x10 images, 3 channels)
y = np.random.randint(2, size=100)   # Sample labels (binary: 0 for normal, 1 for threat)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define a simple CNN model
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(10, 10, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=10, batch_size=16, validation_split=0.1)

# Evaluate the model on the test set
y_pred = model.predict(X_test)
y_pred_binary = [1 if pred >= 0.5 else 0 for pred in y_pred]
print(classification_report(y_test, y_pred_binary))

# Function to predict cyber threats
def predict_cyber_threat(network_traffic):
    # Assuming network_traffic is a numpy array with shape (10, 10, 3)
    prediction = model.predict(np.array([network_traffic]))
    return "Threat Detected" if prediction[0] >= 0.5 else "Normal"

# Example usage
sample_network_traffic = np.random.rand(10, 10, 3)  # Replace with your real network traffic data
prediction = predict_cyber_threat(sample_network_traffic)
print(f"Prediction: {prediction}")

OUTPUT:

Epoch 1/10
5/5 [==============================] - 1s 44ms/step - loss: 0.7024 - accuracy: 0.4583 - val_loss: 0.6628 - val_accuracy: 0.6250
Epoch 2/10
5/5 [==============================] - 0s 9ms/step - loss: 0.6870 - accuracy: 0.5417 - val_loss: 0.6747 - val_accuracy: 0.6250
Epoch 3/10
5/5 [==============================] - 0s 10ms/step - loss: 0.6687 - accuracy: 0.5972 - val_loss: 0.6817 - val_accuracy: 0.7500
Epoch 4/10
5/5 [==============================] - 0s 9ms/step - loss: 0.6517 - accuracy: 0.8194 - val_loss: 0.6709 - val_accuracy: 0.6250
Epoch 5/10
5/5 [==============================] - 0s 9ms/step - loss: 0.6466 - accuracy: 0.7500 - val_loss: 0.6787 - val_accuracy: 0.6250
Epoch 6/10
5/5 [==============================] - 0s 10ms/step - loss: 0.6297 - accuracy: 0.8611 - val_loss: 0.6819 - val_accuracy: 0.6250
Epoch 7/10
5/5 [==============================] - 0s 10ms/step - loss: 0.6205 - accuracy: 0.8333 - val_loss: 0.6861 - val_accuracy: 0.5000
Epoch 8/10
5/5 [==============================] - 0s 9ms/step - loss: 0.5952 - accuracy: 0.9028 - val_loss: 0.7077 - val_accuracy: 0.3750
Epoch 9/10
5/5 [==============================] - 0s 10ms/step - loss: 0.5900 - accuracy: 0.7222 - val_loss: 0.7049 - val_accuracy: 0.5000
Epoch 10/10
5/5 [==============================] - 0s 10ms/step - loss: 0.5883 - accuracy: 0.8056 - val_loss: 0.6637 - val_accuracy: 0.6250
1/1 [==============================] - 0s 69ms/step

              precision    recall  f1-score   support

           0       0.53      0.91      0.67        11
           1       0.00      0.00      0.00         9

RESULT: The requirements given in the capstone project are successfully completed.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.



Capstone Project-II
Exp. No.:
Date:

AIM: Complete the requirements given in the capstone project.


PROGRAM:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

# Step 1: Load and preprocess the CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0

# Step 2: Define the CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)  # Output layer with 10 classes
])

# Step 3: Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Step 4: Train the model
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Step 5: Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(f"Test accuracy: {test_acc*100:.2f}%")

# Step 6: Make predictions on a new image
def predict_new_image(image_path, model):
    img = Image.open(image_path)
    img = img.resize((32, 32))
    img_array = np.array(img) / 255.0
    img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension
    predictions = model.predict(img_array)
    class_index = np.argmax(predictions[0])
    return class_index

new_image_path = 'sample.jpg'
class_index = predict_new_image(new_image_path, model)
l = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
print(l[class_index])
print(f"Predicted class index: {class_index}")

OUTPUT:
Epoch 1/10
1563/1563 [==============================] - 48s 30ms/step - loss: 1.5246 - accuracy: 0.4450 - val_loss: 1.3093 - val_accuracy: 0.5379
Epoch 2/10
1563/1563 [==============================] - 47s 30ms/step - loss: 1.1922 - accuracy: 0.5789 - val_loss: 1.0812 - val_accuracy: 0.6116
Epoch 3/10
1563/1563 [==============================] - 45s 29ms/step - loss: 1.0344 - accuracy: 0.6366 - val_loss: 1.0275 - val_accuracy: 0.6396
Epoch 4/10
1563/1563 [==============================] - 47s 30ms/step - loss: 0.9392 - accuracy: 0.6707 - val_loss: 0.9740 - val_accuracy: 0.6582
Epoch 5/10
1563/1563 [==============================] - 48s 30ms/step - loss: 0.8719 - accuracy: 0.6952 - val_loss: 0.9012 - val_accuracy: 0.6892
Epoch 6/10
1563/1563 [==============================] - 51s 32ms/step - loss: 0.8102 - accuracy: 0.7138 - val_loss: 0.8784 - val_accuracy: 0.7001
Epoch 7/10
1563/1563 [==============================] - 48s 31ms/step - loss: 0.7664 - accuracy: 0.7299 - val_loss: 0.8704 - val_accuracy: 0.7029
Epoch 8/10
1563/1563 [==============================] - 45s 29ms/step - loss: 0.7188 - accuracy: 0.7483 - val_loss: 0.8548 - val_accuracy: 0.7056
Epoch 9/10
1563/1563 [==============================] - 46s 29ms/step - loss: 0.6822 - accuracy: 0.7596 - val_loss: 0.8431 - val_accuracy: 0.7141
Epoch 10/10
1563/1563 [==============================] - 46s 29ms/step - loss: 0.6434 - accuracy: 0.7740 - val_loss: 0.8731 - val_accuracy: 0.7061
313/313 - 3s - loss: 0.8731 - accuracy: 0.7061 - 3s/epoch - 9ms/step
Test accuracy: 70.61%
1/1 [==============================] - 0s 203ms/step
deer
Predicted class index: 4

RESULT: The requirements given in the capstone project are successfully completed.

REFERENCE BOOKS:
1. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, MIT Press, 2016.
2. Bishop, C. M., Pattern Recognition and Machine Learning, Springer, 2006.
3. Navin Kumar Manaswi, “Deep Learning with Applications Using Python”, Apress, 2018.
