
Presented By,
Shobha C. Hiremath (01FE17MCS019)

Under the Guidance of
Dr. Meena S. M.
 Research Domain
 Problem Statement
 Introduction to Machine Learning
 Applications
 Merits
 Classification Algorithms
 Literature Survey
 Implementation
 Machine Learning - Capsule Network
 Machine learning is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed.

 Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
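A minimal sketch (not from the slides, toy data and names are illustrative) of what "learning from data rather than being explicitly programmed" means: the decision rule below is derived from the training examples instead of being hard-coded.

```python
# Toy 1-D training data: measurements for two hypothetical classes.
class_a = [4.0, 4.5, 5.0]
class_b = [8.0, 8.5, 9.0]

# "Training": derive the decision threshold from the data itself,
# as the midpoint between the two class means.
threshold = (sum(class_a) / len(class_a) + sum(class_b) / len(class_b)) / 2

def predict(x):
    """Classify a new measurement using the learned threshold."""
    return "A" if x < threshold else "B"

print(threshold)      # 6.5
print(predict(5.2))   # A
print(predict(7.9))   # B
```

Changing the training lists changes the rule with no change to the program, which is the point of the definition above.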

 In machine learning, a convolutional neural network (CNN) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery.

 CNNs are extremely effective for image recognition problems.
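The core operation that gives a CNN its name can be sketched in a few lines: a small filter slides over the image and produces a feature map. This is an illustrative from-scratch version (no padding, stride 1, toy values), not the implementation used in this work.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the filter applied to one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
edge_filter = np.array([[1.0, -1.0]])             # horizontal edge detector
feature_map = conv2d(image, edge_filter)
print(feature_map.shape)   # (4, 3)
```

In a real CNN many such filters are learned from data, and convolution layers are interleaved with pooling and fully connected layers.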


A Capsule Neural Network (CapsNet) is a type of artificial neural network (ANN) designed to better model hierarchical relationships.
 A capsule is a group of neurons whose outputs represent different
properties of the same entity.
 Each layer in a capsule network contains many capsules.
 We describe a version of capsules in which each capsule has a logistic
unit to represent the presence of an entity and a 4x4 matrix which could
learn to represent the relationship between that entity and the viewer
(the pose).
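The "group of neurons representing one entity" idea can be sketched with the squash nonlinearity from the dynamic-routing capsule variant (Sabour and Frosst), in which the length of a capsule's output vector plays the role of the presence probability. The vectors below are toy values, and this is the vector-capsule variant rather than the logistic-unit-plus-pose-matrix version described above.

```python
import numpy as np

def squash(v, eps=1e-9):
    """Shrink a capsule's output vector so its length lies in [0, 1),
    letting the length be read as the probability the entity is present."""
    sq_norm = np.sum(v ** 2)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * v / np.sqrt(sq_norm + eps)

weak = squash(np.array([0.1, 0.0]))    # weak evidence  -> length near 0
strong = squash(np.array([10.0, 0.0])) # strong evidence -> length near 1
print(np.linalg.norm(weak))
print(np.linalg.norm(strong))
```

The direction of the vector is preserved, so the remaining degrees of freedom can still encode properties of the entity such as its pose.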
 Image and speech recognition.
 Classification.
 Prediction.
 Extraction.
 Regression.
 Medical Diagnosis.
 Learning Associations.
 One of the main advantages of a capsule network is that it preserves the object's location within an image. This property is called
"equivariance".
 The pose matrix provides the relationship between the entity and the viewer.
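A small sketch of why a pose matrix gives equivariance: when the viewpoint changes, every capsule's pose is transformed by the same matrix, so part-whole relationships are preserved rather than discarded. The matrices and names here are illustrative toy values, not from the slides.

```python
import numpy as np

# Toy 4x4 pose matrix: a part located at x = 2 (homogeneous coordinates).
part_pose = np.eye(4)
part_pose[0, 3] = 2.0

# A viewpoint change: translate by 1 along x.
viewpoint_shift = np.eye(4)
viewpoint_shift[0, 3] = 1.0

# The pose transforms with the viewpoint -- the location is not lost,
# it moves consistently. This is equivariance, as opposed to the
# invariance produced by pooling in a plain CNN.
new_pose = viewpoint_shift @ part_pose
print(new_pose[0, 3])   # 3.0
```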
Sl. No / Algorithm Name / Best for / Disadvantages

1. Logistic Regression
   Best for: understanding the influence of several independent variables on a single outcome variable.
   Disadvantages: works only when the predicted variable is binary; assumes all predictors are independent of each other; assumes the data is free of missing values.

2. Naive Bayes
   Best for: requires only a small amount of training data to estimate the necessary parameters; Naive Bayes classifiers are extremely fast compared to more sophisticated methods.
   Disadvantages: Naive Bayes is known to be a bad estimator.

3. Stochastic Gradient Descent
   Best for: efficiency and ease of implementation.
   Disadvantages: requires a number of hyper-parameters and is sensitive to feature scaling.

4. K-Nearest Neighbours
   Best for: robust to noisy training data, and effective if the training data is large.
   Disadvantages: the value of K must be determined, and the computation cost is high because the distance from each instance to all the training samples must be computed.

5. Decision Tree
   Best for: simple to understand and visualize, requires little data preparation, and can handle both numerical and categorical data.
   Disadvantages: can create complex trees that do not generalize well; can be unstable, because small variations in the data might result in a completely different tree being generated.

6. Random Forest
   Best for: reduces over-fitting, and a random forest classifier is more accurate than a decision tree in most cases.
   Disadvantages: slow real-time prediction, difficult to implement, and a complex algorithm.

7. Support Vector Machine
   Best for: effective in high-dimensional spaces, and uses a subset of training points in the decision function, so it is also memory efficient.
   Disadvantages: does not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation.

8. Comparison Matrix
   Best for: describing and comparing attributes and characteristics of items.
   Disadvantages: gives low accuracy on a small dataset.
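As a concrete example of one algorithm from the comparison above, here is an illustrative from-scratch k-nearest-neighbours classifier on toy data (not the implementation used in this work). Note the listed disadvantage: every prediction computes the distance to all training samples.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = y_train[nearest]
    return np.bincount(votes).argmax()           # majority vote

# Toy 2-D training data: two well-separated classes.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([0.95, 0.9])))  # 1
```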
Classification Algorithm      Accuracy
Logistic Regression           84.60%
Naïve Bayes                   80.11%
Stochastic Gradient Descent   82.20%
K-Nearest Neighbours          83.56%
Decision Tree                 84.23%
Random Forest                 84.33%
Support Vector Machine        84.09%

 AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS - Alfredo Canziani et al.
 The authors analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge.

 Their goal was to provide insights into the design choices that can lead to efficient neural networks for practical application and optimization.

 Finally, they showed that ENet is the best architecture in terms of parameter-space utilisation, squeezing up to 13 times more information per parameter.

 Model used: VGG-19


 A Novel Method for Classification of Ancient Coins Based on Image Textures - Kaiping Wei et al.
 Proposed a texture-extraction approach based on multi-resolution characteristics, to extract the characteristics hidden in all channels of ancient coin images.

 The multi-resolution character of the texture is extracted by TWT, so information can be accessed at various scales rather than only at low frequency.

 Segmentation algorithm used: ACO (applied before TWT)


 Heritage Images Classification by Naïve Bayes Image Classifier - Jantima Polpinij et al.
 The approach divides heritage images into two classes: heritage images concerning human activities, and heritage images with non-human activities.

 Proposed a new CBIR method to automatically classify a collection of images based on colour features.

 The Naïve Bayes image classifier can assign an image to a class according to the skin pixels appearing in the image content.
Sl. No / Title of the Paper / Authors / Issues / Methodology / Results

1. Marine Animal Classification Using Combined CNN and Hand-designed Image Features - Zheng Cao et al.
   Issue: the problem of classifying marine animals from underwater images.
   Methodology: detection, tracking, feature extraction, classification.
   Result: classified marine animals from underwater imagery with 92% accuracy.

2. Classification of Green Coffee Bean Images Based on Defect Types Using Convolutional Neural Network - Carlito Pinto et al.
   Issue: the image-processing-based system was unable to process the green coffee bean images.
   Methodology (CNN model): input image, convolution, pooling, fully connected layer, output layer.
   Result: classified the type of coffee bean defect; accuracy = 90%.

3. Implementation of Image Classification CNN Using Multithread GPU - Seong-Hyeon Han.
   Issue: the fully connected layer, with its large number of parameters, took a long time to read and write memory (high computational complexity).
   Methodology: input image, convolution, pooling, fully connected layer, output layer.
   Result: implemented image classification using a multi-thread GPU.

4. Classification of Radiolarian Fossil Images with Deep Learning Methods - Ali Seydi, Seda.
   Issue: conventional methods for identifying radiolarians are time-consuming and cannot be scaled to the level of detail required for large-scale studies.
   Methodology: a convolutional neural network (CNN) trained on radiolarian images, and another CNN model that was pre-trained and fine-tuned.
   Result: classification of radiolarian images using the fine-tuned model.

5. Dynamic Routing Between Capsules - Sara Sabour and Nicholas Frosst.
   Issue: convolutional networks fail to recognize highly overlapping digits.
   Methodology: input image, ReLU, primary capsules, convolutional capsules, output (class capsules).
   Result: a multi-layer capsule system achieves state-of-the-art performance on MNIST.

6. Matrix Capsules With EM Routing.
   Issue: unable to identify object location within an image.
   Methodology: input image, ReLU, primary capsules, convolutional capsules, output (class capsules).
   Result: a new iterative routing procedure between capsule layers, based on the EM algorithm; the number of errors is reduced by 40%.

7. Deep Convolutional Neural Networks for Human Action Recognition - Aouaidjia Kamel et al.
   Issue: recognizing high-level human activities from low-level sensor data is a crucial driver.
   Methodology: data preprocessing, convolutional neural network model, output.
   Result: a methodology for human action recognition; two action representations and three CNN channels are used to maximize feature extraction.

8. Fast Food Image Classification Using Deep Learning - Narit Hnoohom et al.
   Issue: food categories are too broad (11 categories are considered).
   Methodology: input image, preprocessing, deep learning model, classified image.
   Result: the average classification accuracy on a separate test set (other than the trained images) is 88.33%.
“Classification of heritage images using Capsule Network”
 Applying a Capsule Network to the heritage-image classification task involves two steps: training of the system, followed by testing. Training means taking the characteristic properties of the images (which form a class) and forming a unique description for that particular class.

 This process is repeated for all classes, depending on the type of classification problem: binary classification or multi-class classification.

 The testing step categorizes the test images under the various classes for which the system was trained. Classes are assigned based on the partitioning between classes learned from the training features.
Fig 1: Training of Aihole Images
Fig 2: Training of Amrutheshwar Images
Fig 3: Epochs Generated
THANK YOU
