
A MAJOR PROJECT REPORT ON

BRAIN TUMOUR DETECTION
THROUGH IMAGES USING DEEP LEARNING

Submitted to Jawaharlal Nehru Technological University, Hyderabad,
in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

IN

INFORMATION TECHNOLOGY

SUBMITTED BY

NATHIBAYANNAGARI SAI TEJA (20J21A1228)


KARRI DURGA MAHESH (20J21A1216)
KONDILLA ARAVIND (21J25A1202)

Under the guidance of

Mr. K. VENKAIAH
Assistant Professor

Department of Information Technology

JOGINPALLY B.R. ENGINEERING COLLEGE


(Affiliated to Jawaharlal Nehru Technological University, Hyderabad, T.S.)
Yenkapally (V), Moinabad (M), Hyderabad (T.S.) - 500075

2020-2024
ABSTRACT
The tremendous success of machine learning algorithms at image recognition tasks in recent years
intersects with a time of dramatically increased use of electronic medical records and diagnostic
imaging. This review introduces machine learning algorithms as applied to medical image
analysis, focusing on convolutional neural networks, and emphasizing clinical aspects of the field.
The advantage of machine learning in an era of medical big data is that significant hierarchical
relationships within the data can be discovered algorithmically without laborious hand-crafting of
features. We cover key research areas and applications of medical image classification,
localization, detection, segmentation, and registration. We conclude by discussing research
obstacles, emerging trends, and possible future directions.
INDEX TERMS Convolutional neural networks, medical image analysis, machine learning,
deep learning.
1. INTRODUCTION
1.1 INTRODUCTION TO PROJECT:

Machine learning algorithms have the potential to become deeply embedded in all fields of
medicine, from drug discovery to clinical decision-making, significantly altering the way
medicine is practiced. The success of machine learning algorithms at computer vision tasks
in recent years comes at an opportune time when medical records are increasingly
digitized. The use of electronic health records (EHR) quadrupled from 11.8% to 39.6%
among office-based physicians in the US from 2007 to 2012 [1]. Medical images are an
integral part of a patient's EHR and are currently analyzed by human radiologists, who are
limited by speed, fatigue, and experience. It takes years and great financial cost to train a
qualified radiologist, and some healthcare systems outsource radiology reporting to lower-
cost countries such as India via teleradiology. A delayed or erroneous diagnosis causes
harm to the patient. Therefore, it is ideal for medical image analysis to be carried out by an
automated, accurate, and efficient machine learning algorithm.

2. EXISTING SYSTEM

There is a myriad of imaging modalities, and the frequency of their use is increasing.
Smith-Bindman et al. [2] looked at imaging use from 1996 to 2010 across six large
integrated healthcare systems in the United States, involving 30.9 million imaging
examinations. The authors found that over the study period, CT, MRI, and PET usage
increased by 7.8%, 10%, and 57%, respectively.
Fig. 3.1: MRI Scan Image
The symbolic AI paradigm of the 1970s led to the development of rule-based expert
systems. One early implementation in medicine was the MYCIN system by Shortliffe [3],
which suggested different regimens of antibiotic therapy for patients. Parallel to these
developments, AI algorithms moved from heuristics-based techniques to manual,
handcrafted feature-extraction techniques, and then to supervised learning techniques.
Unsupervised machine learning methods are also being researched, but the majority of the
algorithms in the literature published from 2015 to 2017 employed supervised learning
methods.

3. PROPOSED SYSTEM

Currently, CNNs are the most researched machine learning algorithms in medical
image analysis [4]. The reason for this is that CNNs preserve spatial relationships when
filtering input images. As mentioned, spatial relationships are of crucial importance in
radiology, for example, in how the edge of a bone joins with muscle, or where normal lung
tissue interfaces with cancerous tissue. As shown in Fig. 3.2, a CNN takes an input image of
raw pixels and transforms it via Convolutional Layers, Rectified Linear Unit (ReLU)
Layers, and Pooling Layers. This feeds into a final Fully Connected Layer which assigns
class scores or probabilities, thus classifying the input into the class with the highest
probability.
Fig. 3.2: CNN Algorithm Working Process
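In code, the pipeline of Fig. 3.2 is only a few lines. The following is a minimal
sketch using Keras; the layer counts, filter sizes, and the 64x64 greyscale input
shape are illustrative assumptions, not values taken from this report.

# A minimal Keras sketch of the Conv -> ReLU -> Pool -> Fully Connected
# pipeline described above. Layer sizes and input shape are assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),               # raw pixel input (greyscale MRI)
    layers.Conv2D(32, (3, 3), activation="relu"),  # Convolutional + ReLU layer
    layers.MaxPooling2D((2, 2)),                   # Pooling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),         # final Fully Connected layer:
])                                                 # class scores as probabilities
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])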

Detection, sometimes known as Computer-Aided Detection (CADe), is a key area of
study, as missing a lesion on a scan can have drastic consequences for both the patient and
the clinician. The task for the Kaggle Data Science Bowl of 2017 [64] involved the
detection of cancerous lung nodules on CT lung scans. Approximately 2000 CT scans were
released for the competition, and the winner Fangzhou [65] achieved a logarithmic loss
score of 0.399. Their solution used a 3-D CNN inspired by the U-Net architecture [19] to
isolate local patches first for nodule detection. This output was then fed into a second stage
consisting of 2 fully connected layers for classification of cancer probability. Shin et al. [24]
evaluated five well-known CNN architectures in detecting thoracoabdominal lymph nodes and
interstitial lung disease on CT scans. Detecting lymph nodes is important as they can be
a marker of infection or cancer. They achieved a mediastinal lymph node detection AUC
score of 0.95 with a sensitivity of 85% using GoogLeNet, which was state-of-the-art. They
also documented the benefits of transfer learning, and of deep learning architectures
of up to 22 layers, as opposed to the fewer layers that were then the norm in medical image
analysis. OverFeat was a CNN pre-trained on natural images that won the ILSVRC 2013
localization task [66]. Ciompi et al. [67] applied OverFeat to 2-dimensional slices of CT
lung scans oriented in the coronal, axial, and sagittal planes, to predict the presence of
nodules within and around lung fissures. They combined this approach with simple
SVM and RF binary classifiers, as well as a Bag of Frequencies [68], a novel 3-
dimensional descriptor of their invention.
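To make this two-stage design concrete, the sketch below shows one way such a
pipeline could be wired together in Keras. It illustrates the general idea only,
not the winning solution's code; all shapes and layer sizes are assumptions.

# An illustrative two-stage pipeline: a 3-D CNN extracts features from a
# local CT patch (stage 1), then two fully connected layers score cancer
# probability (stage 2). Not the competition code; shapes are assumptions.
from tensorflow.keras import layers, models

# Stage 1: 3-D CNN over a local CT patch, producing a feature vector.
stage1 = models.Sequential([
    layers.Input(shape=(32, 32, 32, 1)),              # one 3-D patch of the scan
    layers.Conv3D(16, (3, 3, 3), activation="relu"),
    layers.MaxPooling3D((2, 2, 2)),
    layers.Conv3D(32, (3, 3, 3), activation="relu"),
    layers.GlobalAveragePooling3D(),
])

# Stage 2: two fully connected layers classifying cancer probability.
stage2 = models.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # probability of malignancy
])

patch = layers.Input(shape=(32, 32, 32, 1))
model = models.Model(patch, stage2(stage1(patch)))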
4. MODULES
In this project, we are using brain tumor MRI images to build a deep learning
auto-stack CNN model. To implement this project, we use the following
modules.

1) UPLOAD MRI IMAGE: Using this module, we upload the MRI training
images; the application then reads all images and converts them to
greyscale format.
2) OTSU THRESHOLDING: Using this module, we apply the Otsu
thresholding technique to each image to extract features (see the
sketch after this list).
3) GENERATE TRAIN & TEST MODEL: Using this module, we build
an array of pixel features from all images and then split the dataset
into training and test sets, so that accuracy can be calculated by
applying the trained model to the test images.
4) GENERATE DEEP LEARNING CNN MODEL: Using this module,
we feed the training and test data into the auto-stack CNN model to
build a trained classifier.
5) GET DRIVEHQ IMAGES: Using this module, we read a test image
from the DriveHQ website; the application then applies the CNN
classifier model to that test image to predict whether the image
contains a tumor or not.
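The sketch below shows how modules 1-3 might look in Python, assuming OpenCV,
NumPy, and scikit-learn are available; the dataset folder, the 64x64 image size,
and the file-name labelling convention are hypothetical, not taken from this
report. The CNN from the sketch in Section 3 could then be fit on X_train
(reshaped back to 64x64x1 images) for module 4.

# A minimal sketch of modules 1-3: greyscale loading, Otsu thresholding,
# and the train/test split. Paths and sizes are hypothetical.
import glob

import cv2
import numpy as np
from sklearn.model_selection import train_test_split

features, labels = [], []
for path in glob.glob("mri_images/*.png"):           # hypothetical dataset folder
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)     # module 1: read in greyscale
    if img is None:                                  # skip unreadable files
        continue
    img = cv2.resize(img, (64, 64))
    # Module 2: Otsu's method automatically picks the threshold that best
    # separates the bright (tumour) region from the darker background.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    features.append(binary.flatten() / 255.0)        # module 3: pixel-feature array
    labels.append(1 if "tumor" in path else 0)       # hypothetical labelling

# Module 3: build the feature matrix and split into train and test sets.
X = np.asarray(features, dtype=np.float32)
y = np.asarray(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)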
5. SOFTWARE REQUIREMENTS SPECIFICATION

5.1 SOFTWARE REQUIREMENTS:


Operating System : Windows 11

Front End : HTML, JS, CSS

Technologies : Java, J2EE

Database : Oracle 10g

Web Server : Tomcat 6.0

5.2 HARDWARE REQUIREMENTS:


Processor : PENTIUM IV, 2.0 GHz

Hard disk : 160 GB

RAM : 2 GB
