
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 13, No. 4, December 2024, pp. 4833~4842


ISSN: 2252-8938, DOI: 10.11591/ijai.v13.i4.pp4833-4842

Automated diagnosis of brain tumor classification and segmentation of magnetic resonance imaging images

Chandrakala B. Muddaraju1, Shrinivasa2, Shobha Narasimhamurthy3, Vaishali Sontakke4


1Department of Information Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India
2Department of Computer and Communication Engineering, NITTE (Deemed to be University), NMAM Institute of Technology, Nitte, India
3Department of Computer Science and Design, Dayananda Sagar College of Engineering, Bangalore, India
4Department of Information Science and Engineering, East Point College of Engineering and Technology, Bangalore, India

Article history:
Received Jan 31, 2024
Revised Mar 22, 2024
Accepted Apr 17, 2024

Keywords:
Brain tumors
Classification
Convolutional neural networks
Magnetic resonance imaging
Segmentation

ABSTRACT
Brain tumors are among the most prevalent disorders of the central nervous system and can be life-threatening. Early diagnosis is crucial for patients to receive the best treatment, yet the manual identification procedure is time-consuming and prone to mistakes, so radiologists need an automated approach to detect brain tumors in images reliably. In this work, the problem of fully automated brain tumor classification and segmentation of magnetic resonance imaging (MRI) images, covering meningioma, glioma, pituitary, and no-tumor cases, is taken into consideration. A convolutional neural network (CNN) and a mask region-based convolutional neural network (mask R-CNN) are proposed for the classification and segmentation problems, respectively. This study employed 3,200 images as a training set, and the system achieved 96% accuracy in classifying the tumors and 94% accuracy in segmenting them.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Chandrakala B. Muddaraju
Department of Information Science and Engineering, Dayananda Sagar College of Engineering
Bangalore, India
Email: [email protected]

1. INTRODUCTION
A brain tumor is an abnormal growth emerging from the brain tissues, which can be fatal if not identified and properly treated at an early stage. In contrast to invasive procedures like tissue biopsies, medical workers often use magnetic resonance imaging (MRI) and computerized tomography (CT) scans to obtain comprehensive images of the brain for initial analysis. Computer-based image evaluation, used in conjunction with medical expertise, can greatly benefit early diagnosis. Image classification and segmentation, with both supervised and unsupervised feature extraction, have long been investigated using a variety of techniques in image processing and computer vision. The convolutional neural network (CNN) [1] has recently emerged as the most widely used method for image classification and segmentation in a variety of research fields, including diagnostic imaging, surveillance cameras, and factory automation.
CNN's main draw is its capacity to extract complex information from the input for classification tasks [2]. For instance, CNN designs like the Alex Krizhevsky network (AlexNet) are frequently used for medical image segmentation, while GoogLeNet and ImageNet are widely utilized for computer vision and visual recognition. However, the computational expense and lengthy training times linked to such system designs constrained CNN applications over the past ten years. More recently, because of improvements in
current computer technology, particularly the graphics processing unit (GPU), CNN performance has significantly increased while processing times have been reduced [3].
Deep learning algorithms are now widely utilized in medical imaging to recognize regional anatomical features, locate organs and body parts, and recognize various cell types [4]. According to the literature, classifiers, neural networks, and machine learning algorithms can readily distinguish between normal and diseased classes of brain MRI. However, research on CNN's potential for tumor identification and segmentation is still in its early stages and is not very thorough. Figures 1(a) to 1(d) show the types of tumors considered in this study.

Figure 1. Types of tumors considered in this study; (a) pituitary tumor, (b) glioma tumor, (c) meningioma tumor, and (d) no tumor

2. RELATED WORK
Numerous studies have been conducted on region of interest (ROI) segmentation and multi-view classification using CNNs. Tables 1 and 2 present surveys of healthcare applications based on classification and segmentation, respectively. This section reviews the prior work and background needed to understand the present study; the principles of image segmentation, along with its application and composition, are discussed for the benefit of the reader. One study classified images from the contrast-enhanced magnetic resonance imaging (CE-MRI) dataset using a kernel learning algorithm, while employing a CNN to extract the features. The authors also contrasted the outcomes with several classifiers, including the support vector machine and radial basis function, noting that CNN is more effective because it yields more precise feature extraction results, which are necessary for the classification step. Their proposal used two different approaches: first, a CNN model with a 28×28-pixel input layer, followed by convolution, batch normalization, rectified linear unit (ReLU), max pooling, and one convolution layer; and second, a hybrid of CNN and kernel extreme learning machine (KE-CNN) to categorize the images into three groups. The two components of KE-CNN are the CNN, used for feature extraction, and the kernel extreme learning machine (KELM), which takes the resulting feature vector as input, replaces the fully connected layer, and performs the classification; a radial basis function (RBF) serves as the kernel function in KELM. Despite the authors' revised solution model, an accuracy of 93.68% was still attained [5]–[7].
Another approach used a pre-trained deep classification network as the feature extractor, building on block-wise fine-tuning with transfer learning, which works with small datasets and produces workable results. Even though the proposal included a pre-processing step, the average accuracy was 94.82% [8]. The pre-processing was meant to reduce memory needs and accelerate the process by normalizing the CE-MRI dataset with a minimum-maximum strategy and then resizing the images from 512×512 to 224×224. The scaled images were then replicated three times to match the input size of the visual geometry group 19 (VGG19) network. Because of the small dataset, the weights were initialized from the pre-trained model during training to prevent overfitting. Additionally, the 19 layers were divided into 6 blocks, because processing and fine-tuning each of the 19 layers individually would take a long time; an overall accuracy of 89.52% was achieved. The group that first published the image database utilized in this research identified tumor types using an enhanced tumor ROI, image dilatation, and a ring-form partition, and retrieved features with an accuracy of 91.28% using the intensity histogram. Further articles that made use of the same data are addressed there, covering several network types, including pre-trained networks, capsule networks, and additional convolutional network designs, combined with neural network models for feature extraction and classification [9].
Another work utilized the Canny edge detection model compounded with adaptive thresholding for extracting the ROI based on edge detection methodologies; the dataset included 102 images. Images were initially pre-processed, after which two CNN architectures were used: the first with Canny edge detection and the second with adaptive thresholding. The Harris technique is used to extract characteristic features from the segmented image, which is then indicated by a level number. Two neural networks are then applied, the first to determine whether the brain is healthy or has a tumor, and the second to determine the type of tumor. In a comparison of these two models, the Canny edge detection technique produced more accurate results [10]–[12].
Another proposed segmentation method was histogram-based, treating brain tumor segmentation as a three-class classification problem (tumor comprising necrosis, edema, and normal tissue) using the two modalities fluid-attenuated inversion recovery (FLAIR) and T1. A region-based active contour model on the FLAIR modality was used to find the aberrant areas. Using the k-means approach and contrast adjustment of the T1 modality, the edema and tumor tissues were discriminated within the aberrant areas, achieving a dice ratio and sensitivity of 73.6% and 90.3%, respectively [13]. Another system incorporated two feature-extraction techniques before applying a binary classifier to categorize the input. It extracted features using a two-dimensional (2D) wavelet approach, making several copies of the input and applying various functions to each to obtain various degrees of complexity with an estimate of each, and also applied the previously mentioned Gabor filter method. For the classification, back-propagation neural networks were used, with 270 neurons in the input layer, 90 neurons in a single hidden layer, and three neurons in the output layer [14].
One study introduced a technique to extract brain tumors from 2D MRI using the fuzzy c-means clustering approach, followed by conventional classifiers and CNN. Real-time data with various tumor sizes, locations, forms, and image intensities was used for the experimental investigation. Six conventional classifiers were used in the scikit-learn-implemented traditional classifier part; the authors then moved to a CNN, constructed using Keras and TensorFlow, because it performs better than the conventional classifiers, and it achieved an impressive accuracy rate of 97.87% [15]. Another paper proposed brain tumor segmentation using JeisloNet, a U-Net architecture created for automatically segmenting problematic tissues, such as brain tumors, in MR scan data. The Cancer Imaging Archive provided MR scans of the brain, which were pre-processed and divided into train and test sets in an 80:20 ratio. The model design, based on the U-Net, consists of an expansion or up-sampling path and a contraction or down-sampling (encoding) path, with input blocks of 256×256. The contracting route comprises four steps, each consisting of two convolutional layers followed by a ReLU and a down-sampling procedure. The experiment's findings demonstrated that the JeisloNet U-Net architecture performed well, with a Dice similarity coefficient (DSC) of 0.9931, a mean intersection over union (IoU) of 0.9321, a global accuracy of 0.9928, and an error rate of 0.0072 [16]–[18].
Another paper provides a CNN training technique that uses learned linear mapping functions and non-negative integers to approximate weights and activations. The fundamental insight is that the scaling and translation factors for weights can be trained along with other parameters, whereas activations are typically normalized by batch normalization and ReLU, allowing activations to be accurately approximated by a single pre-calculated function, shared across all layers, based on the common half-wave Gaussian distribution. According to evaluation findings from the BraTS 2018 competition, the model quantized by the trained affine mapping technique using 2-bit weights and activations obtains an average dice value within 0.01 of the full-precision model [19].
Another study develops a segmentation approach for brain tumors in which images are segmented using region-based and edge-based methods, making use of the brain tumor segmentation 2020 (BraTS2020) dataset. A comparison of the edge-based and region-based approaches to image segmentation is carried out using the U-Net with a ResNet50 encoder architecture. The suggested approach includes data preparation, edge and region identification, and segmentation: the provided MRI image is first pre-processed, then brain or region boundary detection is performed, and finally segmentation is done to display the tumor area. The method achieved a dice loss of 0.008768, an IoU of 0.7542, an F1 score of 0.9870, an accuracy of 0.9935, a precision of 0.9852, a recall of 0.9888, and a specificity of 0.9951 [20]. To further the field of brain tumor segmentation research, another work suggests the 2D image segmentation technique BU-Net, in which the basic U-Net design is augmented with wide context (WC), residual extended skip (RES), and a bespoke loss function. The high-grade glioma data from BraTS2017 and the testing data for the BraTS 2017 and 2018 datasets were used to evaluate the proposed BU-Net. Tumor core (TC), whole tumor (WT), and enhancing core (EC) were the three main labels to segment. The proposed model was first assessed on the 210-case BraTS 2017 high-grade glioma (HGG) dataset; 80% of these cases were utilized for training, while the remaining 20% were used for testing, with the BraTS challenge establishing the training and testing instances [21].
MRI scans provide rich information about brain tumor structure, making them a crucial tool for accurate treatment. One work presents and tests a brain tumor segmentation and identification method on data in which four separate MRI modalities, T1, T2, T1Gd, and FLAIR, are included for each patient; as a result, a segmentation image and the ground truth for tumor segmentation, or class label, are supplied. The tumor location was localized using the U-Net deep learning model; the contracting section of U-Net effectively captures the information from the compressed feature-extraction path, while the expansion route performs localization. Asymmetric parameter tuning of the contracting and expanding route layers was used to obtain high accuracy [22], [23]. In another paper, the proposed USeg-Net and SegU-Net are hybrids of the SegNet and U-Net architectures. By comparing the suggested hybrid architectures with the widely used CNN segmentation models U-Net, SegNet3, and SegNet5, it is possible to examine their segmentation capabilities in terms of accuracy. The enhancing tumor, the necrotic and non-enhancing tumor, peritumoral edema, and everything else are represented by the colors green, red, yellow, and grey, respectively. Each model accepts 172,800 neurons in its input layer and passes them through the various hidden layers; in the output layer, these neurons are divided into four classes. Five metrics are taken into account to assess how effectively each model executes segmentation tasks: global accuracy, mean accuracy, mean intersection over union, weighted IoU, and mean BF score [24]. The mean accuracy for the U-SegNet, Seg-UNet, and Res-SegNet was 91.6%, 93.1%, and 93.3%, respectively.
Finally, one paper suggests an automated technique using the mask region-based convolutional neural network (RCNN) framework to improve the stability of brain tumor detection and segmentation, with tumor segmentation and localization achieved with pixel-to-pixel accuracy utilizing the region proposal network (RPN) [25]. To demonstrate the effectiveness of the strategy, a thorough quantitative and qualitative comparison against recently developed methodologies was conducted on two public datasets. The suggested method is resilient to MRI aberrations such as noise, bias field effects, and varied acquisition angles, as well as to differences in tumor size, shape, location, and overlap with normal brain tissue. The researchers used transfer learning to refine the model on MRI segmentation data after initializing it with weights pre-trained on the common objects in context (COCO) dataset, employing a 70:30 split, with training sets making up 70% and test sets 30%. Table 1 shows the survey on healthcare applications based on classification, and Table 2 shows the survey on healthcare applications based on segmentation.

Table 1. Survey on healthcare applications based on classification


Sr no. Paper Year Author Dataset Accuracy (%)
1 [3] 2018 Pashaei et al. CE-MRI Dataset 93.68
2 [4] 2019 Swati et al. CE-MRI Dataset 94.82
3 [5] 2015 Cheng et al. CE-MRI Dataset 91.28
4 [6] 2017 Badran et al. Kaggle Dataset 90
5 [7] 2016 Song et al. BraTS 2012 and 2015 98
6 [8] 2018 Ismael and Qader Figshare Website 91.9
7 [9] 2020 Hossain et al. BraTS Dataset 97.87

Table 2. Survey on healthcare applications based on segmentation


Sr no. Paper Year Author Dataset Accuracy (%)
1 [10] 2021 Zhang et al. The Cancer Imaging Archive (TCIA) 99.2
2 [11] 2022 Wang and Lin BraTS 2018 88
3 [12] 2022 Kapoor and Agarwal BraTS 2017 and 2018 90
4 [13] 2021 Rehman et al. BraTS 2018 98
5 [14] 2020 Arora et al. BraTS Dataset 93.3
6 [15] 2021 Daimary et al. Brain Tumor Figshare (BTF) 95.1

3. METHOD
3.1. Data preparation
In this study, a T1-weighted brain tumor dataset comprising 4,000 MRI images is used. The dataset, sourced from Kaggle, encompasses meningioma, glioma, pituitary, and no-tumor cases in the axial plane. The dataset is initially partitioned into training and testing sets.

3.2. Proposed system architecture


3.2.1. Proposed architecture for classification
The Figure 2 represents the block diagram of classification. It consists of two phases i.e., the training
phase and the testing phase. Before the training and testing phase, the dataset will be split into 80:20 ratios for
the training and testing part.
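As a concrete illustration of this split (a sketch, not the authors' released code; the folder name brain_tumor_mri/, the 224×224 image size, the seed, and the batch size are assumptions), a four-class Kaggle-style image folder could be loaded and partitioned 80:20 with Keras:

```python
import tensorflow as tf

# Hypothetical layout: brain_tumor_mri/<class_name>/*.jpg with four classes:
# glioma, meningioma, no_tumor, pituitary.
IMG_SIZE = (224, 224)  # assumed resize target; the paper does not state one

train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri",
    validation_split=0.2,        # 80:20 train/test split as in the paper
    subset="training",
    seed=42,                     # same seed keeps the two subsets disjoint
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",    # one-hot labels for the four classes
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",
)
```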

3.3. Training phase


In the training phase, the first step is to train the model with the image dataset. After the images are given to the model, they are labelled based on the requirement. In this project, the brain tumor MRI images fall into four classes, so the labels are as follows: pituitary tumor in Figure 1(a), glioma tumor in Figure 1(b), meningioma tumor in Figure 1(c), and no tumor in Figure 1(d). The images then undergo the pre-processing step, where they are resized. Feature extraction, a form of dimensionality reduction, represents a sizable portion of the image pixels compactly while effectively capturing the image's relevant details. The output of feature extraction, which consists of feature maps, is given as input to the CNN. In the CNN, the images pass through four stages, convo 2D+ReLU, max pooling, a flatten layer, and a dense layer, which together produce the prediction model. After every cycle, the model calculates the loss function, which quantifies the prediction error.
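The paper does not name the loss function; for a four-class softmax classifier, a standard choice (assumed here, not stated in the paper) is the categorical cross-entropy, computed per sample as

$$ L = -\sum_{c=1}^{4} y_c \log \hat{y}_c $$

where $y_c$ is the one-hot ground-truth label and $\hat{y}_c$ is the predicted softmax probability for class $c$; the loss is averaged over the batch after every cycle.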

Figure 2. Block diagram of classification

3.4. Testing phase


In the testing phase, the images are given as input to the pre-processing step, where the same procedure as in training takes place. After pre-processing, the extracted feature maps are given as input to the convolutional neural network, and the final classified results are obtained by comparison with the prediction model.

3.5. Architecture of convolutional neural network


CNNs are a family of artificial neural networks used most frequently in deep learning to interpret visual data, as depicted in Figure 3. CNNs are created by modifying multilayer perceptrons, in which each neuron in one fully-connected layer is linked to every neuron in the preceding layer. A CNN is an advanced kind of artificial neural network that substitutes the convolution operation for general matrix multiplication in at least one of its layers. The CNN model used here has four layer types: convo 2D+ReLU, max pooling, a flatten layer, and a dense layer.

Figure 3. CNN architecture


3.5.1. Convo 2D+rectified linear unit


This layer is known as the feature extractor layer because the characteristics of the image are extracted within it. To perform the convolution operation, a small portion of the image, the receptive field, is connected to the convo layer, which computes the linear combination between the filter and the receptive field. The operation's output is a single number that contributes to the output volume. The filter is then moved by a stride across the next receptive field of the same input image and the process is repeated, continuing until the entire image is processed. The output becomes the input of the subsequent layer. Additionally, the convo layer applies ReLU activation to set all negative values to zero.

3.5.2. Max pooling


After convolution, the spatial size of the input is compressed using a pooling layer, which is employed between two convolution layers. Applying a fully-connected layer directly after the convo layer without pooling would be computationally costly and undesirable, so max pooling is used to reduce the spatial size of the input. In Figure 4, 2×2 max pooling with a stride of 2 is applied to a single depth slice, reducing a 4×4 input to 2×2.

Figure 4. Max pooling
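As a minimal numeric sketch of the operation in Figure 4 (the input values are made up), 2×2 max pooling with stride 2 keeps the largest value in each non-overlapping 2×2 window, halving each spatial dimension:

```python
import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 6, 8]])  # 4x4 single-channel slice

# 2x2 max pooling with stride 2: group the array into 2x2 blocks and take
# the maximum of each block, so the 4x4 input becomes a 2x2 output.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6 4]
               #  [7 9]]
```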

3.5.3. Flatten layer


This layer, depicted in Figure 5, converts the 2D feature maps into a 1D feature vector. This transformation is needed so that the features extracted by the convolutional layers can be passed on to the dense layers, and it aids in more efficient data handling and analysis.

Figure 5. Flatten layer

3.5.4. Dense layer


In the dense layer, a basic layer of neurons, each neuron receives input from every neuron in the preceding layer. This layer uses the results obtained from the preceding convolutional layers to categorize images effectively; its full connectivity allows it to capture complex patterns and features for accurate image classification.
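Putting the four layer types together, a minimal Keras sketch of such a classifier follows; the filter counts, layer repetitions, input size, optimizer, and loss are assumptions, since the paper does not list its exact hyperparameters:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal sketch of the described pipeline: Conv2D+ReLU -> MaxPooling ->
# Flatten -> Dense. Filter counts and the 224x224 input are assumptions.
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # feature extractor
    layers.MaxPooling2D(pool_size=(2, 2)),          # spatial down-sampling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),                               # 2D maps -> 1D vector
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),          # four tumor classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # assumed loss, see Sec. 3.3
              metrics=["accuracy"])
```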

3.6. Proposed architecture for tumor segmentation


3.6.1. Preprocessing
In the pre-processing step, noise is reduced using a median filter, contributing to improved image
quality. Simultaneously, resizing is implemented to optimize the images for further analysis or tasks. These
steps collectively enhance the data quality and prepare the images for more effective processing.
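A minimal sketch of this pre-processing step using OpenCV (the 5×5 median kernel and the 224×224 target size are assumed values, not stated in the paper):

```python
import cv2

def preprocess(path, size=(224, 224)):
    """Median-filter an MRI slice to suppress noise, then resize it."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)   # 5x5 median filter (assumed kernel size)
    return cv2.resize(img, size)   # resize to the assumed network input size
```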


3.6.2. Tumor localization and segmentation using mask region-based convolutional neural network
Segmentation is employed to autonomously identify and isolate brain tumors from a complex
environment without human intervention. The mask RCNN as shown in Figure 6 is utilized for this purpose,
aiming to identify tumor and non-tumor areas within the provided MRI images. The goal is to achieve precise
delineation and classification of tumor regions through automated processes.

Figure 6. Proposed framework architecture

3.6.3. Feature extraction


A pre-defined CNN is used to obtain the relevant features from the input image; ResNet101 is used in the implementation to extract more discriminative and reliable features. The final feature map from the intermediate layers, giving a more accurate representation of the image at various scales, is given as input to the RPN.
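As an illustration of a ResNet101 feature-extraction backbone (a sketch of the idea only; the actual Mask R-CNN backbone also taps intermediate stages to build a feature pyramid, and the 512×512 input is an assumption):

```python
import tensorflow as tf

# ResNet101 pre-trained on ImageNet, with the classification head removed,
# so the output is a spatial feature map rather than class scores.
backbone = tf.keras.applications.ResNet101(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3))

features = backbone(tf.zeros((1, 512, 512, 3)))  # -> (1, 16, 16, 2048)
```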

3.6.4. Region proposal network


The RPN receives the feature map obtained in the preceding step and generates ROIs. Scanning the image with a sliding window, a 3×3 convolutional layer creates anchors that represent candidate bounding boxes of various sizes, dispersed over the whole image; around 20k overlapping anchors of various sizes and scales cover the image. The RPN has two outputs, binary classification and bounding box regression. Binary classification identifies whether an anchor contains an object or not, and bounding box regression refines the bounding boxes according to the IoU value. An anchor with an IoU of more than 0.7 with a ground-truth box is defined as positive, as opposed to a negative anchor.
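The IoU test that drives this anchor labeling is straightforward to state in code; a sketch with boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def label_anchor(anchor, gt_box):
    """Positive if IoU with the ground-truth box exceeds 0.7, per Sec. 3.6.4."""
    return "positive" if iou(anchor, gt_box) > 0.7 else "negative"
```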

3.6.5. Region of interest pooling


The generated ROIs as well as the feature map are the inputs to this network. The ROI pooling network is deeper than the RPN; it classifies each region of interest into a category, such as tumor or non-tumor, and refines the bounding box. The goal of the bounding box regression is to precisely enclose the tumor region by adjusting the size and placement of the bounding box. Because the feature map is down-sampled relative to the original image, the bounds of an ROI typically do not align with the resolution of the feature map. The ROI align layer is therefore used to extract fixed-length feature maps for arbitrarily defined candidate regions, scaling the feature maps to a fixed size.
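TensorFlow's crop_and_resize is a close stand-in for the ROI align step described here: it bilinearly resamples a fixed-size crop from the feature map for each normalized ROI. A sketch with toy shapes (the 16×16×256 feature map and 7×7 output size are assumptions):

```python
import tensorflow as tf

feature_map = tf.random.normal((1, 16, 16, 256))   # toy backbone output
# ROIs in normalized [y1, x1, y2, x2] coordinates of the feature map.
rois = tf.constant([[0.1, 0.2, 0.6, 0.8]])
box_indices = tf.constant([0])                      # each ROI's batch index

# Bilinear resampling to a fixed 7x7 output, independent of the ROI size,
# approximating the ROI align layer of mask R-CNN.
aligned = tf.image.crop_and_resize(feature_map, rois, box_indices, (7, 7))
print(aligned.shape)  # (1, 7, 7, 256)
```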

3.6.6. Segmentation mask


A segmentation mask is represented by floating-point values, which carry more information than a binary mask, and is produced by the segmentation network from the positive ROIs recognized by the ROI classifier. During the training stage, the ground-truth masks are scaled down to evaluate the loss against the predicted mask. During inference, however, the predicted mask is scaled up to match the dimensions of the ROI bounding box, producing the final output mask.
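A sketch of the inference-time step just described: the small floating-point mask is scaled up to the ROI bounding box and binarized (the 28×28 mask size and 0.5 threshold are assumptions in line with common mask R-CNN settings, not values stated in the paper):

```python
import cv2
import numpy as np

def mask_to_roi(soft_mask, box_w, box_h, threshold=0.5):
    """Upscale a small floating-point mask to the ROI box and binarize it."""
    resized = cv2.resize(soft_mask, (box_w, box_h),
                         interpolation=cv2.INTER_LINEAR)
    return (resized > threshold).astype(np.uint8)

soft_mask = np.random.rand(28, 28).astype(np.float32)  # toy 28x28 prediction
final_mask = mask_to_roi(soft_mask, box_w=120, box_h=90)
print(final_mask.shape)  # (90, 120)
```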


4. RESULTS AND DISCUSSION


This section presents the results of the suggested approach for brain tumors: classification results in Figure 7, with Figure 7(a) no tumor, Figure 7(b) glioma tumor, Figure 7(c) pituitary tumor, and Figure 7(d) meningioma tumor, and segmentation results in Figures 8 to 10. Many images were used to validate the suggested technique, which was developed using the TensorFlow and Keras packages in Python. The outcomes are examined twice: first in terms of the classification accuracy and subsequently in terms of the accuracy of the extracted tumor mask. The proposed methodology was optimized to categorize tumors with maximum performance for a particular kind of imaging modality, namely T1-weighted MRI images. Additionally, only axial plane slices are used in this experiment, since these have greater resolution and less noise than coronal and sagittal plane slices. Radiologists may continue with further examination utilizing coronal and sagittal plane slices once the tumor is initially found in the axial plane.

Figure 7. Classification results; (a) no tumor, (b) glioma tumor, (c) pituitary tumor, and (d) meningioma tumor

Figure 8. The segmentation of meningioma tumor
Figure 9. The segmentation of glioma tumor

Figure 10. The segmentation of pituitary tumor

5. CONCLUSION
This work focuses on the classification and segmentation of brain tumors from T1-weighted MRI images using a CNN-based technique. The method achieves an overall average classification accuracy of 96%, and segmentation performed with mask R-CNN yields an accuracy of 94%. Extending the automated approach to FLAIR and T2-weighted MRI images, as well as to slices from other planes such as coronal and sagittal, could provide valuable diagnostic support for medical facilities facing shortages of skilled personnel and resources, underscoring the method's adaptability and potential impact on healthcare outcomes in resource-constrained settings.


REFERENCES
[1] S. Rashmi, B. M. Chandrakala, M. R. Divya, and S. H. Megha, “CNN based multi-view classification and ROI segmentation: A
survey,” Global Transitions Proceedings, vol. 3, no. 1, pp. 86–90, 2022, doi: 10.1016/j.gltp.2022.04.019.
[2] H. A. Khan, W. Jue, M. Mushtaq, and M. U. Mushtaq, “Brain tumor classification in MRI image using convolutional neural
network,” Mathematical Biosciences and Engineering, vol. 17, no. 5, pp. 6203–6216, 2020, doi: 10.3934/MBE.2020328.
[3] A. Pashaei, H. Sajedi, and N. Jazayeri, “Brain tumor classification via convolutional neural network and extreme learning
machines,” 2018 8th International Conference on Computer and Knowledge Engineering, ICCKE 2018, pp. 314–319, 2018, doi:
10.1109/ICCKE.2018.8566571.
[4] Z. N. K. Swati et al., “Brain tumor classification for MR images using transfer learning and fine-tuning,” Computerized Medical
Imaging and Graphics, vol. 75, pp. 34–46, 2019, doi: 10.1016/j.compmedimag.2019.05.001.
[5] J. Cheng et al., “Enhanced performance of brain tumor classification via tumor region augmentation and partition,” PLoS ONE, vol.
10, no. 10, 2015, doi: 10.1371/journal.pone.0140381.
[6] E. F. Badran, E. G. Mahmoud, and N. Hamdy, “An algorithm for detecting brain tumors in MRI images,” Proceedings, ICCES’2010
- 2010 International Conference on Computer Engineering and Systems, pp. 368–373, 2010, doi: 10.1109/ICCES.2010.5674887.
[7] Y. Song, Z. Ji, Q. Sun, and Y. Zheng, “A novel brain tumor segmentation from multi-modality MRI via a level-set-based model,”
Journal of Signal Processing Systems, vol. 87, no. 2, pp. 249–257, 2017, doi: 10.1007/s11265-016-1188-4.
[8] M. R. Ismael and I. A. -Qader, “Brain tumor classification via statistical features and back-propagation neural network,” IEEE
International Conference on Electro Information Technology, vol. 2018, pp. 252–257, 2018, doi: 10.1109/EIT.2018.8500308.
[9] T. Hossain, F. S. Shishir, M. Ashraf, M. A. Al Nasim, and F. Muhammad Shah, “Brain tumor detection using convolutional neural
network,” 1st International Conference on Advances in Science, Engineering and Robotics Technology 2019, ICASERT 2019, 2019,
doi: 10.1109/ICASERT.2019.8934561.
[10] J. Zhang, Z. Jiang, J. Dong, Y. Hou, and B. Liu, “Attention gate ResU-Net for automatic MRI brain tumor segmentation,” IEEE
Access, vol. 8, pp. 58533–58545, 2020, doi: 10.1109/ACCESS.2020.2983075.
[11] W. Wang and B. Lin, “Optimizing 3D U-Net-based brain tumor segmentation with integer-arithmetic deep learning accelerators,”
ACM Journal on Emerging Technologies in Computing Systems, vol. 18, no. 2, 2022, doi: 10.1145/3495210.
[12] A. Kapoor and R. Agarwal, “Brain image segmentation based on U-Net architecture with adaptive histogram equalization,” 2021
9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions), ICRITO 2021,
2021, doi: 10.1109/ICRITO51393.2021.9596335.
[13] M. U. Rehman, S. Cho, J. H. Kim, and K. T. Chong, “Bu-net: Brain tumor segmentation using modified u-net architecture,”
Electronics, vol. 9, no. 12, pp. 1–12, 2020, doi: 10.3390/electronics9122203.
[14] A. Arora, A. Jayal, M. Gupta, P. Mittal, and S. C. Satapathy, “Brain tumor segmentation of MRI images using processed image
driven u-net architecture,” Computers, vol. 10, no. 11, 2021, doi: 10.3390/computers10110139.
[15] D. Daimary, M. B. Bora, K. Amitab, and D. Kandar, “Brain tumor segmentation from MRI images using hybrid convolutional
neural networks,” Procedia Computer Science, vol. 167, pp. 2419–2428, 2020, doi: 10.1016/j.procs.2020.03.295.
[16] M. Masood, T. Nazir, M. Nawaz, A. Javed, M. Iqbal, and A. Mehmood, “Brain tumor localization and segmentation using mask
RCNN,” Frontiers of Computer Science, vol. 15, no. 6, 2021, doi: 10.1007/s11704-020-0105-y.
[17] N. Noreen, S. Palaniappan, A. Qayyum, I. Ahmad, M. Imran, and M. Shoaib, “A deep learning model based on concatenation
approach for the diagnosis of brain tumor,” IEEE Access, vol. 8, pp. 55135–55144, 2020, doi: 10.1109/ACCESS.2020.2978629.
[18] S. Huda, J. Yearwood, H. F. Jelinek, M. M. Hassan, G. Fortino, and M. Buckland, “A hybrid feature selection with ensemble
classification for imbalanced healthcare data: a case study for brain tumor diagnosis,” IEEE Access, vol. 4, pp. 9145–9154, 2016,
doi: 10.1109/ACCESS.2016.2647238.
[19] W. Deng, Q. Shi, M. Wang, B. Zheng, and N. Ning, “Deep learning-based HCNN and CRF-RRNN model for brain tumor
segmentation,” IEEE Access, vol. 8, pp. 26665–26675, 2020, doi: 10.1109/ACCESS.2020.2966879.
[20] A. S. Musallam, A. S. Sherif, and M. K. Hussein, “A new convolutional neural network architecture for automatic detection of brain
tumors in magnetic resonance imaging images,” IEEE Access, vol. 10, pp. 2775–2782, 2022, doi: 10.1109/ACCESS.2022.3140289.
[21] W. Wang, F. Bu, Z. Lin, and S. Zhai, “Learning methods of convolutional neural network combined with image feature extraction
in brain tumor detection,” IEEE Access, vol. 8, pp. 152659–152668, 2020, doi: 10.1109/ACCESS.2020.3016282.
[22] A. Farzamnia, S. H. Hazaveh, S. S. Siadat, and E. G. Moung, “MRI brain tumor detection methods using contourlet transform based
on time adaptive self-organizing map,” IEEE Access, vol. 11, pp. 113480–113492, 2023, doi: 10.1109/ACCESS.2023.3322450.
[23] Z. Atha and J. Chaki, “SSBTCNet: semi-supervised brain tumor classification network,” IEEE Access, vol. 11, pp. 141485–141499,
2023, doi: 10.1109/ACCESS.2023.3343126.
[24] H. H. Sultan, N. M. Salem, and W. Al-Atabany, “Multi-classification of brain tumor images using deep neural network,” IEEE
Access, vol. 7, pp. 69215–69225, 2019, doi: 10.1109/ACCESS.2019.2919122.
[25] S. Ahmad and P. K. Choudhury, “On the performance of deep transfer learning networks for brain tumor detection using MR
images,” IEEE Access, vol. 10, pp. 59099–59114, 2022, doi: 10.1109/ACCESS.2022.3179376.

BIOGRAPHIES OF AUTHORS

Chandrakala B. Muddaraju is working as an Associate Professor in the Department of Information Science & Engineering, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India. She received her Ph.D. degree in Computer and Information Science and Engineering from Visvesvaraya Technological University, Belagavi, Karnataka. She has been working as an Associate Professor for the past 20 years. Her research interests are in cloud computing, network security, cyber security, artificial intelligence, and machine learning. She has published 44 papers in journals and conferences and holds 5 patents. She is guiding research scholars in various domains. She can be contacted at email: [email protected].


Shrinivasa is working as an Assistant Professor Gd.-III in the Department of Computer and Communication Engineering, NMAM Institute of Technology, Nitte, Karkala. He has about 16 years of teaching and two years of industrial experience. He received his Bachelor of Engineering degree in Computer Science and Engineering and his M.Tech. degree in Computer Science and Engineering with Distinction from Visvesvaraya Technological University, Belagavi, where he is currently pursuing his Ph.D. He has published eight papers in reputed journals and also holds a patent. His areas of research include cloud computing and security. He is an active member of ISTE. He can be contacted at email: [email protected].

Shobha Narasimhamurthy is working as an Associate Professor in the Department of Computer Science & Design, Dayananda Sagar College of Engineering, Bengaluru. She obtained her Ph.D. in Computer and Information Science from Visvesvaraya Technological University, Karnataka. She has published around 11 papers in international and national journals and conferences and has published two patents. Her research interests include data mining, machine learning, and data science. She can be contacted at email: [email protected].

Vaishali Sontakke is working as an Assistant Professor in the Department of Information Science & Engineering, East Point College of Engineering and Technology, Bengaluru, Karnataka, India. She is pursuing her Ph.D. degree in Computer Science and Engineering at Visvesvaraya Technological University, Belagavi, Karnataka, from which she also received her postgraduate degree in Computer Network and Engineering. She received her Bachelor's degree in Computer Science and Engineering from SSGMCE, Shegaon. She has been working as an Assistant Professor for the past 16 years. Her research interests are in big data management, artificial intelligence, and machine learning. She has published 12 papers in journals and conferences. She can be contacted at email: [email protected].
