Article

A CNN-Based Strategy to Classify MRI-Based Brain Tumors Using Deep Convolutional Network

1 Department of Computer Science and Engineering, East West University, Dhaka 1212, Bangladesh
2 Department of Artificial Intelligence, BJIT Limited, Dhaka 1212, Bangladesh
3 School of Computing, Engineering and Intelligent Systems, Ulster University, Northland Road, Londonderry BT48 7JL, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 312; https://doi.org/10.3390/app13010312
Submission received: 7 November 2022 / Revised: 30 November 2022 / Accepted: 24 December 2022 / Published: 27 December 2022
(This article belongs to the Special Issue Advances in Data Analysis for Wearable Sensors)

Abstract: A brain tumor is a severe health condition that claims many lives every year, several of them in rural areas. However, the technology to diagnose brain tumors at an early stage is not as efficient as expected. Therefore, we sought to create a reliable system that can help medical professionals identify brain tumors. Although several studies address this issue, we attempted to establish a more efficient and less error-prone classification method, trained on a comparatively large amount of real data rather than augmented data. Using a modified VGG-16 (Visual Geometry Group) architecture on 10,153 MRI (Magnetic Resonance Imaging) images covering three tumor classes (Glioma, Meningioma, and Pituitary), the network performs remarkably well. It achieved a precision of 99.4% for Glioma, 96.7% for Meningioma, and 100% for Pituitary, with an overall accuracy of 99.5%. It also attained better results than several other existing CNN architectures and state-of-the-art work.

1. Introduction

A brain tumor is a mass formed by abnormal cell growth and division inside the skull. Brain tumors are rare and can be fatal [1]. They come in a variety of shapes and sizes, can arise in any location, and present with varying image intensities [2]. Depending on their origin, brain tumors are classified as primary or metastatic. Primary tumors originate in brain tissue [3], whereas metastatic tumors become malignant elsewhere in the body and spread to the brain [1]. A timely diagnosis of a brain tumor is critical for optimal treatment planning and patient care.
Various medical imaging techniques are applied to collect information about tumors. Important modalities include Computed Tomography (CT), Positron Emission Tomography (PET), Single-Photon-Emission Computed Tomography (SPECT), Magnetic Resonance Spectroscopy (MRS), and Magnetic Resonance Imaging (MRI). These technologies can be used in conjunction to gather more specific information about tumors. MRI, however, is the most commonly employed method because of its beneficial properties. MRI is a non-invasive in vivo imaging method that uses radiofrequency waves to excite target tissues, which form internal images under the influence of a superconducting magnet [4]. A scan delivers hundreds of 2D image slices with good soft-tissue contrast, without using ionizing radiation [5,6]. During image acquisition, excitation and repetition times are modulated to create images of varied MRI sequences.
Radiologists rely on their training and experience to manually determine abnormalities in brain MRI and then categorize them into tumor types [7]. Early detection and classification of brain tumors play a key role in case evaluation and, consequently, in selecting the most appropriate treatment to save patients' lives [8]. One of the major difficulties of manual detection is the chance of misclassifying the tumor, which can lead to the wrong treatment for the patient. In addition, since time is critical, slow manual detection offers no real advantage. An automated and quick detection technique is therefore desirable [9].
The classification of brain tumors into subgroups is a more difficult scientific problem. The difficulty arises because brain tumors vary widely in shape, size, and intensity [10], and tumors of different pathological categories may appear identical [11]. In this study, we classify normal brain images together with the abnormal tumor types. The MRI dataset used here includes images of brains without tumors, as well as three different forms of brain tumor. In clinical practice, glioma, meningioma, and pituitary tumors account for approximately 45%, 15%, and 15% of all brain tumors, respectively [12]. This task is more sophisticated and demanding than conventional binary classification (normal vs. abnormal), as it not only identifies the presence of an abnormality but also categorizes it. Machine learning techniques are now frequently utilized in medical imaging [6]. In supervised techniques, an algorithm learns a mapping between input variables and their associated output labels, which is then used to estimate labels for new subjects. The primary goal is to find intrinsic patterns in the training data using techniques such as the Artificial Neural Network (ANN) [13], Support Vector Machine (SVM), and K-Nearest Neighbors (KNN) [14]. Unsupervised learning, on the other hand, relies solely on the input variables, as exemplified by fuzzy c-means [15] and the Self-Organizing Map (SOM) [16]. To enable learning, features of the training images must be extracted; these are typically grayscale, texture, and statistical characteristics. Such handcrafted features require the expertise of a specialist with considerable domain knowledge and the ability to select the most vital aspects. Furthermore, this operation takes a long time and is prone to errors when dealing with large amounts of data [17].
Deep learning was recently introduced to the medical imaging field and has shown substantial success in classification problems, specifically multiclass classification, with better accuracy [18,19,20,21,22]. Deep learning algorithms extract features through a stack of multiple layers of nonlinear processing units.
A Convolutional Neural Network (CNN) comprises several convolutional layers, pooling layers, and fully connected layers, which are used for feature extraction and segmentation, reduction of the spatial size of the representation, and classification, respectively [23,24]. The frequently used activation functions in CNNs include ReLU (Rectified Linear Unit) [25], FReLU [26], LeakyReLU [27], Swish [28], ACON (Activate or Not) [29], and SoftMax [30].
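For reference, the two of these activations used in our model, ReLU in the hidden layers and SoftMax at the output (see Section 3), can be written in a few lines of NumPy; this is an illustrative sketch, not a library implementation:

```python
import numpy as np

def relu(x):
    # ReLU: keep positive activations, clamp negatives to zero
    return np.maximum(0.0, x)

def softmax(z):
    # SoftMax: exponentiate (shifted for numerical stability), then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()
```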
In this study, our goal was to create an artificial model that can predict the type of tumor accurately within a few seconds. The proposed model is a modified CNN, inspired by the Visual Geometry Group (VGG) architecture, which performs significantly better than many other architectures. Working with medical images is harder than usual because of the sensitivity of the results, as a wrong diagnosis can be life-threatening. Therefore, our focus was primarily on achieving a high accuracy that exceeds the results of other commonly used architectures.
After the introduction, we discuss related studies and their research methodology in the Related Works section. Training of the model, along with performance measurements, is presented in the Materials and Methods section. We analyze the results of our proposed architecture in the Results section, then discuss the findings and compare its performance with other methods in the Discussion section. Finally, we conclude the paper in the Conclusions section.

2. Related Works

Various approaches have been developed in recent years to recognize brain tumors on MRI images. SVM and Neural Networks (NN) are the most widely used techniques due to their excellent performance over the past decade [31].
A Probabilistic Neural Network (PNN) has been used in a decision-making strategy partitioned into feature extraction using principal component analysis and classification using the PNN, with an accuracy ranging from 73% to 100% depending on the spread value [32].
A CNN-based deep learning model has been proposed for categorizing various forms of brain tumors, achieving a classification accuracy of 96.13% [33].
Extreme Learning Machine of Local Receptive Fields (ELM-LRF) is a deep learning paradigm that combines two distinct structures [31]. An ELM-LRF model was created with four adjustable parameters: the convolution filter size r, the number of convolution filters K, the pooling size, and the regularization coefficient C. The proposed ELM-LRF approach yielded a classification accuracy of 97.18% [34].
A Deep Neural Network (DNN) is used as a discriminator in a Generative Adversarial Network (GAN) to extract powerful features and grasp the structure of MRI images in its convolutional layers, resulting in 95.6% accuracy [35].
Multiple Kernel-Based Probabilistic Clustering (MKPC) has been used to segment the image, with a deep learning classifier used to categorize it, achieving an accuracy of 83% [36].
A CNN technique has been proposed in which the Fuzzy C-Means (FCM) method is used for brain segmentation; texture and shape features are then extracted from the segmented regions and fed into SVM and DNN classifiers, with an accuracy of 97.4% [37].
A small-kernel CNN model can also be used for classification, with 3 × 3 kernels and a stride of 1 for all convolutional layers. This model demonstrated 90.67% precision on an augmented dataset [38].
In one study, a DNN classifier combined with the Discrete Wavelet Transform (DWT) achieved a good result of 96.77% on a small dataset [39].
A deep learning technique is provided to classify multimodal brain tumors using a linear contrast augmentation approach. Before fusion, features are extracted using transfer learning from two distinct CNN models. ELM is used to classify the robust properties obtained by this technique [40].
Several researchers employed pre-trained CNN architectures and fine-tuned them for brain tumor classification. The proposed categorization system extracts attributes from brain MRI images using a pre-trained GoogLeNet. After that, by using proven classifier models and a five-fold cross-validation technique, the experiment obtains 97% accuracy [41].
The GrabCut method has been used to accurately segment the actual lesion [17]. That study performs segmentation using a U-Net architecture with ResNet50 as a baseline. The application of several deep models (ResNet50, DenseNet201, MobileNet V2, and InceptionV3) and reinforcement learning through transfer learning achieves accuracies of 91.8%, 92.8%, 92.9%, 93.1%, and 99.6%, respectively, in the categorization of brain cancers.
A multi-level CNN model is introduced, where pre-trained models, such as ResNet-50, VGG-16, and Inception V3, are used to generate trained parameters, achieving 99.89% classification accuracy [42].
A hybrid deep learning model called DeepTumorNet has been proposed, generated by modifying the layers of the GoogLeNet architecture and adding the leaky ReLU activation function. This architecture achieved 99.67% accuracy [43].
A differential deep convolutional neural network model has been suggested, in which differential feature maps are derived from the CNN feature maps using differential operators, resulting in an accuracy of 99.25% [44].
Some other state-of-the-art works are shown in Table 1.

3. Materials and Methods

Figure 1 shows the workflow of our proposed model. First, images are loaded to go through several crucial pre-processing stages. Then, the dataset is split into two parts: training and testing.

3.1. Dataset

For this study, we sought a comparatively large dataset, as we wanted to avoid data augmentation. Furthermore, we needed a dataset containing both normal and abnormal brain images, with the abnormal images labeled by subtype. To address this, we combined two publicly available Kaggle datasets [46,47] and added a small amount of data from other sources to enlarge the dataset. The final dataset contains 10,153 MRI images; samples from each class are shown in Figure 2. The class counts are 2547, 2582, 2658, and 2396 for Glioma, Meningioma, Pituitary, and No Tumor, respectively. Since there is no major class imbalance in the dataset, we did not have to implement any extra techniques to handle this insignificant imbalance.

3.2. Preprocessing

The images in the dataset needed to go through some pre-processing stages before training the proposed model. The images were of different sizes; to train our model and obtain the best accuracy, we resized each image to 200 × 200 × 1 pixels, which gives better performance and straightforward computation. We chose to decrease the MRI picture size from an average of 256 × 256 × 1 (the largest being 500 × 500 × 1) to 200 × 200 × 1 because of the computational restrictions of fitting the complete image to the model. The target size was chosen so that all parts of the skull are captured and the images have a centering effect after cropping and resizing. We used the same orientation as the raw data. We worked with grayscale images and, to ensure that no images contained any unwanted colors, converted them all to grayscale before proceeding further. We then shuffled the dataset and split it into two parts: 80% for training and 20% for testing. We did not hold out a separate validation set; instead, the 20% testing data were used as validation during training.
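A minimal sketch of this preprocessing pipeline is shown below, assuming OpenCV and scikit-learn; `image_paths`, `labels`, and the [0, 1] intensity scaling are illustrative assumptions rather than details specified in this paper:

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

IMG_SIZE = 200  # target size 200 x 200 x 1, as described above

def preprocess(path):
    # Force grayscale, resize, add the channel axis, and scale to [0, 1]
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return img.reshape(IMG_SIZE, IMG_SIZE, 1).astype("float32") / 255.0

# image_paths and labels are placeholders for the combined dataset
X = np.array([preprocess(p) for p in image_paths])
y = np.array(labels)

# Shuffle, then split 80% / 20% for training and testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)
```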

3.3. Proposed Model

Figure 3 shows the architecture of the proposed model. Here, we implement a structure similar to the VGG-16 architecture. VGG-16 is a deep CNN architecture containing numerous layers. The VGG model inspired us to stack several convolution layers with restricted receptive fields (a deeper architecture), each group followed by a max-pooling layer that decreases image dimensionality by reducing the number of pixels in the convolution output.
CNNs are very deep architectures for large-scale image recognition [48] and give particularly good accuracy in large-scale image processing. Our model has 21 layers; the first 20 use the ReLU (Rectified Linear Unit) activation function, and the last one uses SoftMax.
Overfitting is a common problem in machine learning when training a large model with many parameters on a relatively small training dataset. Dropout regularization is a technique for combating overfitting, and we used dropout layers to reduce it: during each training update, the outgoing connections of randomly selected hidden-layer neurons are set to 0 [35]. In the proposed model, we found that a 20% dropout rate gives the most accurate results. In Figure 4, all nodes initially connect to the output layer; after dropout, some of the nodes are ignored, which helps our model reduce overfitting.
In our proposed model, we used a Stochastic Gradient Descent (SGD) optimizer. SGD updates each weight by subtracting the gradient of the loss, multiplied by the learning rate, from the current weight value. SGD has strong theoretical foundations and, despite its simplicity, is still used in training state-of-the-art NNs [49].

$\theta_i = \theta_i - \alpha \dfrac{\partial L}{\partial \theta_i}$ (1)
Batch normalization is an effective CNN training technique that normalizes the input to each layer over each mini-batch [50]. The batch normalization function used here is

$y_i = \gamma_B \cdot \bar{x}_i + \beta_B$ (2)
After the normalized value $\bar{x}_i$ is computed for a mini-batch in channel B, it is multiplied by an arbitrary value $\gamma_B$, which acts as a scaling transformation, and another arbitrary value $\beta_B$ is added to shift it. These values default to 1 and 0, respectively, and are updated after each epoch. This function reduces the problem of shifting input values by stabilizing them, thus increasing the training speed of our model. We use a batch size of 42 images. In the proposed model, the last layer is a softmax layer.
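As a concrete illustration of Equations (1) and (2), both operations can be written directly in NumPy; this is a toy sketch on plain arrays, not the implementation used inside the training framework:

```python
import numpy as np

def sgd_step(theta, grad, alpha=0.01):
    # Equation (1): theta_i <- theta_i - alpha * dL/dtheta_i
    return theta - alpha * grad

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Equation (2): normalize each feature over the mini-batch,
    # then scale by gamma (default 1) and shift by beta (default 0)
    x_bar = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_bar + beta
```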
To build our model, we used various parameters and layers; a summary of the model is given in Table 2. After the input layer, there are six blocks, each consisting of a Conv2D layer, a MaxPooling2D layer, and a batch normalization layer with different shapes. Then there is a flatten layer, followed by three pairs of dense and 20% dropout layers, and finally a four-unit dense layer with softmax activation for the multi-class output.
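The architecture can be sketched in Keras as follows. This is a hedged reconstruction consistent with Table 2: the 5 × 5 kernel size is inferred from the parameter counts (e.g., 5 × 5 × 1 × 64 + 64 = 1664 for the first Conv2D layer), and the sparse categorical cross-entropy loss assumes integer-encoded labels; the exact training code may differ in details.

```python
from tensorflow import keras
from tensorflow.keras import layers, optimizers

def build_model():
    # Six Conv2D -> MaxPooling2D -> BatchNormalization blocks (Table 2),
    # then Flatten, three Dense + 20% Dropout pairs, and a 4-way softmax.
    model = keras.Sequential()
    model.add(keras.Input(shape=(200, 200, 1)))
    for filters in (64, 128, 128, 256, 256, 512):
        model.add(layers.Conv2D(filters, (5, 5), padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
        model.add(layers.BatchNormalization())
    model.add(layers.Flatten())
    for units in (1024, 512, 256):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.2))  # 20% dropout, as described above
    model.add(layers.Dense(4, activation="softmax"))  # 4 classes
    return model

model = build_model()
model.compile(optimizer=optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```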

3.4. Pseudocode

Input:
   Xt: Brain tumour pre-processed train dataset;
   Xv: Brain tumour pre-processed test dataset;
   ε: Number of epochs;
   η: Learning Rate;
   Β: Batch Size;
Output:
   Assessment metrics (accuracy, etc.) computed on the test dataset.
   
Start Procedure
   Add_Conv2D (filters, kernel_size, padding, activation)
   Add_MaxPool2D (pool_size)
   Add_BatchNormalization ()
   Add_Flatten ()
   Add_Dense ()
   Add_Dropout (0.2)
   Optimizing with Stochastic gradient descent (η)
   
   for all epochs in 1 to ε do
     for each mini-batch Β drawn from Xt do
       train model on Β (SGD update with learning rate η)
     Endfor
     validate model on test data (Xv) and append (Accuracy)
   Endfor
   Evaluate trained model on test dataset (Xv) -> totalAccuracy
   return totalAccuracy
EndProcedure
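A runnable Keras counterpart of this pseudocode, using the batch size from Section 3.3 and the best epoch setting from Section 4, might look as follows (variable names follow the sketches above):

```python
# Train for e = 20 epochs with batch size B = 42 under the SGD optimizer;
# Keras runs the pseudocode's inner mini-batch loop internally.
history = model.fit(
    X_train, y_train,
    epochs=20,
    batch_size=42,
    validation_data=(X_test, y_test),  # Xv, reused as validation (Section 3.2)
)

# Evaluate trained model on the test dataset -> totalAccuracy
loss, total_accuracy = model.evaluate(X_test, y_test)
print(f"totalAccuracy = {total_accuracy:.4f}")
```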

4. Results

We used the modified VGGNet-CNN architecture for the proposed system. In total, 10,153 MRI samples with an input size of 200 × 200 were used. The dataset was partitioned into two sets: 80% for training and 20% for testing. For validation, 80% of the data from the testing set were kept, and the rest were used for the actual testing. We tested the model with three epoch settings (10, 20, 30), and for each setting the model was evaluated with three learning rates: 0.001, 0.01, and 0.05. The best accuracy of 99.5% was found with the following hyperparameters: 20 epochs, momentum = 0.9, and a learning rate of 0.01. The results are shown in Table 3.
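This hyperparameter sweep can be reproduced with a simple loop (a sketch using the `build_model` helper from the Section 3.3 example; the exact sweep code is not part of the paper):

```python
from tensorflow.keras import optimizers

results = {}
for epochs in (10, 20, 30):
    for lr in (0.001, 0.01, 0.05):
        model = build_model()  # rebuild the Section 3.3 architecture each run
        model.compile(optimizer=optimizers.SGD(learning_rate=lr, momentum=0.9),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X_train, y_train, epochs=epochs, batch_size=42,
                  validation_data=(X_test, y_test), verbose=0)
        _, acc = model.evaluate(X_test, y_test, verbose=0)
        results[(epochs, lr)] = acc  # Table 3 tabulates these accuracies
```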

4.1. Performance Analysis

Figure 5a,b illustrate the training and validation accuracy and loss. At approximately the 9th epoch, the accuracy reached nearly 100%, as observed in Figure 5a; the highest overall accuracy achieved during the testing phase is 99.6%. In Figure 5b, the loss curve declines steeply at first, with some fluctuations caused by the small batch size of 32. The fluctuations fade at approximately the 13th epoch and remain near zero thereafter.

4.2. Confusion Matrix

The confusion matrix that evaluates the system's performance is shown in Figure 6. The predicted labels (system output) are represented on the X axis, whereas the true labels (ground truth) are represented on the Y axis. From it we calculate the evaluation metrics precision, sensitivity (recall), specificity, accuracy, and F1 score using Equations (3), (4), (5), (6), and (7), respectively, which are used to evaluate model performance, as in [51].
$\mathrm{precision} = \dfrac{TP}{TP + FP}$ (3)

$\mathrm{sensitivity} = \dfrac{TP}{TP + FN}$ (4)

$\mathrm{specificity} = \dfrac{TN}{TN + FN}$ (5)

$\mathrm{accuracy} = \dfrac{\mathrm{correct\ predictions}}{\mathrm{all\ predictions}}$ (6)

$F1\ \mathrm{Score} = \dfrac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$ (7)
The confusion matrix, the F1 score, and the mean squared error are used to evaluate the proposed system. Sensitivity, specificity, accuracy, precision, and recall are computed from the numbers of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). The best scores for precision, recall, and specificity are shown in bold in Table 4. An accuracy of 98.98% is achieved for Glioma, 99.13% for Meningioma, 99.95% for Pituitary, and 99.81% for No Tumor.
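As a worked check, a small helper (illustrative only, not from the paper's code) reproduces the precision, recall, accuracy, and F1 values of the Glioma row in Table 4 from its confusion-matrix counts:

```python
def metrics(tp, tn, fp, fn):
    # Precision, recall (sensitivity), accuracy, and F1 per Equations (3)-(7)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Glioma row of Table 4: TP = 529, TN = 1510, FP = 3, FN = 18
print(metrics(529, 1510, 3, 18))
# -> approx. (0.994, 0.967, 0.9898, 0.98), matching Table 4
```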

5. Discussion

The final architecture was formed by applying multiple modifications to the VGG-16 configuration, including changes to the dropout layers and optimization with SGD. Because of the many different models and datasets used, it is difficult to compare the findings with other state-of-the-art approaches. Nevertheless, we compare existing methods for detecting and classifying brain tumors in Table 5, applying our proposed model to the same datasets used in the following studies. The approach taken was segmentation-free.
The authors of [54] adopted a Genetic Algorithm (GA) to identify the network architecture, although the best architecture found was not necessarily used for prediction. In [53], the authors incorporated the coarse tumor boundary as an additional input to help the network produce better outcomes. Furthermore, the authors of [52] applied only two convolutional layers, each with 64 kernels. The proposed method therefore predicts the best result when compared with other relevant past research on multi-class classification, demonstrating the proposed system's reliability.
Additionally, after comparing the proposed model with existing models in Table 6, it is clear that our model provides the highest accuracy, while popular architectures such as EfficientNet and ResNet remain behind. Even though our model is not pre-trained, it achieved 99.5% accuracy, whereas the pre-trained EfficientNet and ResNet models reached between 87.9% and 99.3%, which supports the validity of the modified VGG architecture.
As the proposed model is a modified version of the VGG-16 architecture, comparing precision, recall, F1-score, and accuracy in Table 7 shows that the modified VGGNet performs better than the base VGG-16 model. The VGG-16 model was trained with the same hyperparameters used for the modified VGGNet: learning rate = 0.01 and momentum = 0.0. For the other CNN architectures (ResNet, EfficientNet), we used learning rate = 0.001 and momentum = 0.9.
Even though our model has far more parameters than the existing models, the evident advantage of a large number of parameters is the ability to describe far more intricate functions than models with fewer parameters can. Deep neural networks deal with a large number of training and testing parameters, and their ability to fit different sorts of information as the number of parameters rises is what makes them so powerful. Our model is versatile enough to describe the necessary mapping because it has numerous parameters. However, this same power can weaken the model through overfitting, so additional regularization is needed; we accommodated this by utilizing batch normalization and max pooling. In our model, there are six blocks, each consisting of a Conv2D layer followed by a max-pooling layer and batch normalization. A Conv2D layer's filter, or kernel, applies an elementwise multiplication while "sliding" over the 2D input and combines the results into a single output pixel. The outputs are then down-sampled into pooled feature maps that emphasize each patch's most prominent feature. The output is then normalized and distributed among the feature maps by batch normalization, which operates on mini-batches rather than the entire dataset; it facilitates learning by accelerating training and permitting higher learning rates. Although this entire block makes the model slow, the approach appears to be more effective than others.

6. Conclusions and Future Work

Since we achieved an accuracy of 99.5%, this system can indeed be used in practical applications. In comparison, other research on multiclass classification, namely the SVM and KNN of [11], the CNNs of [52,53], and the GA-CNN of [54], achieved accuracies of 91.28%, 91.43%, 90.89%, and 94.2%, respectively. With their datasets, our model obtained accuracies of 95.7%, 96.4%, 96.01%, and 95.3%, respectively; with our modified VGG-16 and our own test set, we found a much better accuracy of 99.5%. Identification is unquestionably crucial when it comes to human health: if detection fails and generates false results, it may be fatal to a person. One crucial limitation of this model is that it is comparatively slow, as it contains a high number of parameters; this could be addressed by adopting other, more efficient existing models. Additionally, this model cannot detect the exact location of the tumor. The model can be trained to work with 3D images, which would make it possible to locate the tumor's position, and it can also be trained to detect brain tumors at an early stage.

Author Contributions

All authors contributed equally to this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to its nature; the research does not use any sensitive personally identifiable data and thus raises no ethical issues.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. DeAngelis, L.M. Brain Tumors. N. Engl. J. Med. 2001, 344, 114–123. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ricci, P.E.; Dungan, D.H. Imaging of low- and intermediate-grade gliomas. Semin. Radiat. Oncol. 2001, 11, 103–112. [Google Scholar] [CrossRef] [PubMed]
  3. Day, M.R.; Lacarra, E.D.; Sacks, J.M.; Sadraei, A. Theranostics of Glioblastoma Multiforme: In Vitro Characterization of Targeted Nanoemulsions and Creation of a 3D Statistical Heatmap to Visualize Nanoemulsion Uptake. 2014. Available online: https://web.wpi.edu/Pubs/E-project/Available/E-project-043015-050001/ (accessed on 13 July 2022).
  4. Işin, A.; Direko, C.; Şah, M. Review of MRI-based Brain Tumor Image Segmentation Using Deep Learning Methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef] [Green Version]
  5. Drevelegas, A. Imaging of Brain Tumors with Histological Correlations; Springer: Berlin/Heidelberg, Germany, 2011; p. 432. [Google Scholar]
  6. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  7. Kaus, M.R.; Warfield, S.K.; Nabavi, A.; Black, P.M.; Jolesz, F.A.; Kikinis, R. Automated segmentation of MR images of brain tumors. Radiology 2001, 218, 586–591. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: A summary. Acta Neuropathol. 2016, 131, 803–820. [Google Scholar] [CrossRef] [Green Version]
  9. Mohan, G.; Subashini, M.M. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed. Signal Process. Control 2018, 39, 139–161. [Google Scholar] [CrossRef]
  10. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef]
  11. Cheng, J.; Yang, W.; Huang, M.; Huang, W.; Jiang, J.; Zhou, Y.; Yang, R.; Zhao, J.; Feng, Y.; Feng, Q.; et al. Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation. PLoS ONE 2016, 11, e0157112. [Google Scholar] [CrossRef] [Green Version]
  12. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46. [Google Scholar] [CrossRef]
  13. Rajesh, T.; Malar, R.S.M. Rough set theory and feed forward neural network based brain tumor detection in magnetic resonance images. In Proceedings of the International Conference on “Advanced Nanomaterials and Emerging Engineering Technologies”, ICANMEET 2013, Chennai, India, 24–26 July 2013; pp. 240–244. [Google Scholar] [CrossRef]
  14. Machhale, K.; Nandpuru, H.B.; Kapur, V.; Kosta, L. MRI brain cancer classification using hybrid classifier (SVM-KNN). In Proceedings of the 2015 International Conference on Industrial Instrumentation and Control, ICIC 2015, Pune, India, 28–30 May 2015; pp. 60–65. [Google Scholar] [CrossRef]
  15. Shasidhar, M.; Raja, V.S.; Kumar, B.V. MRI brain image segmentation using modified fuzzy c-means clustering algorithm. In Proceedings of the 2011 International Conference on Communication Systems and Network Technologies, CSNT 2011, Katra, India, 3–5 June 2011; pp. 473–478. [Google Scholar] [CrossRef]
  16. Goswami, M.S.; Bhaiya, M.L.K.P. Brain tumour detection using unsupervised learning based neural network. In Proceedings of the 2013 International Conference on Communication Systems and Network Technologies, CSNT 2013, Gwalior, India, 6–8 April 2013; pp. 573–577. [Google Scholar] [CrossRef]
  17. Sadad, T.; Rehman, A.; Munir, A.; Saba, T.; Tariq, U.; Ayesha, N.; Abbasi, R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microsc. Res. Tech. 2021, 84, 1296–1308. [Google Scholar] [CrossRef] [PubMed]
  18. Sharif, M.I.; Khan, M.A.; Alhussein, M.; Aurangzeb, K.; Raza, M. A decision support system for multimodal brain tumor classification using deep learning. Complex Intell. Syst. 2021, 8, 3007–3020. [Google Scholar] [CrossRef]
  19. Chatterjee, S.; Nizamani, F.; Nürnberger, A.; Speck, O. Classification of brain tumours in MR images using deep spatiospatial models. Sci. Rep. 2022, 12, 1505. [Google Scholar] [CrossRef] [PubMed]
  20. Wu, J.; Li, C.; Gensheimer, M.; Padda, S.; Kato, F.; Shirato, H.; Wei, Y.; Schönlieb, C.B.; Price, S.J.; Jaffray, D.; et al. Radiological tumour classification across imaging modality and histology. Nat. Mach. Intell. 2021, 3, 787–798. [Google Scholar] [CrossRef]
  21. Fan, M.; Xia, P.; Clarke, R.; Wang, Y.; Li, L. Radiogenomic signatures reveal multiscale intratumour heterogeneity associated with biological functions and survival in breast cancer. Nat. Commun. 2020, 11, 4861. [Google Scholar] [CrossRef] [PubMed]
  22. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
  23. Wu, J. Introduction to Convolutional Neural Networks. Natl. Key Lab Nov. Softw. Technol. 2017, 5, 495. Available online: https://web.archive.org/web/20180928011532/https://cs.nju.edu.cn/wujx/teaching/15_CNN.pdf (accessed on 28 June 2022).
  24. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  25. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, ICML 2010, Haifa, Israel, 21–24 June 2010; Omnipress: Madison, WI, USA, 2010; pp. 807–814. [Google Scholar]
  26. Qiu, S.; Xu, X.; Cai, B. FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks. Proc.-Int. Conf. Pattern Recognit. 2018, 2018, 1223–1228. [Google Scholar] [CrossRef] [Green Version]
  27. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical Evaluation of Rectified Activations in Convolutional Network. 2015. Available online: http://arxiv.org/abs/1505.00853 (accessed on 28 June 2022).
  28. Ramachandran, P.; Zoph, B.; Le, Q.V. Swish: A Self-Gated Activation Function. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018—Workshop Track Proceedings, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar] [CrossRef]
  29. Ma, N.; Zhang, X.; Liu, M.; Sun, J. Activate or Not: Learning Customized Activation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 10 September 2020; pp. 8028–8038. [Google Scholar] [CrossRef]
  30. Bridle, J.S. Training Stochastic Model Recognition Algorithms as Networks can lead to Maximum Mutual Information Estimation of Parameters. In Proceedings of the 2nd International Conference on Neural Information Processing Systems, Denver, CO, USA, 27–30 November 1989. [Google Scholar]
  31. Pan, Y.; Huang, W.; Lin, Z.; Zhu, W.; Zhou, J.; Wong, J.; Ding, Z. Brain tumor grading based on Neural Networks and Convolutional Neural Networks. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Milan, Italy, 25–29 August 2015; Volume 2015, pp. 699–702. [Google Scholar] [CrossRef]
  32. Othman, M.F.; Basri, M.A.M. Probabilistic Neural Network for brain tumor classification. In Proceedings of the 2011 Second International Conference on Intelligent Systems, Modelling and Simulation, Phnom Penh, Cambodia, 25–27 January 2011; pp. 136–138. [Google Scholar] [CrossRef] [Green Version]
  33. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-Classification of Brain Tumor Images Using Deep Neural Network. IEEE Access 2019, 7, 69215–69225. [Google Scholar] [CrossRef]
  34. Ari, A.; Hanbay, D. Deep learning based brain tumor classification and detection system. Turk. J. Electr. Eng. Comput. Sci. 2018, 26, 2275–2286. [Google Scholar] [CrossRef]
  35. Ghassemi, N.; Shoeibi, A.; Rouhani, M. Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomed. Signal Process. Control 2020, 57, 101678. [Google Scholar] [CrossRef]
  36. Pushpa, R.; Palani, S. Brain Tumor Detection and Classification Using Deep Learning Classifier on MRI Images. Appl. Sci. Eng. Technol. 2015, 10, 177–187. [Google Scholar] [CrossRef]
  37. Seetha, J.; Raja, S.S. Brain Tumor Classification Using Convolutional Neural Networks. Biomed. Pharmacol. J. 2018, 11, 1457–1461. [Google Scholar] [CrossRef]
  38. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
  39. Mohsen, H.; El-Dahshan, E.-S.A.; El-Horbaty, E.-S.M.; Salem, A.-B.M. Classification using deep learning neural networks for brain tumors. Futur. Comput. Inform. J. 2018, 3, 68–71. [Google Scholar] [CrossRef]
  40. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef]
  41. Deepak, S.; Ameer, P.M. Brain tumor classification using deep CNN features via transfer learning. Comput. Biol. Med. 2019, 111, 103345. [Google Scholar] [CrossRef]
  42. Li, J.P.; Kumar, R.; Ali, Z.; Khan, I.; Uddin, M.I.; Agbley, B.L.Y. MCNN: A multi-level CNN model for the classification of brain tumors in IoT-healthcare system. J. Ambient Intell. Humaniz. Comput. 2022, 1, 1–12. [Google Scholar] [CrossRef]
  43. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; SSalama, A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146. [Google Scholar] [CrossRef]
  44. El Kader, I.A.; Xu, G.; Shuai, Z.; Saminu, S.; Javaid, I.; Ahmad, I.S. Differential Deep Convolutional Neural Network Model for Brain Tumor Classification. Brain Sci. 2021, 11, 352. [Google Scholar] [CrossRef] [PubMed]
  45. Alanazi, M.F.; Ali, M.U.; Hussain, S.J.; Zafar, A.; Mohatram, M.; Irfan, M.; AlRuwaili, R.; Alruwaili, M.; Ali, N.H.; Albarrak, A.M. Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model. Sensors 2022, 22, 372. [Google Scholar] [CrossRef] [PubMed]
  46. Brain Tumor Dataset | Kaggle. Available online: https://www.kaggle.com/datasets/nniisshhaann/braintumors (accessed on 5 October 2022).
  47. Brain Tumor Dataset. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427 (accessed on 5 October 2022).
  48. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. Available online: http://arxiv.org/abs/1409.1556 (accessed on 13 July 2022).
  49. Keskar, N.S.; Socher, R. Improving Generalization Performance by Switching from Adam to SGD. arXiv 2017, arXiv:1712.07628. Available online: http://arxiv.org/abs/1712.07628 (accessed on 20 September 2022).
  50. Thakkar, V.; Tewary, S.; Chakraborty, C. Batch Normalization in Convolutional Neural Networks—A comparative study with CIFAR-10 data. In Proceedings of the 5th International Conference on Emerging Applications of Information Technology, EAIT 2018, Kolkata, India, 12–13 January 2018; pp. 1–5. [Google Scholar] [CrossRef]
  51. Ul Haq, A.; Li, J.P.; Agbley, B.L.Y.; Mawuli, C.B.; Ali, Z.; Nazir, S.; Din, S.U. A survey of deep learning techniques based Parkinson’s disease recognition methods employing clinical data. Expert Syst. Appl. 2022, 208, 118045. [Google Scholar] [CrossRef]
  52. Paul, J.S.; Plassard, A.J.; Landman, B.A.; Fabbri, D. Deep learning for brain tumor classification. In Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging; SPIE: Bellingham, WA, USA, 2017; Volume 10137. [Google Scholar] [CrossRef] [Green Version]
  53. Afshar, P.; Plataniotis, K.N.; Mohammadi, A. Capsule Networks for Brain Tumor Classification based on MRI Images and Coarse Tumor Boundaries. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; Volume 2019, pp. 1368–1372. [Google Scholar] [CrossRef]
  54. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng. 2019, 39, 63–74. [Google Scholar] [CrossRef]
  55. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 10691–10700. [Google Scholar] [CrossRef]
  56. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 770–778. [Google Scholar] [CrossRef]
Figure 1. A high-level workflow diagram of the proposed method.
Figure 2. Images of the normal brain and the other three types of brain tumors—Glioma, Meningioma and Pituitary.
Figure 3. The modified VGG-16 model.
Figure 4. Comparison between before and after dropout of nodes.
Figure 5. (a) Epoch vs. training and validation accuracy, and (b) Epoch vs. training and validation loss.
Figure 6. Confusion matrix of the validation dataset.
Table 1. Some other state-of-the-art work.

| Author(s) | Concept | Method | Findings | Gaps |
|---|---|---|---|---|
| Deepak, Ameer [41] | Designed a model to classify three pathological types of brain tumor. | Deep transfer learning with a pre-trained GoogLeNet to extract features, plus a classifier to classify the types. | For a small dataset, higher classification accuracy was observed. | Higher misclassification in the confusion matrix; overfitting because of the small dataset. |
| Emrah Irmak [45] | Three types of classification tasks have been performed. | Three CNN models perform three classification tasks, with hyperparameters optimized manually using a grid. | Using the grid optimizer is effective, as it could find the best model for each classification task. | Three separate classification systems for the three tasks; a joint multi-classification system could remove the need for them. |
| Sharif, Attique, Musaed, Khursheed, Mudassar [18] | Brain tumor classification on four types of MRI images: T1W, T1CE, T2W, and Flair. | Selection of the most optimal features using a Modified Genetic Algorithm (MGA) and entropy-Kurtosis-based techniques, trained with a fine-tuned pre-trained DenseNet201. | A feature selection technique improved the results on a publicly available dataset. | Reducing certain key features could have a great impact, as it could help the system achieve higher accuracy. |
Table 2. The layers of the proposed model.

| Layers | Output Size | Parameters |
|---|---|---|
| Conv2D | None, 200, 200, 64 | 1664 |
| MaxPooling2D | None, 100, 100, 64 | 0 |
| BatchNormalization | None, 100, 100, 64 | 256 |
| Conv2D | None, 100, 100, 128 | 204,928 |
| MaxPooling2D | None, 50, 50, 128 | 0 |
| BatchNormalization | None, 50, 50, 128 | 512 |
| Conv2D | None, 50, 50, 128 | 409,728 |
| MaxPooling2D | None, 25, 25, 128 | 0 |
| BatchNormalization | None, 25, 25, 128 | 512 |
| Conv2D | None, 25, 25, 256 | 819,456 |
| MaxPooling2D | None, 12, 12, 256 | 0 |
| BatchNormalization | None, 12, 12, 256 | 1024 |
| Conv2D | None, 12, 12, 256 | 1,638,656 |
| MaxPooling2D | None, 6, 6, 256 | 0 |
| BatchNormalization | None, 6, 6, 256 | 1024 |
| Conv2D | None, 6, 6, 512 | 3,277,312 |
| MaxPooling2D | None, 3, 3, 512 | 0 |
| BatchNormalization | None, 3, 3, 512 | 2048 |
| Flatten | None, 4608 | 0 |
| Dense | None, 1024 | 4,719,616 |
| Dropout 20% | None, 1024 | 0 |
| Dense | None, 512 | 524,800 |
| Dropout 20% | None, 512 | 0 |
| Dense | None, 256 | 131,328 |
| Dropout 20% | None, 256 | 0 |
| Dense | None, 4 | 1024 |
| Softmax | None, 4 | 0 |
Table 3. Accuracy comparison with different epochs and learning rates.

| Learning Rate | 10 Epochs | 20 Epochs | 30 Epochs |
|---|---|---|---|
| 0.001 | 98.8% | 99% | 99.2% |
| 0.01 | 98.3% | 99.5% | 99.01% |
| 0.05 | 97% | 98.7% | 99.3% |
Table 4. Performance analysis of the proposed model.

| Tumors | TP | TN | FP | FN | Precision | Recall | Specificity | Accuracy (%) | F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| Glioma | 529 | 1510 | 3 | 18 | 0.994 | 0.967 | 0.988 | 98.98 | 0.98 |
| Meningioma | 511 | 1532 | 17 | 1 | 0.967 | 0.998 | 0.999 | 99.13 | 0.98 |
| No Tumor | 455 | 1601 | 2 | 2 | 0.996 | 0.996 | 0.998 | 99.81 | 0.99 |
| Pituitary | 543 | 1516 | 0 | 1 | 1.00 | 0.998 | 0.999 | 99.95 | 0.99 |
Table 5. Comparison of the proposed structure with existing studies.

| Serial | Author | Model Used | Dataset Used | Model Accuracy | Our Model Accuracy |
|---|---|---|---|---|---|
| 1 | Paul et al. [52] | Fully Connected Network (FCN), CNN | 3064 T1-weighted contrast-enhanced images with three kinds of brain tumor [44]. | 91.43% | 96.4% |
| 2 | Afshar et al. [53] | CapsNets incorporated with coarse tumor boundary | 3064 T1-weighted contrast-enhanced images with three kinds of brain tumor [44]. | 90.89% | 96.4% |
| 3 | Anaraki et al. [54] | Genetic Algorithm (GA) | 3064 T1-weighted contrast-enhanced images with three kinds of brain tumor, combined with data from other sources | 94.2% | 95.3% |
Table 6. Comparison of the proposed structure with existing popular architectures.

| Model | Precision | Recall | F1-Score (macro) | Accuracy | Pre-Trained |
|---|---|---|---|---|---|
| EfficientNetB0 [55] | 0.942 | 0.941 | 0.941 | 0.941 | No |
| EfficientNetB0 [55] | 0.993 | 0.993 | 0.993 | 0.993 | Yes |
| ResNet50 [56] | 0.878 | 0.88 | 0.878 | 0.879 | Yes |
| ResNet152 [56] | 0.889 | 0.885 | 0.885 | 0.885 | Yes |
| VGG16 [48] | 0.980 | 0.980 | 0.980 | 0.980 | No |
| Modified-VGGNet | 0.997 | 0.988 | 0.985 | 0.995 | No |
Table 7. Comparison of the proposed model with VGG-16.

| Tumors | Precision (VGG-16) | Precision (Proposed) | Recall (VGG-16) | Recall (Proposed) | F1-Score (VGG-16) | F1-Score (Proposed) |
|---|---|---|---|---|---|---|
| Glioma | 0.98 | 0.994 | 0.95 | 0.967 | 0.97 | 0.98 |
| Meningioma | 0.97 | 0.967 | 0.98 | 0.998 | 0.97 | 0.98 |
| No Tumor | 0.98 | 0.996 | 1.00 | 0.996 | 0.99 | 0.99 |
| Pituitary | 0.99 | 1.00 | 0.99 | 0.998 | 0.99 | 0.99 |

Overall accuracy: 98% (VGG-16) vs. 99.5% (proposed model).

