Future Generation Computer Systems: Javeria Amin, Muhammad Sharif, Mussarat Yasmin, Steven Lawrence Fernandes
Highlights

• A new lightweight Deep Neural Network approach for brain tumor segmentation.
• Extensive evaluation of the proposed model on eight challenging big datasets.
• The proposed work achieves state-of-the-art accuracy on these benchmark datasets.
• Comparison of the presented work with sixteen existing techniques in the same domain.
• The proposed method obtains better results without incurring a heavy computational burden.
Article history:
Received 29 September 2017
Received in revised form 14 April 2018
Accepted 22 April 2018
Available online 5 May 2018

Keywords:
Random forests
Segmentation
Patches
Filters
Tissues

Abstract

Brain tumor detection is an active area of research in brain image processing. In this work, a methodology is proposed to segment and classify brain tumors using magnetic resonance images (MRI). A Deep Neural Network (DNN) based architecture is employed for tumor segmentation. In the proposed model, 07 layers are used for classification, consisting of 03 convolutional layers, 03 ReLU layers and a softmax layer. First, the input MR image is divided into multiple patches, and then the center pixel value of each patch is supplied to the DNN. The DNN assigns labels according to the center pixels and performs segmentation. Extensive experiments are performed using eight large-scale benchmark datasets, including BRATS 2012 (image and synthetic datasets), 2013 (image and synthetic datasets), 2014, 2015 and ISLES (Ischemic Stroke Lesion Segmentation) 2015 and 2017. The results are validated on accuracy (ACC), sensitivity (SE), specificity (SP), Dice Similarity Coefficient (DSC), precision, false positive rate (FPR), true positive rate (TPR) and Jaccard similarity index (JSI).

© 2018 Published by Elsevier B.V.
https://doi.org/10.1016/j.future.2018.04.065
difficult to formulate efficient segmentation steps. It is extremely non-trivial to delineate contusions, edema and hemorrhages in sub-components of a brain tumor such as the necrotic core and proliferated cells [8]. Arguably more accurate segmentation outcomes can be achieved by manual annotation through human experts, but this is a time-consuming, expensive and tedious task. Moreover, it is impractical for larger studies and introduces additional inter-observer variation [9]. A more efficient automated method for tumor extraction is therefore a major aim in medical image computing, providing reproducible, objective and scalable methods for quantitative evaluation of brain tumors. MS and stroke lesions have the same hyper-intense appearance in FLAIR and other white matter lesion (WML) sequences, and it is commonly hard to obtain statistical prior information on lesion appearance and shape [10].

Several supervised methods are used for brain lesion segmentation, such as the Random Forest classifier (RFC), intensity-based features and the generative Gaussian Mixture Model (GMM) [11]. Contextual and morphological [12] features are used for the detection of different types of brain lesions. Markov Random Fields (MRF) [10] are used for brain lesion segmentation. The above-mentioned methods rely on hand-crafted feature extraction, which is computationally intensive compared to deep learning methods [13].

At the same time, deep learning methods are more powerful than classical supervised methods owing to their ability to learn discriminative features for the task at hand. These learned features perform better than pre-defined, hand-crafted feature sets [14]. Convolutional Neural Networks (CNN) can be applied to challenging medical imaging tasks to achieve better results. A 2D-CNN was first used for segmenting neuronal membranes with GPU support; 3D brain segmentation was obtained by processing each 2D slice separately [15]. Despite the simple architecture, these techniques achieved good results, indicating the potential of CNN. A fully 3D-CNN model requires a large variety of parameters, more computational power and significant memory [16]. To avoid fully 3D-CNN networks, 2D patches are extracted from multi-scale images and combined into single 3D patches. The major reason discouraging 3D-CNN usage is its slow inference due to the higher computational cost. Moreover, classifier bias towards rare classes may result in over-segmentation. A CNN model can be trained on samples whose class distribution is close to the actual one, but over-segmented pixels still lead to incorrect classification in the first phase [17]. A second training phase is presented in [14], in which patches for the discrimination layer are uniformly extracted from the input image. A two-phase training structure may be prone to overfitting and is sensitive to the first classifier stage. A dense training method is also used for network training [18]; this method introduces class-label imbalance similar to uniform sampling, and a weighted cost function is used to overcome this problem. Manual adjustment of network sensitivity is provided, but multi-class problems become more difficult to handle with this method [19].

The overall article organization is as follows: Section 2 reviews related work. The steps of the presented approach are detailed in Section 3. DNN outcomes are described in Section 4. The conclusion of this research work is given in Section 5.

2. Related work

In brain tumor cases, an atlas can be estimated at segmentation time due to the variable location and shape of neoplasms. The tumor mass effect can be measured by lesion growth models. The voxel neighborhood provides helpful information for obtaining smoother segmentation results via Markov Random Fields (MRF) [20]. The MRF method is used for brain tumor segmentation. Generative models generalize well to unseen data [8], with some limitations at the training stage. These techniques can learn the pattern of a brain tumor without utilizing any specific model. Such methods usually assume identically and independently distributed voxels given context-feature information. For this reason, some small or isolated clusters of voxels may be assigned mistakenly to the incorrect class, sometimes in anatomically and physiologically improbable locations. To avoid these issues, many researchers include neighborhood information by embedding probabilistic predictions into a Conditional Random Field (CRF) classifier [21]. Deep CNN models are used to automatically learn hierarchies of complex data features [22]. A CNN can run over patches by using the kernel trick. In the tumor segmentation field, recent methods utilize CNN. A CNN model with two convolutional layers with max-pooling and stride, a fully-connected (FC) layer and a softmax layer is used for tumor detection in MR images [23]. 3D [13] and 2D filters [24] are used for the evaluation of brain tumors. 3D filters can exploit the 3D nature of MR data, but the computational load is raised. A two-pathway CNN is used for brain tumor evaluation [14]. A binary CNN is used [24] to detect the complete tumor region; cellular automata are then applied to smooth the segmentation results before the CNN performs multi-class discrimination between the tumor sub-regions. A CNN is trained on patches extracted in each voxel plane, and the outputs of the final FC layer, softmax and a random forest (RF) are used to train the model [25]. The brain tumor segmentation process is divided into binary sub-tasks with structured predictions on the basis of a CNN as a learning model [26]. Patch labels are grouped into clusters, and the CNN predicts the input's membership of each cluster. Deep CNN are used for feature extraction in MR images; a 3 × 3 kernel size is used to achieve a deeper CNN model [27]. Table 1 shows a brief summary of existing methods.

The major contributions of this article are as follows:

1. The proposed DNN model is based on 07 layers and hence efficiently segments the brain tumor.
2. The input MR image is divided into multiple patches of 4 × 65 × 65, and the center pixel label of each patch is calculated and supplied to the DNN, which improves the results of segmentation as well as classification.
3. The proposed model is evaluated on two MICCAI challenge datasets, ISLES and BRATS.
4. The achieved results are compared with recent methods, which shows that the DNN model performs better than existing techniques.

3. Proposed CNN architecture

A patch-based method is presented for the problem of brain tumor detection in MR images. The input MRI image is first divided into N patches. The center pixel label of each patch is then estimated using a trained CNN model. Overall results are then generated by combining the predictions for all patches. Due to the low resolution in the third dimension of MR images, segmentation is performed on each slice from different axial views. The proposed architecture sequentially processes every 2D slice, in which each pixel is related through different MR modalities, e.g., diffusion weighted imaging (DWI), fluid attenuation inversion recovery (FLAIR), spin–spin relaxation (T2), T1 and T1-contrast, like many CNN segmentation methods [14]. The proposed DNN model is shown in Fig. 1.
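The patch-based formulation above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the stride value and the ground-truth array `labels` are assumptions, and the paper only fixes the patch size (65 × 65 over the four modalities).

```python
import numpy as np

def extract_patches(volume, labels, patch=65, stride=33):
    """Slide a patch x patch window over one multi-modal slice.

    volume: (4, H, W) array holding the four MR modalities of one slice.
    labels: (H, W) ground-truth map; each patch's label is the value at
            its center pixel, as described in the text.
    Returns (patches, center_labels).
    """
    _, H, W = volume.shape
    half = patch // 2
    patches, centers = [], []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            patches.append(volume[:, i:i + patch, j:j + patch])
            centers.append(labels[i + half, j + half])
    return np.stack(patches), np.array(centers)

# Toy example on a 4 x 240 x 240 input, matching the stated slice size.
vol = np.zeros((4, 240, 240))
gt = np.zeros((240, 240), dtype=int)
P, y = extract_patches(vol, gt)
```

With a stride of 33 this yields a 6 × 6 grid of 36 patches per slice; the real patch count N depends on the (unstated) stride.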
Table 1
Summary of existing work.
Reference Dataset Methods Results
[8] BRATS 2013 Hierarchical majority vote 74%–85% DSC
[21] Contra Cancrum Conditional Random Fields (CRF) 0.84 DSC
[28] ISLES 2015 Ensemble of two Deconv Nets 0.67 DSC
[23] BRATS 2013 CNN model 83.7 ± 9.4 DSC
[13] BRATS 2015, ISLES 2015 3D fully connected conditional random field 84.7 DSC
[26] BRATS 2014 CNNs model 83 ± 13 DSC
[27] BRATS 2013 CNN with small (3 × 3) filters 0.88 DSC
[14] BRATS 2013 Input Cascade (CNN) 0.81 DSC
[29] ISLES 2015 CNN model 69% DSC
[30] BRATS 2012 (image) CRF (conditional random fields) 62% DSC
Fig. 1. Proposed model (patch generation, CNN train model, label generation, segmented image).
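The final "label generation" stage of Fig. 1 assigns each patch's center pixel the class given by the softmax layer. A small sketch of that step follows; the class scores here are random placeholders standing in for the network's real outputs, and the two-class (tumor / non-tumor) setup is an assumption for illustration.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(36, 2))   # placeholder score vector per patch
probs = softmax(scores)             # softmax-layer class probabilities
labels = probs.argmax(axis=1)       # label assigned to each center pixel
```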
3.1. Training phase

The input image to the proposed DNN model is a multiple-channel MRI image of size 4 × 240 × 240, shown in Fig. 2. The input image is first divided into 4 × 65 × 65 patches. The input X of the proposed DNN method is accordingly an N × N 2D patch carrying all MR modalities. The primary building block used to develop the DNN model is the convolutional layer. Many layers forming a feature hierarchy can be stacked on top of one another, and each layer extracts features from the previous layer in the hierarchy to which it is linked. Each convolutional layer takes a stack of MR image planes as input and delivers some number of feature maps as output. Each feature map is topologically organized as the response map of a specific non-linear spatial feature extractor whose learned parameters are applied identically to every spatial neighborhood of the input planes in a sliding-window fashion. In the first convolutional layer, the individual input planes correspond to the various MR modalities; in the following layers, the input planes normally consist of the feature maps of the preceding layer.

In the proposed training model, a 5 × 5 filter ω is applied, so the convolutional layer output has size (N − m + 1) × (N − m + 1). The pre-nonlinearity input x^l_{i,j} of a convolutional layer is computed by summing the filter components weighted over the previous layer's output. This is given in Eqs. (1) and (2) below:

x^l_{i,j} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} \omega_{ab} \, y^{l-1}_{(i+a)(j+b)}   (1)

y^l_{i,j} = \sigma\left(x^l_{i,j}\right)   (2)

In the max-pooling operation, each feature map keeps the maximum neuron (feature) value in each 3 × 3 sub-window. Max-pooling provides position invariance over larger neighborhood areas and down-samples the image by a factor of 3 × 3 along every direction [31]. Max-pooling also prompts a quicker convergence rate by selecting predominant invariant features, which enhances generalization performance. Each convolutional layer performs its output feature mapping on the input received from the previous convolutional layer. From the neural-network perspective, feature maps correspond to hidden layers of neurons. Each coordinate of
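Eqs. (1)–(2) and the 3 × 3 max-pooling step can be written out directly in numpy. This is an illustrative sketch, not the training code: the loop-based valid convolution is deliberately naive, and σ is taken to be ReLU (an assumption consistent with the 03 ReLU layers the model uses).

```python
import numpy as np

def conv_valid(y_prev, w):
    """Eq. (1): x^l_{i,j} = sum_{a,b} w_{ab} * y^{l-1}_{(i+a)(j+b)}.
    A valid convolution: an N x N input and an m x m filter give an
    (N - m + 1) x (N - m + 1) output, as stated in the text."""
    N, m = y_prev.shape[0], w.shape[0]
    out = np.empty((N - m + 1, N - m + 1))
    for i in range(N - m + 1):
        for j in range(N - m + 1):
            out[i, j] = np.sum(w * y_prev[i:i + m, j:j + m])
    return out

def sigma(x):
    # Eq. (2): y^l = sigma(x^l); ReLU assumed here
    return np.maximum(x, 0.0)

def max_pool(y, k=3):
    """Keep the maximum value in each k x k sub-window, down-sampling
    by a factor of k along every direction."""
    H, W = y.shape
    H2, W2 = H // k, W // k
    return y[:H2 * k, :W2 * k].reshape(H2, k, W2, k).max(axis=(1, 3))

x = conv_valid(np.ones((65, 65)), np.ones((5, 5)))  # (65-5+1) -> 61 x 61
y = max_pool(sigma(x), 3)                           # 61 // 3 -> 20 x 20
```

On a 65 × 65 patch with m = 5 the output is 61 × 61, matching (N − m + 1); pooling then reduces it to 20 × 20.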
The proposed architecture is evaluated on eight benchmark datasets: BRATS 2012, 2013, 2014, 2015 and ISLES (Ischemic Stroke Lesion Segmentation) 2015 and 2017. The training and testing subjects in the benchmark datasets are described as follows.

In the BRATS 2015 dataset, a total of 384 subjects are used, in which 220 HGG and 54 LGG subjects are in the training phase and 110 (LGG + HGG) subjects are used in the testing phase of the proposed model [8]. The BRATS 2012 image dataset contains multimodal (T1, T2, T1-contrast, Flair) MR images with a total of 80 input subjects; 25 LGG and 25 HGG subjects are used in the training phase, and 20 HGG and 10 LGG subjects in testing [32]. In the BRATS 2014 dataset, 300 subjects are used, of which 200 are for training and 100 for testing [33]. The BRATS 2013 image dataset consists of 30 input subjects, in which 20 HGG and 10 LGG subjects are taken at the training stage and 10 (LGG and HGG) subjects are used for testing [34]. BRATS 2013 synthetic dataset

The key components that affect the proposed model's performance, including how patches are extracted, are analyzed. The comparison of the presented method with previous deep learning techniques is given in this section. Finally, the outcomes of the proposed technique on all modalities of the BRATS and ISLES datasets are reported in Tables 4 and 5. The proposed method's performance on individual MR modalities is given in Table 6. The computational time for each image modality, as well as the overall average time, is given in Table 7.

On all MRI modalities, the proposed method achieved 98.4% DSC, 98.5% SE, 96.0% JSI, 0.02 FNR, 0.02 FPR, 99.9% precision, 98.4% SP and 98.6% ACC on the BRATS 2012 (image) dataset, and 99.8% DSC, 99.7% SE, 99.0% JSI, 0.01 FNR, 0.01 FPR, 99.9% SP and 99.8% ACC on the BRATS 2013 (image) dataset. In the same way, it obtained 100% on all performance measures on the BRATS 2012 and BRATS 2013 synthetic datasets. BRATS 2015 achieved better results than the BRATS 2014 dataset. Similarly, the proposed method is tested
Fig. 3. Filter results on the benchmark datasets: (a) BRATS 2013 (b) BRATS 2014 (c) ISLES 2015 (d) ISLES 2017 (e) BRATS 2012 (f) synthetic 2013 (g) synthetic 2012 (h) BRATS 2015.
Fig. 4. Sample segmentation results of the presented method (upper row: input images of the five modalities; lower row: corresponding segmentation results): (a) DWI (b) Flair (c) T1-c (d) T2 (e) T1.
on the ISLES 2015 and 2017 datasets, in which ISLES 2015 obtained higher accuracy than ISLES 2017.

The proposed method's performance is also validated on individual MRI modalities across all benchmark datasets. It is observed that Flair and DWI achieve greater outcomes than the other modalities: the method obtained 99.8% DSC, 99.7% SE, 99.0% JSI, 0.01 FNR, 0.01 FPR, 98.9% precision, 99.8% SP and 99.9% ACC on Flair, and 100% DSC, 100% SE, 100% JSI, 0.00 FNR, 0.00 FPR, 100% precision, 100% SP and 100% ACC on DWI. Among the other modalities (T1, T1-C and T2), T2 achieves better results
Table 4
Proposed method results on all BRATS datasets.
Dataset DSC SE JSI FNR FPR Precision SP ACC
BRATS 2012 (image) 98.4% 98.5% 96.0% 0.02 0.02 99.9% 98.4% 98.6%
BRATS 2012 (synthetic image) 100% 100% 100% 0.00 0.00 100% 100% 100%
BRATS 2013 (image) 99.8% 99.7% 99.0% 0.01 0.01 98.9% 99.9% 99.8%
BRATS 2013 (synthetic image) 100% 100% 100% 0.00 0.00 100% 100% 100%
BRATS 2014 92.9% 93.0% 85.1% 0.08 0.07 95.5% 93.2% 93.1%
BRATS 2015 95.0% 95.0% 90.4% 0.05 0.05 97.2% 95.2% 95.1%
Table 5
Proposed method results on ISLES datasets.
Dataset DSC SE JSI FNR SP FPR Precision ACC
ISLES 2015 100% 100% 100% 0.00 100% 0.00 100% 100%
ISLES 2017 98.7% 98.7% 96.0% 0.02 98.8% 0.02 99.0% 98.8%
Table 6
Performance on each image modality.
Dataset DSC SE JSI FNR FPR Precision SP ACC
Flair 99.8% 99.7% 99.0% 0.01 0.01 98.9% 99.8% 99.9%
DWI 100% 100% 100% 0.00 0.00 100% 100% 100%
T2 98.0% 96.2% 96.0% 0.04 0.00 100% 100% 97.0%
T1-C 95.4% 96.1% 90.4% 0.06 0.16 94.8% 84.6% 93.2%
T1 97.4% 94.9% 94.1% 0.06 0.00 100% 100% 96.1%
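The measures reported in Tables 4–6 can all be derived from the confusion counts of the binary segmentation masks. The following sketch assumes the standard definitions (DSC = 2TP/(2TP + FP + FN), JSI = TP/(TP + FP + FN), and so on), which the paper does not restate:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute the reported measures from binary masks, assuming the
    standard confusion-matrix definitions of each metric."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # lesion voxels correctly detected
    tn = np.sum(~pred & ~truth)     # background correctly rejected
    fp = np.sum(pred & ~truth)      # false detections
    fn = np.sum(~pred & truth)      # missed lesion voxels
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "JSI": tp / (tp + fp + fn),
        "SE": tp / (tp + fn),              # sensitivity = TPR
        "SP": tn / (tn + fp),              # specificity
        "precision": tp / (tp + fp),
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "FPR": fp / (fp + tn),
        "FNR": fn / (fn + tp),
    }

# Tiny example: one voxel in each confusion cell.
pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
m = segmentation_metrics(pred, truth)
```

With one voxel in each cell, DSC = 2/4 = 0.5 and JSI = 1/3, illustrating that DSC is always at least as large as JSI on the same masks.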
Table 7
Proposed method computational time.
Modality Computational time
Flair 6.856 s
DWI 0.008 s
T2 6.865 s
T1-C 6.853 s
T1 6.932 s
Average time 5.502 s

Table 8
Proposed method results on ISLES-SISS 2015 dataset.
Method DSC SE
Proposed 100% 100%
Chen et al. [28] 67.0% –
Haeck et al. [39] 78% 80%
Larochelle et al. [29] 69% 67%
McKinley et al. [40] 85% –
Mahmood et al. [41] – 50%

Table 9
Proposed method results on BRATS 2013 (synthetic) dataset.
Method DSC
Proposed 100%
Abbasi et al. [42] 93%
Cordier et al. [43] 84%

Table 10
Proposed method results on BRATS 2015 dataset.
Method DSC SE SP ACC
Proposed 95% 95% 95.2% 95.1%
Dong et al. [44] 86% – – –
Pereira et al. [27] 78% – – –
Havaei et al. [14] 79% – – –
Kamnitsas et al. [13] 90% 90.4% – –

Table 11
Proposed method results on BRATS 2014 dataset.
Method DSC SE SP ACC
Proposed 92.9% 93% 93.2% 93.1%
Reza et al. [45] 89.6% 84% – 82.2%
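As a quick sanity check, the average time in Table 7 is, up to rounding, the plain mean of the five per-modality times:

```python
# Per-modality segmentation times from Table 7 (seconds).
times = {"Flair": 6.856, "DWI": 0.008, "T2": 6.865, "T1-C": 6.853, "T1": 6.932}
avg = sum(times.values()) / len(times)  # ~5.50 s, consistent with the reported 5.502 s
```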
compared to T1 and T1-C. T2 obtains 98.0% DSC, 96.2% SE, 96.0% JSI, 0.04 FNR, 0.00 FPR, 100% precision, 100% SP and 97.0% ACC. Moreover, after the performance evaluation on each modality, it is observed that the precision rate increases over 40 training epochs for the T1, T1-C and T2 modalities, whereas Flair and DWI achieve better results in fewer than 40 training epochs. The proposed methodology achieved better results in less processing time than recently existing architectures. Computational time is also compared between modalities: Flair, DWI, T1, T1-C and T2 take 6.856 s, 0.008 s, 6.932 s, 6.853 s and 6.865 s in the segmentation process, respectively. The average processing time of the proposed DNN model is 5.502 s, which shows that the DNN model is light in nature. A comparison of the presented approach with existing techniques is illustrated in Tables 8–13. In terms of computational time, the proposed model is compared with existing models in Table 14.

Table 12
Proposed method results on BRATS 2012 (image) dataset.
Method DSC
Proposed 98.4%
Wu et al. [30] 62%
Bauer et al. [32] 73%
Huang et al. [46] 75%

Table 13
Proposed method results on BRATS 2013 (image) dataset.
Method DSC SE SP ACC
Proposed 99.8% 99.7% 99.9% 99.8%
Zikic et al. [47] 83.7% – – –
Reza et al. [45] 90.9% – – 86.7%
Havaei et al. [14] 88% 89% 87% –
Pereira et al. [27] 88% – – –
Goetz et al. [48] 83% – – –
Huang et al. [46] 88% – – –

The presented approach's performance is compared with seventeen previous techniques [13,14,27–30,32,39–48]. Two Deconv networks, one EDD and one CNN, are ensembled for brain tumor detection; tested on 741 stroke lesion images, this obtained 94% DSC [28]. An expectation–maximization method is used for stroke lesion detection, achieving 78% DSC and 80% SE [39]. A CNN model is also taken into account for stroke lesion detection [29]. A Decision Forest method with a threshold-finding approach and an RFC
Table 14
Computational time comparison of proposed method with existing methods.
Method Computational time for the prediction per brain image
Tustison's method [8] 100 min
Input Cascade CNN [14] 3 min
Two Path CNN [14] 25 s
Proposed DNN model 5.502 s

is used for stroke lesion segmentation [40,41]. An Otsu-and-Random-Forest method is used for glioma detection; tested on the BRATS 2013 synthetic dataset, it achieved 93% DSC [42]. A patch-based segmentation method is used for brain lesion detection [43]. A U-Net based fully convolutional network is utilized for the detection of brain lesions, achieving 86% DSC on the BRATS 2015 dataset [44]. A CNN model is tested for brain tumor detection [27]. The Input Cascade CNN model is tested on the BRATS 2013 image dataset for detecting brain lesions [14]. A 3D fully connected conditional random field is used for the detection of brain lesions, obtaining 90% DSC and 90.4% SE [13]. Texture features are used for brain tumor detection; this approach achieved 89.6% DSC, 84% SE and 82.2% ACC on the BRATS 2014 dataset [45]. A conditional random fields (CRF) method with pixel-pairwise affinity and superpixel-level features is used for glioma detection [30]. A hierarchical regularization and classification method is tested on the BRATS 2012 image dataset, attaining 76% DSC [32]. The local independent projection-based classification (LIPC) technique is utilized to classify the class labels; tested on the BRATS 2013 image dataset, it obtained 88% DSC [46]. A CNN architecture [47] and extremely randomized trees [48] are used for glioma detection, obtaining 83.7% and 83.0% DSC on the BRATS 2013 image dataset.

From the above experimental results, the performance of the proposed technique is better than that of previous approaches, which demonstrates the applicability of the proposed model.

5. Conclusion

In this work, a DNN-based architecture is proposed for brain tumor detection. The proposed model is evaluated on eight challenge datasets and five MRI modalities: Flair, DWI, T2, T1 and T1-C. The achieved results are 99.8% DSC on Flair, 100% on DWI, 98.0% on T2, 97.4% on T1 and 95.4% on T1-C. The proposed model is validated on a number of datasets and performance measures on which existing techniques have not been evaluated; this attests to the consistency of the proposed model, which performs equally well across all datasets and performance measures. These results are compared with existing methods, showing that the proposed model performs better in terms of accuracy and time, with an average processing time of 5.502 s. This model can practically be used for brain lesion detection at an early stage.

References

[1] J. Amin, M. Sharif, M. Yasmin, S.L. Fernandes, A distinctive approach in brain tumor detection and classification using MRI, Pattern Recognit. Lett. (2017).
[2] V. Rajinikanth, S.L. Fernandes, B. Bhushan, N.R. Sunder, Segmentation and analysis of brain tumor using tsallis entropy and regularised level set, pp. 313–321.
[3] V. Rajinikanth, S.C. Satapathy, Segmentation of ischemic stroke lesion in brain MRI based on social group optimization and fuzzy-tsallis entropy, Arab. J. Sci. Eng. 1 (2018) 1–14.
[4] V. Rajinikanth, S.C. Satapathy, S.L. Fernandes, S. Nachiappan, Entropy based segmentation of tumor from brain MR images – a study with teaching learning based optimization, Pattern Recognit. Lett. 94 (2017) 87–95.
[5] D.J. Sharp, C.F. Beckmann, R. Greenwood, K.M. Kinnunen, V. Bonnelle, X. De Boissezon, J.H. Powell, S.J. Counsell, M.C. Patel, R. Leech, Default mode network functional and structural connectivity after traumatic brain injury, Brain 134 (8) (2011) 2233–2247.
[6] L.M. Carey, R.J. Seitz, M. Parsons, C. Levi, S. Farquharson, J.-D. Tournier, S. Palmer, A. Connelly, Beyond the lesion: neuroimaging foundations for post-stroke recovery, Future Neurol. 8 (5) (2013) 507–527.
[7] P.Y. Wen, D.R. Macdonald, D.A. Reardon, T.F. Cloughesy, A.G. Sorensen, E. Galanis, J. DeGroot, W. Wick, M.R. Gilbert, A.B. Lassman, Updated response assessment criteria for high-grade gliomas: response assessment in neuro-oncology working group, J. Clin. Oncol. 28 (11) (2010) 1963–1972.
[8] B.H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, The multimodal brain tumor image segmentation benchmark (BRATS), IEEE Trans. Med. Imaging 34 (10) (2015) 1993–2024.
[9] E.L. Yuh, S.R. Cooper, A.R. Ferguson, G.T. Manley, Quantitative CT improves outcome prediction in acute traumatic brain injury, J. Neurotrauma 29 (5) (2012) 735–746.
[10] J. Mitra, P. Bourgeat, J. Fripp, S. Ghose, S. Rose, O. Salvado, A. Connelly, B. Campbell, S. Palmer, G. Sharma, Lesion segmentation from multimodal MRI using random forest following ischemic stroke, NeuroImage 98 (2014) 324–335.
[11] R. Domingues, M. Filippone, P. Michiardi, J. Zouaoui, A comparative evaluation of outlier detection algorithms: Experiments and analyses, Pattern Recognit. 74 (2018) 406–421.
[12] C. Ledig, R.A. Heckemann, A. Hammers, J.C. Lopez, V.F. Newcombe, A. Makropoulos, J. Lötjönen, D.K. Menon, D. Rueckert, Robust whole-brain segmentation: application to traumatic brain injury, Med. Image Anal. 21 (1) (2015) 40–58.
[13] K. Kamnitsas, C. Ledig, V.F. Newcombe, J.P. Simpson, A.D. Kane, D.K. Menon, D. Rueckert, B. Glocker, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Med. Image Anal. 36 (2017) 61–78.
[14] M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin, H. Larochelle, Brain tumor segmentation with deep neural networks, Med. Image Anal. 35 (2017) 18–31.
[15] A. de Brébisson, G. Montana, Deep neural networks for anatomical brain segmentation, 2015, arXiv preprint arXiv:1502.02445.
[16] H.R. Roth, L. Lu, A. Seff, K.M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, R.M. Summers, A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations, pp. 520–527.
[17] D. Ciresan, A. Giusti, L.M. Gambardella, J. Schmidhuber, Deep neural networks segment neuronal membranes in electron microscopy images, pp. 2843–2851.
[18] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, 2014, arXiv preprint arXiv:1409.1556.
[19] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, pp. 234–241.
[20] B.H. Menze, K. Van Leemput, D. Lashkari, M.-A. Weber, N. Ayache, P. Golland, A generative model for brain tumor segmentation in multi-modal images, pp. 151–159.
[21] S. Bauer, L.-P. Nolte, M. Reyes, Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization, pp. 354–361.
[22] Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE Trans. Pattern Anal. Mach. Intell. 35 (8) (2013) 1798–1828.
[23] D. Zikic, Y. Ioannou, M. Brown, A. Criminisi, Segmentation of brain tumor tissues with convolutional neural networks, in: Proc. of BRATS-MICCAI, 2014.
[24] M. Lyksborg, O. Puonti, M. Agn, R. Larsen, An ensemble of 2D convolutional neural networks for tumor segmentation, pp. 201–211.
[25] V. Rao, M. Sarabi, A. Jaiswal, Brain tumor segmentation with deep learning, in: MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), 2015, pp. 56–59.
[26] P. Dvorak, B. Menze, Structured prediction with convolutional neural networks for multimodal brain tumor segmentation, in: Proceeding of the Multimodal Brain Tumor Image Segmentation Challenge, 2015, pp. 13–24.
[27] S. Pereira, A. Pinto, V. Alves, C.A. Silva, Brain tumor segmentation using convolutional neural networks in MRI images, IEEE Trans. Med. Imaging 35 (5) (2016) 1240–1251.
[28] L. Chen, P. Bentley, D. Rueckert, Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks, NeuroImage: Clinical (2017).
[29] H. Larochelle, P.-M. Jodoin, A convolutional neural network approach to brain tumor segmentation, p. 195.
[30] W. Wu, A.Y. Chen, L. Zhao, J.J. Corso, Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features, Int. J. Comput. Assist. Radiol. Surg. 9 (2) (2014) 241–253.
[31] D. Scherer, A. Müller, S. Behnke, Evaluation of pooling operations in convolutional architectures for object recognition, in: Artificial Neural Networks – ICANN 2010, 2010, pp. 92–101.
[32] S. Bauer, T. Fejes, J. Slotboom, R. Wiest, L.-P. Nolte, M. Reyes, Segmentation of brain tumor images based on integrated hierarchical classification and regularization.
[33] BRATS2014, https://sites.google.com/site/miccaibrats2014/. (Accessed 28 March 2018).
[34] BRATS2013, https://www.smir.ch/BRATS/Start2013. (Accessed 28 March 2018).
[35] BRATS2012, https://www.smir.ch/BRATS/Start2012/. (Accessed 28 March 2018).
[36] O. Maier, M. Wilms, J. von der Gablentz, U.M. Krämer, T.F. Münte, H. Handels, Extra tree forests for sub-acute ischemic stroke lesion segmentation in MR sequences, J. Neurosci. Methods 240 (2015) 89–100.
[37] M. Descoteaux, L. Maier-Hein, A. Franz, P. Jannin, D.L. Collins, S. Duchesne, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2017, Springer, 2017.
[38] L.R. Dice, Measures of the amount of ecologic association between species, Ecology 26 (3) (1945) 297–302.
[39] T. Haeck, F. Maes, P. Suetens, ISLES challenge 2015: Automated model-based segmentation of ischemic stroke in MR images, pp. 246–253.
[40] R. McKinley, L. Häni, R. Wiest, M. Reyes, Segmenting the ischemic penumbra: A decision forest approach with automatic threshold finding, pp. 275–283.
[41] Q. Mahmood, A. Basit, Automatic ischemic stroke lesion segmentation in multi-spectral MRI images using random forests classifier, pp. 266–274.
[42] S. Abbasi, F. Tajeripour, Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient, Neurocomputing 219 (2017) 526–535.
[43] N. Cordier, B. Menze, H. Delingette, N. Ayache, Patch-based segmentation of brain tissues, pp. 6–17.
[44] H. Dong, G. Yang, F. Liu, Y. Mo, Y. Guo, Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks, 2017, arXiv preprint arXiv:1705.03820.
[45] S.M. Reza, R. Mays, K.M. Iftekharuddin, Multi-fractal detrended texture feature for brain tumor classification.
[46] M. Huang, W. Yang, Y. Wu, J. Jiang, W. Chen, Q. Feng, Brain tumor segmentation based on local independent projection-based classification, IEEE Trans. Biomed. Eng. 61 (10) (2014) 2633–2645.
[47] D. Zikic, Y. Ioannou, M. Brown, A. Criminisi, Segmentation of brain tumor tissues with convolutional neural networks, in: Proceedings MICCAI-BRATS, 2014, pp. 36–39.
[48] M. Goetz, C. Weber, J. Bloecher, B. Stieltjes, H.-P. Meinzer, K. Maier-Hein, Extremely randomized trees based brain tumor segmentation, in: Proceeding of BRATS challenge-MICCAI, 2014, pp. 6–11.

Muhammad Sharif, Ph.D. is Associate Professor at COMSATS, Wah Cantt, Pakistan. His area of specialization is Artificial Intelligence and Image Processing. He has been in the teaching field from 1995 to date. He has 110-plus research publications in IF, SCI and ISI journals and in national and international conferences. He has so far supervised 25 MS (CS) theses. He is currently supervising 5 Ph.D. (CS) students and is co-supervisor of 5 others. More than 200 undergraduate students have successfully completed their project work under his supervision. His research interests are Image Processing, Computer Networks & Security, and Algorithms Design and Analysis.

Mussarat Yasmin, Ph.D. is Assistant Professor at COMSATS, Wah Cantt, Pakistan. Her area of specialization is Image Processing. She has been in the education field since 1993. She has so far 30 research publications in IF, SCI and ISI journals as well as in national and international conferences. A number of undergraduate projects have been completed under her supervision. She is currently supervising 5 Ph.D. (CS) students. She is a gold medalist in MS (CS) from IQRA University, Pakistan, and has received the COMSATS research productivity award since 2012. Her research interests include Neural Networks, Algorithms Design and Analysis, Machine Learning and Image Processing.

Steven Lawrence Fernandes, Ph.D. is a member of the Core Research Group, Karnataka Government Research Centre of Sahyadri College of Engineering and Management, Mangalore, Karnataka. He has received the Young Scientist Award from the Vision Group on Science and Technology, Government of Karnataka. He also received a grant from The Institution of Engineers (India), Kolkata, for his research work. His Ph.D. work, "Match Composite Sketch with Drone Images", has received patent notification (Patent Application Number: 2983/CHE/2015) from the Government of India.