Article
An Attention-Based Deep Convolutional Neural Network for
Brain Tumor and Disorder Classification and Grading in
Magnetic Resonance Imaging
Ioannis D. Apostolopoulos 1,*, Sokratis Aznaouridis 2 and Mpesi Tzani 3
1 Department of Medical Physics, School of Medicine, University of Patras, 26504 Rio, Greece
2 Department of Computer Engineering and Informatics, University of Patras, 26504 Rio, Greece
3 Department of Electrical and Computer Technology Engineering, University of Patras, 26504 Rio, Greece
* Correspondence: [email protected]
Abstract: This study proposes the integration of attention modules, feature-fusion blocks, and
baseline convolutional neural networks for developing a robust multi-path network that leverages its
multiple feature-extraction blocks for non-hierarchical mining of important medical image-related
features. The network is evaluated using 10-fold cross-validation on large-scale magnetic resonance
imaging datasets involving brain tumor classification, brain disorder classification, and dementia
grading tasks. The Attention Feature Fusion VGG19 (AFF-VGG19) network demonstrates superiority
against state-of-the-art networks and attains an accuracy of 0.9353 in distinguishing between three
brain tumor classes, an accuracy of 0.9565 in distinguishing between Alzheimer’s and Parkinson’s
diseases, and an accuracy of 0.9497 in grading cases of dementia.
Keywords: artificial intelligence; deep learning; attention module; feature fusion; magnetic
resonance imaging
One of the main challenges of using DL in MRI is the limited availability of labelled data. MRI images are often difficult and expensive to acquire, making it challenging to train DL algorithms. Additionally, the images can be of poor quality, making it difficult to detect specific patterns.
Another challenge is the variability of MRI images: they can vary depending on the imaging protocol and the acquisition parameters, which makes it non-trivial to train generalized DL algorithms able to cope with such variations.
Recent advances in the architecture and functions of convolutional neural networks
(CNNs) [10] have allowed for a dramatic improvement in the performance of these models. Attention mechanisms [11] are a popular addition to CNNs, allowing for adaptive
refinement of the feature maps to identify essential components of the image. Optimization
strategies [12] such as batch normalization and dropout have enabled faster convergence
and higher accuracy when applied to CNNs. In addition, feature fusion techniques such as
sparse coding, autoencoders, and multi-resolution processing have been used to combine
the strengths of multiple feature maps and have been shown to significantly improve the
accuracy of CNNs.
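As a concrete illustration of channel attention, the sketch below implements a squeeze-and-excitation-style module in Keras. It is one common design, not necessarily the attention module integrated into the proposed network, and the reduction ratio of 16 is an arbitrary choice.

```python
# Illustrative squeeze-and-excitation channel attention (a sketch, not
# necessarily the exact attention module used in AFF-VGG19).
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(feature_map, reduction=16):
    """Reweight the channels of a (B, H, W, C) feature map by learned importance."""
    channels = feature_map.shape[-1]
    squeeze = layers.GlobalAveragePooling2D()(feature_map)          # (B, C) summary
    excite = layers.Dense(channels // reduction, activation="relu")(squeeze)
    excite = layers.Dense(channels, activation="sigmoid")(excite)   # per-channel weights
    weights = layers.Reshape((1, 1, channels))(excite)
    return layers.Multiply()([feature_map, weights])                # adaptive refinement
```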
The study proposes an innovative modification of a well-established CNN developed by the Visual Geometry Group (VGG) [13] and named after it. We propose the integration of feature-fusion blocks and attention modules to enrich the encapsulation of important
image features that lead to more precise image classification. This network is employed to
classify brain MRI images in brain tumor classification and brain disorder discrimination
and grading.
The paper is structured as follows: Section 2 reviews related work. Section 3 briefly describes the entities of DL, the attention modules, and the feature-fusion blocks; describes the proposed network in detail; and presents the employed datasets. Section 4 presents the results of the study. Discussion and concluding remarks take place in Sections 5 and 6, respectively.
2. Related Work
Sadad et al. [14] utilized the U-Net architecture with ResNet50 as a backbone to perform
segmentation on the Figshare dataset, achieving an impressive intersection over union
(IoU) score of 0.9504. The researchers also employed preprocessing and data augmentation
techniques to improve classification accuracy. They used evolutionary algorithms and reinforcement learning in transfer learning to perform multi-classification of brain tumors. The
study compared the performance of different DL models, such as ResNet50, DenseNet201,
MobileNet V2, and InceptionV3, and demonstrated that the proposed framework outperformed the state-of-the-art methods. The study also applied various CNN models to
classify brain tumors, including MobileNet V2, Inception V3, ResNet50, DenseNet201, and
NASNet, achieving accuracies of 91.8%, 92.8%, 92.9%, 93.1%, and 99.6%, respectively. The
NASNet model showed the highest accuracy among all the models.
Allah et al. [15] investigated the effectiveness of a novel approach to classify brain
tumor MRI images using a VGG19 feature extractor and one of three different types of
classifiers. To address the shortage of images needed for deep learning, the study employed
a progressive growing generative adversarial network (PGGAN) augmentation model to
generate “realistic” brain tumor MRI images. The findings demonstrated that the proposed
framework outperformed previous studies in accurately classifying gliomas, meningiomas,
and pituitary tumors, achieving an accuracy rate of 98.54%.
In [16], a novel hybrid CNN-based architecture was proposed to classify three types
of brain tumors using MRI images. The approach involves utilizing two methods of hybrid
deep learning classification based on CNNs. The first method combines a pre-trained GoogLeNet model for feature extraction with an SVM for pattern classification, while the second method integrates a finely tuned GoogLeNet with a softmax classifier.
The performance of the proposed approach was evaluated on a dataset containing a total of
1426 glioma images, 708 meningioma images, 930 pituitary tumor images, and 396 normal
brain images. The results revealed that the finely tuned GoogLeNet model achieved an accuracy of 93.1%. However, the accuracy improved to 98.1% when GoogLeNet was used as a feature extractor combined with an SVM classifier.
Kang et al. [17] applied transfer learning and utilized pre-trained deep convolutional
neural networks to extract deep features from MRI images of the brain. These extracted
features were evaluated using various machine learning classifiers, and the top three
performing features were selected and combined to form an ensemble of deep features.
This ensemble was then used as input to several machine learning classifiers to predict the
final output. They evaluated the effectiveness of different pre-trained models as deep feature
extractors, various machine learning classifiers, and the impact of the ensemble of deep
features for brain tumor classification using three openly accessible brain MRI datasets.
The experimental results revealed that using an ensemble of deep features significantly
improved performance, and SVM with radial basis function kernel outperformed other
machine learning classifiers, particularly for large datasets.
Sivaranjini et al. [18] used a DL neural network to classify MRI images of healthy
individuals and those with Parkinson’s disease (PD). The researchers utilized the AlexNet
convolutional neural network architecture to improve the accuracy of Parkinson’s disease
diagnosis. By training the network with MRI images and testing it, the system was able to
achieve an accuracy rate of 88.9%. This demonstrates that deep learning models can aid
clinicians in diagnosing PD more objectively and accurately in the future.
Bhan et al. [19] successfully diagnosed PD from MRI images. The LeNet-5 architecture, combined with dropout and batch normalization, achieved 97.92% accuracy on a large dataset consisting of 10,548 images. This method has the potential to accurately diagnose various stages of PD.
Hussain et al. [20] presented a 12-layer CNN for binary classification and detection
of Alzheimer’s disease using brain MRI data. The proposed model’s performance is
evaluated and compared to existing CNN models based on accuracy, precision, recall, F1
score, and receiver operating characteristic (ROC) curve using the open access series of
imaging studies (OASIS) dataset. The model attained an accuracy of 97.75%, higher than
any previously published CNN models on this dataset.
Salehi et al. [21] used a CNN to detect and classify Alzheimer's disease (AD) at an early stage
by analyzing MRI images from the ADNI database. The dataset consisted of 1512 mild,
2633 normal, and 2480 AD images. The CNN model achieved a remarkable accuracy of
99%, surpassing the performance of several other studies.
One method is to use a CNN with multiple convolutional and fully connected layers. These layers are trained to extract features from the input image at different levels of abstraction. The features from each layer are then combined to form a single set of features.
Another method is to use feature pyramid networks (FPNs), which combine features from different layers of a CNN in a pyramid-like structure. This allows for the extraction of
features at different scales, which can be helpful in tasks such as object detection, where the
size of the object can vary greatly.
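A compact sketch of the FPN idea follows; the channel count, the choice of three stages, and the layer wiring are assumptions made for illustration.

```python
# Illustrative top-down FPN fusion of three CNN stages (coarse c5 to fine c3).
import tensorflow as tf
from tensorflow.keras import layers

def fpn_fuse(c3, c4, c5, channels=256):
    """Merge feature maps of decreasing resolution into a pyramid of fused maps.

    Assumes each stage halves the spatial resolution of the previous one."""
    p5 = layers.Conv2D(channels, 1)(c5)                      # lateral 1x1 projection
    p4 = layers.Add()([layers.UpSampling2D()(p5),
                       layers.Conv2D(channels, 1)(c4)])      # top-down + lateral
    p3 = layers.Add()([layers.UpSampling2D()(p4),
                       layers.Conv2D(channels, 1)(c3)])
    # 3x3 convolutions smooth aliasing introduced by upsampling
    return [layers.Conv2D(channels, 3, padding="same")(p) for p in (p3, p4, p5)]
```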
A third method is to use an attention-based feature fusion technique, which uses an
attention mechanism to selectively focus on different regions of the input image when
extracting features. This allows the network to pay more attention to the regions of the
image that are most important for the task at hand.
We propose using simple feature-fusion blocks that solely connect the output of the
convolutional blocks directly to the top of the network (Figure 1). A feature-fusion block
comprises batch normalization, dropout, and global average pooling layers. These layers do
not extract additional features. The feature-fusion blocks are placed after the second, third,
and fourth convolutional groups and are connected to the output of the max pooling layers
that follow (Figure 1). In this way, the extracted image features of each convolutional group
are connected directly to the classification layer at the top of the network, ensuring that no
further processing is applied. Therefore, the feature-fusion blocks negate the hierarchical
feature-extraction manner of the VGG19.
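The description above translates directly into code. The following minimal sketch assembles the fusion paths on a Keras VGG19 backbone; the tapped layer names follow Keras' VGG19 naming, while the dropout rate, input size, and classification head are assumptions, since the text specifies only the layer types and tap points.

```python
# Sketch of the AFF-VGG19 feature-fusion wiring (attention modules omitted;
# dropout rate and classification head are assumptions).
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19

def fusion_block(x, rate=0.5):
    """Feature-fusion block: BN -> dropout -> GAP; extracts no additional features."""
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(rate)(x)
    return layers.GlobalAveragePooling2D()(x)

base = VGG19(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
# Tap the max-pooling outputs of the 2nd, 3rd, and 4th convolutional groups
taps = [base.get_layer(name).output
        for name in ("block2_pool", "block3_pool", "block4_pool")]
fused = [fusion_block(t) for t in taps]                 # non-hierarchical paths
top = layers.GlobalAveragePooling2D()(base.output)      # main hierarchical path
merged = layers.Concatenate()(fused + [top])            # direct link to the classifier
outputs = layers.Dense(3, activation="softmax")(merged) # e.g., G/M/P classes
model = tf.keras.Model(base.input, outputs)
```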
4. Results
Section 4.1 describes the classification performance of the AFF-VGG19 network on the
brain tumor dataset. Section 4.2 presents the results of the network when classifying the brain disorders dataset. Accordingly, in Section 4.3, the performance of AFF-VGG19 on the dementia grading dataset is presented. Finally, Section 4.4 presents comparisons between
the proposed AFF-VGG19 network and alternative state-of-the-art networks.
Table 2. Performance metrics of AFF-VGG19 on the brain tumor dataset. G stands for glioma, M for
meningioma, P for pituitary.
For the G class, the network achieves an accuracy of 0.9505, a sensitivity of 0.9676, and
a specificity of 0.9427. The AUC score reaches 0.9552. For the M class, the network achieves
an accuracy of 0.9304, a sensitivity of 0.9062, and a specificity of 0.9408. The AUC score
reaches 0.9235. Accordingly, for the P class, the network achieves an accuracy of 0.9572, a
sensitivity of 0.9161, and a specificity of 0.9758. The AUC score reaches 0.9460.
The relatively large FNR (0.0938) and the lower accuracy (0.9304) in the M class indicate that
the network performs sub-optimally in the discrimination of the M class from the rest
(M versus ALL classification).
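For clarity, the per-class (one-versus-all) metrics reported here can be derived from a multi-class confusion matrix as in the sketch below; the example matrix is hypothetical, as the study's evaluation code is not published.

```python
# One-vs-all accuracy, sensitivity, specificity, and FNR from a confusion matrix.
import numpy as np

def one_vs_all_metrics(cm, k):
    """Metrics for class k, given cm[true_class, predicted_class]."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp              # class-k cases predicted as something else
    fp = cm[:, k].sum() - tp           # other classes predicted as k
    tn = cm.sum() - tp - fn - fp
    return {"accuracy":    (tp + tn) / cm.sum(),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "fnr":         fn / (tp + fn)}

cm = np.array([[310,  8,  2],          # hypothetical 3-class counts (G, M, P)
               [ 20, 261,  7],
               [  9,  6, 177]])
print(one_vs_all_metrics(cm, k=1))     # M-versus-ALL metrics
```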
Table 3. Performance metrics of AFF-VGG19 on the brain disorders dataset. AD stands for
Alzheimer’s disease and PD for Parkinson’s disease.
For the AD class, the network achieves an accuracy of 0.9409, a sensitivity of 0.9222,
and a specificity of 0.9541. The AUC score reaches 0.9382. For the PD class, the network
achieves an accuracy of 0.9489, a sensitivity of 0.9860, and a specificity of 0.9160. The AUC
score reaches 0.9510.
AFF-VGG19 yields a relatively high FPR in PD detection and a larger FNR in AD
detection (0.0840 and 0.0778, respectively).
The network exhibits an accuracy of 0.9769 in the control class, with a very small FPR
(0.0394), as Table 4 presents. In addition, a very low FNR is recorded (0.0069).
Table 4. Performance metrics of AFF-VGG19 on the dementia grading dataset. Mo stands for
moderate, Mi for mild, and VMi for very mild.
For the Mo class, the network achieves an accuracy of 0.9670, a sensitivity of 0.9531,
and a specificity of 0.9672. The AUC score reaches 0.9601. For the Mi class, the network
achieves an accuracy of 0.9264, a sensitivity of 0.8281, and a specificity of 0.9424. The
AUC score reaches 0.8853. For the VMi class, the network yields an accuracy of 0.9539, a
sensitivity of 0.9362, a specificity of 0.9635, and an AUC of 0.9498.
Figure 2 summarizes the performance of AFF-VGG19, presenting the ROC curve and the training and validation accuracy and loss for the brain tumor and brain disorder datasets.
The results of Table 5 justify the selection of the baseline VGG19 as the main component
for an attention-based feature-fusion network.
The study compares the proposed method with methods presented in recent related works on similar datasets. Table 6 summarizes the results.
Table 6. Comparison between the proposed AFF-VGG19 and methods from recent related works. G stands for glioma, M for meningioma, P for pituitary tumor, N for normal, AD for Alzheimer's disease, PD for Parkinson's disease, Mo for moderate, Mi for mild, and VMi for very mild.

| First Author [Ref.] | Test Data Size | Classes | Method | ACC | SEN | SPE |
|---|---|---|---|---|---|---|
| Sadad [14] | 612 slices | G-M-P | NASNet | 0.996 | - | - |
| Allah [15] | 460 slices | G-M-P | VGG19 | G: 0.9854, M: 0.9857, P: 1 | G: 0.9777, M: 0.9804, P: 1 | G: 0.9914, M: 0.9871, P: 1 |
| Rasool [16] | 692 slices | G-M-P-controls | GoogLeNet | 0.981 | G: 0.978, M: 0.973, P: 0.989, N: 0.987 | - |
| Kang [17] | 692 slices | - | DenseNet-169 | 0.9204 | - | - |
| This study | 26,249 slices | G-M-P-controls | AFF-VGG19 | G: 0.9505, M: 0.9304, P: 0.9572 | G: 0.9676, M: 0.9062, P: 0.9161 | G: 0.9427, M: 0.9408, P: 0.9758 |
| Bhan [19] | 1055 slices | PD-controls | LeNet-5 | 0.9792 | - | - |
| Sivaranjini [18] | 36 patients | PD-controls | AlexNet | 0.889 | - | - |
| Hussain [20] | 11 patients | AD-controls | CNN | 0.9775 | AD: 0.92, C: 1 | - |
| This study | 7756 slices | PD-AD-controls | AFF-VGG19 | AD: 0.9409, PD: 0.9489 | AD: 0.9222, PD: 0.9860 | AD: 0.9541, PD: 0.9160 |
| Salehi [21] | 7635 slices | Mi-VMi-controls | CNN | 0.99 | - | - |
| Mohammed [32] | 6400 slices | Mi-VMi-Mo-controls | AlexNet | 0.948 | 0.93 | 0.9775 |
| This study | 6400 slices | Mi-VMi-Mo-controls | AFF-VGG19 | Mo: 0.967, Mi: 0.9264, VMi: 0.9539 | Mo: 0.9531, Mi: 0.8281, VMi: 0.9362 | Mo: 0.9672, Mi: 0.9424, VMi: 0.9635 |
Compared to recent literature, the present study utilized large-scale data (brain tumor
and brain disorders datasets). Still, the results are consistent with the literature and verify
that the proposed methodology is robust for big-data classification.
4.5. Reproducibility
This section presents the results of statistical significance tests to verify the reproducibility
of the experiments and the stability of the proposed approach. For this purpose, AFF-VGG19
was trained and validated under 10-fold cross-validation on each dataset 20 times, and a t-test was performed. The results verify that the model produces consistent outcomes without statistically significant deviations from the initially reported accuracies (Table 7).
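A minimal sketch of this check follows, assuming for brevity a scikit-learn-style estimator and a one-sample t-test against the initially reported accuracy; the paper does not state the exact test variant used.

```python
# Repeated 10-fold cross-validation followed by a one-sample t-test.
import numpy as np
from scipy import stats
from sklearn.model_selection import StratifiedKFold

def repeated_cv_accuracies(build_model, X, y, repeats=20, folds=10):
    """Mean k-fold accuracy for each of `repeats` independently seeded runs."""
    run_means = []
    for seed in range(repeats):
        skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
        fold_acc = [build_model().fit(X[tr], y[tr]).score(X[te], y[te])
                    for tr, te in skf.split(X, y)]
        run_means.append(np.mean(fold_acc))
    return np.array(run_means)

# accs = repeated_cv_accuracies(build_model, X, y)
# t, p = stats.ttest_1samp(accs, popmean=0.9353)  # e.g., brain tumor accuracy
# A p-value above 0.05 would indicate no statistically significant deviation.
```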
5. Discussion
DL will likely play an important role in disease diagnosis and classification from MRI
images. With the ability to detect subtle changes in MRI images, accurately diagnose and
classify diseases, detect and segment lesions, identify biomarkers, develop personalized
medicine, and develop new diagnostic tools and therapies, deep learning has the potential
to revolutionize medical imaging and improve patient outcomes.
The study proposed a modification of the baseline VGG19 network that improves its
feature-extraction capabilities. Integrating the feature-fusion block and attention module
avoided the hierarchical nature of the baseline model and improved the classification
accuracy. The model was evaluated using three MRI datasets related to brain tumor
discrimination, brain disorder classification, and dementia grading. The AFF-VGG19
network demonstrates superiority against state-of-the-art networks. It attains an accuracy
of 0.9353 in distinguishing between three brain tumor classes, an accuracy of 0.9565 in
distinguishing between AD and PD, and an accuracy of 0.9497 in grading cases of dementia.
The high FPR in PD detection and the high FNR in AD detection may stem from the fact that these two classes can present similar findings and patterns in specific parts of the image, confusing the model when no distinct differences exist. For this reason, future research should also consider clinical data, which would probably help to determine the image classes more accurately.
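One straightforward way to realize this suggestion is late fusion of the image embedding with clinical variables. The sketch below is hypothetical: the embedding size, the clinical feature vector, and the head dimensions are all assumptions.

```python
# Hypothetical late-fusion head combining CNN image features with clinical data.
import tensorflow as tf
from tensorflow.keras import layers

image_features = layers.Input(shape=(512,), name="cnn_embedding")   # pooled CNN output
clinical = layers.Input(shape=(8,), name="clinical_variables")      # e.g., age, test scores
x = layers.Concatenate()([image_features,
                          layers.Dense(16, activation="relu")(clinical)])
x = layers.Dense(64, activation="relu")(x)
output = layers.Dense(2, activation="softmax")(x)                   # e.g., AD vs. PD
fusion_head = tf.keras.Model([image_features, clinical], output)
```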
Without integrating the attention and feature-fusion blocks, the baseline VGG19
network proved superior to the rest of the pretrained networks (Table 5). Therefore,
the attention and feature-fusion blocks were implemented using the baseline VGG19
architecture as the main feature-extraction pipeline.
The study has limitations that the authors aim to tackle in the future. Firstly, the proposed network needs further evaluation using more MRI datasets to verify its effectiveness.
Secondly, more fine-tuning and hyper-parameter tuning may be required depending on the
particular dataset and classification task. In the present study, the same parameters were
used for each dataset, which may decrease the efficiency. Thirdly, the attention modules
can be further improved and attached to other state-of-the-art networks.
A CNN can be trained to identify features such as edges, textures, and shapes in an
image associated with a particular disease or abnormality. By visualizing the intermediate
representations of the network, medical experts can gain insights into which features the
network is using to make its predictions. This can help to validate the network’s predictions
and provide additional information that can be used to support medical decision-making.
In this context, the lack of post-hoc explainability-enhancing algorithms in the present
study is a limitation and an opportunity for future studies.
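As one example of such a post-hoc method, a Grad-CAM heatmap highlights the image regions that drive a prediction; the sketch below is a standard formulation, not part of the present study's pipeline.

```python
# Standard Grad-CAM sketch for a Keras classifier (not part of this study's code).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Return a (H, W) heatmap in [0, 1] for one preprocessed image (H, W, 3)."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)      # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # pool gradients per channel
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                          # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```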
Nevertheless, the proposed network achieves a top accuracy in every task. It proves
to be superior to the baseline VGG19 model and other pretrained networks.
6. Conclusions
Recent advances in the architecture and functions of CNNs have allowed for a dramatic
improvement in the performance of these models. The study proposes an innovative
modification of VGG19 with the integration of feature-fusion blocks and attention modules
to enrich the encapsulation of important image features that lead to more precise image
classification. The AFF-VGG19 network obtained an accuracy of 0.9353 in
distinguishing between three brain tumor classes, an accuracy of 0.9565 in distinguishing
between AD and PD, and an accuracy of 0.9497 in grading cases of dementia from MRI
images. Future research should focus on evaluating the network using more datasets,
enhancing the explainability of the framework, and tuning the attention modules to obtain
more precise results.
Author Contributions: Conceptualization, I.D.A.; methodology, I.D.A. and S.A.; software, I.D.A. and S.A.; validation, I.D.A. and M.T.; resources, S.A. and M.T.; data curation, S.A. and M.T.; supervision, I.D.A.; writing—original draft, I.D.A. and M.T.; writing—review and editing, S.A. and M.T. All
authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The data presented in this study are openly available.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Goodfellow, I.; Bengio, Y.; Courville, A.; Bengio, Y. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1.
2. Plewes, D.B.; Kucharczyk, W. Physics of MRI: A Primer. J. Magn. Reson. Imaging 2012, 35, 1038–1054. [CrossRef]
3. Lundervold, A.S.; Lundervold, A. An Overview of Deep Learning in Medical Imaging Focusing on MRI. Z. Med. Phys. 2019, 29,
102–127. [CrossRef] [PubMed]
4. Turkbey, B.; Haider, M.A. Deep Learning-Based Artificial Intelligence Applications in Prostate MRI: Brief Summary. Br. J. Radiol.
BJR 2022, 95, 20210563. [CrossRef] [PubMed]
5. Noor, M.B.T.; Zenia, N.Z.; Kaiser, M.S.; Mamun, S.A.; Mahmud, M. Application of Deep Learning in Detecting Neurological
Disorders from Magnetic Resonance Images: A Survey on the Detection of Alzheimer’s Disease, Parkinson’s Disease and
Schizophrenia. Brain Inf. 2020, 7, 11. [CrossRef] [PubMed]
6. Mostapha, M.; Styner, M. Role of Deep Learning in Infant Brain MRI Analysis. Magn. Reson. Imaging 2019, 64, 171–189. [CrossRef]
7. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art
and Future Directions. J. Digit. Imaging 2017, 30, 449–459. [CrossRef]
8. Tao, Q.; Lelieveldt, B.P.F.; van der Geest, R.J. Deep Learning for Quantitative Cardiac MRI. Am. J. Roentgenol. 2020, 214, 529–535.
[CrossRef]
9. Schelb, P.; Kohl, S.; Radtke, J.P.; Wiesenfarth, M.; Kickingereder, P.; Bickelhaupt, S.; Kuder, T.A.; Stenzinger, A.; Hohenfellner, M.;
Schlemmer, H.-P.; et al. Classification of Cancer at Prostate MRI: Deep Learning versus Clinical PI-RADS Assessment. Radiology
2019, 293, 607–617. [CrossRef]
10. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [CrossRef]
11. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
12. Le, Q.V.; Ngiam, J.; Coates, A.; Lahiri, A.; Prochnow, B.; Ng, A.Y. On optimization methods for deep learning. In Proceedings of
the ICML, Bellevue, WA, USA, 28 June–2 July 2011.
13. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
14. Sadad, T.; Rehman, A.; Munir, A.; Saba, T.; Tariq, U.; Ayesha, N.; Abbasi, R. Brain tumor detection and multi-classification using
advanced deep learning techniques. Microsc. Res. Tech. 2021, 84, 1296–1308. [CrossRef]
15. Gab Allah, A.M.; Sarhan, A.M.; Elshennawy, N.M. Classification of Brain MRI Tumor Images Based on Deep Learning PGGAN
Augmentation. Diagnostics 2021, 11, 2343. [CrossRef] [PubMed]
16. Rasool, M.; Ismail, N.A.; Boulila, W.; Ammar, A.; Samma, H.; Yafooz, W.M.S.; Emara, A.-H.M. A Hybrid Deep Learning Model for
Brain Tumour Classification. Entropy 2022, 24, 799. [CrossRef] [PubMed]
17. Kang, J.; Ullah, Z.; Gwak, J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning
Classifiers. Sensors 2021, 21, 2222. [CrossRef] [PubMed]
18. Sivaranjini, S.; Sujatha, C.M. Deep learning based diagnosis of Parkinson’s disease using convolutional neural network. Multimed.
Tools Appl. 2020, 79, 15467–15479. [CrossRef]
19. Bhan, A.; Kapoor, S.; Gulati, M.; Goyal, A. Early Diagnosis of Parkinson’s Disease in brain MRI using Deep Learning Algorithm.
In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile
Networks (ICICV), Tirunelveli, India, 4–6 February 2021; IEEE: Tirunelveli, India, 2021; pp. 1467–1470.
20. Hussain, E.; Hasan, M.; Hassan, S.Z.; Hassan Azmi, T.; Rahman, M.A.; Zavid Parvez, M. Deep Learning Based Binary Classification for
Alzheimer’s Disease Detection using Brain MRI Images. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and
Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020; IEEE: Kristiansand, Norway, 2020; pp. 1115–1120.
21. Salehi, A.W.; Baglat, P.; Sharma, B.B.; Gupta, G.; Upadhya, A. A CNN Model: Earlier Diagnosis and Classification of Alzheimer
Disease using MRI. In Proceedings of the 2020 International Conference on Smart Electronics and Communication (ICOSEC),
Trichy, India, 10–12 September 2020; IEEE: Trichy, India, 2020; pp. 156–161.
22. Alloghani, M.; Al-Jumeily, D.; Mustafina, J.; Hussain, A.; Aljaaf, A.J. A Systematic Review on Supervised and Unsupervised
Machine Learning Algorithms for Data Science. In Supervised and Unsupervised Learning for Data Science; Unsupervised and
Semi-Supervised Learning; Berry, M.W., Mohamed, A., Yap, B.W., Eds.; Springer International Publishing: Cham, Switzerland,
2020; pp. 3–21. ISBN 978-3-030-22474-5.
23. Apostolopoulos, I.D.; Papathanasiou, N.D. Classification of lung nodule malignancy in computed tomography imaging utilizing
generative adversarial networks and semi-supervised transfer learning. Biocybern. Biomed. Eng. 2021, 41, 1243–1257. [CrossRef]
24. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of
the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY,
USA; pp. 248–255.
25. Huh, M.; Agrawal, P.; Efros, A.A. What makes ImageNet good for transfer learning? arXiv 2016, arXiv:1608.08614.
26. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A Survey of Transfer Learning. J. Big Data 2016, 3, 9. [CrossRef]
27. Apostolopoulos, I.D.; Pintelas, E.G.; Livieris, I.E.; Apostolopoulos, D.J.; Papathanasiou, N.D.; Pintelas, P.E.; Panayiotakis, G.S.
Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques. Med. Biol.
Eng. Comput. 2021, 59, 1299–1310. [CrossRef]
28. Falkenstetter, S.; Leitner, J.; Brunner, S.M.; Rieder, T.N.; Kofler, B.; Weis, S. Galanin System in Human Glioma and Pituitary
Adenoma. Front. Endocrinol. 2020, 11, 155. [CrossRef] [PubMed]
29. Coskun, P.; Wyrembak, J.; Schriner, S.E.; Chen, H.-W.; Marciniack, C.; LaFerla, F.; Wallace, D.C. A Mitochondrial Etiology of
Alzheimer and Parkinson Disease. Biochim. Biophys. Acta BBA-Gen. Subj. 2012, 1820, 553–564. [CrossRef]
30. O’Bryant, S.E.; Lacritz, L.H.; Hall, J.; Waring, S.C.; Chan, W.; Khodr, Z.G.; Massman, P.J.; Hobson, V.; Cullum, C.M. Validation
of the New Interpretive Guidelines for the Clinical Dementia Rating Scale Sum of Boxes Score in the National Alzheimer’s
Coordinating Center Database. Arch. Neurol. 2010, 67, 746–749. [CrossRef] [PubMed]
31. Coley, N.; Andrieu, S.; Jaros, M.; Weiner, M.; Cedarbaum, J.; Vellas, B. Suitability of the Clinical Dementia Rating-Sum of Boxes as
a Single Primary Endpoint for Alzheimer’s Disease Trials. Alzheimer’s Dement. 2011, 7, 602–610.e2. [CrossRef]
32. Mohammed, B.A.; Senan, E.M.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Al-Mekhlafi, Z.G.; Almurayziq, T.S.; Ghaleb, F.A.
Multi-Method Analysis of Medical Records and MRI Images for Early Diagnosis of Dementia and Alzheimer’s Disease Based on
Deep Learning and Hybrid Methods. Electronics 2021, 10, 2860. [CrossRef]