
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 24, NO. 11, NOVEMBER 2020

Single Volume Image Generator and Deep Learning-Based ASD Classification

Md Rishad Ahmed, Yuan Zhang, Senior Member, IEEE, Yi Liu, and Hongen Liao, Senior Member, IEEE

Abstract—Autism spectrum disorder (ASD) is an intricate neuropsychiatric brain disorder characterized by social deficits and repetitive behaviors. Deep learning approaches have been applied in clinical or behavioral identification of ASD; most erstwhile models are inadequate in their capacity to exploit the data richness. On the other hand, classification techniques often rely solely on region-based summary and/or functional connectivity analysis of functional magnetic resonance imaging (fMRI). Besides, biomedical data modeling to analyze big data related to ASD is still perplexing due to its complexity and heterogeneity. Single volume images have not previously been investigated for classification purposes. Considering these challenges, in this work we first design an image generator to generate single volume brain images from the whole-brain image by considering the voxel time points of each subject separately. Then, to classify ASD and typical control participants, we evaluate four deep learning approaches with their corresponding ensemble classifiers comprising one amended Convolutional Neural Network (CNN). Finally, to check the data variability, we apply the proposed CNN classifier with leave-one-site-out 5-fold cross-validation across the sites and validate our findings by comparing with literature reports. We showcase our approach on the large-scale multi-site brain imaging dataset (ABIDE) by considering four preprocessing pipelines, and it outperforms the state-of-the-art methods. Hence, it is robust and consistent.

Index Terms—Biomedical data modeling, image generator, convolutional neural network (CNN), autism spectrum disorder (ASD), fMRI, ABIDE.

Manuscript received October 18, 2019; revised February 25, 2020 and May 17, 2020; accepted May 22, 2020. Date of publication May 29, 2020; date of current version November 5, 2020. This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFC0108000, in part by the National Natural Science Foundation of China under Grant 81771940, and in part by the Capital's Funds for Health Improvement and Research under Grant 2018-4-6031. (Corresponding author: Yuan Zhang.)

Md Rishad Ahmed and Yuan Zhang are with the Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China (e-mail: [email protected]; [email protected]).

Yi Liu is with the Department of Respiratory Medicine, Civil Aviation General Hospital, Beijing 100123, China (e-mail: [email protected]).

Hongen Liao is with the Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/JBHI.2020.2998603

I. INTRODUCTION

Autism Spectrum Disorder (ASD) is a disturbance of the structure and functioning of the brain that causes different abnormalities such as communication difficulties, social deficits, repetitive behaviors and cognitive delays, and nonsocial features such as restricted and stereotyped behaviors, all of which have a significant impact on adaptive functioning [1]–[3]. As investigated by the Centers for Disease Control and Prevention in the United States, the estimated ASD prevalence is 1% or higher (1 subject in 59), and it has risen dramatically over the last decades [4]. Therefore, finding a precise biological marker to extrapolate the underlying roots of ASD pathologies is indispensable for applying effective treatment in ASD diagnosis.

One of the significant challenges in brain disorder research is to replicate findings through larger datasets that can reflect the heterogeneity of clinical populations. Functional magnetic resonance imaging (fMRI) has been extensively considered to perceive functional abnormalities of ASD patients, which can characterize the neural pathways [5], [6]. Functional connectivity analysis has produced deep insights into the brain abnormality connectomes between ASD and typical control (TC) at either individual- or group-level characteristics. Recently, most machine learning techniques for studying functional connectivity data rely on hand-engineered feature extraction, such as the correlation between regions of interest (ROIs) and topological measurements of modularity, clustering-based classification [7], and segregation or integration [8]. On the other hand, brain ROIs provide the structural substrates for measuring connectivities within the individual brain and functional activation patterns of the brain. It is a common approach to analyze ASD individuals based on expert-defined brain parcellation or data-driven strategies such as dictionary learning, clustering, and ICA (independent component analysis) [9], [10]. Both the expert-defined and data-driven ROIs strategies have several complaints, such as standardization, arbitrary decisions, and the selection of the regions exhibiting proficient information [11]. The data-driven ROIs strategy can be biased in selecting the regions showing considerable variability across subjects, which influences the results [12]. Hence, an alternative tool or strategy is essential to overcome the above complaints and generate volumetric brain images showing the activated regions. A single volume image generator not only generates whole brain volume images but also has the arbitrary nature of a chosen brain region scheme, which ensures the coverage of the entire brain regions of each subject.

Machine learning (ML), such as SVM (support vector machines), has been widely adopted to classify and exploit individual variation in functional connectivity of ASD [13], [14].


Recently, deep learning models with neuroimaging modalities have been effective in identifying brain disorders such as ASD and Alzheimer's Disease (AD) [15], [16]. With the rapid advancement of deep learning approaches for brain disorder diagnosis, convolutional neural networks (CNN) have become the most popular method for ASD classification [17]. However, most deep learning approaches have focused on functional connectivity or ROIs analysis, time-series data analysis, or temporal/spatial information of fMRI [18], [19]. They also lack model transparency, i.e., how the model secures its interpretability for clinical applications, as most deep neural networks are not easily interpretable. The choice of the potential classification algorithm is another open issue in the connectome-based analysis of ASD. A few years ago, some deep learning-based ASD classification models focused on simple linear predictive techniques using a vectored connectivity correlation matrix [20], [21]. Additionally, handling ASD big data using deep learning techniques is still thought-provoking due to the lack of potential data mining and investigation methods for the heterogeneous, complex, and dynamic data needed to diagnose this brain disorder. Due to the heterogeneity, etiology, and severity of ASD, a more professed methodology is required to forecast and analyze the behavior and functionality of each subject. Hence, motivated by the above challenges, here we focus on designing a new image generator that can generate single volume images from whole-brain fMRI, and we propose two novel classification architectures for classifying ASD and typical controls. The single volume brain image is the visualization of the brain regions along a specific direction and slice number, allowing the real-time voxel-period separation of the raw fMRI data. ROIs, in contrast, only define the brain regions of interest depending on the pre-selected slice number.

The main contributions of this study are as follows:
• To the best of our knowledge, for the first time we design a single volume image generator that can produce 2D three-channel images from a functional magnetic resonance 4D NIfTI image. The main advantage of the single volume image generator is that the generated 2D images represent activated brain regions for each voxel time point of the patients. It also visualizes the brain regions in axial, sagittal, and coronal axes in the form of glass brain and stat_map images.
• We incorporate four deep learning approaches with our improved CNN model to classify ASD and typical controls using the generated images as input. The advantages of our model include leveraging the voxel-2D structures of rs-fMRI without possessing too many model parameters and easily interpreting the complex, heterogeneous data, which can be used in combination with other tools to support clinicians in diagnosing ASD with more precision.
• We propose a novel deep ensemble learning framework based on the improved CNN and the benchmark approaches to classify ASD using features extracted by VGG16 from the glass brain and stat_map images. The proposed ensemble model can integrate the two different types of generated images simultaneously, utilizing one ensemble learning classifier individually. Thus, it overcomes the limitations of traditional machine learning models for ASD classification that often rely on ROI definitions.
• Finally, to evaluate the classifier performance and check the data variability, we apply the proposed CNN classifier with leave-one-site-out 5-fold cross-validation across the sites and validate our findings by comparing with literature reports.

The proposed approaches, with a combined loss function and the generated single volume images, establish a novel benchmark model for ASD detection on the ABIDE (Autism Brain Imaging Data Exchange) database.

The remainder of the paper is organized as follows: Section II discusses the related works; Section III covers a broad methodological explanation, including the single volume image generator and the proposed deep learning models; Section IV presents the experiments and discussion of this method, including the dataset; and finally, the conclusion is drawn in Section V.

II. RELATED WORKS

The amalgamation of machine learning methods and brain imaging data permits the classification of ASD, which can assuage significant suffering and provide safety for the patient's daily well-being. Studies on ASD classification using different imaging modalities and their analysis approaches, specifically deep learning techniques, are discussed in this section.

Study of the functional connectivity of brain networks is a sturdy tool to understand the neurological bases of a diversity of brain disorders such as autism [22]. In the work [23], Abraham et al. employed resting-state fMRI to extract functionally-defined brain areas and a support vector classifier (SVC) to compare connectivity between ASD and typical control. They considered 871 subjects from the ABIDE dataset for connectome-based prediction and got 67% accuracy. Guo et al. considered multiple stacked auto-encoders (SAE) as a feature selection method from whole-brain FCP obtained by Pearson correlation of ROIs [24]. Using only the UM (University of Michigan) data site, they got a classification accuracy of 86.36%. In [25], using only CCS (connectome computation system) pipeline data without global signal regression and an LSTM (long short-term memory) method for classification of individuals with ASD, Dvornek et al. achieved 68.5% accuracy.

On the other hand, time series for several sets of regions of interest (ROIs) also have the potential to classify and see brain network connections of ASD. ROIs are usually computed using a predefined atlas or a parcellation scheme based on anatomical features, functional activations, and connectivity patterns of the brain [10], [26]. Dvornek et al. incorporated phenotypic data with rs-fMRI into a single LSTM-based model for classifying ASD and achieved an accuracy of 70.1% [27]. They employed CCS pipeline data without global signal regression and a cross-validation framework. With the development of deep learning models, Convolutional Neural Networks (CNNs) in particular have found abundant applications on 2D and 3D images, as they can exploit image intensities and the pixel grid to decipher image segmentation and classification problems [28], [29].


Zhao et al. also evaluated an effective 3D CNN to bridge the gap between spatial overlap patterns and functional brain networks for characterizing and differentiating ASD from healthy controls [30]. They incorporated only 200 (ASD: 100, HC: 100) rs-fMRI scans from the ABIDE dataset. The average 10-fold cross-validation accuracy was 70.5%, using a spatial overlap pattern of 144 intrinsic connectivity networks (ICN) of each subject. The CNN weight training was performed with the stochastic gradient descent (SGD) optimizer with momentum, and multinomial negative log-likelihood was used as the objective loss function. Graph convolutional neural networks (G-CNN) are another way to identify brain patterns that can act as a neuropathological biomarker. Ktena et al. highlighted the potential of convolutional neural network models for connectome-based classification [31]. Using the CPAC (configurable pipeline for the analysis of connectomes) preprocessing pipeline and Harvard-Oxford atlases in [32], Anirudh et al. achieved 70.86% classification accuracy based on the G-CNN and ensemble learning.

TABLE I
A BRIEF SUMMARIZATION OF THE PREVIOUS MACHINE/DEEP LEARNING TECHNIQUES IN ASD CLASSIFICATION

Table I summarizes the recent trending technologies in the classification of ASD. From the analysis of Table I, we notice that most recent research works on ASD classification used functional connectivity (FC) or ROIs data for classification purposes. Besides, the majority of works rely on either one pipeline, one site, or one atlas's images for classification, which is not sufficient to answer some research and clinical questions. Biomedical big data are generally preferable to improve the dependability (i.e., separate feature extraction) and solid contributions of research for the treatment of brain disorders like ASD [33], [34]. Considering these challenges, we design an image generator to generate the single volume brain images from the whole-brain image using the four preprocessing pipelines mentioned by ABIDE. Furthermore, we observe that extracting features from the images using deep learning models also enhances classification performance. Using the VGG16 model as a feature extractor, we also propose a novel deep ensemble learning model that can be treated as a new benchmark approach for ASD classification compared with the recent literature reports.

III. METHODOLOGY

A. Data Preprocessing Pipelines

TABLE II
OVERVIEW OF THE BASIC PARAMETERS AND STEPS OF FOUR DIFFERENT FUNCTIONAL PREPROCESSING STRATAGEMS

The ABIDE data produced by the various preprocessing pipelines are analogous, and there is no consensus on a superlative method [39]. The foremost differences between preprocessing pipelines are the precise algorithms and parameters used for each of the steps and the software implementations. Rather than being doctrinaire and preferring a single strategy, we analyze four different preprocessing approaches. The principal advantage of the four strategies is that, together, they overcome the controversies surrounding band-pass filtering and global signal regression by combining with and without filtering and with and without global signal adjustment [40]. The selected strategies are the Connectome Computation System (CCS), the Configurable Pipeline for the Analysis of Connectomes (CPAC), the Data Processing Assistant for Resting-State fMRI (DPARSF), and the Neuroimaging Analysis Kit (NIAK). Table II arranges a summary of the distinct preprocessing steps and how they vary across pipelines. As mentioned by ABIDE, functional preprocessing was performed using only the four strategies covered in this study; other strategies, however, were used for structural preprocessing and the calculation of cortical measures. In this work, data are selected from the filt_global preprocessing stratagem, which is band-pass filtered (0.01–0.1 Hz) and spatially registered using a nonlinear method to MNI152 template space for each of the four pipelines.
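The filt_global derivatives described above are publicly distributed through the PCP. As a concrete illustration, the sketch below shows how such data could be fetched with nilearn's fetch_abide_pcp; this tooling is our assumption, since the paper does not state how the files were downloaded, and the local cache directory is hypothetical.

```python
# Hedged sketch: fetching filt_global ABIDE PCP data with nilearn (assumed tooling).
from nilearn.datasets import fetch_abide_pcp

for pipeline in ("ccs", "cpac", "dparsf", "niak"):  # the four strategies in Table II
    abide = fetch_abide_pcp(
        pipeline=pipeline,
        band_pass_filtering=True,        # filt_global: 0.01-0.1 Hz band-pass
        global_signal_regression=True,   # filt_global: with global signal regression
        derivatives=["func_preproc"],    # preprocessed 4D functional images
        data_dir="./abide",              # hypothetical local cache directory
    )
    print(pipeline, len(abide.func_preproc), "functional images")
```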


TABLE III
PARAMETERS WEIGHED WHILE PLOTTING THE VOLUMETRIC IMAGES USING THE PROPOSED IMAGE GENERATOR

Fig. 1. Graphical representation of the proposed single volume image generator.

B. Single Volume Image Generator

As we know, a 3D fMRI is a voxel image containing only one brain volume, whereas a 4D fMRI is a series of brain volumes concatenated over repeated time, the time being the fourth dimension that represents the number of brain volumes. Considering the number of brain volumes, it can give images of the activated brain regions during spontaneous fMRI acquisition, which is called a single volume image [41], [42]. The functional activation maps impact the properties of repeated fMRI scans, for example, in assessing the relations between symptom intervention and brain activation patterns [43]. Therefore, a tool is needed to generate the images between the voxel periods for scrutinizing the activated brain regions. Rather than the traditional analysis of functional connectivity or brain ROIs, in this work, we design an image generator to produce single volume brain images from the preprocessed whole-brain functional image. The single-volume image depicts the 2D visualization of the brain activity by considering each voxel time point. Fig. 1 represents the flow diagram to generate the single volume images by predefined displaying mode. The working principle of the single volume image generator has several folds: firstly, the generator checks the shape of each input NIfTI file; the input is expected to be a 4D fMRI image. Secondly, the generator main body checks the conditions to draw output depending on the enumeration and iteration counter. The enumeration and iteration segment counts the number of voxel time points during image acquisition. Then we set the corresponding parameters to demonstrate the real-time brain activations. Finally, the generator plots and saves the brain images into two types of volumetric images for each ASD and TC individual by counting the whole voxel time points for each subject. The corresponding parameters to display and save the single volume images from the 4D multiple-brain-volume image are shown in Table III.

In Table III, display mode selects the specific direction of the cuts, with x the sagittal, y the coronal, and z the axial view of the brain. The threshold is either a numeric value or none: none means the plotted images are not thresholded, while an absolute numeric value means thresholded images are plotted, where values below the threshold are transparent. We choose two different threshold values, 3 and 5, in our experiment for the glass brain and stat map functions, respectively. The parameter cut coords specifies the number of slices used to visualize the brain images along the specific direction. Colorbar shows a vertical color bar to the right of the current axes of the plotted images, which is set to "false" in our experiment. The detailed description of the two types of plotted volumetric images and the reasons for choosing them are explained below.

Glass Brain and Stat_Map Images: All neuroimaging is part of brain mapping. The glass brain is a 3D brain visualization that displays real-time source activity and connectivity between brain areas [44]. On the other hand, stat_map is the full name of the statistical images which plot cuts of an ROI/mask image. We prefer the glass brain and stat_map displaying modes for a single volume image because of their power of projecting a high-resolution 3D model of an individual's brain, skull, and scalp tissue. Another fundamental discrepancy between general brain mapping and our generator images is the map projection. General brain mapping considers specific brain regions (ROIs) or a time series scheme to see the brain connectome. However, our proposed generator estimates the number of voxel time points for each 4D image to plot the real-time activated brain regions. The plotted images were in MNI space for all the considered pipelines so that the image functions work accurately.
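To make the workflow concrete, the following is a minimal sketch of the generator's main loop, assuming the nibabel and nilearn libraries (whose plot_glass_brain and plot_stat_map functions match the two displaying modes); the output naming and the fixed display mode are illustrative, not the authors' exact code.

```python
# Minimal sketch of the single volume image generator (assumed libraries:
# nibabel, nilearn); parameters follow Table III, paths/names are hypothetical.
import nibabel as nib
from nilearn.image import index_img
from nilearn.plotting import plot_glass_brain, plot_stat_map

def generate_single_volume_images(nifti_path, out_prefix, display_mode="z"):
    img = nib.load(nifti_path)                  # step 1: check the input shape
    assert len(img.shape) == 4, "expects a 4D (x, y, z, time) NIfTI image"
    n_volumes = img.shape[3]                    # number of voxel time points
    for t in range(n_volumes):                  # enumeration/iteration counter
        vol = index_img(img, t)                 # one brain volume at time t
        plot_glass_brain(vol, display_mode=display_mode, threshold=3,
                         colorbar=False,
                         output_file=f"{out_prefix}_glass_{t:03d}.png")
        plot_stat_map(vol, display_mode=display_mode, threshold=5, cut_coords=5,
                      colorbar=False,
                      output_file=f"{out_prefix}_stat_{t:03d}.png")
```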
C. Proposed CNN-Based Classification Model

Fig. 2. Proposed CNN architecture for classifying ASD vs. TC.

Fig. 2 depicts the overall deep learning architecture, specifically based on CNN, for the classification of ASD.


Convolutional neural networks, as a branch of neural networks, are widespread for tasks related to image classification and segmentation. CNN alleviates the limitations of traditional neural networks through local connections, weight sharing, and sampling. There exist two rudimentary operations in CNN: convolution through a kernel (weights and bias to convolve the input feature map) and subsequent sampling of the convolved feature map.

From Fig. 2, the enhanced CNN architecture has six convolutional layers, four max-pooling layers, two batch normalization layers, one flatten and dropout layer with ratio 0.25, and two densely connected layers followed by a sigmoid activation function. The number of kernels and layer types are signified in each box. To mitigate slow model training due to layer parameter changes, we introduce the batch normalization layer [45], which permits a much higher learning rate and less careful initialization.
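As an illustration of this layer stack, a Keras sketch is given below. The filter counts and kernel sizes are our assumptions (the exact values appear only in Fig. 2), and the input shape follows the (160, 160) RGB images described in Section IV.

```python
# Illustrative Keras sketch of the described stack: six conv, four max-pool,
# two batch-norm, flatten + dropout 0.25, two dense layers, sigmoid output.
# Filter counts and kernel sizes are assumptions, not the authors' exact values.
from tensorflow.keras import layers, models

def build_cnn(input_shape=(160, 160, 3)):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.25),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # binary ASD vs. TC output
    ])
```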
Loss Function: We contemplate the class-wise cross-entropy loss as given below:

\mathrm{loss}_i = -\left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right] \quad (1)

where y_i is the label (binary indicator) and p_i is the predicted probability of the label.

After that, we measure the higher weight (w_i) to give to the losses incurred on the training data samples:

w_i = \frac{\sum_{j=1}^{N} s_j}{s_i} \quad (2)

where N is the total number of training samples, s_j counts the whole samples from j = 1 \ldots N, and s_i is the number of per-class samples. Hence, the weighted binary cross-entropy loss can be calculated from equations (1) and (2) as

\mathrm{loss}_w = \frac{1}{M} \sum_{i=1}^{M} w_i \times \mathrm{loss}_i \quad (3)

where M is the number of classes and loss_w is the weighted loss.

Overfitting usually happens in a complex model, which is a common problem in deep learning applications. Regularization is a procedure for introducing supplementary information to prevent overfitting. Therefore, to solve the overfitting problem as well as penalize the sum of the absolute values of the weights, we add L1 regularization to the loss in equation (3) to define a new loss function as given below:

\mathrm{loss}_{com} = \mathrm{loss}_w + L_1 \quad (4)

where the L1 regularization is defined as

L_1 = \lambda \sum_{n=1}^{k} |w_n| \quad (5)

where k is the number of independent features and w denotes the model weights. The regularization parameter \lambda > 0 is manually tuned and has the effect of pushing w towards 0. After subsequent tuning during the experiment, we select the best-fit value of \lambda = 0.01 by monitoring the model performance using the Keras Callbacks function. By using the EarlyStopping callback, training could be terminated if we see a possible sign of overfitting or underfitting. In short, L1 regularization is the process of regularizing the parameters that shrinks the parameters to zero by reducing the number of features in the model altogether, thus avoiding the risk of overfitting.
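A sketch of how the combined loss (4) could be realized in TensorFlow/Keras follows; this is an assumption about the implementation, with model and class_weights as illustrative names.

```python
# Hedged sketch of loss_com = loss_w + L1, equations (1)-(5); not the authors'
# exact code. class_weights[c] = (total samples) / (samples of class c), eq. (2).
import tensorflow as tf

def make_combined_loss(model, class_weights, lam=0.01):
    cw = tf.constant(class_weights, dtype=tf.float32)
    def loss_fn(y_true, y_pred):
        # weighted binary cross-entropy, equations (1)-(3)
        w = tf.gather(cw, tf.cast(tf.squeeze(y_true), tf.int32))
        bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
        loss_w = tf.reduce_mean(w * bce)
        # L1 penalty, equation (5): lambda * sum of |w_n|
        l1 = lam * tf.add_n([tf.reduce_sum(tf.abs(v))
                             for v in model.trainable_weights])
        return loss_w + l1                       # equation (4)
    return loss_fn
```

In practice, a similar effect is commonly obtained with a Keras kernel_regularizer (regularizers.l1(0.01)) on the dense layers plus the class_weight argument of fit(), with the EarlyStopping callback monitoring for the over/underfitting signs mentioned above.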
Benchmark CNN Approaches: To analyze the generated single volume images, we executed Keras-based pre-trained benchmark CNN approaches as follows: DenseNet [46], ResNet [47], Xception [48], and Inception V3 [49]. For classification of ASD, the output node was set to a sigmoid function, and the binary cross-entropy loss was adopted. The models were trained with a mini-batch size of 64; ADAM was used as the optimizer with learning rate 0.0004, beta_1 = 0.9, and beta_2 = 0.999.

Fig. 3. Architectural overview of the proposed deep ensemble learning classifier.

D. Proposed Deep Ensemble Learning Classifier

An ensemble learning framework sometimes overcomes the limitations of traditional deep learning models, which often rely on ROI-based summary statistics and linear models for ASD classification [35]. In our experiments, we reconnoitered four ensemble learning strategies by combining the benchmark approaches with the improved CNN model. Each ensemble model was trained and tested using the two types of generated images from each pipeline. Fig. 3 shows the proposed deep ensemble learning classifier's architectural overview for ASD classification, where n represents the number of the classifier.


Here, we adopted the VGG16 model [50], a Keras-based pretrained paradigm utilizing weights from the ImageNet dataset, to extract features from both types of generated images. The feature extraction part comprises taking the convolutional base (consisting of only convolutional and pooling layers) of the formerly trained VGG16 model without a densely connected layer. The feature maps of a convolutional base are presence maps of generic notions, which are more reusable. Each ensemble classifier averages the predictions of the models for a specific pipeline using each one of the two types of images. The final evaluation metrics are calculated as the two-stage arithmetic mean of the specific binary class predictions for classification. The mathematics behind this is given below:

\mathrm{Score} = (E_g + E_s)/2 = (\mathrm{Classifier}(n) + \mathrm{Classifier}(n))/2 \quad (6)

where E_g and E_s denote the glass brain and stat_map images, respectively, and Classifier(n) is the base classifier. The base classifiers are defined as ensemble classifier 1 (DenseNet + Proposed CNN), ensemble classifier 2 (ResNet + Proposed CNN), ensemble classifier 3 (Xception + Proposed CNN), and ensemble classifier 4 (Inception V3 + Proposed CNN), respectively. The number of base classifiers is denoted by the number of the ensemble classifier, such that n = 2 corresponds to ensemble classifier 2, and so on. We train each ensemble classifier separately for the two types of images. For instance, we have two types (glass brain and stat_map) of images for the CCS pipeline; hence we train ensemble classifier 1 individually to see its performance on the different image features. The activation of the last layer was set to the sigmoid function for the classification task. Each ensemble model was trained for 1000 iterations using the ADAM optimizer with learning rate 0.0004, beta_1 = 0.9, and beta_2 = 0.999.
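A sketch of this two-stage averaging is given below, assuming each trained base model exposes a Keras-style predict(); the names are illustrative.

```python
# Hedged sketch of the two-stage arithmetic mean in equation (6).
def ensemble_score(benchmark_model, proposed_cnn, x_glass, x_stat):
    def pair_mean(x):
        # average the two base classifiers' predicted probabilities
        return (benchmark_model.predict(x) + proposed_cnn.predict(x)) / 2.0
    e_g = pair_mean(x_glass)         # E_g: glass brain images
    e_s = pair_mean(x_stat)          # E_s: stat_map images
    return (e_g + e_s) / 2.0         # Score = (E_g + E_s) / 2

# binary decision from the averaged sigmoid outputs:
# labels = (ensemble_score(densenet, proposed_cnn, x_glass, x_stat) > 0.5)
```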

IV. EXPERIMENTS AND DISCUSSION

A. Participants

TABLE IV
PHENOTYPIC SUMMARY OF ABIDE-I 4D FMRI DATA FOR EACH INDIVIDUAL SITE

Resting-state fMRI allows for the investigation of the disturbance of brain networks without the added complexity of the variation associated with task-related brain activation. The present study was carried out using 4D rs-fMRI data from ABIDE-I, which is a particularly complex dataset due to its heterogeneity, the vast range of subjects comprised, and the different imaging protocols. It is also a consortium that provides previously collected multi-site ASD and matched typical control data to the scientific research community from 17 different imaging sites around the world [39]. The diagnostic procedures of this multisite data platform differ from site to site. Also, the scanning protocols and parameters vary from one site to another, with controlled data quality. The fundamental benefit of the ABIDE platform is that preprocessed data are available through different preprocessing pipelines, and the images from one site can be considered as one dataset. The rs-fMRI preprocessed data were downloaded through four preprocessing pipelines from the Preprocessed Connectomes Project (PCP).¹ We incorporated data from 529 ASD individuals and 573 typical controls (TC) for each pipeline. Table IV contains the key phenotypical information, including the distribution of ASD and TC by sex and age, and the ADOS score, where Ψ means that the site did not have this information. The 1000 Functional Connectomes Project² contains complete information about the data sites, including full site name, corresponding laboratory/hospital, and connecting projects and collaborations.

¹ [Online]. Available: http://preprocessed-connectomes-project.org/abide/download.html
² [Online]. Available: http://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html

B. Visualization of the Generated Single Volume Images

The envisioned image generator firstly ensures the shape of the functionally preprocessed images. If the checked image shape is 4D NIfTI (.nii.gz), it displays the brain images according to the voxel time points of the 4D rs-fMRI with the set parameters. For instance, if an image has 176 volumes, the generator generates 176 brain images.


TABLE V
NUMBER OF GENERATED IMAGES FOR EACH SPECIFIC DATA SITE INCLUDING SEPARATE TRAINING AND TESTING SETS

Fig. 4. Axial view (z) of the CCS pipeline images.
Fig. 5. Sagittal view (x) of the CPAC pipeline images.
Fig. 6. Coronal view (y) of the DPARSF pipeline images.
Fig. 7. Axial view (z) of the NIAK pipeline images.

Finally, the generator automatically saves the single volume images from the exposed whole-brain volume images in PNG format according to the defined label. The produced glass brain and stat_map images for the first ten subjects of the CCS, CPAC, DPARSF, and NIAK pipelines from both ASD and TC are shown in Fig. 4, Fig. 5, Fig. 6, and Fig. 7, respectively. In Fig. 4–Fig. 7, rows a and b represent the corresponding glass brain and stat_map images of ASD and TC individuals, respectively. Fig. 4–Fig. 7 are merely presented to show the first image of each subject for both displaying modes, which may not be very informative compared with the other images. However, during training and testing of the classification model, we considered all of the generated images for each individual.

From the analysis of the visualized images, Fig. 4 and Fig. 7 show the horizontal (axial) slice, which divides the brain into upper (superior) and lower (inferior) segments and is an x-y plane. On the other hand, Fig. 5 illustrates the longitudinal slice that separates the brain into left and right halves and is a y-z plane. It is called the sagittal view because it passes through the sagittal suture. Fig. 6 depicts the frontal (coronal) slices, which divide the brain into front (anterior) and back (posterior) parts and lie in an x-z plane.

Finally, we process the generated images according to the model requirements, because the shape of the generated images varies with the different volumes of the different imaging sites during image acquisition. The processing details of the input images for each model are addressed in the following subsection.

C. Model Training and Testing

In our experiment, to test the robustness of the generated images and the improved model, we first divided the whole set of generated images into two parts: training (85%) and testing (15%). We reckoned every image according to its subject's label, either ASD or TC, for binary classification. During training of the first model (Fig. 2), we considered the Keras-based ImageDataGenerator to process the generated images according to the model input requirements. By setting up the necessary parameters, such as batch size (64), rescale (1./255), and class mode (binary), we reshaped the images to a target size of (160, 160) as RGB images.
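The corresponding input pipeline might look like the sketch below; the directory layout (one sub-folder per class) and the paths are our assumptions.

```python
# Hedged sketch of the Keras input pipeline described above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.15)
train_gen = datagen.flow_from_directory(
    "generated/ccs_glass_brain",   # hypothetical path containing asd/ and tc/
    target_size=(160, 160),        # RGB images reshaped as in the text
    batch_size=64,
    class_mode="binary",
    subset="training",             # the 85% training split
)
test_gen = datagen.flow_from_directory(
    "generated/ccs_glass_brain",
    target_size=(160, 160),
    batch_size=64,
    class_mode="binary",
    subset="validation",           # the held-out 15%
)
```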
Conversely, in the ensemble learning classifier (Fig. 3), we incorporated the VGG16 model for feature extraction. We then split the data into two parts, training (85%) and testing (15%), to feed into the classifier models. Depending on the number of voxel time points of the fMRI data, we generate multiple images for each subject; therefore, we get an adequate number of images for training and testing the deep learning models.


Table V manifests the total number of images for each site individually according to their splitting procedure. In Table V, all the images have the same shape, with different time points for the different data sites. For instance, the original fMRI has a shape like (61, 73, 61, voxel time points); thus, the fMRI time points are defined as the fourth dimension of the original fMRI shape. In this work, we consider the number of voxel time points to generate 2D brain images.

On the other hand, during unit-site classification, we performed a leave-one-site-out 5-fold cross-validation approach to evaluate the performance of the proposed CNN classifier across sites. This method helps to extract more information from the images while leaving enough test samples to measure the capability of the model in classifying unobserved images. Additionally, some of the sites have a small number of samples; for example, CMU (Carnegie Mellon University) contains only 27 samples. Therefore, performing leave-site-out 5-fold cross-validation increases the variance of the cross-validation estimate.

TABLE VI
CORRESPONDING CLASSIFICATION PERFORMANCES USING THE PROPOSED CNN MODEL

D. CNN-Based Classifier Performance Evaluation on the ABIDE Dataset

Deep learning approaches and ABIDE data have previously been studied to identify and analyze ASD with different measurement metrics. In this work, we evaluate four performance measurement metrics, P (precision), R (recall), F (F1-score), and A (accuracy), to validate the algorithm performance. Precision is outlined as the ratio of correctly labeled ASD positives to all labeled ASD positives, and recall is the corresponding ratio with respect to all actual ASD positives. The F1-score considers both the precision and recall measurements. Accuracy is delineated as the ratio of correctly labeled individuals to the whole number of subjects. All four metrics are weighed to validate the classification ability of our model for ASD and TC classification. Table VI shows the comparison between the different performance measurements for the benchmark and proposed CNN methods using the two categories of images from the four pipelines.

The maximum performance is marked in bold in each section. The highest average accuracy obtained by our proposed method is 83% for CPAC glass brain images, with an average precision of 80.5% for CCS stat_map images, an average F1-score of 80.9% for DPARSF glass brain images, and an average specificity of 81.2% for NIAK glass brain images. The analysis of the performance comparison shows that the improvement of the improved CNN over the other methods is statistically noteworthy.
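For reference, the four metrics can be computed as in the short sketch below; scikit-learn is our assumption about tooling, and the labels are illustrative.

```python
# The four reported metrics on hypothetical predictions (1 = ASD, 0 = TC).
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 1]          # ground-truth labels (illustrative)
y_pred = [1, 0, 1, 0, 0, 1]          # thresholded sigmoid outputs

P = precision_score(y_true, y_pred)  # TP / (TP + FP)
R = recall_score(y_true, y_pred)     # TP / (TP + FN)
F = f1_score(y_true, y_pred)         # harmonic mean of P and R
A = accuracy_score(y_true, y_pred)   # correct predictions / all subjects
```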
E. Deep Ensemble Learning Classifiers Performance Analysis

We performed four ensemble classifier techniques in the experiment by combining the benchmark approaches with the improved CNN. We trained each classifier separately for the classification of the two different image features and show the corresponding outputs. All the ensemble classifiers were trained using the ADAM optimizer and a sigmoid function for classification. The final output was taken based on equation (6) for the binary classification. Table VII represents the classification performance of the ensemble learning classifiers. The highest accuracy and other relevant measurements are marked in bold in the table for each pipeline's generated images. From the analysis of the table results, the third ensemble classifier performs better than the other ensemble classifiers on the CCS and CPAC pipeline data, with the highest accuracy of 87% in each case. For the other two pipelines, the second and third classifiers perform closely on the DPARSF dataset, and the first and second classifiers perform closely on the NIAK dataset, with an accuracy of 86% in both cases. The fourth ensemble classifier performs about average in all four cases.

Ensemble learning has already been introduced for ASD classification in several literature reports, but not frequently yet. In [35], Khosla et al. used stochastic parcellation and seven atlases of the ABIDE dataset to classify ASD based on a 3D CNN approach. The authors also performed two ensemble learning strategies of a 3D CNN method, called the multi-atlas ensemble (MA-Ensemble) model and the stochastic parcellation ensemble strategy (SP-Ensemble). The average classification predictions of the MA-Ensemble model were computed using each one of the seven atlases available in PCP for ROI time series extraction, with the highest accuracy of 71.7%. Using 30 stochastic parcellations, the SP-Ensemble model obtained a maximum accuracy of 72.3%. Additionally, in [36], Wang et al. firstly divided the data according to the subjects' age and sex and found out their functional connectivity patterns for ASD classification. They proposed a sparse multi-view multitask ensemble (Sparse-MVMT-E) classification method for individualized ASD diagnosis.


They considered two data sites from ABIDE, namely NYU and UM-1, and secured the highest accuracies of 72.6% and 71.4%, respectively. Compared with the results of [35] and [36], our model surpasses them with a mean accuracy difference of around 14%, using different types of image features rather than FC or atlas data. The results comparison suggests that our proposed ensemble model classifies ASD patients more precisely.

TABLE VII
CLASSIFICATION PERFORMANCES USING THE PROPOSED DEEP ENSEMBLE LEARNING CLASSIFIERS

TABLE VIII
UNIT SITE CLASSIFICATION USING LEAVE-SITE-OUT 5-FOLD CROSS-VALIDATION, COMPARED WITH OTHER METHODS

F. Unit Site Classification Performances Analysis

As ABIDE is a consortium of large datasets of ASD subjects from multiple renowned institutions around the world, we performed unit-site classification to pattern out the sites' variability. Leave-One-Site-Out 5-fold cross-validation has been employed to evaluate the CNN classifier performance across the sites. This process firstly excludes the data of one site from the training process and uses those data as the test set to evaluate the model. It then tests the applicability of the model to a new, different site. Leave-One-Site cross-validation always estimates the entire 5-fold cross-validation using a single split of data folds. During the experiments, we compute confidence intervals through the standard deviation (SD) of the mean accuracy.
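A sketch of this site-wise protocol is shown below, using scikit-learn's LeaveOneGroupOut as one possible realization; X, y, site_labels, and build_cnn() are assumed to exist (e.g., from the earlier sketches) and are not the authors' exact code.

```python
# Hedged sketch of leave-one-site-out evaluation with assumed arrays
# X (images), y (labels), and site_labels (one site ID per sample).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

site_acc = {}
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site_labels):
    model = build_cnn()                       # fresh classifier per held-out site
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[train_idx], y[train_idx], epochs=20, batch_size=64, verbose=0)
    _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
    site_acc[site_labels[test_idx][0]] = acc  # accuracy on the excluded site

mean_acc = np.mean(list(site_acc.values()))
sd = np.std(list(site_acc.values()))          # the SD reported with the mean
```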
Table VIII represents the performance comparison using the proposed CNN model, including the benchmark approaches and the standard deviation (SD) of the mean accuracy for each model. In our experiments, the highest accuracy for glass brain images, up to 88%, belongs to several data sites, and for stat_map images the accuracy is up to 87% for several data sites, as shown in the table. Heinsfeld et al. also investigated unit-site classification, where they achieved the highest accuracy of 68% for both the Caltech and MaxMun data sites by using patterns of functional connectivity [37].


On the other hand, Eslami et al. performed 5-fold cross-validation on each site separately using ASD-DiagNet [38]. They got a maximum accuracy of 82% for the OHSU data site, without data augmentation. In both works, [37] and [38], CC-200 functional parcellation brain atlas data was utilized for the intra-site evaluation. The accuracy closest to our model's was achieved by [38] in only one data site, while the other data sites perform far lower with data augmentation. The mean accuracy difference between our work and the comparative works is approximately 20%. Therefore, based on the results, our proposed method, along with the benchmark approaches, obtains the highest accuracy in most data sites and outperforms the other methods on average for classifying ASD. Furthermore, the standard deviation across the data sites is lower, which is an indication that each site's accuracy is close to the mean accuracy. It can be concluded that the results, compared with other findings, also suggest that there is data variability (dispersion or spread) among some of these sites that does not exist in others.

V. CONCLUSION

The recent advancement of functional connectivity and brain ROIs analysis has made a conspicuous advance in the classification of ASD. However, it is challenging to generalize the outcomes to larger, more heterogeneous populations rather than smaller ones. While most recent work has investigated the functional connectivity or time series analysis of fMRI, in this study, we demonstrate a suitable image generator to harvest stable images that can provide perceptive details on the target disease using a heterogeneous neuroimaging modality. Also, we validate the generated images using two proposed deep learning-based frameworks that could enhance diagnostic truthfulness, with the potential to classify and develop better treatments. Furthermore, to check the inter-site data variability, we apply the proposed method across the sites using a leave-site-out cross-validation approach. Our significant image processing scheme and sampling, along with the precise CNN classifier, ensure a trustworthy approach to ASD classification in association with the other image processing techniques. Overall, the proposed image processing scheme provides a proficient and objective way of interpreting neuroimaging applied to the deep learning model.

Future research inclinations involve extending to the structural preprocessing and cortical measures calculation pipeline data for classification of ASD. Besides, it is necessary to modify the architecture to consolidate raw fMRI data, as well as to analyze the correlation between brain activation regions (axial, sagittal, and coronal) to perceive the neural connectivity of the brain during the natural progression of autism.

REFERENCES

[1] E. Honey, J. Rodgers, and H. McConachie, "Measurement of restricted and repetitive behaviour in children with autism spectrum disorder: Selecting a questionnaire or interview," Res. Aut. Spc. Dis., vol. 6, no. 2, pp. 757–776, 2012.
[2] M. A. Just, V. L. Cherkassky, A. Buchweitz, T. A. Keller, and T. M. Mitchell, "Identifying autism from neural representations of social interactions: Neurocognitive markers of autism," PLOS ONE, vol. 9, no. 12, pp. 1–22, 2014.
[3] G. Noriega, "Restricted, repetitive, and stereotypical patterns of behavior in autism: An fMRI perspective," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 6, pp. 1139–1148, Jun. 2019.
[4] J. Baio, "Prevalence of autism spectrum disorder among children aged 8 years: Autism and developmental disabilities monitoring network, 11 sites, United States," Centers for Disease Control and Prevention, vol. 63, pp. 1–24, 2014.
[5] M. D. Kaiser et al., "Neural signatures of autism," Proc. Nat. Acad. Sci., vol. 107, no. 49, pp. 21223–21228, 2010.
[6] M. Lee, D. Y. Kim, M. K. Chung, A. L. Alexander, and R. J. Davidson, "Topological properties of the structural brain network in autism via ε-neighbor method," IEEE Trans. Biomed. Eng., vol. 65, no. 10, pp. 2323–2333, Oct. 2018.
[7] X.-a. Bi, Y. Wang, Q. Shu, Q. Sun, and Q. Xu, "Classification of autism spectrum disorder using random support vector machine cluster," Frontiers Genetics, vol. 9, 2018.
[8] Y. Kong, J. Gao, Y. Xu, Y. Pan, J. Wang, and J. Liu, "Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier," Neurocomputing, vol. 324, pp. 63–68, 2019.
[9] X. Bi et al., "The genetic-evolutionary random support vector machine cluster analysis in autism spectrum disorder," IEEE Access, vol. 7, pp. 30527–30535, Mar. 2019.
[10] B. A. Cociu et al., "Multimodal functional and structural brain connectivity analysis in autism: A preliminary integrated approach with EEG, fMRI, and DTI," IEEE Trans. Cogn. Dev. Syst., vol. 10, no. 2, pp. 213–226, Jun. 2018.
[11] Z. Wang, Y. Zheng, D. C. Zhu, A. C. Bozoki, and T. Li, "Classification of Alzheimer's disease, mild cognitive impairment and normal control subjects using resting-state fMRI based network connectivity analysis," IEEE J. Translational Eng. Health Medicine, vol. 6, pp. 1–9, Oct. 2018.
[12] B. Thirion, G. Varoquaux, E. Dohmatob, and J.-B. Poline, "Which fMRI clustering gives good brain parcellations?" Frontiers Neurosci., vol. 8, 2014.
[13] C. Wang, Z. Xiao, B. Wang, and J. Wu, "Identification of autism based on SVM-RFE and stacked sparse auto-encoder," IEEE Access, vol. 7, pp. 118030–118036, Aug. 2019.
[14] Z. Yao et al., "Resting-state time-varying analysis reveals aberrant variations of functional connectivity in autism," Frontiers Human Neurosci., vol. 10, 2016.
[15] X. Li, N. C. Dvornek, J. Zhuang, P. Ventola, and J. S. Duncan, "Brain biomarker interpretation in ASD using deep learning and fMRI," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2018, pp. 206–214.
[16] M. R. Ahmed, Y. Zhang, Z. Feng, B. Lo, O. T. Inan, and H. Liao, "Neuroimaging and machine learning for dementia diagnosis: Recent advancements and future prospects," IEEE Rev. Biomed. Eng., vol. 12, pp. 19–33, Dec. 2018.
[17] J. R. Sato, M. Calebe Vidal, S. de Siqueira Santos, K. Brauer Massirer, and A. Fujita, "Complex network measures in autism spectrum disorders," IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 15, no. 2, pp. 581–587, Mar./Apr. 2018.
[18] Y. Zhao et al., "Automatic recognition of fMRI-derived functional networks using 3-D convolutional neural networks," IEEE Trans. Biomed. Eng., vol. 65, no. 9, pp. 1975–1984, Sep. 2018.
[19] T. Iidaka, "Resting state functional magnetic resonance imaging and neural network classified autism and control," Cortex, vol. 63, pp. 55–67, 2015.
[20] P. Kassraian-Fard, C. Matthis, J. H. Balsters, M. H. Maathuis, and N. Wenderoth, "Promises, pitfalls, and basic guidelines for applying machine learning classifiers to psychiatric imaging data, with autism as an example," Frontiers Psychiatry, vol. 7, 2016.
[21] J. V. Hull, L. B. Dokovna, Z. J. Jacokes, C. M. Torgerson, A. Irimia, and J. D. Van Horn, "Resting-state functional connectivity in autism spectrum disorders: A review," Frontiers Psychiatry, vol. 7, 2017.
[22] A. Pascual-Belda, A. Díaz-Parra, and D. Moratal, "Evaluating functional connectivity alterations in autism spectrum disorder using network-based statistics," Diagnostics, vol. 8, no. 3, p. 51, 2018.
[23] A. Abraham et al., "Deriving reproducible biomarkers from multi-site resting-state data: An autism-based example," NeuroImage, vol. 147, pp. 736–745, 2017.
[24] X. Guo, K. C. Dominick, A. A. Minai, H. Li, C. A. Erickson, and L. J. Lu, "Diagnosing autism spectrum disorder from brain resting-state functional connectivity patterns using a deep neural network with a novel feature selection method," Frontiers Neurosci., vol. 11, 2017.


[25] N. C. Dvornek, P. Ventola, K. Pelphrey, and J. S. Duncan, "Identifying autism from resting-state fMRI using long short-term memory networks," in Proc. Mach. Learn. Med. Imag., 2017, vol. 10541, pp. 362–370.
[26] S. B. Eickhoff, B. Thirion, G. Varoquaux, and D. Bzdok, "Connectivity-based parcellation: Critique and implications," Human Brain Mapping, vol. 36, no. 12, pp. 4771–4792, 2015.
[27] N. C. Dvornek, P. Ventola, and J. S. Duncan, "Combining phenotypic and resting-state fMRI data for autism classification with recurrent neural networks," in Proc. IEEE 15th Int. Symp. Biomed. Imag., 2018, pp. 725–728.
[28] S. Parisot et al., "Spectral graph convolutions for population-based disease prediction," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2017, pp. 177–185.
[29] X. Li et al., "2-channel convolutional 3D deep neural network (2CC3D) for fMRI analysis: ASD classification and feature learning," in Proc. IEEE 15th Int. Symp. Biomed. Imag., 2018, pp. 1252–1255.
[30] Y. Zhao, F. Ge, S. Zhang, and T. Liu, "3D deep convolutional neural network revealed the value of brain network overlap in differentiating autism spectrum disorder from healthy controls," in Proc. Int. Conf. Med. Image Comput. Comput.-Assisted Intervention, 2018, pp. 172–180.
[31] S. I. Ktena et al., "Distance metric learning using graph convolutional networks: Application to functional brain networks," in Proc. Med. Image Comput. Comput. Assisted Intervention, 2017, pp. 469–477.
[32] R. Anirudh and J. J. Thiagarajan, "Bootstrapping graph convolutional neural networks for autism spectrum disorder classification," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., 2019, pp. 3197–3201.
[33] A. Phinyomark, E. Ibanez-Marcelo, and G. Petri, "Resting-state fMRI functional connectivity: Big data preprocessing pipelines and topological data analysis," IEEE Trans. Big Data, vol. 3, no. 4, pp. 415–428, Dec. 2017.
[34] D. Raví et al., "Deep learning for health informatics," IEEE J. Biomed. Health Informat., vol. 21, no. 1, pp. 4–21, Jan. 2017.
[35] M. Khosla, K. Jamison, A. Kuceyeski, and M. R. Sabuncu, "Ensemble learning with 3D convolutional neural networks for functional connectome-based prediction," NeuroImage, vol. 199, pp. 651–662, 2019.
[36] J. Wang, Q. Wang, H. Zhang, J. Chen, S. Wang, and D. Shen, "Sparse multiview task-centralized ensemble learning for ASD diagnosis based on age- and sex-related functional connectivity patterns," IEEE Trans. Cybern., vol. 49, no. 8, pp. 3141–3154, Aug. 2019.
[37] A. S. Heinsfeld, A. R. Franco, R. C. Craddock, A. Buchweitz, and F. Meneguzzi, "Identification of autism spectrum disorder using deep learning and the ABIDE dataset," NeuroImage: Clin., vol. 17, pp. 16–23, 2018.
[38] T. Eslami, V. Mirjalili, A. Fong, A. R. Laird, and F. Saeed, "ASD-DiagNet: A hybrid learning approach for detection of autism spectrum disorder using fMRI data," Frontiers Neuroinf., vol. 13, 2019.
[39] C. Craddock et al., "The Neuro Bureau preprocessing initiative: Open sharing of preprocessed neuroimaging data and derivatives," Frontiers Neuroinf., vol. 7, 2013.
[40] "ABIDE preprocessed," 2011. [Online]. Available: http://preprocessed-connectomes-project.org/abide/, Accessed: May 18, 2020.
[41] X. Liu, C. Chang, and J. Duyn, "Decomposition of spontaneous brain activity into distinct fMRI co-activation patterns," Frontiers Syst. Neurosci., vol. 7, 2013.
[42] J. R. Sato et al., "Identifying multisubject cortical activation in functional MRI: A frequency domain approach," J. Data Sci., vol. 6, pp. 89–103, 2008.
[43] G. S. Dichter, L. Sikich, A. Song, J. Voyvodic, and J. W. Bodfish, "Functional neuroimaging of treatment effects in psychiatry: Methodological challenges and recommendations," Int. J. Neurosci., vol. 122, no. 9, pp. 483–493, 2012.
[44] T. Mullen et al., "Real-time estimation and 3D visualization of source dynamics and connectivity using wearable EEG," in Conf. Proc. IEEE Eng. Med. Biol. Soc., 2013, pp. 2184–2187.
[45] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. Int. Conf. Mach. Learn., 2015, vol. 37, pp. 448–456.
[46] G. Huang, Z. Liu, L. v. d. Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2017, pp. 2261–2269.
[47] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2016, pp. 770–778.
[48] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2017, pp. 1800–1807.
[49] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 2016, pp. 2818–2826.
[50] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. 3rd Int. Conf. Learn. Representations, May 2015.
