Article

CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification

1 School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
2 School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
3 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Authors to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2023, 5(3), 684-712; https://doi.org/10.3390/make5030037
Submission received: 18 April 2023 / Revised: 22 June 2023 / Accepted: 22 June 2023 / Published: 27 June 2023

Abstract

Since the COVID-19 pandemic outbreak, over 760 million confirmed cases and over 6.8 million deaths have been reported globally, according to the World Health Organization. While the SARS-CoV-2 virus carried by COVID-19 patients can be identified through the reverse transcription–polymerase chain reaction (RT-PCR) test with high accuracy, clinical misdiagnosis between COVID-19 and pneumonia patients remains a challenge. Therefore, we developed a novel CovC-ReDRNet model to distinguish COVID-19 patients from pneumonia patients as well as normal cases. ResNet-18 was introduced as the backbone model and subsequently tailored for feature representation. In our feature-based randomized neural network (RNN) framework, the feature representation is automatically paired with the deep random vector functional-link network (dRVFL) as the optimal classifier, producing the CovC-ReDRNet model for the classification task. Results based on five-fold cross-validation reveal that our method achieved an MA sensitivity, MA specificity, MA accuracy, MA precision, and MA F1-score of 94.94%, 97.01%, 97.56%, 96.81%, and 95.84%, respectively. Ablation studies demonstrate the superiority of ResNet-18 over different backbone networks, of RNNs over traditional classifiers, and of deep RNNs over shallow RNNs. Moreover, our proposed model achieved a better MA accuracy than the state-of-the-art (SOTA) methods, the highest score of which was 95.57%. In conclusion, our CovC-ReDRNet model can be regarded as an advanced computer-aided diagnostic model with high speed and high accuracy for classifying and predicting COVID-19.

1. Introduction

1.1. COVID-19

On 30 January 2020, the World Health Organization (WHO) formally declared the COVID-19 outbreak a public health emergency of international concern (PHEIC). COVID-19, formally known as coronavirus disease 2019, is a widespread contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). According to the epidemiological report from the WHO, over 760 million confirmed cases and over 6.8 million deaths were reported globally from the beginning of the COVID-19 pandemic to 16 March 2023 [1].
The first people infected by the virus were reported in Wuhan City, Hubei Province, China, and the disease later spread rapidly across the world [2]. Several studies have confirmed that the COVID-19 virus is primarily transmitted via respiratory droplets and contact routes, resulting in direct human-to-human infection [3,4,5]. Virus transmission happens when people come into close contact (within 1 m) with a confirmed infected person who has respiratory symptoms such as coughing or sneezing; the exposed mucosae and conjunctivae of the contact then become potential entry points for the virus [6]. Common symptoms of COVID-19 include coughing, fever, and loss of smell (anosmia) and taste (ageusia). Moreover, long-term consequences such as weakness, general malaise, fatigue, and cognitive impairment occur in post-COVID-19 syndrome [7,8].
Diagnosing and detecting coronaviruses significantly contributes to outbreak control and further measures such as isolation and medical treatment. Currently, the mainstream virus detection technology is the reverse transcription–polymerase chain reaction (RT-PCR) test [9,10,11]. According to research from The Lancet Infectious Diseases, nasal and throat swabs pooled simultaneously reach a high positive predictive value, with an accuracy of 97% [12]. Another comparable detection approach is medical imaging, with different imaging modalities, such as computed tomography (CT) and X-ray, being the most commonly used technologies [13,14,15]. Although medical imaging has been proven to have limited specificity in identifying COVID-19 (due to overlapping features in chest CT images such as those characterizing adenoviruses, influenza, H1N1, SARS, and MERS) [16], imaging requires more commonly available medical equipment and provides higher sensitivity than the RT-PCR test [17,18]. In addition, medical imaging can be used to confirm diagnostic results when suspected false-negative RT-PCR test results occur [19,20,21]. Evidently, medical images based on CT and X-ray scans remain highly valuable for COVID-19 disease diagnosis.

1.2. Pneumonia

Pneumonia is an infection that inflames the tissue of the air sacs (also known as alveoli) in the human respiratory organs, specifically the lungs [22,23,24]. Approximately 450 million people are infected with pneumonia worldwide every year, with over 4 million confirmed deaths [25,26]. Hence, it is vital to identify pneumonia at an early stage and further defeat it with prompt medical treatment.
Identifying the responsible pathogen is a crucial part of diagnosing pneumonia, but this is time-consuming and necessitates medical knowledge. Thanks to the rapid development of medical imaging technology, chest CT and X-ray have proven to be reliable diagnostic approaches, since lesions can be directly observed in the images. When common pneumonia patients are compared with COVID-19 patients, different features can be captured in medical images. According to Zhao, et al. [27], ground-glass opacity and multiple mottling in lung scans were far more common in COVID-19 infections (89.47%) than in common pneumonia (6.67%). These distinguishable imaging features give the application of AI technology to classifying COVID-19 patients and non-COVID-19 pneumonia patients both a theoretical and an evidence-based foundation.
Furthermore, the multi-class classification task could be more practical than binary classification. The reason is that the RT-PCR test has already shown great capability in identifying the SARS-CoV-2 virus carried by COVID-19 patients with high accuracy, but distinguishing COVID-19 from other lung diseases still depends mainly on the patient’s medical images. On the other hand, common symptoms shared by COVID-19 patients and pneumonia patients, such as productive or dry cough, chest pain, fever, and difficulty breathing, confound clinical diagnosis. An auto-detection AI system based on chest scans could provide computer-aided detection (CAD) even when patients present with similar clinical symptoms. Hence, this research aims to develop a deep learning (DL) approach to classify COVID-19 patients, non-COVID-19 pneumonia patients, and normal cases that could be applied in practice.
The large volume of research on computer-assisted technology significantly contributes to diagnosing and detecting coronaviruses in clinical applications. Common challenges can be described as follows: (a) information loss occurs when deepening the neural network; (b) complex architectures lead to wasted resources and time-consuming training; (c) networks are limited in generalizing to different tasks; (d) prediction accuracy remains to be improved. In this paper, we address these challenges with a novel approach:
  • ResNet-18 was selected as the backbone model, due to its superiority over the other six classic CNNs in the ablation experiment: AlexNet, VGG, ResNet-50, GoogleNet, DenseNet, and MobileNet.
  • Compared with the traditional pre-trained CNN, randomized neural networks (RNNs) improve model performance by replacing the last five layers of the tailored CNN, and their lightweight architecture also addresses the problem of wasted computing resources.
  • Our feature-based RNN framework, designed with an auto-selection algorithm, allows the most adaptive model to be obtained across various domains, which improves the generalizability of the model.
  • Our novel CovC-ReDRNet obtains the feature representation from the tailored CNN and auto-selects the deep random vector functional-link network (dRVFL) as the optimal classifier according to our feature-based RNN framework; it then feeds the feature representation directly to the dRVFL to construct the entire network. Good model performance was achieved based on five-fold cross-validation, with an MA sensitivity, MA specificity, MA accuracy, MA precision, and MA F1-score of 94.94%, 97.01%, 97.56%, 96.81%, and 95.84%, respectively.
  • Compared with seven other state-of-the-art (SOTA) methods (best: 95.57%), our proposed model achieved the highest MA accuracy (97.56%).
This paper is structured as follows. Section 2 summarizes related work on classification tasks in the context of COVID-19. In Section 3, the material used for our research is introduced. In Section 4, the methodology of our proposed model is explained. In Section 5, the experiment results are compared and discussed. Finally, the conclusion is provided in Section 6.

2. Related Work

Classification tasks in the context of COVID-19 have become increasingly important as the pandemic continues to spread globally. Deep learning models have been applied to various classification problems related to COVID-19, including but not limited to diagnosis, severity assessment, and prognosis prediction. In this section, we highlight some of the recent developments in this field, discuss the challenges and limitations of the existing models, and further provide the motivation for our present research.
One of the earliest and most widely studied classification tasks in COVID-19 is the diagnosis of the disease. A number of studies have proposed deep learning models that can diagnose COVID-19 based on chest X-ray images and CT scans. In 2020, COVID-Net [28] spurred the application of deep learning to detecting COVID-19 cases from chest X-ray images. Additionally, the largest open-access benchmark dataset of COVID-19-positive cases was generated, namely COVIDx, which comprises 13,975 chest X-ray images across 13,870 patient cases and is constantly expanding.
Subsequently, COVIDX-Net [29] was proposed to assist radiologists in automatically diagnosing COVID-19 based on chest X-ray images. The proposed framework included seven different architectures of deep convolutional neural networks (CNNs). Experimentally, good performance was achieved by VGG-19 and DenseNet, with F1-scores of 89% and 91% for the normal and COVID-19 classes, respectively. More recent studies [30,31,32,33,34] confirmed that deep learning approaches can learn discriminative patterns from chest X-ray images and CT scans and achieve high accuracy in COVID-19 detection tasks. The contributions and limitations of SOTA methods in the COVID-19 diagnosis task are analyzed in Table 1.
Another important branch in the COVID-19 classification task is the assessment of disease severity. The severity of COVID-19 can vary greatly from patient to patient, which indicates the importance of identifying patients who are at high risk of developing severe complications. For example, a multi-task vision transformer (ViT) that leverages a low-level chest X-ray feature corpus obtained from a backbone network to diagnose and quantify the severity of COVID-19 was proposed by Park, et al. [34]. The severity quantification performance of the proposed model was evaluated in terms of mean squared error (MSE) with a 95% confidence interval (CI) of 1.441 (0.760–2.122), 1.435 (1.195–1.676), and 1.458 (1.147–1.768) in three external datasets, respectively. Additionally, Goncharov, et al. [35] proposed a CNN-based network that leverages all available labels within a single model, which outperformed existing approaches and achieved a 97% Spearman correlation in severity quantification.
More advanced deep neural networks have been proposed based on various clinical and demographic factors for severity assessment [36,37,38,39]; CNNs and recurrent neural networks in particular have been applied to this task with promising results. The contributions of SOTA methods to the COVID-19 severity assessment task are highlighted in Table 2. Therefore, deep learning methods could be used to determine the prognosis of patients with COVID-19 and further guide clinical decision making.
A further remarkable application is the prognosis prediction of COVID-19, which refers to the prediction of the outcome of the disease, such as recovery or death. Prognosis prediction is imperative for clinical decision making and resource allocation, as well as for the development of effective treatments. A deep-learning-based study [40] demonstrated its potential to forecast the number of upcoming COVID-19 infections, and could thus significantly contribute to epidemic control. Four standard forecasting models were tested for predicting newly infected cases, deaths, and recoveries in the ten following days. Another study [41] pointed out the importance of prognosis prediction with the aim of triaging patients effectively; thus, the mortality of COVID-19 patients was forecasted as one aspect of prognosis. Better performances were obtained using LASSO and linear SVM, with sensitivities of 90.7% and 92.0%, specificities of 91.4% and 91.8%, and areas under the receiver operating characteristic curve (AUCs) of 96.3% and 96.2%, respectively.
More recently, several studies proposed various deep learning architectures for prognosis prediction [42,43,44,45], such as feedforward neural networks (FFNNs) and gradient boosting machines (GBMs), showing that deep learning models can provide reliable predictions of patient condition, deepen our understanding of virology, and aid in disease control. The contributions of SOTA methods to the COVID-19 prognosis task are highlighted in Table 3.
As mentioned above, deep learning technologies are effective in solving various classification tasks related to COVID-19, including diagnosis, severity assessment, and prognosis prediction. However, a limited number of multi-category classification tasks have been developed. A multi-category classification task based on deep learning algorithms could be used to accurately diagnose COVID-19 and distinguish it from other respiratory illnesses such as the flu, pneumonia, and other viral infections. This matters considerably because the symptoms of COVID-19 are similar to those of many other respiratory illnesses, and misdiagnosis can have serious consequences for both the patient and public health.
Some three-category classification frameworks that distinguish COVID-19 patients from pneumonia patients and normal cases have been proposed in recent years. Hussain, et al. [46] proposed a CNN-based model dedicated to COVID-19 diagnosis and classification, named CoroDet. A novel database, the COVID-R dataset, was constructed by merging and revising eight COVID-19 open sources, containing 7390 pulmonary images from 2843 COVID-19 patients, 3108 normal cases, and 1439 pneumonia patients. In their three-category classification experiments, the presence of the pulmonary lesion feature of COVID-19 disease in X-ray images was used to differentiate COVID-19 infection from non-COVID-19 pneumonia. CoroDet was measured through sensitivity, specificity, precision, recall, F1-score, and accuracy, achieving good performance based on the average of five-fold cross-validation, that is, 92.76%, 94.56%, 94.04%, 92.50%, 91.32%, and 94.20%, respectively.
Xu, et al. [47] proposed a novel approach for COVID-19 screening, distinguishing COVID-19 from other types of viral pneumonia, especially influenza-A viral pneumonia (IAVP), based on pulmonary CT images. A total of 618 CT images were obtained from three top hospitals in China, including 219 COVID-19 cases, 224 IAVP cases, and 175 normal cases. An advanced model was developed based on the classic ResNet-18 with a location attention mechanism, achieving an overall accuracy of 86.7%. The three different measurements considered, recall, precision, and F1-score, were 86.7%, 81.3%, and 83.9% in the COVID-19 group; 83.3%, 86.2%, and 84.7% in the IAVP group; and 90.0%, 93.1%, and 91.5% in the normal group, respectively.
However, the performance of these models leaves much to be desired. Moreover, each framework was designed for a specific target and tested only on a fixed dataset, which limits model generalization across different tasks. Hence, research aiming to develop a generalized deep learning framework with high accuracy for this domain is critical.

3. Materials

3.1. Data Modality

X-rays rely on invisible radiation of electromagnetic energy to create images of internal tissues, bones, and organs on film or digital media. When the body receives X-rays, different parts of the tissues allow different quantities of radiation to pass through. The body’s soft tissues (such as blood, skin, fat, and muscle) permit the majority of X-rays to pass through, which show up as dark gray on film or digital media. Bones or tumors are more densely packed than soft tissues and allow very little radiation to pass through, appearing as white on the X-ray [48,49,50].
Chest X-rays are used to assist in diagnosis, as they can indicate whether a lesion feature of COVID-19 disease is present or another pulmonary problem is occurring. COVID-19 commonly causes the air sacs in the lungs to fill with fluid, producing bilateral peripheral opacities (normally observed as ground-glass opacities (GGOs) with areas of consolidation that may be nodular or mass-like) with a lower-lung distribution on patients’ chest CT images, which is the typical appearance of COVID-19 pneumonia [51,52,53]. Nowadays, X-ray machines are widely available in hospitals, and these typical features can be observed in chest scans, which makes chest X-ray imaging a powerful strategy for the early diagnosis of COVID-19.

3.2. Dataset

Our proposed framework was implemented on the publicly available resource Kaggle, in the chest X-ray repository (COVID-19 & Pneumonia) [54]. The dataset is available at https://www.kaggle.com/datasets/prashant268/chest-xray-COVID19-pneumonia (accessed on 4 November 2022). The owner of the dataset, Prashant Patel, collected X-ray images from four different publicly available databases: a COVID-19 image data collection [55], pneumonia on chest X-rays [56], Figure 1 COVID-19 Chest X-Ray Dataset Initiative [57], and Actualmed COVID-19 Chest X-ray Dataset Initiative [58].
From the above open resources, 6432 chest X-ray images were collected. The training and test sets contained 5144 (80%) and 1288 (20%) images, respectively. The dataset is organized into three categories, namely COVID-19, non-COVID-19 pneumonia (including viral and bacterial pneumonia), and normal (healthy cases). Table 4 summarizes the data distribution across categories, and Figure 1 provides ten templates for each category of the dataset (the letter markers are determined by the X-ray machine and bear no relation to the classes).
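For readers who wish to reproduce the data pipeline, a minimal PyTorch-style loading sketch is given below. The directory names follow the Kaggle repository layout, but the exact folder names, the grayscale-to-RGB conversion, and the ImageNet normalization are our assumptions, not details reported by the dataset owner.

```python
# Minimal loading sketch (assumed layout: chest_xray/{train,test}/{COVID19,NORMAL,PNEUMONIA}/).
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),      # X-rays are single-channel; replicate to 3
    transforms.Resize((224, 224)),                    # standard ResNet-18 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics, since the
                         std=[0.229, 0.224, 0.225]),  # backbone is ImageNet pre-trained
])

train_set = datasets.ImageFolder("chest_xray/train", transform=preprocess)  # 5144 images
test_set = datasets.ImageFolder("chest_xray/test", transform=preprocess)    # 1288 images
train_loader = torch.utils.data.DataLoader(train_set, batch_size=10, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=10)
```

The batch size of 10 matches the setting reported later in Table 7.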

4. Methodology

4.1. Selected Backbone Network

For our model design, a typical pre-trained CNN, namely ResNet-18 [59], was chosen as the backbone of our proposed model. ResNet variants show significant dominance in computer vision, particularly in achieving deeper networks without expanding computational complexity, which is attributed to their distinctive architecture.
Generally speaking, a notable way to improve the performance of a model is to increase the depth of the network [60,61,62,63]. A deep CNN integrates features at different levels in a layer-by-layer forward pass, which results in more hierarchical and robust image features and, subsequently, better model performance.
However, vanishing or exploding gradients are very likely to occur when deepening the neural network [64,65,66,67]. ResNet has an advanced framework designed to address this problem: it adopts batch normalization (BN) right after each convolution and before activation. The underlying cause of vanishing or exploding gradients is the erratic updating of network weights, essentially due to the multiplicative effect in gradient back-propagation. The normalization in ResNet can be divided into normalization of the input and internal normalization, which stabilizes the stochastic gradient descent (SGD) used for back-propagation. The BN strategy unifies the scale of the captured features, making image features easier to propagate; in other words, the addition of BN layers stabilizes the iteration of the network weights, hence allowing the deeper network to converge. For this reason, introducing ResNet with BN layers to mitigate vanishing or exploding gradients in deep neural networks is a well-founded approach.
On the other hand, deep neural networks suffer from a ‘degradation dilemma’ [68,69,70,71]. The degradation dilemma can be interpreted as a situation wherein the network with optimal performance is a shallower one that SGD fails to find, with the consequence that the deeper network underperforms the shallower network. A plausible explanation is that each input-to-output transformation is almost irreversible due to the presence of the non-linear activation function (normally the ReLU), which inevitably results in significant irreversible information loss. The designers of ResNet proposed a mechanism to reduce the degradation problem by using identity mapping. The mechanism deepens the network in such a way as to ensure that the performance of the deep network is at least equal to that of the shallow network. However, current neural networks find it incredibly challenging to fit a potential identity mapping function directly. An alternative solution is to learn the difference between the input and output: if the difference converges to zero, identity mapping is obtained, thereby achieving the identity mapping indirectly by fitting the residuals.
In mathematical conception, the residual is defined as the difference between the predicted and observed values. One block of residuals is described as
$x_{s+1} = x_s + \mathcal{F}(x_s, w_s)$,  (1)
where $x_{s+1}$ refers to the predicted value after single-block residual learning, which can be interpreted as the output of layer $s+1$; $x_s$ refers to the observed value before residual learning, which can be interpreted as the initial input of layer $s$; and $\mathcal{F}(x_s, w_s)$ corresponds to the residual part of the $s$-th weight layer, which is why this block is named residual learning. Notably, the residual component is generalized in Equation (2), which sums over the residual blocks when the residual learning module contains multiple blocks:
$x_d = x_s + \sum_{i=1}^{n} \mathcal{F}(x_s, w_{si})$,  (2)
where $d \geq s$ and $s \in \mathbb{N}^{+}$, indicating that the pattern from a shallower layer, $x_s$, can be mapped directly to a deeper layer, $x_d$; meanwhile, $\mathcal{F}(x_s, w_{si})$ corresponds to the residual part of weight layer $i$.
This paper chooses a ResNet with eighteen weight layers as the backbone of our model, referred to as ResNet-18. The structural diagram of a comparison of residual learning with shortcut connections is shown in Figure 2.
A residual-type connection is added to the original plain network, enabling features extracted at a shallow level to be replicated at additive deeper layers. It is a type of identity mapping that combines the features of the shallow network with the output of the original plain stacked network across one or more layers. The residual learning framework reduces the information loss associated with deepening the network, thus achieving better model performance.
Noteworthily, a deep system normally has a high training cost. If the system were to strengthen its training capability by simply stacking plain layers, the number of parameters could grow explosively. Understandably, training tens of millions of parameters requires a GPU with greater computational capacity, as well as a significant amount of training time. In addition, returning to the identity mapping formulation, $x_{s+1} = x_s + \mathcal{F}(x_s, w_s)$, it can be observed that the shortcut connection requires only a one-step additive operation, while the parameters are trained by the plain network. Therefore, it is possible to improve the performance of the model with no additional parameters or costly computational complexity.
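As an illustration of the above, here is a minimal PyTorch-style sketch of one residual block with the conv–BN–ReLU ordering and parameter-free shortcut described in this section; the class name and fixed channel count are our simplifications, not the original ResNet-18 implementation.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Sketch of a ResNet-style residual block: x_{s+1} = ReLU(x_s + F(x_s, w_s))."""

    def __init__(self, channels: int):
        super().__init__()
        # Residual branch F(x, w): two 3x3 convolutions, each followed by BN
        # (batch normalization right after the convolution, before activation).
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # Shortcut connection: a single additive operation with no extra parameters.
        return self.relu(x + residual)

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```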
Furthermore, in our experimental section, ResNet-50 is used as the backbone for the control. On the other hand, another five CNN-based backbone networks are compared in our ablation experiments, namely AlexNet (2012) [72], VGG (2015) [73], GoogleNet (2015) [74], DenseNet (2017) [75], and MobileNet (2017) [76].

4.2. Tailored CNN

The pre-trained ResNet-18 was selected as the backbone network of the proposed model. CNN models pre-trained on the ImageNet dataset have acquired the ability to extract high-level image features; therefore, pre-trained CNN models can be extended to further image classification tasks based on the identified features. However, some modifications must be made to the pre-trained ResNet-18 because of the differences between the ImageNet database and the public database used in this paper. The tailoring of the pre-trained ResNet-18 is presented in Figure 3.
Specifically, the ImageNet dataset has 1000 categories, but only 3 categories are involved in this paper. Hence, ‘FC 1000’ is substituted with ‘FC 3’ because a three-class output is required for the groups of COVID-19 patients, non-COVID-19 pneumonia patients, and normal cases. Further, an ‘FC 128’ layer followed by a ‘ReLU’ activation layer and a ‘BN’ layer is inserted to bridge the difference in dimensions between ‘Pool 5’ and ‘FC 3’. The resulting architecture of the tailored CNN is provided in Table 5.
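A hedged sketch of this tailoring in PyTorch is shown below; torchvision's ResNet-18 exposes its final fully connected layer as `model.fc`, and we assume the replacement head can be expressed as a single `nn.Sequential`, which is our simplification rather than the authors' exact code.

```python
import torch.nn as nn
from torchvision import models

# Load ResNet-18 pre-trained on ImageNet, then swap its 1000-way head for
# FC 128 -> ReLU -> BN -> FC 3, mirroring the tailoring described above.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 128),  # 'FC 128' bridging 'Pool 5' and 'FC 3'
    nn.ReLU(),                             # 'ReLU' activation layer
    nn.BatchNorm1d(128),                   # 'BN' layer
    nn.Linear(128, 3),                     # 'FC 3': COVID-19 / pneumonia / normal
)
```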

4.3. Identified Feature Layer

Feature extraction is a crucial step in deep learning, motivated by the goal of automatically learning informative and discriminative representations directly from the input data. In recent years, pre-trained models derived from large-scale databases have been widely and successfully applied to extract features for new tasks or new datasets. The feature representation procedure followed in our task is displayed in Figure 4.
Generally, the shallower layers of a pre-trained model learn low-level features, such as edges and corners, while higher layers learn more abstract and semantically meaningful features. According to conventional wisdom, the fully connected layer closest to the final pooling layer captures the patterns of the image to the greatest possible extent. Regarding the tailored CNN (Figure 3), the ‘FC 128’ layer following the last pooling layer was therefore selected as the feature layer for the subsequent framework.
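One common way to expose the activations of such an intermediate layer is a forward hook, sketched below for the tailored model defined in the previous sketch (the index `model.fc[0]` for 'FC 128' and the reuse of `train_loader` are assumptions tied to those sketches).

```python
import torch

features = []

def grab_fc128(module, inputs, output):
    # Store the 128-dimensional feature vector produced by the 'FC 128' layer.
    features.append(output.detach())

hook = model.fc[0].register_forward_hook(grab_fc128)

model.eval()
with torch.no_grad():
    for images, labels in train_loader:
        model(images)  # each forward pass appends a batch of features via the hook

hook.remove()
X_train = torch.cat(features).numpy()  # feature matrix handed to the RNN framework
```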

4.4. Feature-Based RNN Framework

Deep CNN models have achieved success in many areas. However, training deep CNN models is time-consuming because of their massive number of layers and parameters. In this paper, RNNs were selected to alleviate this problem, since they contain only a single hidden layer in their shallow form, or a small number of hidden layers in their deep form. Moreover, the training of an RNN is based on the pseudo-inverse, which contributes to fast convergence. Table 6 provides the mathematical symbol definitions.

4.4.1. Shallow RNNs

Three leading RNNs, known as the extreme learning machine (ELM) [77], Schmidt neural network (SNN) [78], and random vector functional-link (RVFL) [79], were implemented initially within the model space of our proposed framework. In particular, the structure of the ELM is given in Figure 5.
For $N$ arbitrary distinct samples, there is a training dataset whose $i$-th sample is
$\mathbf{x}_i = (x_{i1}, \ldots, x_{in})^T \in \mathbb{R}^n, \quad i = 1, \ldots, N,$  (3)
$\mathbf{y}_i = (y_{i1}, \ldots, y_{im})^T \in \mathbb{R}^m, \quad i = 1, \ldots, N,$  (4)
where $n$ and $m$ represent the input dimension and the output dimension, respectively. The original input matrix and the ground-truth label matrix are presented as
$X = (\mathbf{x}_1, \ldots, \mathbf{x}_N)^T,$  (5)
$Y = (\mathbf{y}_1, \ldots, \mathbf{y}_N)^T.$  (6)
The first calculation step of the ELM is given in
$M_{\mathrm{ELM}}(i) = \sum_{j=1}^{v} g(w_j \cdot \mathbf{x}_i + b_j), \quad i = 1, \ldots, N,$  (7)
where $g(\cdot)$ is the activation function, $w_j$ is the weight vector connecting the input data with the $j$-th hidden node, $b_j$ is the bias of the $j$-th hidden node, and $v$ is the number of hidden nodes. The second calculation step is to compute the output weight:
$p = M_{\mathrm{ELM}}^{+} Y,$  (8)
where $M_{\mathrm{ELM}}^{+}$ denotes the pseudo-inverse matrix of $M_{\mathrm{ELM}}$. Finally, the final output is obtained as
$O_i = (O_{i1}, \ldots, O_{im})^T \in \mathbb{R}^m, \quad i = 1, \ldots, N.$  (9)
The structure of the SNN is similar to that of the ELM; the only difference is that the SNN has a bias on the output layer. The structure of the SNN is given in Figure 6. The output of the hidden layer is calculated as
$M_{\mathrm{SNN}}(i) = \sum_{j=1}^{v} g(w_j \cdot \mathbf{x}_i + b_j), \quad i = 1, \ldots, N.$  (10)
The output weight is defined as
$(p, e) = M_{\mathrm{SNN}}^{+} Y,$  (11)
where $e$ refers to the output biases of the SNN.
The structure of the RVFL differs from that of the ELM and SNN in that there are direct connections from the input layer to the output layer. The framework of the RVFL is presented in Figure 7. The output of the hidden layer is calculated as
$M_{\mathrm{RVFL}}(i) = \sum_{j=1}^{v} g(w_j \cdot \mathbf{x}_i + b_j), \quad i = 1, \ldots, N.$  (12)
The input to the output layer is defined as
$D_{\mathrm{RVFL}} = \mathrm{concat}(X, M_{\mathrm{RVFL}}),$  (13)
where $X$ represents the original input. The output weight is calculated as
$p = D_{\mathrm{RVFL}}^{+} Y.$  (14)
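To make the pseudo-inverse training concrete, here is a minimal NumPy sketch of an RVFL classifier; the sigmoid activation, one-hot targets, and random-seed handling are our own choices, not details reported in [79].

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, Y, v=400):
    """Train an RVFL: random (frozen) hidden weights, output weights via pseudo-inverse."""
    W = rng.standard_normal((X.shape[1], v))   # random input-to-hidden weights, never trained
    b = rng.standard_normal(v)                 # random hidden biases
    M = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden-layer output, sigmoid activation g
    D = np.hstack([X, M])                      # direct link: D = concat(X, M)
    p = np.linalg.pinv(D) @ Y                  # output weights by Moore-Penrose pseudo-inverse
    return W, b, p

def predict_rvfl(X, W, b, p):
    M = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (np.hstack([X, M]) @ p).argmax(axis=1)  # predicted class index

# Toy usage: 128-dimensional features (as from 'FC 128'), 3 one-hot classes.
X = rng.standard_normal((200, 128))
Y = np.eye(3)[rng.integers(0, 3, 200)]
W, b, p = train_rvfl(X, Y)
print(predict_rvfl(X, W, b, p)[:10])
```

The hidden width v = 400 mirrors the hyper-parameter reported later in Table 7.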

4.4.2. Deep RNNs

The previous section detailed three shallow RNNs, each with a single hidden layer. However, a single hidden layer is not very stable because of the many randomization operations in an RNN. It is well known that a deep architecture is more robust and accurate than a single-layer neural network. Therefore, we increase the depth of the RNN based on the RVFL, producing the deep random vector functional-link network (dRVFL) [80]. The dRVFL functions as a high-speed automatic classifier attached to our feature extractor. It is characterized by stacked hidden layers, as shown in Figure 8.
In the dRVFL, the input of each hidden layer is the output of the previous layer. The input of the output layer is the ensemble of the output of each hidden layer and the original input. The calculation steps of the dRVFL can be defined as follows. The output of the first hidden layer is calculated as
$M_{\mathrm{dRVFL}}^{(1)}(i) = \sum_{j=1}^{v_1} g(w_{1j} \cdot \mathbf{x}_i + b_{1j}), \quad i = 1, \ldots, N.$  (15)
For $k > 1$, the calculation is defined as
$M_{\mathrm{dRVFL}}^{(k)}(i) = \sum_{j=1}^{v_k} g(w_{kj} \cdot M_{\mathrm{dRVFL}}^{(k-1)}(i) + b_{kj}), \quad i = 1, \ldots, N, \quad k = 2, \ldots, l.$  (16)
The input of the output layer is computed as
$D_{\mathrm{dRVFL}} = \mathrm{concat}(X, M_{\mathrm{dRVFL}}^{(1)}, M_{\mathrm{dRVFL}}^{(2)}, \ldots, M_{\mathrm{dRVFL}}^{(l)}).$  (17)
The output weight of the dRVFL is given as
$p = D_{\mathrm{dRVFL}}^{+} Y.$  (18)
Notably, the deep architecture might not work for the SNN and ELM. A reasonable explanation is that they have no direct connection between the input layer and the output layer, so their instability is amplified by the additional random weights in multiple hidden layers. Hence, only the dRVFL is added to the model space in our feature-based RNN framework.
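Extending the RVFL sketch above (and reusing its `rng`, `X`, and `Y`), a hedged dRVFL version with stacked random hidden layers might look as follows; the depth l = 4 and width v = 400 mirror the settings reported later in Table 7, while everything else is our own simplification.

```python
def train_drvfl(X, Y, v=400, l=4):
    """dRVFL sketch: l stacked random hidden layers, pseudo-inverse output weights."""
    layers, M, parts = [], X, [X]
    for _ in range(l):
        W = rng.standard_normal((M.shape[1], v))
        b = rng.standard_normal(v)
        M = 1.0 / (1.0 + np.exp(-(M @ W + b)))  # each hidden layer feeds the next
        layers.append((W, b))
        parts.append(M)                          # every layer's output reaches the classifier
    D = np.hstack(parts)                         # D = concat(X, M1, ..., Ml)
    return layers, np.linalg.pinv(D) @ Y

def predict_drvfl(X, layers, p):
    M, parts = X, [X]
    for W, b in layers:
        M = 1.0 / (1.0 + np.exp(-(M @ W + b)))
        parts.append(M)
    return (np.hstack(parts) @ p).argmax(axis=1)

layers, p = train_drvfl(X, Y)
print(predict_drvfl(X, layers, p)[:10])
```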

4.4.3. The Proposed Feature-Based RNN Framework

The fully connected layer ‘FC 128’ is identified as the feature map of the customized feature extractor. The selected feature is fed directly into our novel RNN framework for the classification procedure, namely the feature-based RNN framework. Figure 9 shows the overall structure of our proposed RNN framework. The model space consists of three shallow RNNs and one deep RNN. Each RNN is paired with the selected feature map separately in order to identify the preferred classifier. The RNNs are evaluated mainly on classification accuracy, and the single optimal RNN is taken as the output of the feature-based RNN framework; the best classifier is then connected with the feature extractor to form the final model.

4.5. The Proposed Model

A novel model is proposed to distinguish COVID-19 patients from non-COVID-19 pneumonia patients as well as normal cases, abbreviated as the ResNet-18-based dRVFL network for COVID-19 classification (CovC-ReDRNet). A ResNet-18 pre-trained on the ImageNet dataset is introduced as the backbone model. With its BN layers, ResNet-18 addresses the problem of vanishing or exploding gradients. Moreover, the residual learning framework reduces the degradation dilemma by using identity mapping, reducing the loss of information when increasing the depth of the network. The shortcut connection requires only a one-step additive operation, meaning no additional parameters or costly computational complexity despite the improvement in model performance.
The pre-trained ResNet-18 must be tailored to the differences between the ImageNet database and the public database used in this paper. The layer ‘FC 1000’ is substituted with ‘FC 3’ because only three classes of output are involved in our classification task rather than one thousand categories, in particular, COVID-19 patients, non-COVID-19 pneumonia patients, and normal cases. Further, an ‘FC 128’ layer followed by a ‘ReLU’ activation layer and a ‘BN’ layer is inserted to bridge the difference in dimensions between ‘Pool 5’ and ‘FC 3’. The tailored pre-trained ResNet-18 is presented in Figure 3, and the entire architecture of the tailored CNN is provided in Table 5.
RNNs stand out as classifiers thanks to their ability to alleviate the time-consuming training of traditional deep CNN models. This is mainly due to the streamlined architecture of RNNs, which employ only a single hidden layer (shallow) or a few hidden layers (deep) rather than the massive layers and parameters of CNNs. Additionally, the randomized hidden nodes allow an RNN to be trained through the pseudo-inverse, which contributes to fast convergence. Three advanced shallow RNNs, the ELM, SNN, and RVFL, are implemented initially within the model space of our proposed framework. A deep architecture, the dRVFL, is also included in our framework to enhance robustness and stability compared with the shallow RNNs. Finally, the feature-based RNN framework evaluates the performance of the above classifiers, leading to the proposed model for the classification task.
The proposed model was implemented with the output of the ‘FC 128’ layer as the optimal feature representation as well as the dRVFL as the optimal classifier. The classifier is selected with the auto-selection algorithm, which is advantageous in the identification of the best option while saving time and resources as well as ensuring consistent and fair decision making. A brief diagram is illustrated in Figure 10, and the pseudo-code is presented in Algorithm 1.
Furthermore, our novel auto-selection algorithm and feature-based RNN framework can be used to design the most adaptive model for a specific task. This demonstrates that our proposed framework maintains tremendous potential for improving the generalizability of a model, which could be adapted to more tasks in various domains.
Algorithm 1. The algorithm of the CovC-ReDRNet.
Algorithm CovC-ReDRNet
Input: dataset D
Output: the classification performance S of the trained CovC-ReDRNet
  • Import the original dataset $D$
  • Implement five-fold cross-validation: split the dataset into five equally sized sets $\{D_1, D_2, D_3, D_4, D_5\}$
  • For each $D_i$ in $\{D_1, D_2, D_3, D_4, D_5\}$ do
  •  data.test = $D_i$
  •  data.train = $D - D_i$
  •  Load the pre-trained ResNet-18 model $M$
  •  Remove the last three layers (FC 1000, softmax, and classification layer) from $M$
  •  Replace them with six layers: FC 128, ReLU, BN, FC 3, softmax, and classification layer
  •  Train the tailored CNN model $T$ on data.train
  •  Set ‘FC 128’, following the last pooling layer, as the feature layer
  •  Take the output of the feature layer as the identified pattern $I$
  •  Feed $I$ into our feature-based randomized neural network (RNN) framework $R$
  •  Compete the RNNs in $\{R_1, R_2, R_3, R_4\}$, i.e., the ELM, SNN, RVFL, and dRVFL
  •  Select the optimal RNN automatically according to the MA accuracy from the confusion matrix
  •  Connect the optimal classifier with the feature layer
  •  Construct the entire architecture of the proposed network, namely CovC-ReDRNet
  •  Test the trained CovC-ReDRNet on data.test
  • End For
  • Report the classification performance $S$ of the trained CovC-ReDRNet
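A hedged Python rendering of Algorithm 1 is sketched below; `train_tailored_cnn`, `extract_features`, `ma_accuracy`, and the four `fit_*` trainers are hypothetical helpers standing in for the steps above (the RNN trainers could reuse the earlier RVFL/dRVFL sketches), so this is an illustration of the control flow rather than the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def run_covc_redrnet(images, labels):
    """Five-fold CV: train the tailored CNN, extract FC-128 features, auto-select an RNN."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(images):
        cnn = train_tailored_cnn(images[train_idx], labels[train_idx])  # hypothetical helper
        F_train = extract_features(cnn, images[train_idx])              # 'FC 128' activations
        F_test = extract_features(cnn, images[test_idx])
        # Compete the four candidate RNN classifiers and keep the most accurate one.
        candidates = [fit(F_train, labels[train_idx])
                      for fit in (fit_elm, fit_snn, fit_rvfl, fit_drvfl)]
        best = max(candidates, key=lambda clf: ma_accuracy(labels[train_idx],
                                                           clf.predict(F_train)))
        scores.append(ma_accuracy(labels[test_idx], best.predict(F_test)))  # fold score
    return float(np.mean(scores))
```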

4.6. Evaluation

Five-fold cross-validation is chosen to evaluate the performance of our proposed network. Five measurements are employed in this paper: accuracy, sensitivity, specificity, precision, and F1-score. Because there are three categories in this paper, these binary-classification measurements are modified in a one-vs-rest manner: when one category is defined as positive, the other two categories are treated as negative. These measurements can be computed by
$\mathrm{Accuracy}(c) = \frac{TP(c) + TN(c)}{TP(c) + TN(c) + FP(c) + FN(c)},$  (19)
$\mathrm{Sensitivity}(c) = \frac{TP(c)}{TP(c) + FN(c)},$  (20)
$\mathrm{Specificity}(c) = \frac{TN(c)}{TN(c) + FP(c)},$  (21)
$\mathrm{Precision}(c) = \frac{TP(c)}{TP(c) + FP(c)},$  (22)
$\mathrm{F1\text{-}score}(c) = \frac{2 \times \mathrm{Sensitivity}(c) \times \mathrm{Precision}(c)}{\mathrm{Sensitivity}(c) + \mathrm{Precision}(c)},$  (23)
where c represents the category in this paper, and TN, TP, FP, and FN denote true negative, true positive, false positive, and false negative, respectively.
To ensure a comprehensive evaluation, macro-averaging (MA) is introduced to measure the overall system, yielding MA accuracy, MA sensitivity, MA specificity, MA precision, and MA F1-score:
$\mathrm{MA\ accuracy} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Accuracy}(c_i),$  (24)
$\mathrm{MA\ sensitivity} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Sensitivity}(c_i),$  (25)
$\mathrm{MA\ specificity} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Specificity}(c_i),$  (26)
$\mathrm{MA\ precision} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Precision}(c_i),$  (27)
$\mathrm{MA\ F1\text{-}score} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{F1\text{-}score}(c_i),$  (28)
where $n$ represents the number of categories in the experiment, and $i = 1, 2, 3$ stands for the categories of COVID-19, non-COVID-19 pneumonia, and normal cases, respectively.
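For illustration, the small NumPy sketch below transcribes Equations (19)–(28) directly from a 3 × 3 confusion matrix; it is our rendering, not the authors' evaluation code, and the toy matrix at the end is made-up data purely to show the call.

```python
import numpy as np

def macro_metrics(cm):
    """One-vs-rest metrics per class and their macro averages.

    cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    TP = np.diag(cm)
    FN = cm.sum(axis=1) - TP          # true class i, predicted as something else
    FP = cm.sum(axis=0) - TP          # predicted as class i, actually something else
    TN = cm.sum() - TP - FN - FP
    accuracy = (TP + TN) / cm.sum()
    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    precision = TP / (TP + FP)
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    # Macro-averaging: unweighted mean over the n = 3 categories.
    return {name: vals.mean() for name, vals in [
        ("MA accuracy", accuracy), ("MA sensitivity", sensitivity),
        ("MA specificity", specificity), ("MA precision", precision),
        ("MA F1-score", f1)]}

# Toy confusion matrix (rows/cols: COVID-19, non-COVID-19 pneumonia, normal).
print(macro_metrics([[95, 3, 2], [4, 90, 6], [1, 5, 94]]))
```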

5. Experiment Results and Discussions

5.1. Experimental Settings

The hyper-parameter settings are provided in Table 7. The maximum number of epochs is set to 4 to reduce overfitting. The mini-batch size is decreased to 10 due to the small size of the training set, which includes only 5144 images. Following convention, the learning rate is set to $10^{-4}$. In terms of the RNNs, 400 is an appropriate number of hidden nodes given the input dimension, and the number of hidden layers in the dRVFL is set to 4.
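Collected in one place, these settings would read roughly as follows; the key names are ours, chosen only to make the values from Table 7 easy to reuse in the earlier sketches.

```python
# Hyper-parameter settings from Table 7 (key names are our own).
HPARAMS = {
    "max_epochs": 4,           # small, to limit overfitting of the tailored CNN
    "mini_batch_size": 10,     # matched to the small training set (5144 images)
    "learning_rate": 1e-4,     # conventional choice
    "rnn_hidden_nodes": 400,   # per hidden layer, matched to the input dimension
    "drvfl_hidden_layers": 4,  # depth of the dRVFL
}
```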

5.2. The Performance of CovC-ReDRNet

Five-fold cross-validation was implemented to evaluate the proposed model. The MA accuracies of the five folds are shown in Table 8. The MA accuracy of each fold was greater than 97%, and the average reached 97.56%, which is outstanding because accuracy is regarded as one of the most significant indicators for clinical diagnosis. We used five indicators to comprehensively evaluate the proposed model across the three categories: MA accuracy, MA sensitivity, MA specificity, MA precision, and MA F1-score.
The results of these indicators for the three classes are given in Table 9. Notably, the accuracy for the COVID-19 group reached 99.44%, which indicates that CovC-ReDRNet could be an extraordinary network for distinguishing COVID-19 patients from non-COVID-19 pneumonia patients or normal cases. Moreover, all results achieved by the proposed model were greater than 90%. It can be inferred that the proposed model could be a good choice for COVID-19 diagnosis.
In addition, a series of ablation experiments was designed to compare our proposed model with related architectures. In the following ablation experiments, five-fold cross-validation was consistently applied, and the same measurements were used for comparison.

5.3. Ablation Study

5.3.1. Superiority of ResNet-18 over Different Backbone Networks

This section comprises two experiments. In ablation experiment I, we compared ResNet-18 against five classic CNNs: AlexNet, VGG, GoogleNet, DenseNet, and MobileNet. In ablation experiment II, ResNet-18 competed with another ResNet variant, namely ResNet-50.
The MA accuracies achieved in ablation experiment I are provided in Table 10. The average MA accuracies of the AlexNet, VGG, GoogleNet, DenseNet, MobileNet, and ResNet-18 (ours) backbones are shown in the last column: 95.28%, 92.32%, 96.71%, 97.27%, 97.02%, and 97.56%, respectively. Our ResNet-18-based network, CovC-ReDRNet, achieved the highest MA accuracy among the six backbones, which indicates that ResNet-18 provides a high-accuracy backbone for our model.
Additionally, according to Table 11, the performance is further discussed for the COVID-19, non-COVID-19 pneumonia, and normal categories. In the COVID-19 group, the highest sensitivity and specificity were achieved by the DenseNet-based and VGG-based networks, at 95.48% and 99.86%, respectively. Notably, our model achieved the best accuracy, precision, and F1-score, at 99.44%, 98.40%, and 96.82%, respectively. Although the ResNet-18-based model did not have the best sensitivity and specificity, the differences were slight, at 0.17 and 0.01 percentage points. In both the non-COVID-19 pneumonia group and the normal group, CovC-ReDRNet defeated the other five backbone frameworks, achieving the most outstanding results in sensitivity, specificity, accuracy, precision, and F1-score.
For ablation experiment II, the MA accuracies and the measurements are elaborated in Table 12 and Table 13, accompanied by the top scores. Our CovC-ReDRNet model achieved better performance on all the indicators, which might be explained by the information loss caused by the deeper layers of ResNet-50.

5.3.2. Superiority of Deep RNNs over Traditional Classifiers

In contrast to traditional transfer learning, our CovC-ReDRNet model replaces the traditional classifier (the softmax module typically used with ResNet-18) with a deep RNN classifier (the dRVFL module constructed in our novel model). In ablation experiment III, the traditional classifiers were compared against their RNN counterparts. In total, thirty architectures, including our CovC-ReDRNet model, were employed in this ablation experiment; that is, each of the six pre-trained CNNs was modified with the transfer learning mechanism as well as with the RNN framework separately.
Along with the highest scores, the MA accuracies based on five-fold cross-validation are presented in Table 14. The dRVFL led throughout. In particular, the dRVFL stands out as the base classifier, gaining 2.08, 3.22, 0.63, 0.15, 1.18, and 0.32 percentage points over the baselines of AlexNet, VGG, GoogleNet, DenseNet, MobileNet, and ResNet-18 (ours), respectively. According to these results, the dRVFL increases the MA accuracy significantly, which supports the superiority of RNNs over traditional classifiers. Moreover, our CovC-ReDRNet model had the top score among the thirty networks, which indicates that the dRVFL framework provides a more accurate algorithm for the COVID-19 classification task.

5.3.3. Superiority of Deep RNN over Shallow RNNs

As discussed in Section 4.4.1, the deep RNN offers better stability and robustness than single-layer RNNs. Ablation experiment IV was designed to investigate the capability of the deep RNN over the single-hidden-layer architectures, in particular, the dRVFL architecture in our framework.
In Table 15, the dRVFL architecture evidences its superiority by exceeding all three single-hidden-layer networks, which supports that our RVFL deepening strategy significantly improves the MA accuracy. Additionally, the RVFL comes out on top among the shallow RNNs, obtaining an MA accuracy of 97.37% and defeating the ELM and SNN, which obtained accuracies of 96.52% and 96.60%, respectively.
Regrettably, both the deep extreme learning machine (dELM) and the deep Schmidt neural network (dSNN) achieved a limited accuracy of 77.62%. This unsatisfactory performance implies that these models fail to learn proper disease classification, which can be explained by the network structure of the dELM and dSNN: since the ELM and SNN have no direct connection between the input and output layers, the random weights added in multiple hidden layers cause a significant loss of image pattern information during classification. Accordingly, the experimental results for the dELM and dSNN are worse than those for the single-hidden-layer ELM and SNN. Furthermore, their identical results signify that multiple hidden layers amplify the impact of random weights on classification performance far more than the output-layer bias that distinguishes the SNN from the ELM. Therefore, combining the RVFL with a deepened architecture, that is, the dRVFL, is a sensible approach.
The measurements for the different classes in ablation experiment IV are described in Table 16, accompanied by the highest scores. It can be observed that our CovC-ReDRNet model achieved the best performance on almost all indicators, scoring marginally lower than the RVFL only on sensitivity in the non-COVID-19 pneumonia group, by 0.12 percentage points, and on specificity in the normal group, by 0.06 percentage points.

5.4. Comparison Study

Comparison studies play a crucial role in research, helping to advance a particular field by identifying the latest and best models. In this section, cutting-edge deep learning research related to COVID-19 disease diagnosis and classification, especially on the three-category classification task, is compared with our study. Table 17 shows the performance of these models and provides a direct numerical comparison with our proposed model. Our proposed model achieved the highest MA accuracy, at 97.56%, compared with the best score of 95.57% among the state-of-the-art (SOTA) methods. Hence, our proposed model shows good performance on the three-category classification task in the COVID-19 domain.

6. Conclusions

A novel CovC-ReDRNet model is proposed to distinguish COVID-19 patients from non-COVID-19 pneumonia patients as well as normal cases. A ResNet-18 pre-trained on the ImageNet dataset is introduced as the backbone model, and afterwards tailored for feature representation. The feature map from the tailored CNN was fed to our feature-based RNN framework; subsequently, the feature representation was automatically paired with the optimal RNN. Finally, the proposed model was developed for the classification task.
The proposed model was implemented with the output of the ‘FC 128’ layer as the optimal feature representation and the dRVFL as the optimal classifier. Results based on five-fold cross-validation reveal that our method achieved the highest MA accuracy, at 97.56%. To be precise, the average sensitivity, specificity, accuracy, precision, and F1-score of the COVID-19 group were 95.31%, 99.85%, 99.44%, 98.40%, and 96.82%; the non-COVID-19 pneumonia group achieved scores of 91.09%, 98.68%, 96.81%, 95.75%, and 93.36%; and the normal group had scores of 98.43%, 92.50%, 96.44%, 96.29%, and 97.35%, respectively.
Furthermore, our novel feature-based RNN framework can be used to design the most adaptive model for a specific task. This demonstrates that our proposed framework maintains tremendous potential for improving the generalizability of a model, which could be adapted to more tasks on various domains.
Nevertheless, there are still some limitations to this project. It is difficult to know how the model achieves this classification performance, so interpretation and visualization of the networks is one of our future research directions. Furthermore, only one database was used in this paper; more databases and validation sets could better support the generality of the model. We shall also try to employ semi-supervised and unsupervised learning methods to improve the classification performance, as a massive number of medical images are unlabeled, and they can be helpful in feature generation and fusion.

Author Contributions

Conceptualization, H.Z.; methodology, H.Z. and Z.Z.; software, H.Z. and Z.Z.; validation, H.Z., S.W. and Y.Z.; formal analysis, H.Z. and Y.Z.; investigation, H.Z. and S.W.; resources, H.Z.; data curation, H.Z.; writing—original draft preparation, H.Z. and Z.Z.; writing—review and editing, H.Z., Z.Z., S.W. and Y.Z.; visualization, H.Z.; supervision, S.W. and Y.Z.; project administration, S.W. and Y.Z.; funding acquisition, S.W. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was partially supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-British Education Fund, UK (OP202006); BBSRC, UK (RM32G0178B8).

Data Availability Statement

The datasets generated and analyzed in the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to thank the reviewers and the Editor-in-Chief for their helpful comments and recommendations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. COVID-19 Weekly Epidemiological Update, Edition 134, 16 March 2023; World Health Organization: Geneva, Switzerland, 2023.
  2. Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733.
  3. Guo, Y.-R.; Cao, Q.-D.; Hong, Z.-S.; Tan, Y.-Y.; Chen, S.-D.; Jin, H.-J.; Tan, K.-S.; Wang, D.-Y.; Yan, Y. The origin, transmission and clinical therapies on coronavirus disease 2019 (COVID-19) outbreak—An update on the status. Mil. Med. Res. 2020, 7, 11.
  4. Lotfi, M.; Hamblin, M.R.; Rezaei, N. COVID-19: Transmission, prevention, and potential therapeutic opportunities. Clin. Chim. Acta 2020, 508, 254–266.
  5. Nouvellet, P.; Bhatia, S.; Cori, A.; Ainslie, K.E.; Baguelin, M.; Bhatt, S.; Boonyasiri, A.; Brazeau, N.F.; Cattarino, L.; Cooper, L.V. Reduction in mobility and COVID-19 transmission. Nat. Commun. 2021, 12, 1090.
  6. Cook, T. Personal protective equipment during the coronavirus disease (COVID) 2019 pandemic—A narrative review. Anaesthesia 2020, 75, 920–927.
  7. Crook, H.; Raza, S.; Nowell, J.; Young, M.; Edison, P. Long covid—Mechanisms, risk factors, and management. BMJ 2021, 374, n1648.
  8. Ceban, F.; Ling, S.; Lui, L.M.; Lee, Y.; Gill, H.; Teopiz, K.M.; Rodrigues, N.B.; Subramaniapillai, M.; Di Vincenzo, J.D.; Cao, B. Fatigue and cognitive impairment in Post-COVID-19 Syndrome: A systematic review and meta-analysis. Brain Behav. Immun. 2022, 101, 93–135.
  9. Bullard, J.; Dust, K.; Funk, D.; Strong, J.E.; Alexander, D.; Garnett, L.; Boodman, C.; Bello, A.; Hedley, A.; Schiffman, Z. Predicting infectious severe acute respiratory syndrome coronavirus 2 from diagnostic samples. Clin. Infect. Dis. 2020, 71, 2663–2666.
  10. Tahamtan, A.; Ardebili, A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev. Mol. Diagn. 2020, 20, 453–454.
  11. Sheikhzadeh, E.; Eissa, S.; Ismail, A.; Zourob, M. Diagnostic techniques for COVID-19 and new developments. Talanta 2020, 220, 121392.
  12. Tsang, N.N.Y.; So, H.C.; Ng, K.Y.; Cowling, B.J.; Leung, G.M.; Ip, D.K.M. Diagnostic performance of different sampling approaches for SARS-CoV-2 RT-PCR testing: A systematic review and meta-analysis. Lancet Infect. Dis. 2021, 21, 1233–1245.
  13. Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, Z.; Liao, H.; Chen, F.; Yang, F. The role of imaging in the detection and management of COVID-19: A review. IEEE Rev. Biomed. Eng. 2020, 14, 16–29.
  14. Salehi, S.; Abedi, A.; Balakrishnan, S.; Gholamrezanezhad, A. Coronavirus Disease 2019 (COVID-19): A systematic review of imaging findings in 919 patients. Am. J. Roentgenol. 2020, 215, 87–93.
  15. Li, Y.; Yao, L.; Li, J.; Chen, L.; Song, Y.; Cai, Z.; Yang, C. Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19. J. Med. Virol. 2020, 92, 903–908.
  16. Li, Y.; Xia, L. Coronavirus Disease 2019 (COVID-19): Role of chest CT in diagnosis and management. Am. J. Roentgenol. 2020, 214, 1280–1286.
  17. Long, C.; Xu, H.; Shen, Q.; Zhang, X.; Fan, B.; Wang, C.; Zeng, B.; Li, Z.; Li, X.; Li, H. Diagnosis of the Coronavirus disease (COVID-19): rRT-PCR or CT? Eur. J. Radiol. 2020, 126, 108961.
  18. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020, 296, E32–E40.
  19. Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for typical 2019-nCoV pneumonia: Relationship to negative RT-PCR testing. Radiology 2020, 296, E41–E45.
  20. Huang, P.; Liu, T.; Huang, L.; Liu, H.; Lei, M.; Xu, W.; Hu, X.; Chen, J.; Liu, B. Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion. Radiology 2020, 295, 22–23.
  21. Chen, D.; Jiang, X.; Hong, Y.; Wen, Z.; Wei, S.; Peng, G.; Wei, X. Can chest CT features distinguish patients with negative from those with positive initial RT-PCR results for coronavirus disease (COVID-19)? Am. J. Roentgenol. 2020, 216, 66–70.
  22. Wolters, P.J.; Collard, H.R.; Jones, K.D. Pathogenesis of idiopathic pulmonary fibrosis. Annu. Rev. Pathol. Mech. Dis. 2014, 9, 157–179.
  23. Marrie, T.J. Community-acquired pneumonia. Clin. Infect. Dis. 1994, 18, 501–513.
  24. Torres, A.; Catia, C.; Niederman, M.S.; Rosario, M.; Chalmers, J.D.; Wunderink, R.G. Pneumonia (Primer). Nat. Rev. Dis. Prim. 2021, 7, 25.
  25. Ruuskanen, O.; Lahti, E.; Jennings, L.C.; Murdoch, D.R. Viral pneumonia. Lancet 2011, 377, 1264–1275.
  26. Lodha, R.; Kabra, S.K.; Pandey, R.M. Antibiotics for community-acquired pneumonia in children. Cochrane Database Syst. Rev. 2013, 2013, CD004874.
  27. Zhao, D.; Yao, F.; Wang, L.; Zheng, L.; Gao, Y.; Ye, J.; Guo, F.; Zhao, H.; Gao, R. A comparative study on the clinical features of coronavirus 2019 (COVID-19) pneumonia with other pneumonias. Clin. Infect. Dis. 2020, 71, 756–761.
  28. Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549.
  29. Hemdan, E.E.-D.; Shouman, M.A.; Karar, M.E. COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055.
  30. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220.
  31. Rahman, T.; Akinbi, A.; Chowdhury, M.E.H.; Rashid, T.A.; Şengür, A.; Khandakar, A.; Islam, K.R.; Ismael, A.M. COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network. Health Inf. Sci. Syst. 2022, 10, 1.
  32. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
  33. Zhang, X.; Zhou, J.; Sun, W.; Jha, S.-K. A lightweight CNN based on transfer learning for COVID-19 diagnosis. Comput. Mater. Contin. 2022, 72, 1123–1137.
  34. Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.-K.; Ye, J.C. Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification. Med. Image Anal. 2022, 75, 102299.
  35. Goncharov, M.; Pisov, M.; Shevtsov, A.; Shirokikh, B.; Kurmukov, A.; Blokhin, I.; Chernina, V.; Solovev, A.; Gombolevskiy, V.; Morozov, S.; et al. CT-based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification. Med. Image Anal. 2021, 71, 102054.
  36. Signoroni, A.; Savardi, M.; Benini, S.; Adami, N.; Leonardi, R.; Gibellini, P.; Vaccher, F.; Ravanelli, M.; Borghesi, A.; Maroldi, R.; et al. BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset. Med. Image Anal. 2021, 71, 102046.
  37. Kollias, D.; Arsenos, A.; Kollias, S. AI-MIA: COVID-19 Detection and Severity Analysis through Medical Imaging; Springer: Cham, Switzerland, 2023; pp. 677–690.
  38. Chieregato, M.; Frangiamore, F.; Morassi, M.; Baresi, C.; Nici, S.; Bassetti, C.; Bnà, C.; Galelli, M. A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data. Sci. Rep. 2022, 12, 4329.
  39. Bougourzi, F.; Distante, C.; Dornaika, F.; Taleb-Ahmed, A. CNR-IEMN-CD and CNR-IEMN-CSD Approaches for COVID-19 Detection and COVID-19 Severity Detection from 3D CT-Scans; Springer: Cham, Switzerland, 2023; pp. 593–604.
  40. Rustam, F.; Reshi, A.A.; Mehmood, A.; Ullah, S.; On, B.-W.; Aslam, W.; Choi, G.S. COVID-19 future forecasting using supervised machine learning models. IEEE Access 2020, 8, 101489–101499.
  41. An, C.; Lim, H.; Kim, D.-W.; Chang, J.H.; Choi, Y.J.; Kim, S.W. Machine learning prediction for mortality of patients diagnosed with COVID-19: A nationwide Korean cohort study. Sci. Rep. 2020, 10, 18716.
  42. Khan, I.U.; Aslam, N.; Aljabri, M.; Aljameel, S.S.; Kamaleldin, M.M.A.; Alshamrani, F.M.; Chrouf, S.M.B. Computational intelligence-based model for mortality rate prediction in COVID-19 patients. Int. J. Environ. Res. Public Health 2021, 18, 6429.
  43. Ikemura, K.; Bellin, E.; Yagi, Y.; Billett, H.; Saada, M.; Simone, K.; Stahl, L.; Szymanski, J.; Goldstein, D.; Reyes Gil, M. Using automated machine learning to predict the mortality of patients with COVID-19: Prediction model development study. J. Med. Internet Res. 2021, 23, e23458.
  44. Elshennawy, N.M.; Ibrahim, D.M.; Sarhan, A.M.; Arafa, M. Deep-Risk: Deep Learning-Based Mortality Risk Predictive Models for COVID-19. Diagnostics 2022, 12, 1847. [Google Scholar] [CrossRef]
  45. Carrington, A.M.; Manuel, D.G.; Fieguth, P.W.; Ramsay, T.; Osmani, V.; Wernly, B.; Bennett, C.; Hawken, S.; Magwood, O.; Sheikh, Y. Deep ROC analysis and AUC as balanced average accuracy, for improved classifier selection, audit and explanation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 329–341. [Google Scholar] [CrossRef]
  46. Hussain, E.; Hasan, M.; Rahman, M.A.; Lee, I.; Tamanna, T.; Parvez, M.Z. CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. Chaos Solitons Fractals 2021, 142, 110495. [Google Scholar] [CrossRef]
  47. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 2020, 6, 1122–1129. [Google Scholar] [CrossRef]
  48. Beutel, J.; Kundel, H.L.; Kim, Y.; Van Metter, R.L.; Horii, S.C. Handbook of Medical Imaging; Spie Press: Bellingham, WA, USA, 2000; Volume 3. [Google Scholar]
  49. Kasban, H.; El-Bendary, M.; Salama, D. A comparative study of medical imaging techniques. Int. J. Inf. Sci. Intell. Syst. 2015, 4, 37–58. [Google Scholar]
  50. Farncombe, T.; Iniewski, K. Medical Imaging: Technology and Applications; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  51. Kanne, J.P.; Bai, H.; Bernheim, A.; Chung, M.; Haramati, L.B.; Kallmes, D.F.; Little, B.P.; Rubin, G.; Sverzellati, N. COVID-19 imaging: What we know now and what remains unknown. Radiology 2021, 299, E262–E279. [Google Scholar] [CrossRef]
  52. Chung, M.; Bernheim, A.; Mei, X.; Zhang, N.; Huang, M.; Zeng, X.; Cui, J.; Xu, W.; Yang, Y.; Fayad, Z.A. CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology 2020, 295, 202–207. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Simpson, S.; Kay, F.U.; Abbara, S.; Bhalla, S.; Chung, J.H.; Chung, M.; Henry, T.S.; Kanne, J.P.; Kligerman, S.; Ko, J.P. Radiological Society of North America expert consensus statement on reporting chest CT findings related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA. J. Thorac. Imaging 2020, 35, 219–227. [Google Scholar] [CrossRef] [PubMed]
  54. Patel, P. Chest X-ray (COVID-19 & Pneumonia). 2020. Available online: https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia (accessed on 4 November 2022).
  55. Cohen, J.P.; Morrison, P.; Dao, L.; Roth, K.; Duong, T.Q.; Ghassemi, M. COVID-19 image data collection: Prospective predictions are the future. arXiv 2020, arXiv:2006.11988. [Google Scholar] [CrossRef]
  56. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef]
57. Chung, A. Figure 1 COVID-19 Chest X-ray Dataset Initiative. 2020; retrieved 17 April 2020. [Google Scholar]
  58. Wang, L.; Wong, A.; Lin, Z.; McInnis, P.; Chung, A.; Gunraj, H.; Lee, J.; Ross, M.; VanBerlo, B.; Ebadi, A. Actualmed COVID-19 Chest X-ray Dataset Initiative. 2020. Available online: https://github.com/agchung/actualmed-covid-chestxraydataset (accessed on 4 November 2022).
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  60. Telgarsky, M. Benefits of depth in neural networks. In Proceedings of the Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; pp. 1517–1539. [Google Scholar]
  61. Zhou, D.-X. Universality of deep convolutional neural networks. Appl. Comput. Harmon. Anal. 2020, 48, 787–794. [Google Scholar] [CrossRef] [Green Version]
  62. Bianchini, M.; Scarselli, F. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1553–1565. [Google Scholar] [CrossRef] [PubMed]
  63. Shrestha, A.; Mahmood, A. Review of deep learning algorithms and architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  64. Chen, Z.; Deng, L.; Wang, B.; Li, G.; Xie, Y. A comprehensive and modularized statistical framework for gradient norm equality in deep neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 13–31. [Google Scholar] [CrossRef]
  65. Allen-Zhu, Z.; Li, Y.; Song, Z. On the convergence rate of training recurrent neural networks. Adv. Neural Inf. Process. Syst. 2019, 32, 1–13. [Google Scholar]
  66. Antun, V.; Renna, F.; Poon, C.; Adcock, B.; Hansen, A.C. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA 2020, 117, 30088–30095. [Google Scholar] [CrossRef]
  67. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  68. Chaudhari, P.; Soatto, S. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. In Proceedings of the 2018 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 11–16 February 2018; pp. 1–10. [Google Scholar]
  69. Poggio, T.; Kawaguchi, K.; Liao, Q.; Miranda, B.; Rosasco, L.; Boix, X.; Hidary, J.; Mhaskar, H. Theory of deep learning III: Explaining the non-overfitting puzzle. arXiv 2017, arXiv:1801.00173. [Google Scholar]
  70. Zhu, Z.; Wu, J.; Yu, B.; Wu, L.; Ma, J. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. arXiv 2018, arXiv:1803.00195. [Google Scholar]
  71. Arora, S.; Cohen, N.; Golowich, N.; Hu, W. A convergence analysis of gradient descent for deep linear neural networks. arXiv 2018, arXiv:1810.02281. [Google Scholar]
  72. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  73. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  74. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  75. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  76. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  77. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  78. Schmidt, W.F.; Kraaijveld, M.A.; Duin, R.P. Feed forward neural networks with random weights. In Proceedings of the International Conference on Pattern Recognition, The Hague, The Netherlands, 30 August–3 September 1992; p. 1. [Google Scholar]
  79. Pao, Y.-H.; Park, G.-H.; Sobajic, D.J. Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 1994, 6, 163–180. [Google Scholar] [CrossRef]
  80. Shi, Q.; Katuwal, R.; Suganthan, P.N.; Tanveer, M. Random vector functional link neural network based ensemble deep learning. Pattern Recognit. 2021, 117, 107978. [Google Scholar] [CrossRef]
  81. Wang, J.; Bao, Y.; Wen, Y.; Lu, H.; Luo, H.; Xiang, Y.; Li, X.; Liu, C.; Qian, D. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans. Med. Imaging 2020, 39, 2572–2583. [Google Scholar] [CrossRef] [PubMed]
  82. Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Rahman, M.A.; Wang, Q.; Qi, S.; Kong, F.; Zhu, X.; Zhao, X. Identification of COVID-19 samples from chest X-ray images using deep learning: A comparison of transfer learning approaches. J. X-ray Sci. Technol. 2020, 28, 821–839. [Google Scholar] [CrossRef]
  83. Apostolopoulos, I.D.; Mpesiana, T.A. COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology 2020, 296, E65–E71. [Google Scholar] [CrossRef] [Green Version]
  85. Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581. [Google Scholar] [CrossRef] [PubMed]
  86. Rahimzadeh, M.; Attar, A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform. Med. Unlocked 2020, 19, 100360. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Templates for each category of the dataset.
Figure 2. Comparison of residual learning with shortcut connections. Given the plain network (a), the single-block residual learning (b) is compared with the multi-block residual learning (c). In particular, x_s refers to the patterns in shallow layers and x_d refers to those in deep layers. Notably, x_d remains in line with x_(s+1) in Equation (1) and with x_d in Equation (2).
Figure 3. Tailoring the pre-trained ResNet-18.
Figure 4. The procedure of the feature representation.
Figure 5. The structure of ELM.
Figure 6. The structure of SNN.
Figure 7. The framework of RVFL.
Figure 8. The framework of dRVFL.
Figure 9. The structure of the feature-based RNN framework.
Figure 10. A diagram of the proposed CovC-ReDRNet model.
Table 1. The analysis of SOTA methods in the COVID-19 diagnosis task.
Methodology | Tasks | Contributions | Limitations
Wang, et al. [28] | Diagnosis | The study developed COVID-Net as one of the earliest open-source networks for COVID-19 diagnosis and assembled one of the largest publicly available datasets of COVID-19-positive cases. | The study achieved limited model performance, with an accuracy of 93.57%.
Hemdan, et al. [29] | Diagnosis | The study tested seven different architectures of deep CNN models in COVIDX-Net. | The study merely implemented a binary classification task and achieved limited model performance, with the highest accuracy being 90%.
Narin, et al. [30] | Diagnosis | Three different datasets were tested in the experiments. | The study merely implemented a binary classification task.
Rahman, et al. [31] | Diagnosis | Six different architectures of deep CNN models were investigated for multiclassification tasks. | The classification task mainly focused on distinguishing COVID-19 from other brain diseases but not lung disease.
Abbas, et al. [32] | Diagnosis | The study addressed the problem of irregularities in annotated data using the DeTraC network. | The classification task achieved limited model performance, with the highest accuracy being 93.1%.
Zhang, et al. [33] | Diagnosis | The study proposed a lightweight architecture that takes only 1.06 s on average to diagnose a chest CT image. | The study merely implemented a binary classification task.
Park, et al. [34] | Diagnosis/Severity Assessment | The multi-task model is capable of both classification and severity prediction tasks. | The classification task achieved limited model performance, with the highest accuracy being 86.8%.
Table 2. The contributions of SOTA methods to the COVID-19 severity assessment task.
Methodology | Tasks | Contributions
Park, et al. [34] | Severity Assessment/Diagnosis | The multi-task model is capable of both classification and severity prediction tasks.
Goncharov, et al. [35] | Severity Assessment | The study obtained the most abundant spatial feature patterns, thus improving the quality and accuracy of the model classification significantly.
Signoroni, et al. [36] | Severity Assessment | The study proposed an end-to-end multi-network, highlighting its multi-task self-attentive behavior with high robustness for all variabilities from the medical domain.
Kollias, et al. [37] | Severity Assessment/Diagnosis | The study created a 3D database, COV19-CT-DB, consisting of chest CT scans from COVID-19 patients, and developed a hybrid CNN-RNN model for severity classification.
Chieregato, et al. [38] | Severity Assessment | In this study, CT patterns were captured using the 3D-CNN model and, in turn, the Boruta algorithm was used to select the theoretical values for the SHAP game, where an AUC of 94.9% was achieved.
Bougourzi, et al. [39] | Severity Assessment/Diagnosis | The study proposed an Inception-based ensemble architecture for COVID-19 severity assessment, named CNR-IEMN-CSD. The novel network ranked third in the second COV19D competition, demonstrating an improvement of 6.81% over the baseline results.
Table 3. The contributions of SOTA methods in the COVID-19 prognosis task.
Methodology | Task | Contribution
Rustam, et al. [40] | Prognosis | The study contained four traditional machine learning prediction models, namely linear regression, least absolute shrinkage and selection operator (LASSO), support vector machine (SVM), and exponential smoothing (ES), with the aim of predicting the risk level of COVID-19 spread.
An, et al. [41] | Prognosis | The study investigated LASSO, SVM, random forest (RF), and K-nearest neighbor (KNN) in order to predict mortality and thus achieve accurate prognostic predictions to triage patients effectively.
Khan, et al. [42] | Prognosis | The study examined RF, KNN, decision tree (DT), logistic regression (LR), extreme gradient boosting (XGBoost), and deep learning networks to forecast mortality in COVID-19 cases. The models were trained on confirmed COVID-19 patients from 146 countries.
Ikemura, et al. [43] | Prognosis | The study developed an autoML framework to investigate 20 machine learning models and generate the best ensemble model based on 48 variables, recording an AUPRC of 80.7%. The study identified critical variables associated with mortality and accurately forecasted the survival of COVID-19 patients.
Elshennawy, et al. [44] | Prognosis | The study developed three architectures: a basic CNN-based model (named CV-CNN), a hybrid model combining a long short-term memory (LSTM) mechanism with a CNN (named CV-LSTM + CNN), and a hybrid model trained using transformed images (named IMG-CNN). In particular, the average accuracy of the IMG-CNN prognostic model reached 94.14%.
Table 4. Data distribution in different categories.
Dataset | COVID-19 | Non-COVID-19 Pneumonia | Normal | Total
Training | 460 | 3418 | 1266 | 5144
Test | 116 | 855 | 317 | 1288
Total | 576 | 4273 | 1583 | 6432
Table 5. The architecture of the tailored CNN.
Operation Layers | Property of Layers | Number of Channels | Size of Filter | Number of Filters | Stride Value | Size of Padding | Size of Output
Image input | - | - | - | - | - | - | 224 × 224 × 3
conv1 | Convolution | 3 | 7 × 7 | 64 | 2 × 2 | 3 × 3 × 3 × 3 | 112 × 112 × 64
pool1 | Max Pooling | 64 | 3 × 3 | - | 2 × 2 | 1 × 1 × 1 × 1 | 56 × 56 × 64
conv2a_branch2a | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64
conv2a_branch2b | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64
conv2a | Addition (element-wise sum of the two branches) | 64 | - | - | - | - | 56 × 56 × 64
conv2b_branch2a | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64
conv2b_branch2b | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64
conv2b | Addition (element-wise sum of the two branches) | 64 | - | - | - | - | 56 × 56 × 64
conv3a_branch1 | Convolution | 64 | 1 × 1 | 128 | 2 × 2 | 0 × 0 × 0 × 0 | 28 × 28 × 128
conv3a_branch2a | Convolution | 64 | 3 × 3 | 128 | 2 × 2 | 1 × 1 × 1 × 1 | 28 × 28 × 128
conv3a_branch2b | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128
conv3a | Addition (element-wise sum of the two branches) | 128 | - | - | - | - | 28 × 28 × 128
conv3b_branch2a | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128
conv3b_branch2b | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128
conv3b | Addition (element-wise sum of the two branches) | 128 | - | - | - | - | 28 × 28 × 128
conv4a_branch1 | Convolution | 128 | 1 × 1 | 256 | 2 × 2 | 0 × 0 × 0 × 0 | 14 × 14 × 256
conv4a_branch2a | Convolution | 128 | 3 × 3 | 256 | 2 × 2 | 1 × 1 × 1 × 1 | 14 × 14 × 256
conv4a_branch2b | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256
conv4a | Addition (element-wise sum of the two branches) | 256 | - | - | - | - | 14 × 14 × 256
conv4b_branch2a | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256
conv4b_branch2b | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256
conv4b | Addition (element-wise sum of the two branches) | 256 | - | - | - | - | 14 × 14 × 256
conv5a_branch1 | Convolution | 256 | 1 × 1 | 512 | 2 × 2 | 0 × 0 × 0 × 0 | 7 × 7 × 512
conv5a_branch2a | Convolution | 256 | 3 × 3 | 512 | 2 × 2 | 1 × 1 × 1 × 1 | 7 × 7 × 512
conv5a_branch2b | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512
conv5a | Addition (element-wise sum of the two branches) | 512 | - | - | - | - | 7 × 7 × 512
conv5b_branch2a | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512
conv5b_branch2b | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512
conv5b | Addition (element-wise sum of the two branches) | 512 | - | - | - | - | 7 × 7 × 512
pool5 | Global Average Pooling | 512 | - | - | - | - | 1 × 1 × 512
fc128 | Fully Connected | 512 | - | - | - | - | 1 × 1 × 128
fc3 | Fully Connected | 128 | - | - | - | - | 1 × 1 × 3
softmax_out | Softmax | 3 | - | - | - | - | 1 × 1 × 3
- | Classification Output | 3 | - | - | - | - | 1 × 1 × 3
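For readers who wish to reproduce the tailoring summarized in Table 5, a minimal sketch is given below. It assumes PyTorch/torchvision, which the paper does not mandate: the ImageNet-pre-trained ResNet-18 keeps its convolutional trunk and global average pooling, while the original 1000-way classifier is replaced by the fc128 and fc3 layers with a softmax over the three categories.

# A minimal sketch (not the authors' released code) of the tailoring in Table 5.
import torch
import torch.nn as nn
from torchvision import models

# Load the ImageNet-pre-trained ResNet-18 backbone.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
# Replace the original 1000-way classifier with fc128 followed by fc3.
backbone.fc = nn.Sequential(
    nn.Linear(512, 128),   # fc128: 1 x 1 x 512 -> 1 x 1 x 128
    nn.Linear(128, 3),     # fc3: three categories (COVID-19, pneumonia, normal)
)
# Softmax corresponds to softmax_out in Table 5; fine-tuning would instead use
# nn.CrossEntropyLoss on the raw logits.
x = torch.randn(1, 3, 224, 224)        # image input size from Table 5
probs = torch.softmax(backbone(x), dim=1)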
Table 6. The definition of the mathematical symbols used.
Symbol | Meaning
(x_i, y_i) | The i-th sample of the given dataset
n | The input dimension
m | The output dimension
X | The original input matrix
Y | The ground-truth label matrix
M | The output matrix of the hidden layer
M^+ | The pseudo-inverse matrix of M
g(·) | The sigmoid function
w_j | The weights of the j-th hidden node
b_j | The bias of the j-th hidden node
p | The output weights
v | The number of hidden nodes
O_i | The final output
e | The output biases of the SNN
D | The input of the output layer
l | The number of hidden layers
c | The number of categories
TN | The true-negative value according to the confusion matrix
TP | The true-positive value according to the confusion matrix
FP | The false-positive value according to the confusion matrix
FN | The false-negative value according to the confusion matrix
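As a worked illustration of how these symbols combine, the following NumPy sketch (an assumption-level reconstruction, not the authors' released code) trains a shallow RVFL in closed form: the random hidden output M = g(Xw + b) is concatenated with the direct link to X to form D, the input of the output layer, and the output weights follow from the pseudo-inverse, p = D^+ Y. Dropping the direct link recovers the ELM.

# A NumPy sketch of closed-form ELM/RVFL training using the Table 6 symbols.
import numpy as np

def sigmoid(z):                              # g(.) in Table 6
    return 1.0 / (1.0 + np.exp(-z))

def train_rvfl(X, Y, v=400, direct_link=True, seed=0):
    """X: (samples, n) features; Y: (samples, m) one-hot labels; v: hidden nodes."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((X.shape[1], v))  # random input weights, never trained
    b = rng.standard_normal(v)                # random biases
    M = sigmoid(X @ w + b)                    # hidden-layer output matrix
    D = np.hstack([M, X]) if direct_link else M   # direct link distinguishes RVFL from ELM
    p = np.linalg.pinv(D) @ Y                 # p = D^+ Y, solved in a single step
    return w, b, p

def predict_rvfl(X, w, b, p, direct_link=True):
    M = sigmoid(X @ w + b)
    D = np.hstack([M, X]) if direct_link else M
    return np.argmax(D @ p, axis=1)           # predicted category index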
Table 7. The hyper-parameter settings of the proposed CovC-ReDRNet.
Hyper-Parameter | Value
Mini-batch size | 10
Max epoch | 4
Learning rate | 10^-4
Number of hidden nodes | 400
Number of hidden layers | 4
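A hedged sketch of how the last two settings could translate into a dRVFL, stacking l = 4 random hidden layers of v = 400 nodes each; the stacking follows the general design of Shi et al. [80], and the exact feature-concatenation details are our assumption rather than the authors' specification.

# A sketch of a dRVFL with the Table 7 settings: 4 hidden layers x 400 nodes.
import numpy as np

def train_drvfl(X, Y, layers=4, v=400, seed=0):
    rng = np.random.default_rng(seed)
    H, feats, params = X, [], []
    for _ in range(layers):
        w = rng.standard_normal((H.shape[1], v))
        b = rng.standard_normal(v)
        H = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # sigmoid activation of this layer
        feats.append(H)                          # every layer feeds the output layer
        params.append((w, b))
    D = np.hstack(feats + [X])                   # stacked features plus direct link to X
    p = np.linalg.pinv(D) @ Y                    # closed-form output weights
    return params, p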
Table 8. The MA accuracies of five-fold cross-validation.
Five-Fold Cross-Validation | CovC-ReDRNet (Ours)
Fold 1 | 97.62%
Fold 2 | 97.82%
Fold 3 | 97.20%
Fold 4 | 97.57%
Fold 5 | 97.62%
Average | 97.56%
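The protocol behind Table 8 can be sketched as follows, assuming a plain shuffled split; `fit` and `score` are hypothetical placeholders standing in for the feature-extraction-plus-dRVFL pipeline and the MA-accuracy evaluation, respectively.

# A minimal five-fold cross-validation sketch: every sample is held out exactly
# once, and the five per-fold MA accuracies are averaged for the final row.
import numpy as np

def five_fold_ma_accuracy(X, y, fit, score, n_folds=5, seed=0):
    """fit(X, y) -> model and score(model, X, y) -> MA accuracy are caller-supplied."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    scores = []
    for k in range(n_folds):
        test = folds[k]                                    # fold k is held out
        train = np.hstack([folds[j] for j in range(n_folds) if j != k])
        model = fit(X[train], y[train])
        scores.append(score(model, X[test], y[test]))
    return float(np.mean(scores)), scores                  # average and per-fold values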
Table 9. The results of CovC-ReDRNet.
Category | Sensitivity | Specificity | Accuracy | Precision | F1-Score
COVID-19 | 95.31% | 99.85% | 99.44% | 98.40% | 96.82%
non-COVID-19 Pneumonia | 91.09% | 98.68% | 96.81% | 95.75% | 93.36%
Normal | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
MA | 94.94% | 97.01% | 97.56% | 96.81% | 95.84%
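The MA row is the unweighted mean of the three per-class rows, each computed one-vs-rest from the confusion matrix using the TP, TN, FP, and FN counts defined in Table 6. A short self-contained sketch:

# Macro-averaged (MA) metrics from a c x c confusion matrix (NumPy array),
# where conf[i, j] counts samples of true class i predicted as class j.
import numpy as np

def ma_metrics(conf):
    n = conf.sum()
    rows = []
    for k in range(conf.shape[0]):
        TP = conf[k, k]
        FN = conf[k, :].sum() - TP
        FP = conf[:, k].sum() - TP
        TN = n - TP - FN - FP
        sens = TP / (TP + FN)                  # sensitivity (recall)
        spec = TN / (TN + FP)                  # specificity
        acc = (TP + TN) / n                    # one-vs-rest accuracy
        prec = TP / (TP + FP)                  # precision
        f1 = 2 * prec * sens / (prec + sens)   # F1-score
        rows.append([sens, spec, acc, prec, f1])
    return np.mean(rows, axis=0)               # MA sensitivity, specificity, accuracy, precision, F1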
Table 10. The MA accuracies based on different backbone models.
Backbone | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average
AlexNet | 93.10% | 92.27% | 95.96% | 97.46% | 97.62% | 95.28%
VGG | 94.82% | 96.12% | 91.15% | 88.95% | 90.56% | 92.32%
GoogleNet | 97.04% | 96.89% | 97.00% | 96.42% | 96.22% | 96.71%
DenseNet | 97.72% | 97.10% | 96.94% | 97.77% | 96.84% | 97.27%
MobileNet | 97.10% | 96.84% | 97.30% | 97.05% | 96.79% | 97.02%
ResNet-18 (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56%
Table 11. The measurements in three categories based on different backbone models.
Category | Backbone | Sensitivity | Specificity | Accuracy | Precision | F1-Score
COVID-19 | AlexNet | 74.96% | 99.73% | 97.51% | 96.14% | 82.17%
COVID-19 | VGG | 36.00% | 99.86% | 94.14% | NA | NA
COVID-19 | GoogleNet | 94.10% | 99.57% | 99.08% | 95.64% | 94.84%
COVID-19 | DenseNet | 95.48% | 99.80% | 99.41% | 97.87% | 96.66%
COVID-19 | MobileNet | 91.85% | 99.83% | 99.11% | 98.15% | 94.88%
COVID-19 | ResNet-18 (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82%
non-COVID-19 Pneumonia | AlexNet | 96.82% | 86.19% | 93.25% | 93.40% | 95.05%
non-COVID-19 Pneumonia | VGG | 96.47% | 78.54% | 90.45% | 90.01% | 93.10%
non-COVID-19 Pneumonia | GoogleNet | 96.98% | 91.94% | 95.29% | 95.97% | 96.47%
non-COVID-19 Pneumonia | DenseNet | 98.03% | 92.22% | 96.08% | 96.15% | 97.08%
non-COVID-19 Pneumonia | MobileNet | 98.32% | 90.46% | 95.68% | 95.33% | 96.80%
non-COVID-19 Pneumonia | ResNet-18 (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
Normal | AlexNet | 88.94% | 97.09% | 95.09% | 90.91% | 89.90%
Normal | VGG | 86.03% | 94.43% | 92.37% | 84.00% | 84.87%
Normal | GoogleNet | 90.27% | 97.57% | 95.77% | 92.46% | 91.32%
Normal | DenseNet | 90.34% | 98.29% | 96.33% | 94.57% | 92.39%
Normal | MobileNet | 89.33% | 98.52% | 96.25% | 95.18% | 92.14%
Normal | ResNet-18 (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36%
Table 12. The MA accuracies based on ResNet variants.
Architecture | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average
ResNet-50 | 96.37% | 97.15% | 96.74% | 96.89% | 96.94% | 96.82%
ResNet-18 (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56%
Table 13. The measurements in three categories based on ResNet variants.
Category | Backbone | Sensitivity | Specificity | Accuracy | Precision | F1-Score
COVID-19 | ResNet-50 | 94.62% | 99.85% | 99.38% | 98.39% | 96.46%
COVID-19 | ResNet-18 (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82%
non-COVID-19 Pneumonia | ResNet-50 | 97.85% | 90.23% | 95.29% | 95.20% | 96.50%
non-COVID-19 Pneumonia | ResNet-18 (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
Normal | ResNet-50 | 88.38% | 98.21% | 95.79% | 94.18% | 91.17%
Normal | ResNet-18 (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36%
Table 14. The MA accuracies based on RNN technology compared to traditional classifiers.
Technologies | Backbone | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average
Traditional classifier | AlexNet | 89.68% | 89.52% | 94.87% | 97.05% | 94.87% | 93.20%
ELM | AlexNet | 79.61% | 77.64% | 93.47% | 96.38% | 95.91% | 88.60%
SNN | AlexNet | 77.64% | 87.65% | 93.73% | 96.95% | 96.74% | 90.54%
RVFL | AlexNet | 89.62% | 89.57% | 92.54% | 96.95% | 96.48% | 93.03%
dRVFL | AlexNet | 93.10% | 92.27% | 95.96% | 97.46% | 97.62% | 95.28%
Traditional classifier | VGG | 83.73% | 94.61% | 90.48% | 88.17% | 88.48% | 89.10%
ELM | VGG | 92.07% | 87.36% | 77.59% | 77.64% | 78.21% | 82.57%
SNN | VGG | 91.56% | 88.66% | 77.59% | 77.64% | 78.94% | 82.88%
RVFL | VGG | 90.11% | 92.54% | 91.36% | 88.90% | 89.16% | 90.41%
dRVFL | VGG | 94.82% | 96.12% | 91.15% | 88.95% | 90.56% | 92.32%
Traditional classifier | GoogleNet | 96.47% | 96.37% | 96.58% | 96.32% | 94.67% | 96.08%
ELM | GoogleNet | 95.95% | 95.60% | 95.86% | 95.80% | 93.74% | 95.39%
SNN | GoogleNet | 96.26% | 95.86% | 96.27% | 95.90% | 94.82% | 95.82%
RVFL | GoogleNet | 96.58% | 96.06% | 96.74% | 96.06% | 95.03% | 96.09%
dRVFL | GoogleNet | 97.04% | 96.89% | 97.00% | 96.42% | 96.22% | 96.71%
Traditional classifier | DenseNet | 97.51% | 97.15% | 96.89% | 97.77% | 96.32% | 97.13%
ELM | DenseNet | 97.46% | 97.35% | 97.25% | 97.88% | 96.53% | 97.29%
SNN | DenseNet | 97.92% | 97.30% | 97.20% | 97.41% | 96.43% | 97.25%
RVFL | DenseNet | 97.92% | 97.51% | 97.72% | 98.14% | 96.68% | 97.27%
dRVFL | DenseNet | 97.72% | 97.10% | 96.94% | 97.77% | 96.84% | 97.27%
Traditional classifier | MobileNet | 96.37% | 95.86% | 95.54% | 95.96% | 95.45% | 95.83%
ELM | MobileNet | 96.47% | 96.01% | 96.32% | 96.17% | 96.17% | 96.23%
SNN | MobileNet | 96.73% | 96.06% | 96.42% | 95.86% | 96.33% | 96.28%
RVFL | MobileNet | 97.20% | 96.58% | 97.04% | 96.79% | 96.89% | 96.90%
dRVFL | MobileNet | 97.10% | 96.84% | 97.30% | 97.05% | 96.79% | 97.02%
Traditional classifier | ResNet-18 | 97.00% | 97.56% | 96.99% | 97.67% | 97.00% | 97.24%
ELM | ResNet-18 | 96.53% | 96.37% | 96.58% | 96.84% | 96.27% | 96.52%
SNN | ResNet-18 | 96.01% | 96.58% | 96.58% | 97.10% | 96.74% | 96.60%
RVFL | ResNet-18 | 97.00% | 97.77% | 97.15% | 97.62% | 97.31% | 97.37%
dRVFL (Ours) | ResNet-18 | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56%
Table 15. The MA accuracies of a deep RNN along with shallow RNNs.
RNN | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average
ELM | 96.53% | 96.37% | 96.58% | 96.84% | 96.27% | 96.52%
SNN | 96.01% | 96.58% | 96.58% | 97.10% | 96.74% | 96.60%
RVFL | 97.00% | 97.77% | 97.15% | 97.62% | 97.31% | 97.37%
dELM | 77.64% | 77.59% | 77.62% | 77.62% | 77.64% | 77.62%
dSNN | 77.64% | 77.59% | 77.62% | 77.62% | 77.64% | 77.62%
dRVFL (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56%
Table 16. The measurements in three categories along with shallow RNNs.
Category | RNN | Sensitivity | Specificity | Accuracy | Precision | F1-Score
COVID-19 | ELM | 88.37% | 99.61% | 98.60% | 95.68% | 91.87%
COVID-19 | SNN | 90.63% | 99.66% | 98.85% | 96.31% | 93.36%
COVID-19 | RVFL | 94.79% | 99.80% | 99.35% | 97.88% | 96.29%
COVID-19 | dELM | 0% | 100% | 91.05% | NA | NA
COVID-19 | dSNN | 0% | 100% | 91.05% | NA | NA
COVID-19 | dRVFL (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82%
non-COVID-19 Pneumonia | ELM | 97.59% | 90.09% | 95.07% | 95.12% | 96.34%
non-COVID-19 Pneumonia | SNN | 97.75% | 90.14% | 95.20% | 95.15% | 96.43%
non-COVID-19 Pneumonia | RVFL | 98.55% | 91.62% | 96.22% | 95.88% | 97.20%
non-COVID-19 Pneumonia | dELM | 0% | 100% | 75.39% | NA | NA
non-COVID-19 Pneumonia | dSNN | 0% | 100% | 75.39% | NA | NA
non-COVID-19 Pneumonia | dRVFL (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
Normal | ELM | 89.51% | 97.96% | 95.88% | 93.48% | 91.45%
Normal | SNN | 88.76% | 98.04% | 95.76% | 93.67% | 91.14%
Normal | RVFL | 89.77% | 98.74% | 96.53% | 95.89% | 92.72%
Normal | dELM | 100% | 0% | 66.43% | 66.43% | 79.83%
Normal | dSNN | 100% | 0% | 66.43% | 66.43% | 79.83%
Normal | dRVFL (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36%
Table 17. Comparison of SOTA methods with our proposed model.
(a)
Method | Sample Size | Category Distribution (Class I: COVID-19; Class II: non-COVID-19 pneumonia; Class III: normal)

CovC-ReDRNet (Ours) | 6432 chest X-ray images | 576 COVID-19 patients, 4273 non-COVID-19 pneumonia cases, 1583 normal cases
MA: accuracy = 97.56%, sensitivity = 94.94%, specificity = 97.01%, precision = 96.81%, F1-score = 95.84%
Class I: accuracy = 99.44%, sensitivity = 95.31%, specificity = 99.85%, precision = 98.40%, F1-score = 96.82%
Class II: accuracy = 96.81%, sensitivity = 91.09%, specificity = 98.68%, precision = 95.75%, F1-score = 93.36%
Class III: accuracy = 96.44%, sensitivity = 98.43%, specificity = 92.50%, precision = 96.29%, F1-score = 97.35%

ResNet-18 with location-attention mechanism [47] | 618 chest CT images | 219 COVID-19 patients, 224 IAVP cases, 175 normal cases
MA: accuracy = 91.11%, sensitivity = 86.67%, specificity = 93.33%, precision = 86.85%, F1-score = 86.71%
Class I: accuracy = 88.89%, sensitivity = 86.7%, specificity = 90.00%, precision = 81.3%, F1-score = 83.9%
Class II (IAVP): accuracy = 90.00%, sensitivity = 83.3%, specificity = 93.33%, precision = 86.2%, F1-score = 84.7%
Class III: accuracy = 94.44%, sensitivity = 90.0%, specificity = 96.67%, precision = 93.1%, F1-score = 91.5%

3D-ResNets with the prior-attention mechanism [81] | 4657 chest CT images | 1315 COVID-19 patients, 2406 interstitial lung disease (ILD) cases, 936 normal cases
MA: accuracy = 91.40%, sensitivity = 86.13%, specificity = 93.20%, precision = 84.70%, F1-score = 85.20%
Class I: accuracy = 93.3%, sensitivity = 87.6%, specificity = 95.5%, precision = 88.4%, F1-score = 87.8%
Class II (ILD): accuracy = 89.4%, sensitivity = 88.5%, specificity = 90.6%, precision = 91.9%, F1-score = 90.2%
Class III: accuracy = 91.5%, sensitivity = 82.3%, specificity = 93.5%, precision = 73.8%, F1-score = 77.6%

Pre-trained VGG-19 [82] | 860 chest X-ray images | 260 COVID-19 patients, 300 non-COVID-19 pneumonia cases, 300 normal cases
MA: accuracy = 92.86%, sensitivity = 89.67%, specificity = 94.52%, precision = 90.83%, F1-score = 89.65%
Class I: accuracy = 97.14%, sensitivity = 95.00%, specificity = 98.00%, precision = 95.00%, F1-score = 95.00%
Class II: accuracy = 91.43%, sensitivity = 78.00%, specificity = 98.89%, precision = 97.50%, F1-score = 86.60%
Class III: accuracy = 90.00%, sensitivity = 96.00%, specificity = 86.67%, precision = 80.00%, F1-score = 87.30%

(b)
Method | Sample Size | Category Distribution (Class I: COVID-19; Class II: non-COVID-19 pneumonia; Class III: normal/other lung disease)

Pre-trained VGG-19 [83] | 1428 chest X-ray images | 224 COVID-19 patients, 700 bacterial pneumonia cases, 504 normal cases
MA: accuracy = 95.57%, sensitivity = 91.66%, specificity = 96.28%, precision = 92.09%, F1-score = 91.86%
Class I: accuracy = 97.58%, sensitivity = 88.41%, specificity = 98.69%, precision = 89.05%, F1-score = 88.73%
Class II (bacterial pneumonia): accuracy = 94.87%, sensitivity = 91.27%, specificity = 96.84%, precision = 94.07%, F1-score = 92.65%
Class III: accuracy = 94.27%, sensitivity = 95.29%, specificity = 93.3%, precision = 93.16%, F1-score = 94.21%

COVNet (ResNet50-based) [84] | 4352 chest X-ray images | 1292 COVID-19 patients, 1735 community-acquired pneumonia (CAP) cases, 1325 non-pneumonia lung disease cases
MA: sensitivity = 90.33%, specificity = 94.67%
Class I: sensitivity = 90%, specificity = 96%
Class II (CAP): sensitivity = 87%, specificity = 92%
Class III (non-pneumonia lung disease): sensitivity = 94%, specificity = 96%

CoroNet (Xception-based) [85] | 1157 chest X-ray images | 157 COVID-19 patients, 500 non-COVID-19 pneumonia cases, 500 normal cases
MA: accuracy = 93.47%, sensitivity = 89.95%, specificity = 94.18%, precision = 92.11%, F1-score = 90.87%
Class I: accuracy = 98.52%, sensitivity = 89.19%, specificity = 99.67%, precision = 97.06%, F1-score = 92.96%
Class II: accuracy = 91.69%, sensitivity = 95.33%, specificity = 88.77%, precision = 87.20%, F1-score = 91.08%
Class III: accuracy = 90.21%, sensitivity = 85.33%, specificity = 94.12%, precision = 92.09%, F1-score = 88.58%

Concatenation model with Xception and ResNet50V2 [86] | 15,085 chest X-ray images | 180 COVID-19 patients, 6054 non-COVID-19 pneumonia cases, 8851 normal cases
MA: accuracy = 94.27%, sensitivity = 87.31%, specificity = 93.99%
Class I: accuracy = 99.5%, sensitivity = 80.53%, specificity = 99.56%
Class II: accuracy = 91.6%, sensitivity = 87.35%, specificity = 94.32%
Class III: accuracy = 91.71%, sensitivity = 94.06%, specificity = 88.09%