CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification
Abstract
1. Introduction
1.1. COVID-19
1.2. Pneumonia
- ResNet-18 was selected as the backbone model because it outperformed six other classic CNNs (AlexNet, VGG, ResNet-50, GoogleNet, DenseNet, and MobileNet) in the ablation experiments.
- Compared with traditional pre-trained CNNs, randomized neural networks (RNNs) improve model performance by replacing the last five layers of the tailored CNN, and their lightweight architecture avoids wasting computing resources.
- Our feature-based RNN framework, equipped with an auto-selection algorithm, identifies the most adaptive model across various domains, improving the generalizability of the model.
- Our novel CovC-ReDRNet obtains the feature representation from the tailored CNN, auto-selects the deep random vector functional link network (dRVFL) as the optimal classifier under our feature-based RNN framework, and feeds the feature representation directly to the dRVFL to construct the complete network. Based on five-fold cross-validation, the model achieved an MA sensitivity, MA specificity, MA accuracy, MA precision, and MA F1-score of 94.94%, 97.01%, 97.56%, 96.81%, and 95.84%, respectively.
- Compared with seven state-of-the-art (SOTA) methods, the best of which reached an MA accuracy of 95.57%, our proposed model achieved the highest MA accuracy (97.56%).
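The auto-selection idea in the highlights above can be sketched as follows: each candidate randomized classifier is trained on the CNN feature representation, and the candidate with the best cross-validated accuracy is retained. This is an illustrative sketch, not the authors' implementation; the `auto_select_classifier` name and the `fit`/`predict` candidate interface are our own assumptions.

```python
import numpy as np

def auto_select_classifier(features, labels, candidates, k=5):
    """Pick the candidate classifier with the best k-fold CV accuracy.

    `candidates` maps a name to a factory returning an object with
    fit(X, y) and predict(X) methods (hypothetical interface)."""
    n = len(labels)
    folds = np.array_split(np.random.permutation(n), k)
    best_name, best_acc = None, -1.0
    for name, make in candidates.items():
        accs = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            model = make()
            model.fit(features[trn], labels[trn])
            accs.append(np.mean(model.predict(features[val]) == labels[val]))
        acc = float(np.mean(accs))
        if acc > best_acc:
            best_name, best_acc = name, acc
    return best_name, best_acc
```

In the paper's pipeline, the candidates would be the shallow and deep RNN variants of Section 4.4, and the winner (the dRVFL) receives the CNN features directly.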
2. Related Work
3. Materials
3.1. Data Modality
3.2. Dataset
4. Methodology
4.1. Selected Backbone Network
4.2. Tailored CNN
4.3. Identified Feature Layer
4.4. Feature-Based RNN Framework
4.4.1. Shallow RNNs
4.4.2. Deep RNNs
4.4.3. The Proposed Feature-Based RNN Framework
4.5. The Proposed Model
Algorithm 1. The algorithm of the CovC-ReDRNet.

Algorithm: CovC-ReDRNet
Input: dataset
Output: the classification performance of the trained CovC-ReDRNet
4.6. Evaluation
5. Experiment Results and Discussions
5.1. Experimental Settings
5.2. The Performance of CovC-ReDRNet
5.3. Ablation Study
5.3.1. Superiority of ResNet-18 over Different Backbone Networks
5.3.2. Superiority of Deep RNNs over Traditional Classifiers
5.3.3. Superiority of Deep RNN over Shallow RNNs
5.4. Comparison Study
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- World Health Organization. COVID-19 Weekly Epidemiological Update, Edition 134, 16 March 2023; World Health Organization: Geneva, Switzerland, 2023.
- Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733.
- Guo, Y.-R.; Cao, Q.-D.; Hong, Z.-S.; Tan, Y.-Y.; Chen, S.-D.; Jin, H.-J.; Tan, K.-S.; Wang, D.-Y.; Yan, Y. The origin, transmission and clinical therapies on coronavirus disease 2019 (COVID-19) outbreak—An update on the status. Mil. Med. Res. 2020, 7, 11.
- Lotfi, M.; Hamblin, M.R.; Rezaei, N. COVID-19: Transmission, prevention, and potential therapeutic opportunities. Clin. Chim. Acta 2020, 508, 254–266.
- Nouvellet, P.; Bhatia, S.; Cori, A.; Ainslie, K.E.; Baguelin, M.; Bhatt, S.; Boonyasiri, A.; Brazeau, N.F.; Cattarino, L.; Cooper, L.V. Reduction in mobility and COVID-19 transmission. Nat. Commun. 2021, 12, 1090.
- Cook, T. Personal protective equipment during the coronavirus disease (COVID) 2019 pandemic—A narrative review. Anaesthesia 2020, 75, 920–927.
- Crook, H.; Raza, S.; Nowell, J.; Young, M.; Edison, P. Long covid—Mechanisms, risk factors, and management. BMJ 2021, 374, n1648.
- Ceban, F.; Ling, S.; Lui, L.M.; Lee, Y.; Gill, H.; Teopiz, K.M.; Rodrigues, N.B.; Subramaniapillai, M.; Di Vincenzo, J.D.; Cao, B. Fatigue and cognitive impairment in Post-COVID-19 Syndrome: A systematic review and meta-analysis. Brain Behav. Immun. 2022, 101, 93–135.
- Bullard, J.; Dust, K.; Funk, D.; Strong, J.E.; Alexander, D.; Garnett, L.; Boodman, C.; Bello, A.; Hedley, A.; Schiffman, Z. Predicting infectious severe acute respiratory syndrome coronavirus 2 from diagnostic samples. Clin. Infect. Dis. 2020, 71, 2663–2666.
- Tahamtan, A.; Ardebili, A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev. Mol. Diagn. 2020, 20, 453–454.
- Sheikhzadeh, E.; Eissa, S.; Ismail, A.; Zourob, M. Diagnostic techniques for COVID-19 and new developments. Talanta 2020, 220, 121392.
- Tsang, N.N.Y.; So, H.C.; Ng, K.Y.; Cowling, B.J.; Leung, G.M.; Ip, D.K.M. Diagnostic performance of different sampling approaches for SARS-CoV-2 RT-PCR testing: A systematic review and meta-analysis. Lancet Infect. Dis. 2021, 21, 1233–1245.
- Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, Z.; Liao, H.; Chen, F.; Yang, F. The role of imaging in the detection and management of COVID-19: A review. IEEE Rev. Biomed. Eng. 2020, 14, 16–29.
- Salehi, S.; Abedi, A.; Balakrishnan, S.; Gholamrezanezhad, A. Coronavirus Disease 2019 (COVID-19): A Systematic Review of Imaging Findings in 919 Patients. Am. J. Roentgenol. 2020, 215, 87–93.
- Li, Y.; Yao, L.; Li, J.; Chen, L.; Song, Y.; Cai, Z.; Yang, C. Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19. J. Med. Virol. 2020, 92, 903–908.
- Li, Y.; Xia, L. Coronavirus Disease 2019 (COVID-19): Role of Chest CT in Diagnosis and Management. Am. J. Roentgenol. 2020, 214, 1280–1286.
- Long, C.; Xu, H.; Shen, Q.; Zhang, X.; Fan, B.; Wang, C.; Zeng, B.; Li, Z.; Li, X.; Li, H. Diagnosis of the Coronavirus disease (COVID-19): rRT-PCR or CT? Eur. J. Radiol. 2020, 126, 108961.
- Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020, 296, E32–E40.
- Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for typical 2019-nCoV pneumonia: Relationship to negative RT-PCR testing. Radiology 2020, 296, E41–E45.
- Huang, P.; Liu, T.; Huang, L.; Liu, H.; Lei, M.; Xu, W.; Hu, X.; Chen, J.; Liu, B. Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion. Radiology 2020, 295, 22–23.
- Chen, D.; Jiang, X.; Hong, Y.; Wen, Z.; Wei, S.; Peng, G.; Wei, X. Can Chest CT Features Distinguish Patients With Negative From Those With Positive Initial RT-PCR Results for Coronavirus Disease (COVID-19)? Am. J. Roentgenol. 2020, 216, 66–70.
- Wolters, P.J.; Collard, H.R.; Jones, K.D. Pathogenesis of Idiopathic Pulmonary Fibrosis. Annu. Rev. Pathol. Mech. Dis. 2014, 9, 157–179.
- Marrie, T.J. Community-acquired pneumonia. Clin. Infect. Dis. 1994, 18, 501–513.
- Torres, A.; Catia, C.; Niederman, M.S.; Rosario, M.; Chalmers, J.D.; Wunderink, R.G. Pneumonia (Primer). Nat. Rev. Dis. Prim. 2021, 7, 25.
- Ruuskanen, O.; Lahti, E.; Jennings, L.C.; Murdoch, D.R. Viral pneumonia. Lancet 2011, 377, 1264–1275.
- Lodha, R.; Kabra, S.K.; Pandey, R.M. Antibiotics for community-acquired pneumonia in children. Cochrane Database Syst. Rev. 2013, 2013, CD004874.
- Zhao, D.; Yao, F.; Wang, L.; Zheng, L.; Gao, Y.; Ye, J.; Guo, F.; Zhao, H.; Gao, R. A Comparative Study on the Clinical Features of Coronavirus 2019 (COVID-19) Pneumonia With Other Pneumonias. Clin. Infect. Dis. 2020, 71, 756–761.
- Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549.
- Hemdan, E.E.-D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055.
- Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220.
- Rahman, T.; Akinbi, A.; Chowdhury, M.E.H.; Rashid, T.A.; Şengür, A.; Khandakar, A.; Islam, K.R.; Ismael, A.M. COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network. Health Inf. Sci. Syst. 2022, 10, 1.
- Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
- Zhang, X.; Zhou, J.; Sun, W.; Jha, S.-K. A Lightweight CNN Based on Transfer Learning for COVID-19 Diagnosis. Comput. Mater. Contin. 2022, 72, 1123–1137.
- Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.-K.; Ye, J.C. Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification. Med. Image Anal. 2022, 75, 102299.
- Goncharov, M.; Pisov, M.; Shevtsov, A.; Shirokikh, B.; Kurmukov, A.; Blokhin, I.; Chernina, V.; Solovev, A.; Gombolevskiy, V.; Morozov, S.; et al. CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification. Med. Image Anal. 2021, 71, 102054.
- Signoroni, A.; Savardi, M.; Benini, S.; Adami, N.; Leonardi, R.; Gibellini, P.; Vaccher, F.; Ravanelli, M.; Borghesi, A.; Maroldi, R.; et al. BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset. Med. Image Anal. 2021, 71, 102046.
- Kollias, D.; Arsenos, A.; Kollias, S. AI-MIA: COVID-19 Detection and Severity Analysis through Medical Imaging; Springer: Cham, Switzerland, 2023; pp. 677–690.
- Chieregato, M.; Frangiamore, F.; Morassi, M.; Baresi, C.; Nici, S.; Bassetti, C.; Bnà, C.; Galelli, M. A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data. Sci. Rep. 2022, 12, 4329.
- Bougourzi, F.; Distante, C.; Dornaika, F.; Taleb-Ahmed, A. CNR-IEMN-CD and CNR-IEMN-CSD Approaches for COVID-19 Detection and COVID-19 Severity Detection from 3D CT-Scans; Springer: Cham, Switzerland, 2023; pp. 593–604.
- Rustam, F.; Reshi, A.A.; Mehmood, A.; Ullah, S.; On, B.-W.; Aslam, W.; Choi, G.S. COVID-19 future forecasting using supervised machine learning models. IEEE Access 2020, 8, 101489–101499.
- An, C.; Lim, H.; Kim, D.-W.; Chang, J.H.; Choi, Y.J.; Kim, S.W. Machine learning prediction for mortality of patients diagnosed with COVID-19: A nationwide Korean cohort study. Sci. Rep. 2020, 10, 18716.
- Khan, I.U.; Aslam, N.; Aljabri, M.; Aljameel, S.S.; Kamaleldin, M.M.A.; Alshamrani, F.M.; Chrouf, S.M.B. Computational intelligence-based model for mortality rate prediction in COVID-19 patients. Int. J. Environ. Res. Public Health 2021, 18, 6429.
- Ikemura, K.; Bellin, E.; Yagi, Y.; Billett, H.; Saada, M.; Simone, K.; Stahl, L.; Szymanski, J.; Goldstein, D.; Reyes Gil, M. Using automated machine learning to predict the mortality of patients with COVID-19: Prediction model development study. J. Med. Internet Res. 2021, 23, e23458.
- Elshennawy, N.M.; Ibrahim, D.M.; Sarhan, A.M.; Arafa, M. Deep-Risk: Deep Learning-Based Mortality Risk Predictive Models for COVID-19. Diagnostics 2022, 12, 1847.
- Carrington, A.M.; Manuel, D.G.; Fieguth, P.W.; Ramsay, T.; Osmani, V.; Wernly, B.; Bennett, C.; Hawken, S.; Magwood, O.; Sheikh, Y. Deep ROC analysis and AUC as balanced average accuracy, for improved classifier selection, audit and explanation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 329–341.
- Hussain, E.; Hasan, M.; Rahman, M.A.; Lee, I.; Tamanna, T.; Parvez, M.Z. CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. Chaos Solitons Fractals 2021, 142, 110495.
- Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 2020, 6, 1122–1129.
- Beutel, J.; Kundel, H.L.; Kim, Y.; Van Metter, R.L.; Horii, S.C. Handbook of Medical Imaging; Spie Press: Bellingham, WA, USA, 2000; Volume 3.
- Kasban, H.; El-Bendary, M.; Salama, D. A comparative study of medical imaging techniques. Int. J. Inf. Sci. Intell. Syst. 2015, 4, 37–58.
- Farncombe, T.; Iniewski, K. Medical Imaging: Technology and Applications; CRC Press: Boca Raton, FL, USA, 2017.
- Kanne, J.P.; Bai, H.; Bernheim, A.; Chung, M.; Haramati, L.B.; Kallmes, D.F.; Little, B.P.; Rubin, G.; Sverzellati, N. COVID-19 imaging: What we know now and what remains unknown. Radiology 2021, 299, E262–E279.
- Chung, M.; Bernheim, A.; Mei, X.; Zhang, N.; Huang, M.; Zeng, X.; Cui, J.; Xu, W.; Yang, Y.; Fayad, Z.A. CT imaging features of 2019 novel coronavirus (2019-nCoV). Radiology 2020, 295, 202–207.
- Simpson, S.; Kay, F.U.; Abbara, S.; Bhalla, S.; Chung, J.H.; Chung, M.; Henry, T.S.; Kanne, J.P.; Kligerman, S.; Ko, J.P. Radiological Society of North America expert consensus statement on reporting chest CT findings related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA. J. Thorac. Imaging 2020, 35, 219–227.
- Patel, P. Chest X-ray (COVID-19 & Pneumonia). 2020. Available online: https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia (accessed on 4 November 2022).
- Cohen, J.P.; Morrison, P.; Dao, L.; Roth, K.; Duong, T.Q.; Ghassemi, M. COVID-19 image data collection: Prospective predictions are the future. arXiv 2020, arXiv:2006.11988.
- Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018, 172, 1122–1131.e9.
- Chung, A. Figure 1 COVID-19 Chest X-ray Dataset Initiative. Retrieved 17 April 2020.
- Wang, L.; Wong, A.; Lin, Z.; McInnis, P.; Chung, A.; Gunraj, H.; Lee, J.; Ross, M.; VanBerlo, B.; Ebadi, A. Actualmed COVID-19 Chest X-ray Dataset Initiative. 2020. Available online: https://github.com/agchung/actualmed-covid-chestxraydataset (accessed on 4 November 2022).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Telgarsky, M. Benefits of depth in neural networks. In Proceedings of the Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; pp. 1517–1539.
- Zhou, D.-X. Universality of deep convolutional neural networks. Appl. Comput. Harmon. Anal. 2020, 48, 787–794.
- Bianchini, M.; Scarselli, F. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1553–1565.
- Shrestha, A.; Mahmood, A. Review of deep learning algorithms and architectures. IEEE Access 2019, 7, 53040–53065.
- Chen, Z.; Deng, L.; Wang, B.; Li, G.; Xie, Y. A comprehensive and modularized statistical framework for gradient norm equality in deep neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 13–31.
- Allen-Zhu, Z.; Li, Y.; Song, Z. On the convergence rate of training recurrent neural networks. Adv. Neural Inf. Process. Syst. 2019, 32, 1–13.
- Antun, V.; Renna, F.; Poon, C.; Adcock, B.; Hansen, A.C. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA 2020, 117, 30088–30095.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
- Chaudhari, P.; Soatto, S. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. In Proceedings of the 2018 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 11–16 February 2018; pp. 1–10.
- Poggio, T.; Kawaguchi, K.; Liao, Q.; Miranda, B.; Rosasco, L.; Boix, X.; Hidary, J.; Mhaskar, H. Theory of deep learning III: Explaining the non-overfitting puzzle. arXiv 2017, arXiv:1801.00173.
- Zhu, Z.; Wu, J.; Yu, B.; Wu, L.; Ma, J. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. arXiv 2018, arXiv:1803.00195.
- Arora, S.; Cohen, N.; Golowich, N.; Hu, W. A convergence analysis of gradient descent for deep linear neural networks. arXiv 2018, arXiv:1810.02281.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
- Schmidt, W.F.; Kraaijveld, M.A.; Duin, R.P. Feed forward neural networks with random weights. In Proceedings of the International Conference on Pattern Recognition, The Hague, The Netherlands, 30 August–3 September 1992; p. 1.
- Pao, Y.-H.; Park, G.-H.; Sobajic, D.J. Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 1994, 6, 163–180.
- Shi, Q.; Katuwal, R.; Suganthan, P.N.; Tanveer, M. Random vector functional link neural network based ensemble deep learning. Pattern Recognit. 2021, 117, 107978.
- Wang, J.; Bao, Y.; Wen, Y.; Lu, H.; Luo, H.; Xiang, Y.; Li, X.; Liu, C.; Qian, D. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans. Med. Imaging 2020, 39, 2572–2583.
- Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Rahman, M.A.; Wang, Q.; Qi, S.; Kong, F.; Zhu, X.; Zhao, X. Identification of COVID-19 samples from chest X-ray images using deep learning: A comparison of transfer learning approaches. J. X-ray Sci. Technol. 2020, 28, 821–839.
- Apostolopoulos, I.D.; Mpesiana, T.A. COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640.
- Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology 2020, 296, E65–E71.
- Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581.
- Rahimzadeh, M.; Attar, A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform. Med. Unlocked 2020, 19, 100360.
Methodology | Tasks | Contributions | Limitations |
---|---|---|---|
Wang, et al. [28] | Diagnosis | The study developed COVID-Net as one of the earliest open-source networks for COVID-19 diagnosis. The study assembled one of the largest publicly available datasets of COVID-19-positive cases. | The study achieved limited model performance with an accuracy of 93.57%. |
Hemdan, et al. [29] | Diagnosis | The study tested seven different architectures of deep CNN models in COVIDX-Net. | The study merely implemented a binary classification task. The study achieved limited model performance with the highest accuracy being 90%. |
Narin, et al. [30] | Diagnosis | Three different datasets were tested in the experiments. | The study merely implemented a binary classification task. |
Rahman, et al. [31] | Diagnosis | Six different architectures of deep CNN models were investigated for multiclassification tasks. | The classification task mainly focused on distinguishing COVID-19 from other cardiovascular conditions on ECG traces rather than lung diseases. |
Abbas, et al. [32] | Diagnosis | The study addressed the problem of irregularities in annotated data using the DeTraC network. | The classification task achieved limited model performance with the highest accuracy being 93.1%. |
Zhang, et al. [33] | Diagnosis | The study proposed a lightweight architecture that takes only 1.06 s on average to diagnose a chest CT image. | The study merely implemented a binary classification task. |
Park, et al. [34] | Diagnosis/Severity Assessment | The multi-task model is capable of both classification and severity prediction tasks. | The classification task achieved limited model performance with the highest accuracy being 86.8%. |
Methodology | Tasks | Contributions |
---|---|---|
Park, et al. [34] | Severity Assessment/Diagnosis | The multi-task model is capable of both classification and severity prediction tasks. |
Goncharov, et al. [35] | Severity Assessment | The study obtained the most abundant spatial feature patterns, thus improving the quality and accuracy of the model classification significantly. |
Signoroni, et al. [36] | Severity Assessment | The study proposed an end-to-end multi-network, highlighting its multi-task self-attentive behavior with high robustness for all variabilities from the medical domain. |
Kollias, et al. [37] | Severity Assessment/Diagnosis | The study created a 3D database, COV19-CT-DB, consisting of chest CT scans from COVID-19 patients. The study developed a hybrid CNN-RNN model for severity classification. |
Chieregato, et al. [38] | Severity Assessment | In this study, CT patterns were captured using a 3D-CNN model, and the Boruta algorithm with SHAP values was then used for feature selection; an AUC of 94.9% was achieved. |
Bougourzi, et al. [39] | Severity Assessment/Diagnosis | The study proposed an Inception-based ensemble architecture for COVID-19 severity assessment, named CNR-IEMN-CSD. The novel network ranked third in the second COV19D competition, demonstrating an improvement of 6.81% over the baseline results. |
Methodology | Task | Contribution |
---|---|---|
Rustam, et al. [40] | Prognosis | The study applied four traditional machine learning forecasting models, namely linear regression, least absolute shrinkage and selection operator (LASSO), support vector machine (SVM), and exponential smoothing (ES), to predict the risk level of COVID-19 spread. |
An, et al. [41] | Prognosis | The study investigated LASSO, SVM, random forest (RF), and K-nearest neighbor (KNN) in order to predict mortality and thus achieve accurate prognostic predictions to triage patients effectively. |
Khan, et al. [42] | Prognosis | The study examined RF, KNN, decision tree (DT), logistic regression (LR), extreme gradient boosting (XGBoost), and deep learning networks to forecast mortality in COVID-19 cases. The models were trained on confirmed COVID-19 patients from 146 countries. |
Ikemura, et al. [43] | Prognosis | The study used an automated machine learning (autoML) framework to evaluate 20 machine learning models and generate the best ensemble model based on 48 variables, recording an AUPRC of 80.7%. The study identified critical variables associated with mortality and accurately forecasted the survival of COVID-19 patients. |
Elshennawy, et al. [44] | Prognosis | The study developed three architectures, a basic CNN-based (named CV-CNN) model, a hybrid CNN combining a long short-term memory (LSTM) mechanism with a CNN model (named CV-LSTM + CNN), and a hybrid model trained using transformed images (named IMG-CNN). In particular, the average accuracy of the IMG-CNN prognostic model reached 94.14%. |
Dataset | COVID-19 | Non-COVID-19 Pneumonia | Normal | Total
---|---|---|---|---
Training | 460 | 3418 | 1266 | 5144
Test | 116 | 855 | 317 | 1288
Total | 576 | 4273 | 1583 | 6432
Operation Layers | Property of Layers | Number of Channels | Size of Filter | Number of Filters | Stride Value | Size of Padding | Size of Output |
---|---|---|---|---|---|---|---|
Image input | - | - | - | - | - | - | 224 × 224 × 3 |
conv1 | Convolution | 3 | 7 × 7 | 64 | 2 × 2 | 3 × 3 × 3 × 3 | 112 × 112 × 64 |
pool1 | Max Pooling | 64 | 3 × 3 | - | 2 × 2 | 1 × 1 × 1 × 1 | 56 × 56 × 64 |
conv2a_branch2a | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64 |
conv2a_branch2b | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64 |
conv2a | Addition | 64 | - | - | - | - | 56 × 56 × 64 |
Add output of two branches element-wise | |||||||
conv2b_branch2a | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64 |
conv2b_branch2b | Convolution | 64 | 3 × 3 | 64 | 1 × 1 | 1 × 1 × 1 × 1 | 56 × 56 × 64 |
conv2b | Addition | 64 | - | - | - | - | 56 × 56 × 64 |
Add output of two branches element-wise | |||||||
conv3a_branch1 | Convolution | 64 | 1 × 1 | 128 | 2 × 2 | 0 × 0 × 0 × 0 | 28 × 28 × 128 |
conv3a_branch2a | Convolution | 64 | 3 × 3 | 128 | 2 × 2 | 1 × 1 × 1 × 1 | 28 × 28 × 128 |
conv3a_branch2b | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128 |
conv3a | Addition | 128 | - | - | - | - | 28 × 28 × 128 |
Add output of two branches element-wise | |||||||
conv3b_branch2a | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128 |
conv3b_branch2b | Convolution | 128 | 3 × 3 | 128 | 1 × 1 | 1 × 1 × 1 × 1 | 28 × 28 × 128 |
conv3b | Addition | 128 | - | - | - | - | 28 × 28 × 128 |
Add output of two branches element-wise | |||||||
conv4a_branch1 | Convolution | 128 | 1 × 1 | 256 | 2 × 2 | 0 × 0 × 0 × 0 | 14 × 14 × 256 |
conv4a_branch2a | Convolution | 128 | 3 × 3 | 256 | 2 × 2 | 1 × 1 × 1 × 1 | 14 × 14 × 256 |
conv4a_branch2b | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256 |
conv4a | Addition | 256 | - | - | - | - | 14 × 14 × 256 |
Add output of two branches element-wise | |||||||
conv4b_branch2a | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256 |
conv4b_branch2b | Convolution | 256 | 3 × 3 | 256 | 1 × 1 | 1 × 1 × 1 × 1 | 14 × 14 × 256 |
conv4b | Addition | 256 | - | - | - | - | 14 × 14 × 256 |
Add output of two branches element-wise | |||||||
conv5a_branch1 | Convolution | 256 | 1 × 1 | 512 | 2 × 2 | 0 × 0 × 0 × 0 | 7 × 7 × 512 |
conv5a_branch2a | Convolution | 256 | 3 × 3 | 512 | 2 × 2 | 1 × 1 × 1 × 1 | 7 × 7 × 512 |
conv5a_branch2b | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512 |
conv5a | Addition | 512 | - | - | - | - | 7 × 7 × 512 |
Add output of two branches element-wise | |||||||
conv5b_branch2a | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512 |
conv5b_branch2b | Convolution | 512 | 3 × 3 | 512 | 1 × 1 | 1 × 1 × 1 × 1 | 7 × 7 × 512 |
conv5b | Addition | 512 | - | - | - | - | 7 × 7 × 512 |
Add output of two branches element-wise | |||||||
pool5 | Global Average Pooling | 512 | - | - | - | - | 1 × 1 × 512 |
fc128 | Fully Connected | 512 | - | - | - | - | 1 × 1 × 128 |
fc3 | Fully Connected | 128 | - | - | - | - | 1 × 1 × 3 |
softmax_out | Softmax | 3 | - | - | - | - | 1 × 1 × 3 |
Classification Output | 3 | - | - | - | - | 1 × 1 × 3 |
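The output sizes in the table above follow the standard convolution arithmetic, floor((n + 2p − k)/s) + 1 for input size n, kernel k, stride s, and padding p. A minimal helper (our own, not from the paper) can be used to check the tabulated dimensions:

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2*p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# conv1: 224x224 input, 7x7 filter, stride 2, padding 3 -> 112x112
# pool1: 112x112 input, 3x3 window, stride 2, padding 1 -> 56x56
# conv3a_branch1: 56x56 input, 1x1 filter, stride 2, no padding -> 28x28
```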
Symbol | Meaning |
---|---|
(x_i, y_i) | The given dataset of the i-th sample |
d | The input dimension |
m | The output dimension |
X | The original input matrix |
Y | The ground-truth label matrix |
H | The output matrix of the hidden layer |
H† | The pseudo-inverse matrix of H |
σ(·) | The sigmoid function |
w_j | The weights of the j-th hidden node |
b_j | The bias of the j-th hidden node |
β | The output weights |
N | The number of hidden nodes |
Ŷ | The final output |
b_o | The output biases of the SNN |
D | The input of the output layer |
L | The number of hidden layers |
c | The number of categories |
TN | The true-negative value according to the confusion matrix |
TP | The true-positive value according to the confusion matrix |
FP | The false-positive value according to the confusion matrix |
FN | The false-negative value according to the confusion matrix |
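With the symbols above, an RNN of the ELM/RVFL family is trained in closed form: the hidden weights w_j and biases b_j are drawn at random and frozen, the hidden output matrix H = σ(XW + b) is computed, and the output weights are solved as β = H†Y instead of by gradient descent. A minimal NumPy sketch (function names and the one-hot label convention are our own assumptions, not the paper's implementation):

```python
import numpy as np

def train_rnn_classifier(X, Y, n_hidden=400, seed=0):
    """Closed-form training of a single-hidden-layer randomized network.

    X: (samples, d) input matrix; Y: (samples, c) one-hot label matrix.
    Returns the frozen random parameters and the solved output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random w_j, never trained
    b = rng.standard_normal(n_hidden)                # random b_j, never trained
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # H = sigma(XW + b)
    beta = np.linalg.pinv(H) @ Y                     # beta = H^+ Y (closed form)
    return W, b, beta

def predict(X, W, b, beta):
    """Forward pass: argmax over the c output scores."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)
```

A deep variant (dRVFL) stacks several such random hidden layers and concatenates their outputs before the same pseudo-inverse step; only the final output weights are ever solved, which is what makes these networks lightweight to train.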
Hyper-Parameter | Value |
---|---|
Mini-batch size | 10 |
Max epoch | 4 |
Learning rate | |
Number of hidden nodes | 400 |
Number of hidden layers | 4 |
Five-fold Cross-Validation | CovC-ReDRNet (Ours) |
---|---|
Fold 1 | 97.62% |
Fold 2 | 97.82% |
Fold 3 | 97.20% |
Fold 4 | 97.57% |
Fold 5 | 97.62% |
Average | 97.56% |
-- | Sensitivity | Specificity | Accuracy | Precision | F1-Score |
---|---|---|---|---|---|
COVID-19 | 95.31% | 99.85% | 99.44% | 98.40% | 96.82% |
non-COVID-19 Pneumonia | 91.09% | 98.68% | 96.81% | 95.75% | 93.36% |
Normal | 98.43% | 92.50% | 96.44% | 96.29% | 97.35% |
MA | 94.94% | 97.01% | 97.56% | 96.81% | 95.84% |
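The MA row above is the macro average, i.e., the unweighted mean of the per-class one-vs-rest metrics. A minimal sketch of how such figures are derived from a c × c confusion matrix (the function name and dictionary keys are our own):

```python
import numpy as np

def macro_metrics(cm):
    """Per-class one-vs-rest metrics and their macro averages (MA) from a
    confusion matrix cm[true, pred]:
      sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
      accuracy = (TP+TN)/total, precision = TP/(TP+FP),
      F1 = 2*precision*sensitivity/(precision+sensitivity)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp   # missed positives per class
    fp = cm.sum(axis=0) - tp   # false alarms per class
    tn = total - tp - fn - fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / total
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return {"MA sensitivity": sens.mean(), "MA specificity": spec.mean(),
            "MA accuracy": acc.mean(), "MA precision": prec.mean(),
            "MA F1-score": f1.mean()}
```

Because each class contributes equally to the average, MA metrics are not dominated by the majority class, which matters here given the imbalance between the COVID-19 and non-COVID-19 pneumonia categories.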
Backbone | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
---|---|---|---|---|---|---|
AlexNet | 93.10% | 92.27% | 95.96% | 97.46% | 97.62% | 95.28% |
VGG | 94.82% | 96.12% | 91.15% | 88.95% | 90.56% | 92.32% |
GoogleNet | 97.04% | 96.89% | 97.00% | 96.42% | 96.22% | 96.71% |
DenseNet | 97.72% | 97.10% | 96.94% | 97.77% | 96.84% | 97.27% |
MobileNet | 97.10% | 96.84% | 97.30% | 97.05% | 96.79% | 97.02% |
ResNet-18 (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56% |
Category | Backbone | Sensitivity | Specificity | Accuracy | Precision | F1-Score |
---|---|---|---|---|---|---|
COVID-19 | AlexNet | 74.96% | 99.73% | 97.51% | 96.14% | 82.17% |
VGG | 36.00% | 99.86% | 94.14% | NA | NA | |
GoogleNet | 94.10% | 99.57% | 99.08% | 95.64% | 94.84% | |
DenseNet | 95.48% | 99.80% | 99.41% | 97.87% | 96.66% | |
MobileNet | 91.85% | 99.83% | 99.11% | 98.15% | 94.88% | |
ResNet-18 (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82% | |
non-COVID-19 Pneumonia | AlexNet | 88.94% | 97.09% | 95.09% | 90.91% | 89.90%
VGG | 86.03% | 94.43% | 92.37% | 84.00% | 84.87% | 
GoogleNet | 90.27% | 97.57% | 95.77% | 92.46% | 91.32% | 
DenseNet | 90.34% | 98.29% | 96.33% | 94.57% | 92.39% | 
MobileNet | 89.33% | 98.52% | 96.25% | 95.18% | 92.14% | 
ResNet-18 (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36% | 
Normal | AlexNet | 96.82% | 86.19% | 93.25% | 93.40% | 95.05%
VGG | 96.47% | 78.54% | 90.45% | 90.01% | 93.10% | 
GoogleNet | 96.98% | 91.94% | 95.29% | 95.97% | 96.47% | 
DenseNet | 98.03% | 92.22% | 96.08% | 96.15% | 97.08% | 
MobileNet | 98.32% | 90.46% | 95.68% | 95.33% | 96.80% | 
ResNet-18 (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
Architecture | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
---|---|---|---|---|---|---|
ResNet-50 | 96.37% | 97.15% | 96.74% | 96.89% | 96.94% | 96.82% |
ResNet-18 (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56% |
Category | Backbone | Sensitivity | Specificity | Accuracy | Precision | F1-Score |
---|---|---|---|---|---|---|
COVID-19 | ResNet-50 | 94.62% | 99.85% | 99.38% | 98.39% | 96.46% |
ResNet-18 (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82% | |
non-COVID-19 Pneumonia | ResNet-50 | 88.38% | 98.21% | 95.79% | 94.18% | 91.17%
ResNet-18 (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36% | 
Normal | ResNet-50 | 97.85% | 90.23% | 95.29% | 95.20% | 96.50%
ResNet-18 (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
Technologies | Backbone | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
---|---|---|---|---|---|---|---|
Traditional classifier | AlexNet | 89.68% | 89.52% | 94.87% | 97.05% | 94.87% | 93.20% |
ELM | 79.61% | 77.64% | 93.47% | 96.38% | 95.91% | 88.60% | |
SNN | 77.64% | 87.65% | 93.73% | 96.95% | 96.74% | 90.54% | |
RVFL | 89.62% | 89.57% | 92.54% | 96.95% | 96.48% | 93.03% | |
dRVFL | 93.10% | 92.27% | 95.96% | 97.46% | 97.62% | 95.28% | |
Traditional classifier | VGG | 83.73% | 94.61% | 90.48% | 88.17% | 88.48% | 89.10% |
ELM | 92.07% | 87.36% | 77.59% | 77.64% | 78.21% | 82.57% | |
SNN | 91.56% | 88.66% | 77.59% | 77.64% | 78.94% | 82.88% | |
RVFL | 90.11% | 92.54% | 91.36% | 88.90% | 89.16% | 90.41% | |
dRVFL | 94.82% | 96.12% | 91.15% | 88.95% | 90.56% | 92.32% | |
Traditional classifier | GoogleNet | 96.47% | 96.37% | 96.58% | 96.32% | 94.67% | 96.08% |
ELM | 95.95% | 95.60% | 95.86% | 95.80% | 93.74% | 95.39% | |
SNN | 96.26% | 95.86% | 96.27% | 95.90% | 94.82% | 95.82% | |
RVFL | 96.58% | 96.06% | 96.74% | 96.06% | 95.03% | 96.09% | |
dRVFL | 97.04% | 96.89% | 97.00% | 96.42% | 96.22% | 96.71% | |
Traditional classifier | DenseNet | 97.51% | 97.15% | 96.89% | 97.77% | 96.32% | 97.13% |
ELM | 97.46% | 97.35% | 97.25% | 97.88% | 96.53% | 97.29% | |
SNN | 97.92% | 97.30% | 97.20% | 97.41% | 96.43% | 97.25% | |
RVFL | 97.92% | 97.51% | 97.72% | 98.14% | 96.68% | 97.27% | |
dRVFL | 97.72% | 97.10% | 96.94% | 97.77% | 96.84% | 97.27% | |
Traditional classifier | MobileNet | 96.37% | 95.86% | 95.54% | 95.96% | 95.45% | 95.83% |
ELM | 96.47% | 96.01% | 96.32% | 96.17% | 96.17% | 96.23% | |
SNN | 96.73% | 96.06% | 96.42% | 95.86% | 96.33% | 96.28% | |
RVFL | 97.20% | 96.58% | 97.04% | 96.79% | 96.89% | 96.90% | |
dRVFL | 97.10% | 96.84% | 97.30% | 97.05% | 96.79% | 97.02% | |
Traditional classifier | ResNet-18 | 97.00% | 97.56% | 96.99% | 97.67% | 97.00% | 97.24% |
ELM | 96.53% | 96.37% | 96.58% | 96.84% | 96.27% | 96.52% | |
SNN | 96.01% | 96.58% | 96.58% | 97.10% | 96.74% | 96.60% | |
RVFL | 97.00% | 97.77% | 97.15% | 97.62% | 97.31% | 97.37% | |
dRVFL (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56% |
RNN | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
---|---|---|---|---|---|---|
ELM | 96.53% | 96.37% | 96.58% | 96.84% | 96.27% | 96.52% |
SNN | 96.01% | 96.58% | 96.58% | 97.10% | 96.74% | 96.60% |
RVFL | 97.00% | 97.77% | 97.15% | 97.62% | 97.31% | 97.37% |
dELM | 77.64% | 77.59% | 77.62% | 77.62% | 77.64% | 77.62% |
dSNN | 77.64% | 77.59% | 77.62% | 77.62% | 77.64% | 77.62% |
dRVFL (Ours) | 97.62% | 97.82% | 97.20% | 97.57% | 97.62% | 97.56% |
Category | RNN | Sensitivity | Specificity | Accuracy | Precision | F1-Score |
---|---|---|---|---|---|---|
COVID-19 | ELM | 88.37% | 99.61% | 98.60% | 95.68% | 91.87% |
SNN | 90.63% | 99.66% | 98.85% | 96.31% | 93.36% | |
RVFL | 94.79% | 99.80% | 99.35% | 97.88% | 96.29% | |
dELM | 0% | 100% | 91.05% | NA | NA | |
dSNN | 0% | 100% | 91.05% | NA | NA | |
dRVFL (Ours) | 95.31% | 99.85% | 99.44% | 98.40% | 96.82% | |
non-COVID-19 Pneumonia | ELM | 89.51% | 97.96% | 95.88% | 93.48% | 91.45%
SNN | 88.76% | 98.04% | 95.76% | 93.67% | 91.14% | 
RVFL | 89.77% | 98.74% | 96.53% | 95.89% | 92.72% | 
dELM | 100% | 0% | 66.43% | 66.43% | 79.83% | 
dSNN | 100% | 0% | 66.43% | 66.43% | 79.83% | 
dRVFL (Ours) | 91.09% | 98.68% | 96.81% | 95.75% | 93.36% | 
Normal | ELM | 97.59% | 90.09% | 95.07% | 95.12% | 96.34%
SNN | 97.75% | 90.14% | 95.20% | 95.15% | 96.43% | 
RVFL | 98.55% | 91.62% | 96.22% | 95.88% | 97.20% | 
dELM | 0% | 100% | 75.39% | NA | NA | 
dSNN | 0% | 100% | 75.39% | NA | NA | 
dRVFL (Ours) | 98.43% | 92.50% | 96.44% | 96.29% | 97.35%
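The 0%/100% sensitivity–specificity pattern of dELM and dSNN indicates a collapsed model that assigns every image to a single class. A constant predictor's one-vs-rest accuracy equals the predicted class's prevalence, and 1 minus the prevalence for every other class; with the class sizes from the dataset description this reproduces the 91.05%, 75.39%, and 66.43% accuracies above almost exactly (small deviations come from per-fold rounding):

```python
# Class sizes taken from the dataset description (6432 chest X-ray images).
counts = {"COVID-19": 576, "non-COVID-19 pneumonia": 4273, "normal": 1583}
total = sum(counts.values())
predicted = "non-COVID-19 pneumonia"   # the majority class

for cls, n in counts.items():
    # Predicted class -> its prevalence; any other class -> 1 - its prevalence.
    acc = n / total if cls == predicted else 1 - n / total
    print(f"{cls}: {acc:.2%}")
```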
(a)
Method | Sample Size | Category Distribution | MA | Class I: COVID-19 | Class II: Non-COVID-19 Pneumonia | Class III: Normal |
---|---|---|---|---|---|---|
CovC-ReDRNet (Ours) | 6432 chest X-ray images | 576 COVID-19 patients 4273 non-COVID-19 pneumonia cases 1583 normal cases | MA accuracy = 97.56% MA sensitivity = 94.94% MA specificity = 97.01% MA precision = 96.81% MA F1-score = 95.84% | Accuracy = 99.44% Sensitivity = 95.31% Specificity = 99.85% Precision = 98.40% F1-score = 96.82% | Accuracy = 96.81% Sensitivity = 91.09% Specificity = 98.68% Precision = 95.75% F1-score = 93.36% | Accuracy = 96.44% Sensitivity = 98.43% Specificity = 92.50% Precision = 96.29% F1-score = 97.35%
ResNet-18 with location-attention mechanism [47] | 618 chest CT images | 219 COVID-19 patients 224 IAVP cases 175 normal cases | MA accuracy = 91.11% MA sensitivity = 86.67% MA specificity = 93.33% MA precision = 86.85% MA F1-score = 86.71% | Accuracy = 88.89% Sensitivity = 86.7% Specificity = 90.00% Precision = 81.3% F1-score = 83.9% | Accuracy = 90.00% Sensitivity = 83.3% Specificity = 93.33% Precision = 86.2% F1-score = 84.7% | Accuracy = 94.44% Sensitivity = 90.0% Specificity = 96.67% Precision = 93.1% F1-score = 91.5% |
3D-ResNets with the prior-attention mechanism [81] | 4657 chest CT images | 1315 COVID-19 patients 2406 interstitial lung disease (ILD) cases 936 normal cases | MA accuracy = 91.40% MA sensitivity = 86.13% MA specificity = 93.20% MA precision = 84.70% MA F1-score = 85.20% | Accuracy = 93.3% Sensitivity = 87.6% Specificity = 95.5% Precision = 88.4% F1-score = 87.8% | Accuracy = 89.4% Sensitivity = 88.5% Specificity = 90.6% Precision = 91.9% F1-score = 90.2% | Accuracy = 91.5% Sensitivity = 82.3% Specificity = 93.5% Precision = 73.8% F1-score = 77.6% |
Pre-trained VGG-19 [82] | 860 chest X-ray images | 260 COVID-19 patients 300 non-COVID-19 pneumonia cases 300 normal cases | MA accuracy = 92.86% MA sensitivity = 89.67% MA specificity = 94.52% MA precision = 90.83% MA F1-score = 89.65% | Accuracy = 97.14% Sensitivity = 95.00% Specificity = 98.00% Precision = 95.00% F1-score = 95.00% | Accuracy = 91.43% Sensitivity = 78.00% Specificity = 98.89% Precision = 97.50% F1-score = 86.60% | Accuracy = 90.00% Sensitivity = 96.00% Specificity = 86.67% Precision = 80.00% F1-score = 87.30% |
(b)
Method | Sample Size | Category Distribution | MA | Class I: COVID-19 | Class II: Non-COVID-19 Pneumonia | Class III: Normal/Other Lung Disease |
---|---|---|---|---|---|---|
Pre-trained VGG-19 [83] | 1428 chest X-ray images | 224 COVID-19 patients 700 bacterial pneumonia cases 504 normal cases | MA accuracy = 95.57% MA sensitivity = 91.66% MA specificity = 96.28% MA precision = 92.09% MA F1-score = 91.86% | Accuracy = 97.58% Sensitivity = 88.41% Specificity = 98.69% Precision = 89.05% F1-score = 88.73% | Accuracy = 94.87% Sensitivity = 91.27% Specificity = 96.84% Precision = 94.07% F1-score = 92.65% | Accuracy = 94.27% Sensitivity = 95.29% Specificity = 93.3% Precision = 93.16% F1-score = 94.21%
COVNet (ResNet-50-based) [84] | 4352 chest CT images | 1292 COVID-19 patients 1735 community-acquired pneumonia (CAP) cases 1325 non-pneumonia lung disease cases | MA sensitivity = 90.33% MA specificity = 94.67% | Sensitivity = 90% Specificity = 96% | Sensitivity = 87% Specificity = 92% | Sensitivity = 94% Specificity = 96%
CoroNet (Xception-based) [85] | 1157 chest X-ray images | 157 COVID-19 patients 500 non-COVID-19 pneumonia cases 500 normal cases | MA accuracy = 93.47% MA sensitivity = 89.95% MA specificity = 94.18% MA precision = 92.11% MA F1-score = 90.87% | Accuracy = 98.52% Sensitivity = 89.19% Specificity = 99.67% Precision = 97.06% F1-score = 92.96% | Accuracy = 91.69% Sensitivity = 95.33% Specificity = 88.77% Precision = 87.20% F1-score = 91.08% | Accuracy = 90.21% Sensitivity = 85.33% Specificity = 94.12% Precision = 92.09% F1-score = 88.58% |
Concatenation model with Xception and ResNet50V2 [86] | 15,085 chest X-ray images | 180 COVID-19 patients 6054 non-COVID-19 pneumonia cases 8851 normal cases | MA accuracy = 94.27% MA sensitivity = 87.31% MA specificity = 93.99% | Accuracy = 99.5% Sensitivity = 80.53% Specificity = 99.56% | Accuracy = 91.6% Sensitivity = 87.35% Specificity = 94.32% | Accuracy = 91.71% Sensitivity = 94.06% Specificity = 88.09% |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhu, H.; Zhu, Z.; Wang, S.; Zhang, Y. CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification. Mach. Learn. Knowl. Extr. 2023, 5, 684-712. https://doi.org/10.3390/make5030037