Conv-Random Forest-Based IoT: A Deep Learning Model Based on CNN and Random Forest for Classification and Analysis of Valvular Heart Diseases
The review of this paper was arranged by Associate Editor Amitava Chatterjee.
Digital Object Identifier 10.1109/OJIM.2023.3320765
2 Electronics and Communication Engineering Department, Narula Institute of Technology, Kolkata 700109, India
3 Electronics Engineering Department, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India
ABSTRACT Cardiovascular diseases are growing rapidly across the world, and around 70% of the world's population suffers from them. The research work presented here covers both the classification and the analysis of heart sounds. We define a new squeeze-network-based deep learning model, the convolutional random forest (Conv-RF), for real-time valvular heart sound classification and analysis on an industrial Raspberry Pi 4B. The proposed electronic stethoscope is Internet enabled using an ESP32 and a Raspberry Pi. This Internet of Things (IoT)-based model is also low cost and portable, and it can reach distant, remote places where doctors are not available. For the classification part, multiclass classification is performed for seven types of valvular heart sounds. The random forest (RF) classifier achieved good accuracy relative to other ensemble methods on small training sets. The CNN-based SqueezeNet model achieved an accuracy of 98.65% after its hyperparameters were optimized for heart sound analysis. The proposed IoT-based model overcomes the drawbacks that the squeeze network and the RF face individually; combining the CNN-based SqueezeNet model with the RF classifier improved the classification accuracy. The SqueezeNet model plays the pivotal part of feature extraction from the heart sound, and the RF acts as the classifier in the class-prediction layer to predict class labels. Experimental results on several datasets, including the Kaggle dataset, the PhysioNet Challenge, and the PASCAL Challenge, showed that the Conv-RF model performs best. The proposed IoT-based Conv-RF model was also applied to selected subjects of different age groups and genders with a history of heart disease. The Conv-RF method scored an accuracy of 99.37 ± 0.05% on the different test datasets, with a sensitivity of 99.5 ± 0.12% and a specificity of 98.9 ± 0.03%. The proposed model is also compared with current state-of-the-art models in terms of accuracy.
INDEX TERMS Cardiovascular disorder, convolutional neural network, electronic stethoscope, ensemble
learning, PCG signal, random forest (RF), Raspberry Pi, squeeze network.
TABLE 2. Cardiac sound dataset 1 [50], [51].
TABLE 4. Cardiac sound dataset 3 [50], [51].
Preprocessing Unit: Preamplifier, notch filter, bandpass filter, and unity gain buffer.

Feature Extraction Unit:
Acoustic Features: MFCCs, Mel, chroma, contrast, and Tonnetz.

C. PROPOSED CONV-RANDOM FOREST LEARNING ALGORITHM
Let $X = \{(x_j, y_j);\, 1 \le j \le T\}$, where $T$ denotes the length of the training dataset, $x_j = [x_1, x_2, \ldots, x_N]$ is the set of $N$ feature matrices in $\mathbb{R}^{9}$ or $\mathbb{R}^{\sqrt{9} \times \sqrt{9}}$, and $y_j$ denotes the corresponding output label.
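As a concrete illustration of how one acoustic feature vector of the kind listed above (MFCCs, Mel, chroma, contrast, and Tonnetz) could be extracted from a PCG recording, a minimal sketch using librosa follows. The file name, sampling rate, coefficient counts, and time-averaging are illustrative assumptions, not the paper's exact extraction settings.

```python
import numpy as np
import librosa

def extract_acoustic_features(wav_path, sr=22050):
    """Sketch: build one feature vector from the five feature families
    named above; every numeric setting here is an assumed default."""
    y, sr = librosa.load(wav_path, sr=sr)
    stft = np.abs(librosa.stft(y))

    mfccs = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr), axis=1)
    contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sr), axis=1)
    tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr), axis=1)

    # Concatenate the time-averaged features into a single 1-D vector.
    return np.concatenate([mfccs, mel, chroma, contrast, tonnetz])

features = extract_acoustic_features("pcg_sample.wav")  # hypothetical file
print(features.shape)
```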
FIGURE 7. Schematic of the input sensor unit and computational unit used in the proposed electronic stethoscope.
The output, $\gamma_i(l)$, of the $l$th layer for the $i$th feature map is computed from the output of the earlier layer, $\gamma_i(l-1)$. For every layer $l$ in $1, \ldots, L$, the convolutions are computed to produce the $\gamma_i(l)$ for layer $l$:

$$\gamma_i(l) = \phi\!\left(B_i(l) + \sum_{j=1}^{f(l-1)} k_{ij}(l) * \gamma_j(l-1)\right)$$

where $f(l-1)$ is the number of feature maps in layer $l-1$, $k_{ij}(l)$ is the convolution kernel connecting the $j$th input map to the $i$th output map, $B_i(l)$ is the bias term, and $\phi(\cdot)$ is the activation function.
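To make the indexing in this update concrete, here is a minimal NumPy rendering of the same computation; the ReLU choice for $\phi$, the kernel sizes, and the "valid" convolution mode are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def relu(z):
    return np.maximum(z, 0.0)  # illustrative choice for the activation phi

def conv_layer(prev_maps, kernels, biases):
    """Compute gamma_i(l) = phi(B_i(l) + sum_j k_ij(l) * gamma_j(l-1)).

    prev_maps : list of f(l-1) 2-D arrays, the gamma_j(l-1)
    kernels   : kernels[i][j] is the 2-D kernel k_ij(l)
    biases    : biases[i] is the scalar bias B_i(l)
    """
    out = []
    for i in range(len(kernels)):  # one output feature map per index i
        acc = sum(convolve2d(prev_maps[j], kernels[i][j], mode="valid")
                  for j in range(len(prev_maps)))  # sum over j = 1..f(l-1)
        out.append(relu(biases[i] + acc))
    return out

# Toy sizes: 2 input maps (3x3), 4 output maps, 2x2 kernels.
rng = np.random.default_rng(0)
prev = [rng.standard_normal((3, 3)) for _ in range(2)]
ker = [[rng.standard_normal((2, 2)) for _ in range(2)] for _ in range(4)]
bias = rng.standard_normal(4)
maps = conv_layer(prev, ker, bias)
print(len(maps), maps[0].shape)  # -> 4 feature maps, each 2x2
```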
FIGURE 16. Operational process flow diagram in the proposed Conv-RF model.
past medical history. The past medical history of the volunteers is compared with the results obtained from the said stethoscope. For evaluation purposes, a score in the range of 1 to 5 is obtained.

Fig. 16 shows the operational process flow diagram of the proposed Conv-RF model, highlighting the different tensor transformations in the input convolutional layer and the reshape layer. Table 11 presents the architecture of the proposed Conv-RF model.
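To show how the two stages of this architecture compose, the sketch below pairs a SqueezeNet-style convolutional backbone (used purely as a feature extractor) with a random forest standing in for the class-prediction layer. The use of torchvision's squeezenet1_1, the pooling step, and all sizes are assumptions for illustration and do not reproduce the Table 11 architecture.

```python
import numpy as np
import torch
from torchvision.models import squeezenet1_1
from sklearn.ensemble import RandomForestClassifier

# Stage 1 (illustrative backbone): SqueezeNet convolutional features.
backbone = squeezenet1_1(weights=None).features.eval()

def embed(batch):
    """Map a batch of 3x224x224 inputs to fixed-length embeddings."""
    with torch.no_grad():
        fmaps = backbone(batch)                            # (N, 512, 13, 13)
        pooled = torch.nn.functional.adaptive_avg_pool2d(fmaps, 1)
        return pooled.flatten(1).numpy()                   # (N, 512)

# Toy data standing in for spectrogram-like inputs of 7 heart-sound classes.
X = torch.randn(32, 3, 224, 224)
y = np.random.randint(0, 7, size=32)

# Stage 2: the random forest acts as the class-prediction layer.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(embed(X), y)
print(rf.predict(embed(X[:4])))
```

Here the forest, rather than a softmax layer, emits the final label, mirroring the division of labor between feature extraction and class prediction described above.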
TABLE 10. Analysis of PCG recordings produced using the proposed stethoscope with past medical history [50], [51].
in terms of valvular heart sound analysis. Figs. 19 and 20 highlight the comparison of the RF algorithm with other ensemble algorithms and other models for the different datasets used in this article.

Fig. 21 compares the developed Conv-RF model with other CNN-based models for different datasets. The electronic stethoscopes available on the market cost around €300–€399, whereas the total expenditure incurred for the proposed stethoscope is considerably lower.
FIGURE 17. Characteristic curve of accuracy versus epoch in Conv-RF.
TABLE 12. Comparative study of the developed stethoscope with other stethoscopes.
FIGURE 21. Conv-RF model versus other models for different datasets.
TABLE 13. Runtime of the proposed Conv-RF model compared with other models.
TABLE 14. Runtime of the proposed Conv-RF model compared with other ensemble methods.
auscultation. One of the future scopes of this research is the auscultation time, which is currently close to 2 min and requires further minimization. More volunteers with clinical assessment are also needed for the statistical validation of the developed model.

VII. DISCUSSION
In the hardware development part, this work deals with the design of an Internet-enabled electronic stethoscope. The proposed electronic stethoscope is based on the combination of a Raspberry Pi and an ESP32. The signal-conditioned output goes to the NodeMCU (ESP32), which is WiFi enabled and contains a 12-bit ADC with a sampling frequency of 44.1 kHz. Finally, the converted digital PCG signal goes to the Raspberry Pi over WiFi for further signal processing and analysis. The classified heart sound is displayed on a 7-inch LCD screen attached to the Raspberry Pi and can be heard through a Bluetooth-enabled microspeaker.
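As an illustration of this ESP32-to-Raspberry-Pi data path, a minimal sketch of a receiver that the Pi side could run is given below. The plain TCP socket, the port number, and the 16-bit little-endian packing of the 12-bit ADC samples are all assumptions, since the text does not specify the transport format.

```python
import socket
import numpy as np

FS = 44100   # ADC sampling frequency stated in the text
PORT = 5005  # assumed port; the transport details are not given in the paper

# Sketch of the Raspberry Pi side: accept one ESP32 connection and read
# about 2 s of raw 16-bit words, each carrying one 12-bit ADC sample.
target = 2 * FS * 2  # bytes for ~2 s at 2 bytes per sample
buf = bytearray()
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        while len(buf) < target:
            data = conn.recv(4096)
            if not data:
                break
            buf.extend(data)

raw = np.frombuffer(bytes(buf[: len(buf) // 2 * 2]), dtype="<u2")
pcg = raw.astype(np.float32) / 4095.0  # scale 12-bit codes into [0, 1]
print(f"received {raw.size} samples ({raw.size / FS:.2f} s) from {addr}")
```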
[24] T. H. Chowdhury, K. N. Poudel, and Y. Hu, "Time-frequency analysis, denoising, compression, segmentation, and classification of PCG signals," IEEE Access, vol. 8, pp. 160882–160890, 2020.
[25] J. K. Roy, T. S. Roy, and S. C. Mukhopadhyay, "Heart sound: Detection and analytical approach towards diseases," in Modern Sensing Technologies, S. Mukhopadhyay, K. Jayasundera, and O. Postolache, Eds. Cham, Switzerland: Springer, 2019, pp. 103–145. [Online]. Available: https://doi.org/10.1007/978-3-319-99540-3_7
[26] F. Li, H. Tang, S. Shang, K. Mathiak, and F. Cong, "Classification of heart sounds using convolutional neural network," Appl. Sci., vol. 10, no. 11, p. 3956, 2020, doi: 10.3390/app10113956.
[27] J. K. Roy, T. S. Roy, N. Mandal, and O. A. Postolache, "A simple technique for heart sound detection and identification using Kalman filter in real time analysis," in Proc. Int. Symp. Sens. Instrum. IoT Era (ISSI), 2018, pp. 1–8.
[28] D. B. Springer, L. Tarassenko, and G. D. Clifford, "Support vector machine hidden semi Markov model-based heart sound segmentation," in Proc. Comput. Cardiol., 2014, pp. 625–628.
[29] A. Cheema and M. Singh, "Steps involved in heart sound analysis—A review of existing trends," Int. J. Eng. Trends Technol., vol. 4, no. 7, pp. 2921–2925, 2013.
[30] J. B. Wu, S. Zhou, Z. Wu, and X. M. Wu, "Research on the method of characteristic extraction and classification of phonocardiogram," in Proc. Int. Conf. Syst. Informat. (ICSAI), 2012, pp. 1732–1735.
[31] C. D. Papadaniil and L. J. Hadjileontiadis, "Efficient heart sound segmentation and extraction using ensemble empirical mode decomposition and kurtosis features," IEEE J. Biomed. Health Inform., vol. 18, no. 4, pp. 1138–1152, Jul. 2014.
[32] C. Liu et al., "An open-access database for the evaluation of heart sound algorithms," Physiol. Meas., vol. 37, no. 12, pp. 2181–2213, 2016. [Online]. Available: https://iopscience.iop.org/article/10.1088/0967-3334/37/12/2181
[33] M. Deng, T. Meng, J. Cao, S. Wang, J. Zhang, and H. Fan, "Heart sound classification based on improved MFCC features and convolutional recurrent neural networks," Neural Netw., vol. 130, pp. 22–32, Oct. 2020.
[34] Z. Abduh, E. A. Nehary, M. A. Wahed, and Y. M. Kadah, "Classification of heart sounds using fractional Fourier transform based mel-frequency spectral coefficients and traditional classifiers," Biomed. Signal Process. Control, vol. 57, Mar. 2020, Art. no. 101788.
[35] T. Alafif, M. Boulares, A. Barnawi, T. Alafif, H. Althobaiti, and A. Alferaidi, "Normal and abnormal heart rates recognition using transfer learning," in Proc. 12th Int. Conf. Knowl. Syst. Eng. (KSE), 2020, pp. 275–280.
[36] F. Demir, A. Sengür, V. Bajaj, and K. Polat, "Towards the classification of heart sounds based on convolutional deep neural network," Health Inf. Sci. Syst., vol. 7, p. 16, Aug. 2019.
[37] B. Xiao et al., "Follow the sound of children's heart: A deep-learning-based computer-aided pediatric CHDs diagnosis system," IEEE Internet Things J., vol. 7, no. 3, pp. 1994–2004, Mar. 2020.
[38] F. A. Khan, A. Abid, and M. S. Khan, "Automatic heart sound classification from segmented/unsegmented phonocardiogram signals using time and frequency features," Physiol. Meas., vol. 41, no. 5, 2020, Art. no. 55006.
[39] A. Raza, A. Mehmood, S. Ullah, M. Ahmad, G. S. Choi, and B.-W. On, "Heartbeat sound signal classification using deep learning," Sensors, vol. 19, no. 21, p. 4819, 2019.
[40] H. Ryu, J. Park, and H. Shin, "Classification of heart sound recordings using convolution neural network," in Proc. Comput. Cardiol. Conf. (CinC), Vancouver, BC, Canada, Sep. 2016, pp. 1153–1156.
[41] J. Rubin, R. Abreu, A. Ganguli, S. Nelaturi, I. Matei, and K. Sricharan, "Classifying heart sound recordings using deep convolutional neural networks and mel-frequency cepstral coefficients," in Proc. Comput. Cardiol. Conf. (CinC), Vancouver, BC, Canada, 2016, pp. 813–816.
[42] V. Maknickas and A. Maknickas, "Recognition of normal-abnormal phonocardiographic signals using deep convolutional neural networks and mel-frequency spectral coefficients," Physiol. Meas., vol. 38, no. 8, pp. 1671–1684, 2017.
[43] F. Li et al., "Feature extraction and classification of heart sound using 1D convolutional neural networks," EURASIP J. Adv. Signal Process., vol. 10, p. 59, Dec. 2019.
[44] J. M.-T. Wu et al., "Applying an ensemble convolutional neural network with Savitzky–Golay filter to construct a phonocardiogram prediction model," Appl. Soft Comput., vol. 78, pp. 29–40, May 2019.
[45] T. C. Yang and H. Hsieh, "Classification of acoustic physiological signals based on deep learning neural networks with augmented features," in Proc. Comput. Cardiol. Conf. (CinC), Vancouver, BC, Canada, Sep. 2016, pp. 569–572.
[46] Q. Suo et al., "Deep patient similarity learning for personalized healthcare," IEEE Trans. NanoBiosci., vol. 17, no. 3, pp. 219–227, Jul. 2018, doi: 10.1109/TNB.2018.2837622.
[47] A. Mario et al., "Cardiac conduction model for generating 12 lead ECG signals with realistic heart rate dynamics," IEEE Trans. NanoBiosci., vol. 17, no. 4, pp. 525–532, Oct. 2018.
[48] D. Li, M. Huang, X. Li, Y. Ruan, and L. Yao, "MfeCNN: Mixture feature embedding convolutional neural network for data mapping," IEEE Trans. NanoBiosci., vol. 17, no. 3, pp. 165–171, Jul. 2018, doi: 10.1109/TNB.2018.2841053.
[49] "Kaggle heartbeat sounds." kaggle.com. Accessed: Feb. 28, 2021. [Online]. Available: https://www.kaggle.com/datasets/kinguistics/heartbeat-sounds
[50] T. S. Roy, J. K. Roy, and N. Mandal, "Classifier identification using deep learning and machine learning algorithms for the detection of valvular heart diseases," Biomed. Eng. Adv., vol. 3, Jun. 2022, Art. no. 100035.
[51] T. S. Roy, J. K. Roy, and N. Mandal, "Design of ear-contactless stethoscope and improvement in the performance of deep learning based on CNN to classify the heart sound," Med. Biol. Eng. Comput., vol. 61, pp. 2417–2439, Apr. 2023. [Online]. Available: https://doi.org/10.1007/s11517-023-02827-w
[52] T. S. Roy, J. K. Roy, and N. Mandal, "Early screening of valvular heart disease prediction using CNN-based mobile network," in Proc. Int. Conf. Comput., Electr. Commun. Eng. (ICCECE), Kolkata, India, 2023, pp. 1–8, doi: 10.1109/ICCECE51049.2023.10085513.

TANMAY SINHA ROY was born in India, in 1988. He received the B.Tech. degree in instrumentation and control engineering and the M.Tech. degree in applied electronics and instrumentation engineering from the West Bengal University of Technology, Kolkata, India, in 2009 and 2011, respectively. He is currently pursuing the Ph.D. degree with the Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad, India.
He is currently an Assistant Professor with the Electrical Engineering Department, Haldia Institute of Technology, West Bengal University of Technology. His research interests include PCG signal analysis, developing systems for heart sound acquisition, instrumentation and control, and designing low-cost acoustic stethoscopes for diseased patients.