Article

mmWave-RM: A Respiration Monitoring and Pattern Classification System Based on mmWave Radar

1 College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
2 Gansu Province Internet of Things Engineering Research Center, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4315; https://doi.org/10.3390/s24134315
Submission received: 30 April 2024 / Revised: 11 June 2024 / Accepted: 1 July 2024 / Published: 2 July 2024
(This article belongs to the Special Issue AI-Based Sensing and Analysis on Healthcare Applications)

Abstract

Breathing is one of the body's most basic functions, and abnormal breathing can indicate underlying cardiopulmonary problems. Monitoring respiratory abnormalities can help with early detection and reduce the risk of cardiopulmonary diseases. In this study, a 77 GHz frequency-modulated continuous wave (FMCW) millimetre-wave (mmWave) radar was used to detect different types of respiratory signals from the human body in a non-contact manner for respiratory monitoring (RM). To address the problem of everyday environmental noise interfering with the recognition of different breathing patterns, the system processes the breathing signals captured by the millimetre-wave radar in two stages. Firstly, we filtered out most of the static noise using a signal superposition method and designed an elliptic filter to obtain a more accurate picture of the breathing waveforms between 0.1 Hz and 0.5 Hz. Secondly, combined with the histogram of oriented gradient (HOG) feature extraction algorithm, K-nearest neighbours (KNN), a convolutional neural network (CNN), and a HOG support vector machine (G-SVM) were used to classify four breathing patterns, namely, normal breathing, slow and deep breathing, quick breathing, and meningitic breathing. The overall accuracy reached 94.75%. Therefore, this study effectively supports daily medical monitoring.

1. Introduction

Respiration is an essential life-sustaining process, regulated by the exchange of external and internal respiration, as well as by neural and metabolic mechanisms for gas exchange and acid–base balance. The central nervous system controls human respiration [1]. Numerous factors influence respiration, including oxygen and carbon dioxide levels, acid–base balance, and emotional and physical activity. Understanding the background and fundamentals of respiration is essential for studying and monitoring respiration-related health states. The automatic regulation of respiratory rate and depth adjusts to the body’s needs and environment, with normal or abnormal breathing reflecting cardiorespiratory health and well-being. An abnormal respiratory state may indicate the presence of a disease [2]. When a person’s respiratory value significantly deviates from the normal range, there may be a potential respiratory disease. Therefore, daily monitoring of respiratory status provides valuable insights into a person’s health and well-being.
Respiratory diseases often manifest through abnormalities in respiratory depth, rate, and rhythm, creating a range of distinct respiratory states that can indicate illness [3]. However, in today’s work environment, a sedentary lifestyle has become the norm for many. Sitting limits the expansion of the chest and lungs, potentially leading to reduced lung function and an elevated risk of chronic respiratory conditions. Consequently, there is growing concern for the respiratory health of those working in offices. Traditional monitoring methods typically involve wearable devices, which can be uncomfortable for daily use and are often forgotten, thus compromising continuous health monitoring. In response to these challenges, researchers have begun exploring non-contact respiratory detection methods, such as wireless sensing. Among these, non-contact radar sensors stand out as they eliminate the need for physical contact with the subject, offering an enhanced user experience and enabling more flexible, round-the-clock human health monitoring. This approach has garnered increasing attention as a viable alternative to traditional methods [4].
Although using millimetre-wave radar for respiratory monitoring and classification has non-contact and high-resolution advantages, it still faces many challenges. Firstly, the respiratory signals from the millimetre-wave radar are susceptible to interference from environmental noise and multipath effects. Although existing studies have used filtering and signal-processing techniques to suppress noise, the effect is limited in the case of severe multipath interference [5]. Second, most studies use statistical features and traditional machine learning algorithms (e.g., support vector machine) for respiratory pattern classification, which perform poorly when dealing with complex respiratory patterns [6]. In 2022, He et al. used ultra-wideband (UWB) radar sensors for non-contact respiratory pattern recognition and developed a classification method to incorporate random forests, but the classification accuracy was insufficient [7]. In addition, most existing studies have focused on respiration monitoring under single conditions, with less discussion on factors such as different environments, distances, and angles. Alizadeh et al. used a millimetre-wave radar for respiration and heart rate detection. Although they validated the system’s effectiveness in static environments, there was a lack of evaluation of the system’s performance in complex environments [8]. These challenges limit the wide application of millimetre wave radar technology in respiratory monitoring and classification. To this end, this paper proposes an improved millimetre-wave radar respiration monitoring method that eliminates static noise through signal superposition, employs image processing techniques for respiration pattern classification, and systematically discusses the recognition effects in different environments, distances, and angles, thus effectively addressing the shortcomings of existing techniques. The main contributions of this study are summarised as follows:
(1) This paper uses multiple antenna superposition to suppress static noise, enhance the submerged vital sign signals, and achieve effective extraction of respiratory signals under different influencing factors.
(2) This paper systematically discusses the effects of different factors on monitoring different respiratory patterns and the extent to which they are affected by different distances and other factors.
(3) The mmWave-RM system developed in this paper captures human breathing in a non-contact manner and uses 2D waveform images to classify different breathing patterns in a daily office environment. Extensive experiments show that the G-SVM is effective, achieving an accuracy of 94.75%.

2. Related Work

To evaluate the health status of the human respiratory system, the traditional method employed by doctors involves auscultating breath sounds [9]. These methods are inexpensive and easy to operate but suffer from subjective errors and tend to result in misdiagnosis, particularly when performed by inexperienced auscultators. In contrast, with the advancement of vital-signal detection technology, the advent of devices such as cardiac tracers [10], electrocardiographs [11,12], and piezoresistive respiratory sensing systems with wearable housings for respiratory measurements [13,14] has provided more advanced tools for the medical field. With increasing public awareness of health, smart bracelets and watches on the market are garnering increasing attention. The inclusion of a respiratory health research function in the latest smart bracelet launched by HUAWEI highlights the growing emphasis that companies place on respiratory status monitoring.
Existing health monitoring methods are primarily categorised into medical imaging and wearable device [15,16] monitoring. Medical imaging techniques, such as computed tomography (CT) scanning [17] and X-ray imaging [18], provide high-resolution images, but they are expensive, bulky, and expose patients to considerable radiation, making them unsuitable for routine personal monitoring. These techniques are typically used only after a person has developed symptoms of discomfort and therefore offer little predictive value. Wearable devices generally require physical contact, may cause discomfort when worn for long periods, and are more limited in some scenarios.
The research trend in recent years has gradually shifted towards non-contact respiratory monitoring techniques. In non-contact monitoring, Wi-Fi sensing technology [19] is one of the main tools. Respiratory rate measurements can be successfully achieved via peak detection, channel state information (CSI) amplitude, CSI phase, and received signal strength (RSS). However, there are some limitations in their sensitivity and measurement accuracy. Kontou et al. [20] used an 80 MHz Wi-Fi device to collect fine-grained wireless channel state information (CSI) with a simple, shallow artificial neural network for respiratory frequency detection. Guo et al. [21] introduced BreatheBand, a fine-grained and robust respiratory monitoring system based on commercial Wi-Fi signals. Subcarrier selection and independent component analysis were used to extract respiratory components from raw CSI signals. However, CSI and RSS are not sensitive enough to detect subtle respiratory motion variations, and the measurement accuracy decreases significantly when the subject’s position is beyond a specified distance.
In contrast, radar technology performs much better in fine-grained sensing problems. FMCW radars [22,23] provide wider-range, higher-resolution detection than other radars through frequency modulation and can capture micromovements such as breathing [24] and heartbeat movements. Many current studies use high-frequency band FMCW millimetre wave radars [25] for vital signal detection. Therefore, this paper aims to design a system capable of classifying four respiratory patterns based on 77 GHz FMCW millimetre wave radar.
In previous studies, Miao et al. [26] developed an SVM-based classifier combining three features to classify four respiratory patterns with up to 93% accuracy in 2017. Feng et al. [27] implemented a K-nearest neighbour (KNN)-based classifier on an FPGA to classify six respiratory patterns, achieving an overall accuracy of 73%. In [28], a CNN and an SVM were combined to solve the classification problem of breath sounds, and the best classification accuracy reached 83%. In 2023, Hong et al. [29] proposed a 1D-SNN-based human breathing pattern detection model and a merged segmentation algorithm to classify multiple breathing patterns.
In this paper, the mmWave-RM system, based on FMCW radar combined with machine learning, is proposed to classify human respiratory patterns, and the effects of multiple factors on respiratory patterns are considered. Unlike previous studies that used statistical features for classification [30], this system uses image processing to accurately classify four respiratory modes: normal breathing, slow and deep breathing, quick breathing, and meningitic breathing. The mmWave-RM system performs more accurately in respiratory mode classification, has a wide range of application prospects, and can be used for daily detection and medical assistance.

3. Proposed System

This section presents an overview of the general architecture of the mmWave-RM system and a detailed description of the system's modules and their workflow.

3.1. Respiratory Signal Modelling

The basic principle of millimetre-wave radar detection of human respiration is the sensing of the small vibrations caused by chest undulation. The radar transmits electromagnetic waves of a specific waveform that illuminate the moving chest wall. The chest wall reflects these waves, and the echoes are Doppler-modulated by its motion; demodulating the echoes yields the thoracic displacement information that contains the respiratory and heartbeat parameters. The detection principle and the signal-processing process are shown in Figure 1.
In an FMCW radar, the synthesiser generates a linear frequency-modulated pulse, called a chirp, which is emitted by the transmitting antenna (TX antenna). A chirp is a single transmission, and the transmitted signal of the nth chirp can be expressed as the following equation:
$F_n(t) = A_T \exp\left[ j\left( 2\pi f_c t + \pi \frac{B}{T} t^2 \right) \right]$ (1)
where $A_T$ is the amplitude of the transmit signal, $f_c$ is the transmit signal carrier frequency, $B$ is the transmit signal bandwidth, $T$ is the transmit signal sweep time, and $B/T$ is the FM slope.
When the pulse reaches the object under test, it is reflected, and the reflected FM pulse is captured by the receiving antenna (RX antenna). The echo signal $R_n(t)$ can be expressed as the following equation:
$R_n(t) = A_R \exp\left\{ j\left[ 2\pi f_c (t - t_d) + \pi \frac{B}{T} (t - t_d)^2 \right] \right\}$ (2)
where $A_R$ is the amplitude of the received signal, and $t_d$ is the delay time of the echo signal.
The mixer then combines the TX and RX signals to produce an intermediate frequency (IF) signal, expressed as follows:
$S_{IF}(t) = F_n(t) \cdot R_n^*(t) = A_T A_R \exp\left\{ j\left[ 2\pi f_c t + \pi \frac{B}{T} t^2 \right] - j\left[ 2\pi f_c (t - t_d) + \pi \frac{B}{T} (t - t_d)^2 \right] \right\} = A \exp\left\{ j\left[ 2\pi f_c t_d + \frac{2\pi B}{T} t\, t_d - \pi \frac{B}{T} t_d^2 \right] \right\}$ (3)
where $A$ is the product of $A_T$ and $A_R$, and $R_n^*(t)$ is the complex conjugate of $R_n(t)$.
Respiratory motion modelling accurately describes the periodic motion of the human thoracic cavity, which is essential for extracting useful respiratory information from radar echo signals. Therefore, after obtaining the IF signal, the human chest displacement motion is modelled based on the characteristics of human respiration. The frequency of human respiration is between 0.1 Hz and 0.5 Hz, and the thoracic cavity expands and contracts rhythmically during respiration. The respiration motion model can be obtained by repeating this periodic cycle [31]:
$x_r(t) = A_r \left( 0.5 \sin^{p}\!\left( \pi f_r \left( t - \frac{\lfloor t \cdot f_r \rfloor}{f_r} \right) \right) \right), \quad 0 \le t \le 1/f_r$ (4)
where $t$ is the time, $f_r$ is the respiratory rate, the exponent $p$ of the $\sin$ function controls the tip rounding as well as the overall shape, and $1/f_r$ is the repetition interval.
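For illustration, the following minimal Python sketch (not the authors' code; the amplitude $A_r$, rate $f_r$, exponent $p$, and sampling rate are assumed values) generates a chest-displacement waveform from this model:

import numpy as np

def respiration_motion(t, A_r=1.0, f_r=0.28, p=2):
    """Sketch of the periodic chest-displacement model x_r(t) above.

    A_r: breathing amplitude; f_r: respiratory rate in Hz;
    p: exponent that rounds or sharpens the waveform tips.
    The floor term maps t into the current repetition interval 1/f_r.
    """
    t_cycle = t - np.floor(t * f_r) / f_r        # position within the current breath
    return A_r * 0.5 * np.sin(np.pi * f_r * t_cycle) ** p

# Example: 25 s of simulated chest motion at 0.28 Hz (the rate observed in Section 3.4),
# sampled at 20 Hz to match the 50 ms frame duration in Table 1.
t = np.arange(0, 25, 0.05)
x_r = respiration_motion(t)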

3.2. System Overview

This study focuses on the non-contact system mmWave-RM, which relies on FMCW millimetre-wave radar. The overall system flow is shown in Figure 2. The mmWave-RM system primarily comprises three modules: (1) The Signal Processing Module. This module handles the initial processing of the respiratory data collected by the IWR1843 millimetre-wave radar. It performs signal processing, including phase difference calculation and fixed-band filtering, to obtain a respiratory signal. This ensures that the processed data truly reflect respiration. (2) The Feature Extraction Module. In this module, an algorithm based on HOG is utilised to extract features from the respiratory waveform image. (3) The Breathing Pattern Classification Module. This module uses an SVM classifier to classify the different breathing patterns.

3.3. Breathing Pattern Definition

In this study, four distinct breathing patterns were detected: (a) normal breathing, (b) slow and deep breathing, (c) quick breathing, and (d) meningitic breathing. The time-domain waveforms of these breathing patterns are shown in Figure 3. As evident from Figure 3, there are notable differences in the waveforms of the four breathing patterns. The first pattern, normal breathing, exhibits a relatively stable waveform, as illustrated in Figure 3a. The second pattern, slow and deep breathing, is characterised by deeper breathing with more significant amplitude than the other breathing patterns, as shown in Figure 3b. The third pattern, quick breathing, features an intense respiratory waveform with a significantly increased number of breaths, as presented in Figure 3c. Finally, the fourth pattern, meningitic breathing, involves a breath-holding period under normal respiratory conditions followed by a return to normal breathing, as presented in Figure 3d.
The primary purpose of this study was to distinguish the disparities in respiratory signals among individuals under normal and abnormal physiological conditions. Therefore, respiratory patterns were categorised as either normal breathing or those associated with respiratory disease. As illustrated in Figure 3, the first category represents the respiratory state of an adult with normal breathing. Typically, the respiratory rate for adults in a resting state ranges from 12 to 20 breaths per minute [32,33]. The remaining three breathing patterns chosen for this study are modelled based on the characteristics of different respiratory disease symptoms.
The second type of breathing, slow and deep breathing, occurs during severe metabolic acidosis. When the extracellular fluid lacks bicarbonate, the pH value decreases, prompting the body to deepen respiration to discharge carbon dioxide and compensate for the extracellular acid–base imbalance. Common scenarios include diabetic ketoacidosis and uremic acidosis [34].
The third type, quick breathing (hyperventilation), is characterised by an elevated breathing rate. It often occurs in individuals with asthma, who may experience chest tightness, shortness of breath, and accelerated breathing during an attack. In mild cases, patients typically require regular medication. However, in severe cases, supervised treatment is necessary, as a sudden onset of breathlessness can be life-threatening.
The fourth type, meningitic breathing, is characterised by alternating periods of respiration and apnoea. This phenomenon occurs when respiratory centre excitability decreases or severe hypoxia prevents the normal blood carbon dioxide concentration from stimulating the chemoreceptors strongly enough to excite the respiratory centre. Consequently, respiration gradually weakens until it stops temporarily, allowing carbon dioxide to accumulate in the blood during the apnoea period. This buildup stimulates the respiratory centre, initiating the next respiration cycle. This type of respiration is most prevalent in encephalitis and meningitis.
The respiratory states chosen for this study are intricately linked to prevalent respiratory ailments [35], and their differentiation not only facilitates daily surveillance of respiratory conditions but also helps patients identify the potential nature of their illness, thereby serving as a valuable tool for medical intervention.

3.4. Signal Processing Module

FMCW radar systems acquire the distance, velocity, and angle of the object under test by capturing the reflected signal. By analysing the IF signal $S_{IF}(t)$, the displacement information of the target can be further extracted. Firstly, the signals at different distances are grouped into range bins to achieve accurate target positioning. Then, the target phase is obtained by performing an FFT on the signal $S_{IF}(t)$.
The human respiration-induced thoracic displacement $x_r(t)$ is a periodic signal that is reflected in the phase of the IF signal: a displacement $d$ of the target induces a phase change in the FMCW signal between successive measurements:
$b = \frac{4\pi}{\lambda} d$ (5)
where $b$ is the phase change of the beat signal, $d$ is the small thoracic displacement induced by breathing, and $\lambda$ is the wavelength.
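For illustration (using an assumed, representative displacement rather than a measurement from this paper): at the 77 GHz carrier used here, $\lambda = c/f_c \approx 3.9$ mm, so a chest displacement of $d = 1$ mm produces a phase change of $b = 4\pi \times 1/3.9 \approx 3.2$ rad, which is why sub-millimetre respiratory motion is clearly visible in the phase of the IF signal.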
In millimetre-wave radar propagation, factors such as target distance, target absorption, and environmental interference can cause the life signal to be easily overwhelmed by noise. Figure 4 illustrates the effectiveness of three different denoising methods applied to the original signal. The average phase cancellation method partially attenuates the noise but leaves noticeable residual noise. The moving target indication method further reduces noise, yet fails to eliminate it completely. In contrast, the signal overlay method demonstrates superior denoising performance, nearly eliminating the noise and significantly enhancing the signal. Consequently, this module primarily employs the signal overlay method to effectively extract vital signals.
The data collected from multiple receiving antennas of the FMCW radar provide insights into the radar’s perception of human respiration at various time points [36]. By performing IQ complex summation and phase alignment using the cross-correlation function, we superimpose the signals from the four receiving antennas to achieve a more accurate and robust detection of vital signals.
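A rough Python sketch of this superposition step is given below. It is an illustration under assumptions rather than the authors' implementation: the input is taken to be the complex slow-time samples of the selected range bin for each of the four RX antennas, and channels are aligned to the first antenna by cross-correlation before summation.

import numpy as np

def superimpose_antennas(iq):
    """Align each RX channel to the first antenna via cross-correlation of the
    demeaned magnitudes, then sum the complex signals. `iq` is an
    (n_antennas, n_frames) complex array taken from the selected range bin."""
    ref = np.abs(iq[0]) - np.abs(iq[0]).mean()
    combined = iq[0].copy()
    for channel in iq[1:]:
        sig = np.abs(channel) - np.abs(channel).mean()
        lag = np.correlate(sig, ref, mode="full").argmax() - (len(ref) - 1)
        combined += np.roll(channel, -lag)   # shift onto the reference, then add
    return combined                          # the chest echo adds up coherently, while
                                             # uncorrelated static noise tends to average out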
As illustrated in Figure 4, the static noise is effectively filtered following signal superposition, and the respiratory signal is also enhanced. The enhanced signal processing process is depicted in Figure 5, and the specific processing steps are as follows:
(1) A one-dimensional FFT is performed on the IF signal to determine the correct range bin, which aids in localising the human thoracic position, as shown in Figure 5a.
(2) The phase information extracted in Figure 5b is unwrapped; phase unwrapping is achieved by adding or subtracting 2π whenever successive phase samples jump by more than π, producing Figure 5c.
(3) Taking $d(t)$ as the unwrapped phase, a phase difference operation is performed on the unwrapped phase using the backward difference, i.e., $d(t) - d(t-1)$, which enhances the respiratory signal and removes the phase drift, producing Figure 5d.
(4) Since the frequency of human respiration is in the range of 0.1–0.5 Hz, this paper designs an elliptical filter according to the respiration frequency range, which only allows signals from 0.1 Hz to 0.5 Hz to pass through. The respiration signal can be obtained by smoothing and filtering the image after this step, as shown in Figure 5e.
(5) Figure 5f is obtained by performing a fast Fourier transform on the respiratory signal. The analysis revealed that the resulting waveform’s frequency was 0.28 Hz, which falls within the respiratory frequency range. The four modes’ processed respiratory waveforms serve as the module’s outputs.
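The chain of steps (1)-(5) can be sketched in Python as follows. This is a minimal illustration rather than the authors' code: it assumes the ADC data of one antenna as an n_frames × n_samples array, a 20 Hz slow-time rate consistent with the 50 ms frame duration in Table 1, and illustrative elliptic-filter order and ripple settings.

import numpy as np
from scipy.signal import ellip, filtfilt

def extract_respiration(adc, fs_slow=20.0):
    """Sketch of steps (1)-(5): range FFT, phase unwrapping, phase differencing,
    0.1-0.5 Hz elliptic band-pass filtering, and spectral estimation."""
    # (1) Range FFT per frame; pick the range bin with the strongest average return
    rng = np.fft.fft(adc, axis=1)
    bin_idx = np.argmax(np.mean(np.abs(rng), axis=0)[1:]) + 1   # skip the DC bin
    # (2)-(3) Phase extraction and unwrapping along slow time
    phase = np.unwrap(np.angle(rng[:, bin_idx]))
    # (4a) Backward phase difference d(t) - d(t-1) removes slow phase drift
    dphase = np.diff(phase)
    # (4b) Elliptic band-pass filter keeping only the 0.1-0.5 Hz respiration band
    b, a = ellip(4, 0.5, 40, [0.1, 0.5], btype="bandpass", fs=fs_slow)
    resp = filtfilt(b, a, dphase)
    # (5) Spectral estimate: the dominant frequency should fall in the breathing range
    spec = np.abs(np.fft.rfft(resp))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs_slow)
    return resp, freqs[np.argmax(spec[1:]) + 1]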

3.5. Feature Extraction Module

After processing the data in Section 3.4, specific feature extraction algorithms were utilised to extract the characteristics of the respiratory waveform image. Abnormalities in frequency, depth, and rhythm often indicate respiratory disorders. As the human body experiences different respiratory states, the features of the respiratory signals may vary slightly. Therefore, selecting and extracting appropriate features is crucial for accurately assessing various respiratory conditions. Since the respiratory signals are enhanced in the signal processing module, more distinguishable differences can be observed in the extracted waveforms of different respiratory states. Therefore, this study adopted the image classification method to classify the waveforms of different respiratory modes. For feature extraction, this study chose the histogram of oriented gradient (HOG) algorithm that Navneet Dalal and Bill Triggs proposed in 2005 [37]. This algorithm offers significant advantages in image processing and effectively captures local texture and edge orientation information. It constructs features based on the gradient direction statistics in the image’s local regions, enhancing classification accuracy. The algorithm is insensitive to colour, illumination, and scale variations, and can extract valuable features from the waveform’s shape, texture, and gradient information. The overall flow of the HOG feature extraction algorithm is shown in Figure 6.
The detailed process of working on the feature extraction module is as follows:
Step 1: Pre-processing operations are performed on the images, which include four steps: image loading, resizing, greyscale, and data storage segmentation. (1) The image data is retrieved from the file system and diverse categories of images are stored in separate folders. (2) The images are resized using the resize function to ensure that each image has a size of 256 × 256 pixels. (3) The greyscale process transforms each pixel in the colour image from an RGB value to a greyscale value. (4) The processed images are segmented into four categories, normal breathing, slow and deep breathing, quick breathing, and meningitic breathing, which are then stored.
Step 2: The horizontal and vertical gradient values are calculated for each pixel and combined to calculate the total gradient value. The HOG within the rectangular region is utilised to obtain the desired feature description. The formulae for calculating the gradient magnitude and orientation are shown in (6) and (7):
$G(x, y) = \sqrt{G_x^2(x, y) + G_y^2(x, y)}$ (6)
$\theta(x, y) = \tan^{-1}\!\left( \frac{G_y(x, y)}{G_x(x, y)} \right)$ (7)
where $G_x(x, y)$ and $G_y(x, y)$ denote the gradient values of the image pixels in the $x$ and $y$ directions, $G(x, y)$ denotes the total gradient magnitude, and $\theta(x, y)$ denotes the gradient direction.
Step 3: The image is divided into cell-like units containing several pixels. Then, the gradient histogram of each cell-like unit is calculated, and the cellular units are projected onto the gradient direction based on their gradient direction and magnitude. The gradient direction range is divided into nine direction intervals, and the sum of the gradient magnitude within each interval is statistically calculated.
Step 4: Several adjacent cell-like units are combined into a cell block, and the gradient histogram of each cell block can be considered a feature vector. The feature vectors of all cell blocks are concatenated to obtain the final HOG feature vector.
Step 5: Principal component analysis (PCA) is used for dimensionality reduction. All samples are centred by subtracting the mean, $x_i \leftarrow x_i - \frac{1}{n}\sum_{i=1}^{n} x_i$. Subsequently, the covariance matrix $XX^T$ of the samples is computed, and eigenvalue decomposition is performed on it. The unit eigenvectors corresponding to the $m$ largest eigenvalues, $\omega_1, \omega_2, \omega_3, \ldots, \omega_m$, are taken. Finally, the projection matrix is obtained, which is the output of module II.
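A compact sketch of Steps 1-5 using common Python libraries is shown below. It is an illustration under assumptions: scikit-image's hog routine stands in for the gradient, cell, and block computations of Steps 2-4, and the cell size, block size, and number of retained principal components are assumed values, not taken from the paper.

import numpy as np
from skimage import color, io, transform
from skimage.feature import hog
from sklearn.decomposition import PCA

def hog_pca_features(image_paths, n_components=64):
    """Resize each waveform image to 256x256, convert to greyscale, compute a
    9-orientation HOG descriptor, and project the descriptors with PCA."""
    feats = []
    for path in image_paths:
        img = io.imread(path)
        if img.ndim == 3:
            img = color.rgb2gray(img[..., :3])           # RGB -> greyscale
        img = transform.resize(img, (256, 256))          # Step 1: resize
        # Steps 2-4: gradients, 9-bin cell histograms, block-normalised vector
        feats.append(hog(img, orientations=9,
                         pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)))
    feats = np.array(feats)
    # Step 5: PCA projection onto the leading principal components
    # (requires at least n_components samples)
    return PCA(n_components=n_components).fit_transform(feats)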

3.6. Breathing Pattern Classification Module

In this study, an SVM classifier was chosen to classify breathing patterns. SVM, as a supervised learning algorithm, aims to find a hyperplane that optimally separates different classes, as in Equation (8). In this module, let the set of linearly separable samples be $\{(x_1, y_1), \ldots, (x_l, y_l)\}$, $x_i \in \mathbb{R}^n$, $y_i \in \{-1, +1\}$, $i = 1, \ldots, l$. Then, the hyperplane can be expressed as Equation (9), so that the positive and negative class inputs in the training samples lie on either side of this hyperplane. It is shown as follows:
$w \cdot x - b = 0$ (8)
$\omega^T x + b = 0$ (9)
At this point, a parameter pair $(\omega, b)$ exists such that $y_i = \mathrm{sgn}(\omega^T x_i + b)$, $i = 1, \ldots, l$, maximising the margin between the two classes. The problem of finding the optimal hyperplane is then transformed into the optimisation problem defined by Equation (10):
$\min: \; J(w, b, a) = \frac{1}{2} \omega^T \omega - \sum_{i=1}^{N} a_i \left[ y_i (\omega^T x_i + b) - 1 \right], \quad \text{s.t.} \; y_i (\omega^T x_i + b) \ge +1$ (10)
where $a_i$ are the Lagrange multipliers of the constraints; since they correspond to inequality constraints, these multipliers are all non-negative. Setting the partial derivatives of Equation (10) to zero gives Equation (11):
$Q(a) = \sum_{i=1}^{N} a_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_i a_j y_i y_j (x_i \cdot x_j)$ (11)
This equation is referred to as the dual form of Equation (10). Meanwhile, the optimal solution to the optimisation problem must satisfy $a_i \left[ y_i (\omega^T x_i + b) - 1 \right] = 0$. Moreover, $b$ can be obtained from any $a_i \neq 0$. Since $\sum_{i=1}^{N} a_i y_i = 0$, it can be inferred that most of the $a_i$ are equal to 0. The samples corresponding to $a_i \neq 0$ are called support vectors. The model performs well on relatively small datasets and has an excellent ability to generalise. It excels at overcoming the challenges of machine learning on small samples, can handle high-dimensional data, and avoids the problems of structure selection and local minima in neural networks. The model is trained using reshaped image data as input features and the corresponding category labels.
In this study, we employed the SVM method to classify different breathing patterns based on the feature extraction of the HOG algorithm, resulting in HOG-SVM (G-SVM). The model was trained for classification after preprocessing and feature extraction of the data. The classifier’s training was based on a fivefold cross-validation method, where all the breathing samples were divided into five groups, and the samples with different breathing states were labelled as four predefined types of normal breathing, slow and deep breathing, quick breathing, and meningitic breathing, respectively. Four groups were used for training and one was used for testing, resulting in an overall ratio of 4:1 for training to testing sets.
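An illustrative sketch of this training and evaluation procedure is given below (assumptions: scikit-learn's SVC with an RBF kernel and default regularisation stands in for the paper's SVM, and the features are the HOG/PCA vectors produced by the previous module):

from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_gsvm(hog_features, labels):
    """Train and score a G-SVM-style classifier with five-fold cross-validation,
    mirroring the 4:1 train/test split described above. Kernel and C are assumed
    defaults, not the paper's tuned settings."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, hog_features, labels, cv=5)
    return scores.mean()        # average accuracy over the five folds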
The mmWave-RM system pseudo-code is given in Algorithm 1:
Algorithm 1: Steps of mmWave-RM
Input: Dataset (Mi, Ni), i = 1, 2, …, n
Output: Classification accuracy
  1. img=resize(img,(256,256))
  2. G_magnitude = sqrt(power(Gx, 2) + power(Gy, 2))//Calculate gradient value
  3. G_angle = arctan2(Gy, Gx)// Calculate gradient direction
  4.   bins=Get_bins(G_magnitude, G_angle, cell_size, bin_count)// Calculate the histogram of the gradient
  5. function Block_Vector(bins, cell_x, cell_y, bin_count)
  6. For i in range(0, self.cell_x − 1):
  7.  For j in range(0, self.cell_y − 1):
  8.    magnitude =mag(feature)// calculates the magnitude of feature
  9.  end for
10.  end for
11. return block_vector
12. end function
13. clf = svm.SVC( )//model training
14. clf.fit(train_data, train_target)
15. pred = clf.predict(test_data)// model prediction
16. accuracy = calculate_accuracy(test_target, pred)
17. return accuracy

4. Experimentation and Evaluation

The breathing signal extraction methods proposed in this study were primarily validated across varying distances, clothing, angles, and environments, ensuring the extraction of accurate and reliable breathing waveforms for classification. Additionally, the overall performance of the mmWave-RM system was evaluated in this section by comparing the recognition rates of different classifiers on the four breathing patterns.

4.1. Experimental Parameters and Environment Settings

The equipment used in this study was the IWR1843BOOST radar and the DCA1000EVM data acquisition board, both from Texas Instruments, Dallas, TX, USA, for the acquisition of human vital signals. The IWR1843BOOST is a single-chip millimetre-wave radar sensor operating in the 76–81 GHz band, with the parameter settings shown in Table 1. The radar features three transmit antennas and four receive antennas, and the experimental setup is shown in Figure 7. For this experiment, the IWR1843BOOST millimetre-wave radar and the DCA1000EVM data acquisition board were positioned directly in front of the target to acquire the raw ADC data and transmit it to the computer via a USB data cable. When the computer received the raw data, the data were parsed and processed using MATLAB on a computer with an AMD Ryzen 7 5800H processor and 16 GB of RAM.
Data collection was performed in an office environment. The experimenter was seated in a chair at rest, with the radar positioned 0.8 m in front of the subject. The detectable area of the radar was aimed directly at the human chest at approximately 0.9 m. The acquired data were stored as a binary bin file. Table 2 shows the details of the subjects. In the experiment, the subjects needed to simulate three respiratory states, in addition to normal respiration. Each respiratory state needed to last for 25 s of collection time, and each respiratory mode collected 300 sets of data, of which 240 sets were used as training data and 60 sets were used as test data. Although the vital signals were easily drowned out by random body movements and external environmental noise, the denoising method used in this paper was more effective in suppressing static noise. Therefore, the subjects only needed to avoid significant limb movements during the experiment. All collected samples were sorted into four breathing states: normal breathing, slow and deep breathing, quick breathing, and meningitic breathing.

4.2. Reliability Validation of Millimetre Wave Radar Measurement Methods

To verify the reliability of the millimetre-wave radar respiration measurements, the respiration waveforms acquired by the radar were used to estimate the respiration rate via peak detection, spectral estimation, and autocorrelation. In this section, all signal data were normalised so that the signal amplitude lay between −1 and 1. As external influences and human respiration are unstable, we set the peak-detection threshold to ±0.2 to allow for sufficient amplitude variation between respiration waveforms and to exclude inconspicuous peaks, mitigating over-detection problems. If the first extremum obtained was a trough, it was discarded. Secondly, if a peak was followed by another peak, no breath was counted until the next trough was encountered; this trough, together with the preceding peak, was considered one full breath. As in Figure 3b, the pair ($P_3$, $V_3$) is one complete breath, and the redundant peak is discarded. The breathing training function of the HUAWEI WATCH GT2 also recorded the number of breaths during the experiment. The validity of the acquired signal was confirmed by calculating the error between the radar-measured respiratory rate and the number of breaths recorded by the watch; the mean error was 0.5 breaths. The experimental data are shown in Table 3.
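The peak-counting rule described above can be sketched as follows (an illustration, not the authors' code; the 0.2 amplitude threshold matches the text, while the peak-detection routine and tie-handling details are assumptions):

import numpy as np
from scipy.signal import find_peaks

def count_breaths(resp):
    """Count full breaths in a respiration waveform: normalise to [-1, 1],
    keep only peaks/troughs with amplitude of at least 0.2, and count one
    breath each time a qualifying peak is followed by a qualifying trough."""
    resp = resp / np.max(np.abs(resp))              # normalise to [-1, 1]
    peaks, _ = find_peaks(resp, height=0.2)         # prominent peaks only
    troughs, _ = find_peaks(-resp, height=0.2)      # prominent troughs only
    peak_set = set(peaks.tolist())
    breaths, pending_peak = 0, False
    for idx in sorted(np.concatenate([peaks, troughs]).tolist()):
        if idx in peak_set:
            pending_peak = True                     # remember the most recent peak
        elif pending_peak:                          # trough after a peak: one full breath
            breaths += 1
            pending_peak = False
    return breaths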
Furthermore, this study conducted a series of control experiments. In these experiments, the millimetre-wave radar faced a wall at a distance of 0.8 m, and data were collected. The collected data were then processed and filtered between 0.1 Hz and 0.5 Hz. Figure 8 shows the control experiment scene and the waveforms obtained from the radar when it was aimed at the chest cavity and when it faced the wall. The amplitude of the waveform when no breathing activity was present was close to zero, indicating that the signal detected in this frequency range was indeed a breathing signal.

4.3. Physical Environment Analysis

4.3.1. Distance Analysis

This section examines the effectiveness of monitoring respiratory signals at different distances, with experiments conducted at six distance points. The distances between the radar and the subject's chest were set at 0.4 m, 0.6 m, 0.8 m, 1 m, 1.2 m, and 1.5 m. At each distance, 20 datasets were collected. Each subject's respiratory signals were recorded for 25 s, and the corresponding waveforms are shown in Figure 9. As shown in Figure 9, the respiratory waveforms of the subjects were extracted at six different distances. At the positions of 0.4 m and 0.6 m, the closer distance enhanced the signal, but the noise was similarly enhanced, and thus the acquired waveform appears as a wave with two small spikes, as shown by the circled portion in Figure 9a,b. The signal was significantly weaker at distances of 1.2 m and 1.5 m. At distances of 0.8 m and 1 m, the waveforms were best, and at 0.8 m the signal amplitude was larger and the signal more stable. Therefore, based on the combined data of the five subjects, the distance of 0.8 m yielded the best results in most cases, making 0.8 m the preferred experimental distance.
Figure 10 further analyses the effect of different distances on the four modes of normal, slow and deep, quick, and meningitic breathing, demonstrating the changes in the average energy of each breathing mode at different distances. It can be seen that the average energy of all breathing modes decreased significantly with increasing distance. However, the effects on different breathing modes were not the same. The energy was always the highest in slow and deep breathing since chest vibration is most pronounced in this group. Although the average energy of normal and slow and deep breathing decreased with distance, the magnitude of the energy decrease was similar, reflecting the small difference in the effect of distance on the two breathing modes. Intermittent pauses in breathing, on the other hand, showed the most pronounced decrease in energy, which could be attributed to the fact that the amplitude of breathing is much greater at closer distances than at farther distances, and that the breath-holding phase greatly reduces the overall energy of the signal.

4.3.2. Analysis of Diversity in Personnel Status

The essence of radar-based vital signs extraction is detecting minute vibrations in the chest cavity. However, clothing worn by subjects can have an impact on the signals. To investigate the influence of clothing on the extraction of various breathing patterns, this study collected respiratory signals from five volunteers wearing three types of clothing, T-shirts, thin jackets, and coats, in an office environment.
When subjects wear different types of clothing, the localisation of the chest during respiratory signal extraction can be inaccurate due to variations in thickness, potentially leading to signal distortion. Therefore, in this set of experiments, we evaluated three clothing types: a 2 mm thick T-shirt, a 4 mm thick jacket, and a 7 mm thick coat. Figure 11a shows the errors in chest localisation for different clothing thicknesses. Although the thicknesses differed by only 2–5 mm, the error in chest positioning exceeded 1 cm. Figure 11b shows the effect of the three clothing thicknesses on the four breathing patterns: normal, slow and deep, quick, and meningitic. Notably, the instantaneous energy of slow and deep breathing was highest when wearing thinner clothing, as the chest vibrations are most pronounced during deep breathing. However, as the clothing thickness increased, the energy of slow and deep breathing decreased the most, although the energy of all other breathing modes also decreased. This can be attributed to the thicker clothing absorbing some of the signal, leading to reduced instantaneous energy across all breathing patterns. Therefore, it can be concluded that clothing thickness has a notable impact on respiratory signals, particularly during deep breathing, emphasising the importance of the subject wearing thin clothing for accurate signal detection. In practice, external noise will drown out part of the signal, and a slight increase in the thickness of the clothes will affect the accuracy, so in this paper we chose a jacket of moderate thickness for the subsequent experiments. Detecting vital signals in more complex environments is a problem that subsequent research will need to address.

4.3.3. Perspective Analysis

To enhance the target detection accuracy and conserve energy, the antenna arrangement of the radar often assumes a fan-shaped configuration, resulting in a fan-shaped sensing area during data collection. This study aimed to monitor human respiration in an office setting while subjects simulated their typical sitting posture at work. We compared signals at three angles: zero degrees, where the radar was aimed at the centre of the human chest cavity, thirty degrees to the left, and thirty degrees to the right. As Figure 12 demonstrates, 20 datasets were collected at each angle in this section to verify the extracted respiratory signals, yielding a total of 60 datasets.
The frontally aligned signals show superior stability, as shown in Figure 12. This is because when the main lobe of the radar and the human chest target are optimally aligned, the signal path is straighter, resulting in relatively uniform reflected signal intensity. Conversely, when the radar and the human chest are misaligned to the left or right, signal stability significantly diminishes, and the signal amplitude also experiences a significant reduction. This occurs because the alignment between the radar's main lobe and the target is suboptimal, leading to increased signal dispersion and weakened reflection. In these scenarios, multipath effects and signal attenuation become more pronounced, further compromising signal stability. As shown in Figure 13, when the radar position is shifted by 30° to the left or right, the strength and energy of the respiratory signal are significantly reduced. Therefore, ensuring the stability of the radar's alignment is critical for optimal signal quality in practical applications.

4.3.4. Analysis of Different Experimental Environments

The respiratory and heartbeat signals are micromotion signals that are prone to being overwhelmed by noise, and both indoor and outdoor objects can cause the signal to undergo multiple reflections, ultimately interfering with the extraction of the respiratory signal. The indoor furnishings were stationary and unchanged. In this section, we compared the extraction of the respiratory signal in a quiet nighttime environment, a typical daytime environment, and a noisy environment. The normal environment had people other than the subject sitting in the experimental environment but outside the radar monitoring range, to simulate a working scene with no other noise generation. The noisy environment incorporated mobile phones playing music and the sound of people talking. The experimental results are shown in Figure 14 and Figure 15.
Figure 14 shows the instantaneous energy variations of respiratory signals across different environments. In a quiet environment, the maximum energy of the respiratory signal was around 10; in a typical daytime working environment, the energy decreased to 5 to 7; and in a noisy environment, the energy decreased further. This indicates that as environmental noise increases, the energy of the respiratory signal decays rapidly, exhibiting various degrees of change. As shown in Figure 15a, the amplitude of human respiration was most pronounced in a quiet nighttime environment. The waveforms obtained in a daytime setting were slightly disturbed by noise, resulting in an average decrease of 0.248 in the amplitude of the extracted respiration signals. On average, the amplitude decreased by 0.586 in noisy environments compared with quiet environments. Figure 15b reflects the amplitudes and energies of the various breathing patterns across the three settings. It shows that meningitic breathing was least affected by changes in external ambient noise, presumably because of the breath-holding period inherent in this pattern, so that once the high-frequency noise components are filtered out, its respiratory signal is less influenced. Breaths could still be counted from the respiratory signals obtained in all environments, further validating the robustness of this study's signal extraction and noise reduction techniques.

4.4. Classification Results

This section presents the classification of the four types of respiratory pattern data collected in the daily office environment. To comprehensively evaluate and quantify the models' performance and visualise their classification effect, confusion matrices were used to show the classification accuracy for the different types of respiration, and the results are shown in Figure 16. Table 4 shows the accuracy of each method on the training and test sets, which serves as a metric for evaluating the effectiveness and performance of the machine learning models.
All four methods can classify the different respiratory states, with KNN achieving an overall classification accuracy of 84.75%. This model was more accurate in recognising meningitic respiration, while the recognition rates for the other three respiratory patterns were not very high. The overall classification accuracy of the traditional SVM was 91%; this method had a high recognition rate for normal breathing, slow and deep breathing, and meningitic breathing, but only 64% for quick breathing. The mainstream CNN combined with LSTM had an overall recognition accuracy of 92%, but it was ineffective at recognising slow and deep breathing. To solve the problem of low recognition accuracy for the quick breathing and slow and deep breathing modes, this paper adopted the G-SVM method to extract features and classify them. This improved the classification accuracy of quick breathing and slow and deep breathing to 88% and 99%, respectively, and improved the overall accuracy to 94.75%, enabling the classification of the different respiratory states.

4.5. Comparison with Recent Research Work

We compared the method used by mmWave-RM with other recent research on respiratory pattern classification, as shown in Table 5. Both Mah et al. [15] and He et al. [7] used a random forest classifier to classify respiratory patterns. However, in [15] polynomial fitting was used for signal denoising, and respiratory depth was used as a feature, resulting in a classification accuracy of 87%. In [7], He et al. used singular value decomposition for signal denoising and combined it with time domain features for classification, which improved the accuracy. Purnomo et al. [38] processed signals using various techniques, extracted the MFCC features, and then classified them using the XGBoost model. Park et al. [16] proposed a new method based on the CNN model, which classified breathing patterns with an accuracy of more than 92%.
Considering the above result, compared to existing work, mmWave-RM employs a denoising technique based on signal superposition. This method can effectively improve signal quality and achieve accurate feature extraction, which, in combination with the classical SVM classifier, can achieve better classification of breathing patterns.

5. Conclusions

Respiratory status is a vital reference indicator reflecting human cardiopulmonary function. With the rapid development of medical technology, more and more people are beginning to pay attention to monitoring their daily health status to achieve early prevention of respiratory diseases. In this study, respiration signals were accurately extracted and classified using an FMCW millimetre-wave radar and a non-contact respiration-sensing method. This paper examined human respiration monitoring at different distances, angles, clothing thicknesses, and environments, and classified four respiration modes in a daily office environment. Experiments were conducted to analyse the effectiveness of respiratory monitoring under different influencing factors, and the different respiratory states were classified using KNN, SVM, CNN, and G-SVM classifiers after processing the data collected in the office environment. The G-SVM classifier performed the best of the four classifiers, with an overall accuracy of 94.75%. Extensive experiments verified the effectiveness of using waveform images for classifying respiratory status. However, the data acquisition process currently does not account for the impact of coughing and sneezing on respiration. Further research is needed to classify respiration states in more complex scenarios, which will also be our next step.

Author Contributions

Conceptualization, Z.H. and Y.W.; methodology, Y.W.; software, Y.W.; validation, G.D. and Y.G.; formal analysis, Y.W.; investigation, Y.W.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Z.H. and F.L.; supervision, Z.H.; project administration, Z.H.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant 62262061, Grant 62162056, Grant 62261050), Major science and technology projects of Gansu province (23ZDGA009), Science and Technology Commissioner Special Project of Gansu province (23CXGA0086), 2020 Lanzhou City Talent Innovation and Entrepreneurship Project (2020-RC-116, 2021-RC-81), and Gansu Provincial Department of Education: Industry Support Program Project (2022CYZC-12), Northwest Normal University Young Teachers Research Ability Enhancement Program Project (NWNU-LKQN2019-28).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. West, J.B. Respiratory Physiology: The Essentials; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2012. [Google Scholar]
  2. Bousquet, J.; Dahl, R.; Khaltaev, N. Global alliance against chronic respiratory diseases. Eur. Respir. J. 2007, 29, 233–239. [Google Scholar] [CrossRef] [PubMed]
  3. Olaithe, M.; Bucks, R.S. Executive dysfunction in OSA before and after treatment: A meta-analysis. Sleep 2013, 36, 1297–1305. [Google Scholar] [CrossRef]
  4. Jin, F.; Zhang, R.; Sengupta, A.; Cao, S.; Hariri, S.; Agarwal, N.K.; Agarwal, S.K. Multiple patients behavior detection in real-time using mmWave radar and deep CNNs. In Proceedings of the 2019 IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019; pp. 1–6. [Google Scholar]
  5. Adib, F.; Mao, H.; Kabelac, Z. Smart homes that monitor breathing and heart rate. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 837–846. [Google Scholar]
  6. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  7. He, S.; Han, Z.; Iglesias, C.; Mehta, V.; Bolic, M. A real-time respiration monitoring and classification system using a depth camera and radars. Front. Physiol. 2022, 13, 799621. [Google Scholar] [CrossRef] [PubMed]
  8. Alizadeh, M.; Shaker, G. Remote monitoring of human vital signs using mm-wave FMCW radar. IEEE Access 2019, 7, 54958–54968. [Google Scholar] [CrossRef]
  9. Kiyokawa, H.; Greenberg, M.; Shirota, K.; Pasterkamp, H. Auditory detection of simulated crackles in breath sounds. Chest 2001, 119, 1886–1892. [Google Scholar] [CrossRef] [PubMed]
  10. Andreozzi, E.; Fratini, A.; Esposito, D.; Naik, G.; Polley, C.; Gargiulo, G.D.; Bifulco, P. Forcecardiography: A novel technique to measure heart mechanical vibrations onto the chest wall. Sensors 2020, 20, 3885. [Google Scholar] [CrossRef] [PubMed]
  11. Inan, O.T.; Migeotte, P.F.; Park, K.S.; Etemadi, M.; Tavakolian, K.; Casanella, R.; Di Rienzo, M. Ballistocardiography and seismocardiography: A review of recent advances. IEEE J. Biomed. Health Inform. 2014, 19, 1414–1427. [Google Scholar] [CrossRef]
  12. Sieciński, S.; Kostka, P.S.; Tkacz, E.J. Gyrocardiography: A review of the definition, history, waveform description, and applications. Sensors 2020, 20, 6675. [Google Scholar] [CrossRef]
  13. Liu, H.; Allen, J.; Zheng, D.; Chen, F. Recent development of respiratory rate measurement technologies. Physiol. Meas. 2019, 40, 07TR01. [Google Scholar] [CrossRef]
  14. Vanegas, E.; Igual, R.; Plaza, I. Piezoresistive breathing sensing system with 3d printed wearable casing. J. Sens. 2019, 2019, 2431731. [Google Scholar] [CrossRef]
  15. Mah, A.J.; Nguyen, T.; Ghazi Zadeh, L.; Shadgan, A.; Khaksari, K.; Nourizadeh, M.; Zaidi, A.; Park, S.; Gandjbakhche, A.H.; Shadgan, B. Optical Monitoring of breathing patterns and tissue oxygenation: A potential application in COVID-19 screening and monitoring. Sensors 2022, 22, 7274. [Google Scholar] [CrossRef] [PubMed]
  16. Park, J.; Park, S.; Nguyen, T. Simulated Breathing Patterns Classification Using Convolutional Neural Network with Deep Embedded Features. In Proceedings of the 2024 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 6–8 January 2024; pp. 1–3. [Google Scholar]
  17. Ceniccola, G.D.; Castro, M.G.; Piovacari, S.M.F. Current technologies in body composition assessment: Advantages and disadvantages. Nutrition 2019, 62, 25–31. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest x-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
  19. Armenta-Garcia, J.A.; Gonzalez-Navarro, F.F.; Caro-Gutierrez, J. Mining Wi-Fi Channel State Information for breathing and heart rate classification. Pervasive Mob. Comput. 2023, 91, 101768. [Google Scholar] [CrossRef]
  20. Kontou, P.; Smida, S.B.; Anagnostou, D.E. Contactless Respiration Monitoring using Wi-Fi and Artificial Neural Network Detection Method. IEEE J. Biomed. Health Inform. 2023, 28, 1297–1308. [Google Scholar] [CrossRef] [PubMed]
  21. Guo, Z.; Yuan, W.; Gui, L. BreatheBand: A fine-grained and robust respiration monitor system using WiFi signals. ACM Trans. Sens. Netw. 2023, 19, 1–18. [Google Scholar] [CrossRef]
  22. Ji, S.; Wen, H.; Wu, J.; Zhang, Z.; Zhao, K. Systematic heartbeat monitoring using a FMCW mm-wave radar. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17 January 2021; pp. 714–718. [Google Scholar]
  23. Abdul-Atty, M.M.; Mabrouk, M.; Elramly, S. Design and implementation of a low cost FMCW radar with configurable signal processor for human movement and breathing detection. In Proceedings of the Research World International Conference, Saint Petersburg, Russia, 7–8 September 2019; pp. 5–10. [Google Scholar]
  24. Avian, C.; Leu, J.S.; Ali, E.; Putro, N.A.S. Non-contact Breathing Patterns Recognition with FMCW Radar by Processing Temporal Information using Transformer Network. In Proceedings of the 2023 Asia-Pacific Microwave Conference (APMC), Taipei, Taiwan, 5–8 December 2023; pp. 420–422. [Google Scholar]
  25. Wang, Y.; Liu, H.; Xiang, W.; Shui, Y. A Novel Non-contact Respiration and Heartbeat Detection Method Using Frequency-Modulated Continuous Wave Radar. IEEE Sens. J. 2024, 24, 10434–10446. [Google Scholar] [CrossRef]
  26. Miao, D.; Zhao, H.; Hong, H. Doppler radar-based human breathing patterns classification using Support Vector Machine. In Proceedings of the 2017 IEEE radar conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; pp. 0456–0459. [Google Scholar]
  27. Feng, C.; Zhao, H.; Liu, Q. Implementation of radar-based breathing disorder recognition using FPGA. In Proceedings of the 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, 6–8 May 2019; Volume 1, pp. 1–3. [Google Scholar]
  28. Cinyol, F.; Baysal, U.; Köksal, D.; Babaoğlu, E. Incorporating support vector machine to the classification of respiratory sounds by Convolutional Neural Network. Biomed. Signal Process. Control 2023, 79, 104093. [Google Scholar] [CrossRef]
  29. Hong, J.W.; Kim, S.H.; Han, G.T. Detection of multiple respiration patterns based on 1D SNN from continuous human breathing signals and the range classification method for each respiration pattern. Sensors 2023, 23, 5275. [Google Scholar] [CrossRef]
  30. Wang, Q.; Dong, Z.; Liu, D.; Cao, T.; Zhang, M.; Liu, R.; Sun, J. Frequency-modulated continuous wave radar respiratory pattern detection technology based on multifeature. J. Healthc. Eng. 2021, 2021, 9376662. [Google Scholar] [CrossRef]
  31. Li, Z.; Jin, T.; Dai, Y.; Song, Y. Motion-Robust Contactless Heartbeat Sensing Using 4D Imaging Radar. IEEE Trans. Instrum. Meas. 2023, 72, 1–10. [Google Scholar]
  32. Guyton, A.C. Text Book of Medical Physiology; Elsevier Saunders: Philadelphia, PA, USA, 2006. [Google Scholar]
  33. Yuan, G.; Drost, N.A.; McIvor, R.A. Respiratory rate and breathing pattern. McMaster Univ. Med. J. 2013, 10, 23–25. [Google Scholar]
  34. Adrogué, H.J.; Madias, N.E. Management of life-threatening acid–base disorders. N. Engl. J. Med. 1998, 338, 26–34. [Google Scholar] [CrossRef] [PubMed]
  35. Rehman, M.; Shah, R.A.; Khan, M.B. Improving machine learning classification accuracy for breathing abnormalities by enhancing dataset. Sensors 2021, 21, 6750. [Google Scholar] [CrossRef]
  36. Hao, Z.; Yan, H.; Dang, X.; Ma, Z.; Ke, W.; Jin, P. RMVS: Remote Monitoring of Vital Signs with mm-Wave Radar. In Proceedings of the 1st ACM Workshop on AI Empowered Mobile and Wireless Sensing, Sydney, NSW, Australia, 21 October 2022; pp. 1–6. [Google Scholar]
  37. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  38. Purnomo, A.T.; Lin, D.B.; Adiprabowo, T. Non-contact monitoring and classification of breathing pattern for the supervision of people infected by COVID-19. Sensors 2021, 21, 3172. [Google Scholar] [CrossRef]
Figure 1. FMCW radar life detection signal principle and signal processing diagrams.
Figure 2. mmWave-RM system workflow.
Figure 3. Respiratory time-domain waveforms. (a) Normal breathing. (b) Slow and deep breathing. (c) Quick breathing. (d) Meningitic breathing.
Figure 4. Comparison of de-noising methods. (a) Raw signal. (b) Average phase cancellation. (c) Moving target indication. (d) Signal overlay.
Figure 5. Signal processing. (a) Range FFT. (b) Phase extraction. (c) Phase unwrapping. (d) Phase difference. (e) Respiratory signal. (f) Spectral estimation.
Figure 6. Flowchart of feature extraction.
Figure 7. Experimental environment.
Figure 8. (a) Control setup. (b) Respiratory waveform obtained when facing the wall.
Figure 9. (a) Corresponding waveform of 0.4 m. (b) Corresponding waveform of 0.6 m. (c) Corresponding waveform of 0.8 m. (d) Corresponding waveform of 1 m. (e) Corresponding waveform of 1.2 m. (f) Corresponding waveform of 1.5 m.
Figure 10. Average energy of different respiration patterns at different distances.
Figure 11. (a) Effect of different garment thicknesses on thoracic positioning. (b) Effect of varying garment thicknesses on different breathing patterns.
Figure 12. Measurement of respiratory signals at different angles. (a) Schematic diagram of measurement angles. (b) Respiratory energy at different angles.
Figure 13. (a) Radar left-aligned to chest waveform and energy map. (b) Radar right-aligned to chest waveform and energy map. (c) Radar frontal to human chest waveform and energy map.
Figure 14. Breathing energy in different environments. (a) Quiet environment. (b) Working environment. (c) Noisy environment.
Figure 15. Respiratory amplitude in different environments. (a) Effects of different noise levels on respiratory monitoring of different subjects. (b) Effects of different noise levels on different breathing patterns.
Figure 16. Confusion matrix for classification of different eigenvalues. (a) KNN. (b) SVM. (c) CNN+LSTM. (d) G-SVM.
Table 1. Radar parameter setting.

Parameters                         Value
Start Frequency                    77 GHz
Bandwidth                          4 GHz
Number of Transmitting Antennas    1
Number of Receiving Antennas       4
Samples Per Chirp                  200
Chirp Duration                     50 μs
Frame Duration                     50 ms
Number of Chirps per Frame         2
Table 2. Detailed information on subjects.

Subject Number    Sex       Height (cm)    Weight (kg)
1                 male      172            60
2                 male      185            75
3                 male      178            71
4                 female    163            47
5                 female    170            61
Table 3. A comparison of respiratory rates measured by millimeter-wave radar and HUAWEI WATCH GT2.

Number     Millimeter-Wave Radar (Breaths)    HUAWEI WATCH GT2 (Breaths)
1          10                                 9
2          7                                  7
3          8                                  8
4          8                                  7
5          7                                  8
6          9                                  8
7          10                                 9
8          7                                  7
9          11                                 10
10         9                                  8
Average    8.6                                8.1
Table 4. Accuracy of different methods.

Method      Accuracy
KNN         84.75%
SVM         91%
CNN+LSTM    92%
G-SVM       94.75%
Table 5. Comparison of recent research work.

Method       Denoising Technology            Features                Model           Accuracy
[15]         Polynomial fit                  Respiratory interval    Random Forest   87%
[7]          Singular value decomposition    Time-domain features    Random Forest   90%
[38]         Differencing                    MFCC                    XGBoost         87%
[16]         Not mentioned                   Embedded features       CNN             92.34%
This Work    Signal overlay                  HOG                     SVM             94.75%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
