Dynamic and Distributed Intelligence over Smart Devices, Internet of Things Edges, and Cloud Computing for Human Activity Recognition Using Wearable Sensors
Abstract
1. Introduction
- A smart IoT device with machine learning models was mounted on the waist; it read its attached sensor directly and wirelessly collected readings from the shin and the thigh.
- Some users wore simple IoT sensor devices on their waists with no on-device machine learning.
- Edge devices with machine learning models were used to serve the IoT sensor devices, aggregate their results, and send updates to the cloud.
- The Microsoft Azure cloud system (Microsoft Corporation, Redmond, WA, USA) was used for training models, online prediction, storage, and analyses.
2. Literature Review
3. The Proposed Architecture
- A smart IoT device, a Raspberry Pi 3 worn at the waist, ran the machine learning models. This wearable microcomputer was equipped with the trained models and wired to an MPU6050 module; the values from this module were processed locally, while the remaining values arrived wirelessly from the shin and the thigh. A second MPU6050 module was located on the shin, where leg movement was measured while activities were performed; an ESP32 microcontroller read its values and sent them via Wi-Fi to the Raspberry Pi device on the waist. The smartphone in the pant pocket at the thigh had several functions: it measured the accelerometer and gyroscope coordinates using its built-in sensors, sent them to the Raspberry Pi device, and provided internet connectivity to support mobility when the user was out of range of the wireless LAN. (A minimal sensor-reading sketch follows this list.)
- An IoT end device without machine learning, the ESP32, was used instead of the Raspberry Pi 3 device. It collected the values from the waist, the shin, and the thigh and sent them to either the IoT edge or the cloud.
- A smart IoT edge using the Raspberry Pi 4 device was employed. With its high performance, this microcomputer can predict behaviors, handle traffic from several devices, and send logs to the cloud. This edge is essential for serving IoT devices that do not run machine learning locally. It is compact, lightweight, and can be powered by a battery or power bank, making it portable. To handle more requests and users, the edge may skip predictions for requests coming from the smart IoT device and simply forward aggregated predictions to the cloud.
- A stationary IoT edge, a personal computer, was also used in this study. It serves as an alternative for nearby users; the priority between the two edges can be changed on demand based on the users' locations.
- Azure cloud services: the cloud was used to train models, store data, create profiles for different users, and provide real-time predictions for concurrent requests. The received predictions and telemetry data can be analyzed in the cloud to support further decisions based on long-term user monitoring.
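To make the data flow concrete, the sketch below shows how the waist node might read its wired MPU6050 over I2C and accept one wireless packet from the shin or thigh. The register addresses and scale factors come from the MPU6050 datasheet; the smbus2/UDP transport, port number, and JSON packet format are our assumptions rather than details given in the paper.

```python
# Minimal sketch: the waist node reads its wired MPU6050 over I2C and
# listens for shin/thigh readings over UDP. Register addresses and scale
# factors follow the MPU6050 datasheet (default +/-2 g and +/-250 deg/s
# ranges); the port and packet format are assumptions.
import json
import socket

from smbus2 import SMBus

MPU_ADDR = 0x68       # default MPU6050 I2C address
PWR_MGMT_1 = 0x6B     # write 0 here to wake the sensor from sleep
ACCEL_XOUT_H = 0x3B   # accel X/Y/Z high bytes start here
GYRO_XOUT_H = 0x43    # gyro X/Y/Z high bytes start here

def read_word(bus, reg):
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, reg, 2)
    val = (hi << 8) | lo
    return val - 65536 if val > 32767 else val  # two's complement

def read_mpu(bus):
    ax, ay, az = (read_word(bus, ACCEL_XOUT_H + 2 * i) / 16384.0 for i in range(3))
    gx, gy, gz = (read_word(bus, GYRO_XOUT_H + 2 * i) / 131.0 for i in range(3))
    return [ax, ay, az, gx, gy, gz]  # [Ax, Ay, Az, Gx, Gy, Gz]

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the sensor
    waist = read_mpu(bus)

# Receive one JSON packet from the shin (ESP32) or thigh (smartphone).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))   # port 5005 is an assumption
sock.settimeout(0.05)          # tolerate a missing node
try:
    data, _ = sock.recvfrom(1024)
    packet = json.loads(data)  # assumed shape: {"node": "shin", "values": [...]}
except socket.timeout:
    packet = None              # fall back to a reduced model (see below)
```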
The smart IoT device at the waist runs the following loop:
- Initialize the MPU sensor.
- Loop:
  - Read the MPU sensors.
  - Wait for sensor values from the shin.
  - Wait for sensor values from the thigh.
  - If all values arrived, then use Model-18wst.
  - Else, if only the shin values arrived, then use Model-12ws.
  - Else, if only the thigh values arrived, then use Model-12wt.
  - Else, use Model-6w (this selection step is sketched after the list).
  - Perform the prediction.
  - If the Raspberry Pi 4 edge is reachable, send the result to the Raspberry Pi 4 IoT edge.
  - Else, if the PC edge is reachable, send the result to the PC IoT edge.
  - Else, send the result to the Azure cloud.
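The model-selection step of this loop reduces to a simple cascade. Below is a minimal Python sketch of it; the joblib persistence and the model file names are assumptions, while the four-model fallback order follows the algorithm above.

```python
# Sketch of the device-side model selection: use the widest model that the
# arrived features allow. File names and joblib persistence are assumptions.
import joblib

MODELS = {
    "18wst": joblib.load("model_18wst.pkl"),  # waist + shin + thigh
    "12ws": joblib.load("model_12ws.pkl"),    # waist + shin
    "12wt": joblib.load("model_12wt.pkl"),    # waist + thigh
    "6w": joblib.load("model_6w.pkl"),        # waist only
}

def predict_activity(waist, shin=None, thigh=None):
    """Each argument is a 6-value list [Ax, Ay, Az, Gx, Gy, Gz] or None."""
    if shin is not None and thigh is not None:
        key, features = "18wst", waist + shin + thigh
    elif shin is not None:
        key, features = "12ws", waist + shin
    elif thigh is not None:
        key, features = "12wt", waist + thigh
    else:
        key, features = "6w", waist
    label = MODELS[key].predict([features])[0]  # scikit-learn style predict
    return key, label
```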
The IoT edge runs a similar loop:
- Loop:
  - Wait for requests from the IoT devices.
  - If all values arrived, then use Model-18wst.
  - Else, if only the shin values arrived, then use Model-12ws.
  - Else, if only the thigh values arrived, then use Model-12wt.
  - Else, use Model-6w.
  - Aggregate the values and predictions (a forwarding sketch follows the list).
  - Send the aggregated results to the Azure cloud.
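The forwarding step could look like the sketch below, which batches device results on the edge and sends each batch to Azure IoT Hub through the azure-iot-device SDK. The connection string, batch size, and message layout are placeholders and assumptions, not values from the paper.

```python
# Sketch of edge-side aggregation: buffer device results and forward them
# in batches to Azure IoT Hub. The connection string and batch size are
# placeholders; the SDK calls come from the azure-iot-device package.
import json

from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder
client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)

batch = []

def on_device_result(result):
    """Call this for each prediction/reading received from an IoT device."""
    batch.append(result)
    if len(batch) >= 10:                  # aggregation window (assumed)
        msg = Message(json.dumps(batch))
        msg.content_type = "application/json"
        client.send_message(msg)          # forward the aggregate to the cloud
        batch.clear()
```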
4. Datasets
- Ax1, Ay1, Az1, Gx1, Gy1, and Gz1—from the accelerometer and gyroscope on the waist.
- Ax2, Ay2, Az2, Gx2, Gy2, and Gz2—from the accelerometer and gyroscope on the shin.
- Ax3, Ay3, Az3, Gx3, Gy3, and Gz3—from the accelerometer and gyroscope on the thigh (the assumed column ordering is sketched below).
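For reference, the snippet below spells out the column order we assume for the full 18-feature vector (waist, then shin, then thigh), matching the naming above.

```python
# Assumed column order of the 18-feature vector: waist (1), shin (2), thigh (3).
PLACEMENTS = ("waist", "shin", "thigh")  # mapped to suffixes 1, 2, 3
FEATURES = [f"{sensor}{axis}{i}"
            for i in range(1, len(PLACEMENTS) + 1)
            for sensor in ("A", "G")     # accelerometer, gyroscope
            for axis in ("x", "y", "z")]
assert FEATURES[:6] == ["Ax1", "Ay1", "Az1", "Gx1", "Gy1", "Gz1"]
```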
5. Experiments and Results
5.1. Training Experiments
5.1.1. Training Algorithms
5.1.2. Confusion Matrix
5.1.3. Training Reduced Datasets
5.2. Real-Time Experiments
5.2.1. Prediction Time
5.2.2. Real-Time Accuracy
5.2.3. Dynamic Model Deployment
- In the first scenario, when the Raspberry Pi 4 edge device is online, the device sends data to the Raspberry Pi 4 edge. Here, the edge supports aggregation, mobility, and fast responses.
- In the second scenario, when the Raspberry Pi 4 edge is offline and the edge PC is online, the device sends data to the PC edge. Here, the edge supports aggregation and fast responses but does not provide mobility.
- In the third scenario, when neither edge is reachable, the end device communicates directly with the cloud (a destination-selection sketch follows this list).
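A small sketch of this three-way failover is shown below, assuming hypothetical host names, a shared port, and a plain TCP reachability probe; the paper does not specify the probing mechanism.

```python
# Sketch of dynamic destination selection: prefer the portable edge, then
# the stationary PC edge, then the cloud. Hosts, port, and timeout are
# hypothetical values, not taken from the paper.
import socket

DESTINATIONS = [
    ("raspberrypi4.local", 8080),        # portable IoT edge
    ("edge-pc.local", 8080),             # stationary IoT edge
    ("myhub.azure-devices.net", 443),    # Azure endpoint (placeholder)
]

def pick_destination(timeout=0.2):
    for host, port in DESTINATIONS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port        # first reachable destination wins
        except OSError:
            continue                     # unreachable: try the next one
    raise RuntimeError("no destination reachable")
```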
5.2.4. Scalability
6. Discussion
7. Conclusions
- Integration and cooperation between the devices were efficient.
- The achieved accuracy was 99.19% in training and 93.5% in real time.
- Prediction times were short on both the smart end devices and the IoT edges.
- Dynamic edge selection worked efficiently under changing edge connectivity.
- Dynamic model selection worked efficiently under changing feature availability.
- The architecture is scalable and serves more than 30 users per edge.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Esposito, M.; Belli, A.; Palma, L.; Pierleoni, P. Design and Implementation of a Framework for Smart Home Automation Based on Cellular IoT, MQTT, and Serverless Functions. Sensors 2023, 23, 4459.
- Franco, T.; Sestrem, L.; Henriques, P.R.; Alves, P.; Varanda Pereira, M.J.; Brandão, D.; Leitão, P.; Silva, A. Motion Sensors for Knee Angle Recognition in Muscle Rehabilitation Solutions. Sensors 2022, 22, 7605.
- Zhuang, Z.; Xue, Y. Sport-Related Human Activity Detection and Recognition Using a Smartwatch. Sensors 2019, 19, 5001.
- Zhou, E.; Zhang, H. Human action recognition toward massive-scale sport sceneries based on deep multi-model feature fusion. Signal Process. Image Commun. 2020, 84, 115802.
- Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances. Sensors 2022, 22, 1476.
- Bianchi, V.; Bassoli, M.; Lombardo, G.; Fornacciari, P.; Mordonini, M.; De Munari, I. IoT Wearable Sensor and Deep Learning: An Integrated Approach for Personalized Human Activity Recognition in a Smart Home Environment. IEEE Internet Things J. 2019, 6, 8553–8562.
- Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M.; Elhoseny, M.; Song, H. ST-DeepHAR: Deep Learning Model for Human Activity Recognition in IoHT Applications. IEEE Internet Things J. 2021, 8, 4969–4979.
- Issa, M.E.; Helmi, A.M.; Al-Qaness, M.A.A.; Dahou, A.; Elaziz, M.A.; Damaševičius, R. Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things. Healthcare 2022, 10, 1084.
- Qu, Y.; Tang, Y.; Yang, X.; Wen, Y.; Zhang, W. Context-aware mutual learning for semi-supervised human activity recognition using wearable sensors. Expert Syst. Appl. 2023, 219, 119679.
- Gulati, N.; Kaur, P.D. An argumentation enabled decision making approach for Fall Activity Recognition in Social IoT based Ambient Assisted Living systems. Future Gener. Comput. Syst. 2021, 122, 82–97.
- Nasir, M.; Muhammad, K.; Ullah, A.; Ahmad, J.; Baik, S.W.; Sajjad, M. Enabling automation and edge intelligence over resource constraint IoT devices for smart home. Neurocomputing 2022, 491, 494–506.
- Hong, Z.; Hong, M.; Wang, N.; Ma, Y.; Zhou, X.; Wang, W. A wearable-based posture recognition system with AI-assisted approach for healthcare IoT. Future Gener. Comput. Syst. 2022, 127, 286–296.
- Khan, I.U.; Afzal, S.; Lee, J.W. Human Activity Recognition via Hybrid Deep Learning Based Model. Sensors 2022, 22, 323.
- Tanigaki, K.; Teoh, T.C.; Yoshimura, N.; Maekawa, T.; Hara, T. Predicting Performance Improvement of Human Activity Recognition Model by Additional Data Collection. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 142.
- Khalid, A.M.; Khafaga, D.S.; Aldakheel, E.A.; Hosny, K.M. Human Activity Recognition Using Hybrid Coronavirus Disease Optimization Algorithm for Internet of Medical Things. Sensors 2023, 23, 5862.
- Yen, C.-T.; Liao, J.-X.; Huang, Y.-K. Feature Fusion of a Deep-Learning Algorithm into Wearable Sensor Devices for Human Activity Recognition. Sensors 2021, 21, 8294.
- Qiu, S.; Zhao, H.; Jiang, N.; Wang, Z.; Liu, L.; An, Y.; Zhao, H.; Miao, X.; Liu, R.; Fortino, G. Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Inf. Fusion 2022, 80, 241–265.
- Islam, M.M.; Nooruddin, S.; Karray, F.; Muhammad, G. Multi-level feature fusion for multimodal human activity recognition in Internet of Healthcare Things. Inf. Fusion 2023, 94, 17–31.
- Chen, J.; Sun, Y.; Sun, S. Improving Human Activity Recognition Performance by Data Fusion and Feature Engineering. Sensors 2021, 21, 692.
- Tuli, S.; Basumatary, N.; Gill, S.S.; Kahani, M.; Arya, R.C.; Wander, G.S.; Buyya, R. HealthFog: An ensemble deep learning based Smart Healthcare System for Automatic Diagnosis of Heart Diseases in integrated IoT and fog computing environments. Future Gener. Comput. Syst. 2020, 104, 187–200.
- Aazam, M.; Zeadally, S.; Flushing, E.F. Task offloading in edge computing for machine learning-based smart healthcare. Comput. Netw. 2021, 191, 108019.
- Ghosh, A.M.; Grolinger, K. Edge-Cloud Computing for Internet of Things Data Analytics: Embedding Intelligence in the Edge With Deep Learning. IEEE Trans. Ind. Inform. 2021, 17, 2191–2200.
- Agarwal, P.; Alam, M. A Lightweight Deep Learning Model for Human Activity Recognition on Edge Devices. Procedia Comput. Sci. 2020, 167, 2364–2373.
- Bourechak, A.; Zedadra, O.; Kouahla, M.N.; Guerrieri, A.; Seridi, H.; Fortino, G. At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives. Sensors 2023, 23, 1639.
- Shaik, T.; Tao, X.; Higgins, N.; Li, L.; Gururajan, R.; Zhou, X.; Acharya, U.R. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. WIREs Data Min. Knowl. Discov. 2023, 13, e1485.
- Mwase, C.; Jin, Y.; Westerlund, T.; Tenhunen, H.; Zou, Z. Communication-efficient distributed AI strategies for the IoT edge. Future Gener. Comput. Syst. 2022, 131, 292–308.
- Tang, Y.; Zhang, L.; Wu, H.; He, J.; Song, A. Dual-Branch Interactive Networks on Multichannel Time Series for Human Activity Recognition. IEEE J. Biomed. Health Inform. 2022, 26, 5223–5234.
- Al-Atawi, A.A.; Alyahyan, S.; Alatawi, M.N.; Sadad, T.; Manzoor, T.; Farooq-i-Azam, M.; Khan, Z.H. Stress Monitoring Using Machine Learning, IoT and Wearable Sensors. Sensors 2023, 23, 8875.
- Duggal, R.; Gupta, N.; Pandya, A.; Mahajan, P.; Sharma, K.; Angra, P. Building structural analysis based Internet of Things network assisted earthquake detection. Internet Things 2022, 19, 100561.
- Mekruksavanich, S.; Jitpattanakul, A. Sport-related activity recognition from wearable sensors using bidirectional GRU network. Intell. Autom. Soft Comput. 2022, 34, 1907–1925.
- Tarafdar, P.; Bose, I. Recognition of human activities for wellness management using a smartphone and a smartwatch: A boosting approach. Decis. Support Syst. 2021, 140, 113426.
- Liu, Y.; Li, Z.; Zheng, S.; Cai, P.; Zou, X. An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System. Micromachines 2022, 13, 602.
- Dong, D.; Ma, C.; Wang, M.; Vu, H.T.; Vanderborght, B.; Sun, Y. A low-cost framework for the recognition of human motion gait phases and patterns based on multi-source perception fusion. Eng. Appl. Artif. Intell. 2023, 120, 105886.
- Krupitzer, C.; Sztyler, T.; Edinger, J.; Breitbach, M.; Stuckenschmidt, H.; Becker, C. Beyond position-awareness—Extending a self-adaptive fall detection system. Pervasive Mob. Comput. 2019, 58, 101026.
- Xu, T.; Sun, W.; Lu, S.; Ma, K.; Wang, X. The real-time elderly fall posture identifying scheme with wearable sensors. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719885616.
- Yazici, A.; Zhumabekova, D.; Nurakhmetova, A.; Yergaliyev, Z.; Yatbaz, H.Y.; Makisheva, Z.; Lewis, M.; Ever, E. A smart e-health framework for monitoring the health of the elderly and disabled. Internet Things 2023, 24, 100971.
- Kulurkar, P.; Dixit, C.K.; Bharathi, V.C.; Monikavishnuvarthini, A.; Dhakne, A.; Preethi, P. AI based elderly fall prediction system using wearable sensors: A smart home-care technology with IOT. Meas. Sens. 2023, 25, 100614.
- University of Mannheim, Research Group Data and Web Science. Dataset: RealWorld (HAR). Available online: https://sensor.informatik.uni-mannheim.de/#dataset_realworld (accessed on 30 October 2023).
- Ray, S.; Alshouiliy, K.; Agrawal, D.P. Dimensionality Reduction for Human Activity Recognition Using Google Colab. Information 2021, 12, 6.
- Jain, V.; Liu, K. Hybrid IoT System for Emergency Responders. In Proceedings of the 2023 11th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), Athens, Greece, 17–20 July 2023; pp. 59–66.
- Qi, W.; Wang, N.; Su, H.; Aliverti, A. DCNN based human activity recognition framework with depth vision guiding. Neurocomputing 2022, 486, 261–271.
- Gemirter, C.B.; Şenturca, Ç.; Baydere, Ş. A Comparative Evaluation of AMQP, MQTT and HTTP Protocols Using Real-Time Public Smart City Data. In Proceedings of the 2021 6th International Conference on Computer Science and Engineering (UBMK), Ankara, Turkey, 15–17 September 2021; pp. 542–547.
Category | Number of Entries |
---|---|
Climb | 39,000 |
Lie | 41,000 |
Run | 40,000 |
Walk | 44,000 |
Total | 164,000 |
Algorithm | Training Accuracy (%) |
---|---|
LightGBM | 99.23 |
Extra trees | 98.03 |
Decision trees | 96.56 |
SVM | 97.68 |
Random forest | 91.8 |
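As a rough illustration of how the best-scoring algorithm could be trained, the sketch below runs LightGBM's scikit-learn interface over the 18-column dataset. The CSV path, label column name, split ratio, and hyperparameters are assumptions, not the paper's actual pipeline.

```python
# Minimal training sketch for LightGBM on the 18-feature dataset.
# File name, label column, split, and hyperparameters are assumptions.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("realworld_har_18.csv")             # placeholder path
X, y = df.drop(columns=["activity"]), df["activity"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)

clf = LGBMClassifier(n_estimators=200).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```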
True Category | Predicted: Climb | Predicted: Lie | Predicted: Run | Predicted: Walk |
---|---|---|---|---|
Climb | 11,482 | 0 | 24 | 144 |
Lie | 2 | 12,264 | 7 | 4 |
Run | 18 | 0 | 11,937 | 100 |
Walk | 55 | 0 | 28 | 13,138 |
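As a sanity check, the overall accuracy implied by this matrix is its diagonal sum divided by the total count, about 99.22%, consistent with the 99.23% reported for LightGBM above (the small gap is rounding):

```python
# Recompute overall accuracy from the confusion matrix above.
import numpy as np

cm = np.array([
    [11482,     0,    24,   144],  # true Climb
    [    2, 12264,     7,     4],  # true Lie
    [   18,     0, 11937,   100],  # true Run
    [   55,     0,    28, 13138],  # true Walk
])
print(f"{np.trace(cm) / cm.sum():.4%}")  # -> 99.2236%
```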
Model | Features | Training Accuracy (%) |
---|---|---|
Model-18wst | Waist: Ax1, Ay1, Az1, Gx1, Gy1, Gz1; Shin: Ax2, Ay2, Az2, Gx2, Gy2, Gz2; Thigh: Ax3, Ay3, Az3, Gx3, Gy3, Gz3 | 99.19 |
Model-12ws | Waist: Ax1, Ay1, Az1, Gx1, Gy1, Gz1; Shin: Ax2, Ay2, Az2, Gx2, Gy2, Gz2 | 97.66 |
Model-12wt | Waist: Ax1, Ay1, Az1, Gx1, Gy1, Gz1; Thigh: Ax3, Ay3, Az3, Gx3, Gy3, Gz3 | 95.46 |
Model-6w | Waist: Ax1, Ay1, Az1, Gx1, Gy1, Gz1 | 85.97 |
Role | Device | Model-18wst (ms) | Model-12ws (ms) | Model-12wt (ms) | Model-6w (ms) |
---|---|---|---|---|---|
Smart IoT device | Raspberry Pi 3 | 6.38 | 5.63 | 5.41 | 5.20 |
Portable IoT edge | Raspberry Pi 4 | 1.62 | 1.51 | 1.52 | 1.48 |
Stationary IoT edge | PC | 1.17 | 0.92 | 0.91 | 0.86 |
Model-18wst | Model-12ws | Model-12wt | Model-6w |
---|---|---|---|
93.5% | 91.2% | 89.6% | 82.4% |
Scenario | Reaction | Prediction Time (ms) | Real-Time Accuracy (%) |
---|---|---|---|
IoT device received parameters from all three sensors | Use Model-18wst | 6.38 | 93.5 |
IoT device received parameters from the waist and shin | Use Model-12ws | 5.63 | 91.2 |
IoT device received parameters from the waist and thigh | Use Model-12wt | 5.41 | 89.6 |
IoT device only received parameters from the waist | Use Model-6w | 5.20 | 82.4 |
Number of Users | Percentage of Received Requests per User (%) | Average Number of Received Requests per User (of 20 Sent) |
---|---|---|
1 | 93.47 | 18.7 |
2 | 87.3 | 17.46 |
3 | 84 | 16.8 |
4 | 68.92 | 13.78 |
5 | 58.69 | 11.74 |
11 | 40.16 | 8.03 |
16 | 38.75 | 7.75 |
21 | 37.51 | 7.5 |
31 | 29.5 | 5.9 |
Ref., Year | Dataset | Sensors | Performance |
---|---|---|---|
[8], 2022 | WISDM | Two smartphones in the pocket and on the hand | HAR accuracy: 98.7% |
[16], 2021 | UCI and self-recorded | MPU6050 | HAR using the UCI dataset: 97.49%; HAR using the self-recorded dataset: 96.27% |
[35], 2019 | No dataset (the system used tilt angle and acceleration) | MPU6050 and graphene/rubber sensors | Daily living detection: 93.5%; fall posture identification: 90% |
[36], 2023 | MHEALTH and DML Smart Actions datasets | ECG sensors, MPU9250, and multimedia sensors | HAR training accuracy: 96%; video-based HAR and fall detection: 86.97% |
[37], 2022 | MobiAct | Five LSM6DS0 accelerometer sensors | HAR training: 95.87% |
This paper | RealWorld HAR | Two MPU6050 modules and one smartphone (accelerometers and gyroscopes) | HAR training accuracy: 99.23%; HAR real-time accuracy: 93.5%; edge prediction time: 1.62 ms; IoT device prediction time: 5.83 ms; dynamic model selection; dynamic edge selection; scalable number of users |