Search Results (1,848)

Search Parameters:
Keywords = 3D CNN

17 pages, 11589 KiB  
Article
Deep Fusion of Skeleton Spatial–Temporal and Dynamic Information for Action Recognition
by Song Gao, Dingzhuo Zhang, Zhaoming Tang and Hongyan Wang
Sensors 2024, 24(23), 7609; https://doi.org/10.3390/s24237609 - 28 Nov 2024
Abstract
Focusing on the low recognition rates of traditional depth-information-based action recognition algorithms, an action recognition approach was developed based on skeleton spatial–temporal and dynamic features combined with a two-stream convolutional neural network (TS-CNN). First, the skeleton's three-dimensional coordinate system was transformed to obtain coordinate information related to relative joint positions. This joint information was then encoded as a color texture map to construct the spatial–temporal feature descriptor of the skeleton, and physical structure constraints of the human body were incorporated to enhance inter-class differences. Additionally, the speed of each joint was estimated and encoded as a color texture map to obtain the skeleton motion feature descriptor. The resulting spatial–temporal and dynamic features were further enhanced using motion saliency and morphology operators to improve their expressive ability. Finally, the enhanced skeleton spatial–temporal and dynamic features were deeply fused via the TS-CNN to perform action recognition. Experiments on the publicly available NTU RGB-D, Northwestern-UCLA, and UTD-MHAD datasets show recognition rates of 86.25%, 87.37%, and 93.75%, respectively, indicating that the approach can effectively improve the accuracy of action recognition in complex environments compared to state-of-the-art algorithms.
(This article belongs to the Section Intelligent Sensors)
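The color-texture-map encoding described above can be pictured with a minimal NumPy sketch (the array layout, root-relative coordinates, and min-max normalization are assumptions, not the authors' exact procedure):

```python
import numpy as np

def skeleton_to_texture(joints):
    """Encode a skeleton sequence as an RGB texture image.

    joints: array of shape (T, J, 3) -- T frames, J joints,
            (x, y, z) coordinates per joint.
    Returns a (T, J, 3) uint8 image: rows are frames, columns are
    joints, and the three channels carry x, y, z.
    """
    rel = joints - joints[:, :1, :]            # assumed root-relative coords
    lo = rel.min(axis=(0, 1), keepdims=True)
    hi = rel.max(axis=(0, 1), keepdims=True)
    norm = (rel - lo) / (hi - lo + 1e-8)       # min-max scale to [0, 1]
    return (norm * 255).astype(np.uint8)

# Example: a 40-frame sequence with 25 joints (the NTU RGB-D layout)
img = skeleton_to_texture(np.random.rand(40, 25, 3))
print(img.shape)  # (40, 25, 3)
```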

56 pages, 12429 KiB  
Article
Mitigating Algorithmic Bias in AI-Driven Cardiovascular Imaging for Fairer Diagnostics
by Md Abu Sufian, Lujain Alsadder, Wahiba Hamzi, Sadia Zaman, A. S. M. Sharifuzzaman Sagar and Boumediene Hamzi
Diagnostics 2024, 14(23), 2675; https://doi.org/10.3390/diagnostics14232675 - 27 Nov 2024
Abstract
Background/Objectives: The research addresses algorithmic bias in deep learning models for cardiovascular risk prediction, focusing on fairness across demographic and socioeconomic groups to mitigate health disparities. It integrates fairness-aware algorithms, susceptible-carrier-infected-recovered (SCIR) models, and interpretability frameworks to combine fairness with actionable AI insights supported by robust segmentation and classification metrics. Methods: The research utilised quantitative 3D/4D heart magnetic resonance imaging and tabular datasets from the Cardiac Atlas Project's (CAP) open challenges to explore AI-driven methodologies for mitigating algorithmic bias in cardiac imaging. The SCIR model, known for its robustness, was adapted with the Capuchin algorithm, adversarial debiasing, Fairlearn, and post-processing with equalised odds. The robustness of the SCIR model was further demonstrated in the fairness evaluation metrics, which included demographic parity, equal opportunity difference (0.037), equalised odds difference (0.026), disparate impact (1.081), and Theil index (0.249). For interpretability, YOLOv5, Mask R-CNN, and ResNet18 were implemented with LIME and SHAP. Bias mitigation improved disparate impact (0.80 to 0.95), reduced equal opportunity difference (0.20 to 0.05), and decreased false favourable rates for males (0.0059 to 0.0033) and females (0.0096 to 0.0064) through balanced probability adjustment. Results: The SCIR model outperformed the SIR model (recovery rate: 1.38 vs. 0.83) with a 10% transmission bias impact. Parameters (β = 0.5, δ = 0.2, γ = 0.15) reduced susceptible counts to 2.53 × 10¹² and increased recovered counts to 9.98 by t = 50. YOLOv5 achieved high Intersection over Union (IoU) scores (94.8%, 93.7%, and 80.6% for normal, severe, and abnormal cases). Mask R-CNN showed 82.5% peak confidence, while ResNet demonstrated a 10.4% accuracy drop under noise. Performance metrics (IoU: 0.91–0.96, Dice: 0.941–0.980, Kappa: 0.95) highlighted strong predictive accuracy and reliability. Conclusions: The findings validate the effectiveness of fairness-aware algorithms in addressing biases in cardiovascular predictive models. The integration of fairness and explainable AI not only promotes equitable diagnostic precision but also significantly reduces diagnostic disparities across vulnerable populations, enhancing clinical trust in AI-driven systems. These promising results pave the way for future work exploring scalability in real-world clinical settings and addressing limitations such as computational complexity in large-scale data processing.
(This article belongs to the Special Issue Artificial Intelligence in Cardiology Diagnosis)
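For reference, the headline group-fairness metrics reported above can be computed from binary predictions and a protected attribute roughly as follows (a plain-NumPy sketch with illustrative variable names; it is not the paper's code, which uses toolkits such as Fairlearn):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favourable-outcome rates, unprivileged (group == 0)
    over privileged (group == 1); the ideal value is 1."""
    p_priv = y_pred[group == 1].mean()
    p_unpriv = y_pred[group == 0].mean()
    return p_unpriv / p_priv

def equalized_odds_difference(y_true, y_pred, group):
    """Max gap in TPR and FPR between the two groups (ideal = 0)."""
    gaps = []
    for label in (1, 0):  # TPR gap, then FPR gap
        mask = y_true == label
        r1 = y_pred[mask & (group == 1)].mean()
        r0 = y_pred[mask & (group == 0)].mean()
        gaps.append(abs(r1 - r0))
    return max(gaps)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([1, 1, 0, 0, 1, 0, 1, 0])
print(disparate_impact(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```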
16 pages, 7826 KiB  
Article
Automation and Optimization of Food Process Using CNN and Six-Axis Robotic Arm
by Youngjin Kim and Sangoh Kim
Foods 2024, 13(23), 3826; https://doi.org/10.3390/foods13233826 - 27 Nov 2024
Abstract
The Food Process Robot Intelligent System (FPRIS) integrates a 3D-printed six-axis robotic arm with Artificial Intelligence (AI) and Computer Vision (CV) to optimize and automate the coffee roasting process. Applied to coffee roasting, the system uses a Convolutional Neural Network (CNN) to classify coffee beans inside the roaster and control the roaster in real time, avoiding obstacles and empty spaces. This study demonstrates FPRIS's capability to precisely control the Degree of Roasting (DoR) by combining gas and image sensor data to assess coffee bean quality. A comparative analysis between the Preliminary Coffee Sample (PCS) and Validation Coffee Sample (VCS) revealed consistent trends for both samples as roast intensity increased: weight loss and the Gas sensor Initial Difference (GID) increased, while the Sum of Pixel Grayscale Values (SPGV) decreased. This study underscores the potential of FPRIS to enhance precision and efficiency in coffee roasting. Future studies will expand on these findings by testing FPRIS across various food processes, potentially establishing a universal automation system for the food industry.
(This article belongs to the Special Issue Artificial Intelligence for the Food Industry)
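The Sum of Pixel Grayscale Values (SPGV) metric is simple to restate in code (a sketch assuming a plain 8-bit grayscale conversion of the whole image; the authors' exact region handling is not specified here):

```python
import numpy as np
from PIL import Image

def spgv(img):
    """Sum of Pixel Grayscale Values: darker (more roasted) beans give
    a lower sum, so SPGV decreases as roast intensity increases."""
    gray = np.asarray(img.convert("L"), dtype=np.int64)
    return int(gray.sum())

# Synthetic demo: a uniform mid-gray 100 x 100 image
demo = Image.fromarray(np.full((100, 100), 128, dtype=np.uint8))
print(spgv(demo))  # 1280000
```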

14 pages, 6982 KiB  
Article
Deep Learning Integration for Normal Breathing Classification Using a Flexible Fiber Sensor
by Jiseon Kim and Jooyong Kim
Processes 2024, 12(12), 2644; https://doi.org/10.3390/pr12122644 - 24 Nov 2024
Abstract
Measuring respiratory parameters is crucial for clinical decision making and for detecting abnormal patterns for disease prevention. While deep learning methods are commonly used in respiratory analysis, image-based classification of abnormal breathing remains limited. This study developed a stitched sensor using silver-coated thread, optimized for the knit fabric's course direction in a belt configuration. By applying a Continuous Wavelet Transform (CWT) and a two-dimensional Convolutional Neural Network (2D-CNN), the model achieved 96% accuracy, with potential for further improvement through data expansion.
(This article belongs to the Special Issue Research on Intelligent Fault Diagnosis Based on Neural Network)
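The CWT front end can be reproduced in outline with PyWavelets (a sketch; the sampling rate, scale range, and Morlet wavelet are assumptions, not the authors' settings):

```python
import numpy as np
import pywt

fs = 50                                   # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 0.3 * t)      # stand-in for the belt signal

scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# |coeffs| is a (64, len(t)) time-frequency image; resized and
# normalized, it becomes the single-channel input to the 2D-CNN.
scalogram = np.abs(coeffs)
print(scalogram.shape)
```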

19 pages, 16378 KiB  
Article
Classification of Chicken Carcass Breast Blood-Related Defects Using Hyperspectral Imaging Combined with Convolutional Neural Networks
by Liukui Duan, Juanfang Bao, Hao Yang, Liuqian Gao, Xu Zhang, Shengjie Li and Huihui Wang
Foods 2024, 13(23), 3745; https://doi.org/10.3390/foods13233745 - 22 Nov 2024
Abstract
Chicken carcass breast blood-related defects (CBDs) occur with high frequency, and because these defects are similar in composition, their visual features are also similar, making them challenging to classify, either manually or automatically, using conventional machine vision. The aim of this paper was to introduce a method of CBD classification based on hyperspectral imaging combined with Convolutional Neural Networks (CNNs). To process the hyperspectral data, the Improved Firefly Band Selection Algorithm was constructed with the 1-D CNN CBD classification model as the objective function, reducing the dimensionality of the hyperspectral data. Multidimensional-data CBD classification models were then developed based on YOLOv4 and Faster R-CNN, incorporating the 1-D CNN classification model and a feature fusion layer. The combination of hyperspectral data and CNNs can effectively accomplish CBD classification, although different architectures trade off classification speed against accuracy: the multidimensional-data YOLOv4 model achieves an mAP of 0.916 with an inference time of 41.8 ms, while the multidimensional-data Faster R-CNN model, despite a longer inference time of 58.2 ms, reaches a higher mAP of 0.990. In practical production scenarios, the appropriate classification model can be selected based on specific needs.
(This article belongs to the Special Issue Rapid Detection Technology Applied in Food Safety)
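A 1-D CNN over a per-pixel reflectance spectrum, of the kind used here as the band-selection objective, might look roughly like this (a PyTorch sketch; the band count, channel widths, and class count are assumptions):

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """1-D CNN over a reflectance spectrum (one sample = one spectrum)."""
    def __init__(self, n_bands=120, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):               # x: (batch, 1, n_bands)
        return self.head(self.features(x).flatten(1))

logits = SpectralCNN()(torch.randn(8, 1, 120))
print(logits.shape)  # torch.Size([8, 4])
```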

17 pages, 3473 KiB  
Article
A Comprehensive Analysis of Early Alzheimer Disease Detection from 3D sMRI Images Using Deep Learning Frameworks
by Pouneh Abbasian and Tracy A. Hammond
Information 2024, 15(12), 746; https://doi.org/10.3390/info15120746 - 22 Nov 2024
Abstract
Accurate diagnosis of Alzheimer's Disease (AD) has largely focused on its later stages, often overlooking the critical need for early detection of Early Mild Cognitive Impairment (EMCI). Early detection is essential for potentially reducing mortality rates; however, distinguishing EMCI from Normal Cognitive (NC) individuals is challenging due to similarities in their brain patterns. To address this, we developed a subject-level 3D-CNN architecture, enhanced by preprocessing techniques, to improve classification accuracy between these groups. Our experiments used structural Magnetic Resonance Imaging (sMRI) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, specifically the ADNI3 collection, including 446 subjects from the baseline and year-1 phases: 164 individuals diagnosed with EMCI and 282 with NC. Under 4-fold stratified cross-validation, the model achieved a validation AUC of 91.5%. On the test set, it attained an accuracy of 81.80%, a recall of 82.50%, a precision of 81.80%, and a specificity of 80.50%, effectively distinguishing the NC and EMCI groups. Additionally, a gradient class activation map was employed to highlight the key regions influencing model predictions. In comparative evaluations against pretrained models and the existing literature, our approach demonstrated decent performance in early AD detection.
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
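A subject-level 3D-CNN of the kind described might be skeletonized as follows (a PyTorch sketch; the layer widths and input resolution are assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class SMRI3DCNN(nn.Module):
    """Minimal 3D CNN for binary EMCI-vs-NC classification of sMRI."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):                    # x: (batch, 1, D, H, W)
        return self.head(self.features(x).flatten(1))

out = SMRI3DCNN()(torch.randn(2, 1, 64, 64, 64))
print(out.shape)  # torch.Size([2, 2])
```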

14 pages, 6623 KiB  
Article
YoloSortC3D: A YOLOv8, DeepSort, and C3D-Based Method for Sheep Rumination Behavior Recognition
by Wenbo Yu, Yongqi Liu, Xiang Yang, Ruoya Xie and Chuanjiu Wang
Appl. Sci. 2024, 14(23), 10801; https://doi.org/10.3390/app142310801 - 21 Nov 2024
Abstract
The rumination behavior of sheep is closely linked to their overall health, making its accurate identification and quantification crucial. However, traditional manual observation by farm workers is both time-consuming and labor-intensive, and while wearable sensors can provide automated monitoring, they are expensive and may cause stress to the animals. This study proposes a deep learning and computer-vision-based approach to recognize and quantify sheep rumination behavior by calculating and recording rumination duration. The method integrates the YOLOv8 object detection algorithm with the DeepSORT multi-object tracking algorithm to monitor sheep head movements. First, head images are detected, enclosed within ID-labeled bounding boxes, and cropped from video frames; preprocessing steps include pixel padding, resizing, and center cropping, after which the processed images are stored for later analysis. A 3D convolutional neural network (3D-CNN) is then employed to recognize rumination behavior, while the frame difference method is used to calculate rumination duration. Finally, to validate the proposed approach, tests were conducted on multi-target sheep rumination videos and the results were compared with manual observations. Experiments showed an average error of 4.403% during the day and 6.156% at night, confirming the feasibility and accuracy of this method for the automated detection and quantification of sheep rumination behavior.
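The frame-difference step used for duration estimation can be sketched with OpenCV (the threshold and scoring rule are illustrative assumptions, not the authors' exact method):

```python
import cv2
import numpy as np

def motion_score(prev_frame, frame, thresh=25):
    """Frame-difference score for a pair of cropped head images: the
    fraction of pixels whose grayscale change exceeds `thresh`."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g0, g1)
    return float((diff > thresh).mean())

# Chewing frames produce higher scores; summing the time spent above
# a cutoff (scaled by the frame interval) yields a duration estimate.
a = np.zeros((64, 64, 3), np.uint8)
b = a.copy(); b[20:30, 20:30] = 255
print(motion_score(a, b))
```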

18 pages, 1518 KiB  
Article
VAS-3D: A Visual-Based Alerting System for Detecting Drowsy Drivers in Intelligent Transportation Systems
by Hadi El Zein, Hassan Harb, François Delmotte, Oussama Zahwe and Samir Haddad
World Electr. Veh. J. 2024, 15(12), 540; https://doi.org/10.3390/wevj15120540 - 21 Nov 2024
Abstract
Nowadays, driving accidents are considered one of the most crucial challenges for governments and communities, affecting transportation systems and people's lives. There are many causes of accidents, but drowsiness is one of the main factors leading to a significant number of injuries and deaths. To reduce its effect, researchers and communities have proposed many techniques for detecting drowsiness and alerting the driver before an accident occurs. Most of the proposed solutions are visual-based, with a camera positioned in front of the driver to detect facial behavior and then determine the driver's state, e.g., drowsy or awake; however, most make a trade-off between detection accuracy and speed. In this paper, we propose a novel Visual-based Alerting System for Detecting Drowsy Drivers (VAS-3D) that ensures an optimal trade-off between accuracy and speed. VAS-3D consists of two stages: detection and classification. In the detection stage, we use pre-trained Haar cascade models to detect the face and eyes of the driver. Once the driver's eyes are detected, the classification stage uses several pre-trained Convolutional Neural Network (CNN) models to classify the driver's eyes as either open or closed, and consequently the driver's state as either awake or drowsy. We tested and compared the performance of several CNN models, including InceptionV3, MobileNetV2, NASNetMobile, and ResNet50V2, and demonstrated VAS-3D through simulations on real drowsiness datasets and experiments on real-world video streams. The results show that VAS-3D improves the detection accuracy for drowsy drivers by at least 7.5% (the best accuracy reached was 95.5%) and the detection speed by up to 57% (an average of 0.25 ms per frame) compared to other existing models.
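The detection stage built on pre-trained Haar cascades can be reproduced in outline with OpenCV's bundled models (a sketch; the detector parameters are typical defaults, not the paper's tuned values):

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eyes(frame):
    """Return cropped eye regions from the largest detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return []
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    roi = gray[y:y + h, x:x + w]
    return [roi[ey:ey + eh, ex:ex + ew]
            for ex, ey, ew, eh in eye_cascade.detectMultiScale(roi)]

# Each eye crop would then feed the open/closed-eye CNN classifier.
print(extract_eyes(np.zeros((240, 320, 3), np.uint8)))  # [] on a blank frame
```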

19 pages, 5999 KiB  
Article
Automated Pipeline for Robust Cat Activity Detection Based on Deep Learning and Wearable Sensor Data
by Md Ariful Islam Mozumder, Tagne Poupi Theodore Armand, Rashadul Islam Sumon, Shah Muhammad Imtiyaj Uddin and Hee-Cheol Kim
Sensors 2024, 24(23), 7436; https://doi.org/10.3390/s24237436 - 21 Nov 2024
Abstract
Monitoring the health, safety, and well-being of household pets such as cats has become a challenging task in recent years. Estimating a cat's behavior requires objective observations of both the frequency and variability of specific behavior traits, which can be difficult to obtain in a cat's ordinary life, and there is very little research on cat activity and cat disease analysis based on real-time data. Although previous studies have made progress, several key questions still need addressing: What types of data are best suited for accurately detecting activity patterns? Where should sensors be placed to ensure precise data collection? And how can the system be effectively automated for seamless operation? This study addresses these questions, determining how the cat should be equipped with sensors and how the activity detection system can be automated. Magnetic, motion, vision, audio, and location sensors are among those used in such machine learning experiments; in this study, we collect data using three types of wearable sensors, namely an accelerometer, a gyroscope, and a magnetometer, and fuse their acceleration, motion, and magnetic signals to recognize routine cat activity. Data collection, data processing, data fusion, and artificial intelligence approaches are all part of the system established in this study. We focus on One-Dimensional Convolutional Neural Networks (1D-CNNs) for activity detection and classification; 1D-CNNs have recently emerged as a cutting-edge approach for signal-processing-based systems such as sensor-based pet and human health monitoring, anomaly identification in manufacturing, and other areas. Our study culminates in an automated system for robust cat activity analysis built on a 1D-CNN. Evaluated on training and validation sets, the approach achieved a satisfactory accuracy of 98.9% while detecting activities relevant to cat well-being.
(This article belongs to the Special Issue Advances in Sensing-Based Animal Biomechanics)
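The sensor-fusion input to such a 1D-CNN is typically built by windowing the synchronized streams (an illustrative NumPy sketch; the window length, stride, and nine-channel layout are assumptions):

```python
import numpy as np

def make_windows(samples, win=128, step=64):
    """Segment a (T, 9) stream -- accelerometer, gyroscope, and
    magnetometer, 3 axes each -- into overlapping windows shaped
    (n_windows, 9, win) for a Conv1d-style 1D-CNN."""
    idx = range(0, len(samples) - win + 1, step)
    return np.stack([samples[i:i + win].T for i in idx])

stream = np.random.randn(1000, 9)      # stand-in for collar-sensor data
windows = make_windows(stream)
print(windows.shape)                   # (14, 9, 128)
```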

13 pages, 860 KiB  
Article
Multi-Scale 3D Cephalometric Landmark Detection Based on Direct Regression with 3D CNN Architectures
by Chanho Song, Yoosoo Jeong, Hyungkyu Huh, Jee-Woong Park, Jun-Young Paeng, Jaemyung Ahn, Jaebum Son and Euisung Jung
Diagnostics 2024, 14(22), 2605; https://doi.org/10.3390/diagnostics14222605 - 20 Nov 2024
Abstract
Background: Cephalometric analysis is important for diagnosing and planning treatment for patients and has traditionally relied on 2D cephalometric radiographs. With advancements in 3D imaging, automated landmark detection using deep learning has gained prominence; however, 3D imaging introduces challenges due to increased network complexity and computational demands. This study proposes a multi-scale 3D CNN-based approach using direct regression to improve the accuracy of maxillofacial landmark detection. Methods: The method employs a coarse-to-fine framework, first identifying landmarks in a global context and then refining their positions using localized 3D patches. A clinical dataset of 150 CT scans from maxillofacial surgery patients, annotated with 30 anatomical landmarks, was used for training and evaluation. Results: The proposed method achieved an average RMSE of 2.238 mm, outperforming conventional 3D CNN architectures, and demonstrated consistent detection without failure cases. Conclusions: Our multi-scale 3D CNN framework provides a reliable method for automated landmark detection in maxillofacial CT images, showing potential for other clinical applications.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
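The direct-regression idea (the network outputs landmark coordinates rather than heatmaps) can be sketched as follows (a PyTorch sketch; layer sizes and input resolution are assumptions, and only one stage is shown, whereas the paper's coarse-to-fine framework reruns a similar regression on local patches):

```python
import torch
import torch.nn as nn

class LandmarkRegressor3D(nn.Module):
    """3D CNN that directly regresses 3D landmark coordinates."""
    def __init__(self, n_landmarks=30):
        super().__init__()
        self.n_landmarks = n_landmarks
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_landmarks * 3)

    def forward(self, x):                      # x: (batch, 1, D, H, W)
        out = self.head(self.features(x).flatten(1))
        return out.view(-1, self.n_landmarks, 3)   # (x, y, z) per landmark

coords = LandmarkRegressor3D()(torch.randn(1, 1, 96, 96, 96))
print(coords.shape)  # torch.Size([1, 30, 3])
```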

28 pages, 1152 KiB  
Article
Lung and Colon Cancer Detection Using a Deep AI Model
by Nazmul Shahadat, Ritika Lama and Anna Nguyen
Cancers 2024, 16(22), 3879; https://doi.org/10.3390/cancers16223879 - 20 Nov 2024
Abstract
Lung and colon cancers are among the leading causes of cancer-related mortality worldwide, and early, accurate detection is crucial for effective treatment and improved patient outcomes; misclassifying or falsely detecting these life-threatening malignancies can have very harmful consequences. While analyzing tissue samples is complicated and time-consuming, deep learning techniques have made it possible to complete this process more efficiently and accurately, allowing researchers to study more patients in a shorter amount of time and at a lower cost. Much research has investigated deep learning models that demand great computational ability and resources, yet none has achieved a 100% accurate detection rate for these malignancies. This research proposes a new lightweight, parameter-efficient, mobile-embeddable deep learning model based on a 1D convolutional neural network with squeeze-and-excitation layers for efficient lung and colon cancer detection. The proposed model diagnoses and classifies lung squamous cell carcinomas and adenocarcinomas of the lung and colon from digital pathology images. Extensive experiments demonstrate that the proposed model achieves 100% accuracy in detecting lung, colon, and combined lung-and-colon cancers on the histopathological LC25000 lung and colon dataset, with only around 0.35 million trainable parameters and around 6.4 million FLOPs. Compared with existing results, the proposed architecture shows state-of-the-art performance in lung, colon, and combined lung-and-colon cancer detection.
(This article belongs to the Collection Oncology: State-of-the-Art Research in the USA)
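A squeeze-and-excitation layer of the kind named above is compact enough to show in full (a PyTorch sketch; the channel count and reduction ratio are assumptions):

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation for 1D feature maps: global average
    pool ('squeeze'), a two-layer gate ('excitation'), then
    channel-wise reweighting."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, C, L)
        w = self.gate(x.mean(dim=2))      # (batch, C) channel weights
        return x * w.unsqueeze(2)

x = torch.randn(8, 32, 100)
print(SEBlock1d(32)(x).shape)  # torch.Size([8, 32, 100])
```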

18 pages, 2733 KiB  
Article
Mastitis Classification in Dairy Cows Using Weakly Supervised Representation Learning
by Soo-Hyun Cho, Mingyung Lee, Wang-Hee Lee, Seongwon Seo and Dae-Hyun Lee
Agriculture 2024, 14(11), 2084; https://doi.org/10.3390/agriculture14112084 - 19 Nov 2024
Abstract
Detecting mastitis in a timely manner in dairy cows is crucial for maintaining milk production and preventing significant economic losses, and machine learning has recently gained attention as a promising solution. Most studies have detected mastitis from time series data using supervised learning models, which require large amounts of labeled data; however, annotating the onset of mastitis in milking data is difficult and costly, and supervised learning depends on accurate labels for its performance. This study therefore proposes mastitis classification based on weakly supervised representation learning using an autoencoder on time series milking data, allowing concurrent representation learning of milking data and weak supervision with low-cost labels. The proposed method employs a structure in which a classifier branches from the latent space of a 1D-convolutional autoencoder, so that representation learning of milking data both reconstructs the original information and detects mastitis. The branched classifier backpropagates a loss on mastitis symptoms, which are less costly to label than mastitis diagnoses, during the encoder's representation learning. The proposed method achieved an F1-score of 0.6, comparable to previous studies despite using low-cost labels. Our method is easily reproducible across data domains thanks to its low-cost annotation and is practical in that it can be implemented with just milking data and weak labels, both of which can be collected in the field.
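The branched structure described above (a classifier attached to the autoencoder's latent space, trained jointly with the reconstruction) might look roughly like this (a PyTorch sketch; layer sizes and the symptom-label head are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BranchedAutoencoder(nn.Module):
    """1D conv autoencoder whose latent space also feeds a weak-label
    classifier, so reconstruction and symptom losses train jointly."""
    def __init__(self, n_ch=1, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(n_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, latent, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(latent, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, n_ch, 4, stride=2, padding=1),
        )
        self.clf = nn.Linear(latent, 2)   # symptom (weak-label) head

    def forward(self, x):                 # x: (batch, n_ch, seq_len)
        z = self.enc(x)
        return self.dec(z), self.clf(z.mean(dim=2))

model = BranchedAutoencoder()
x = torch.randn(4, 1, 64)
recon, logits = model(x)
loss = nn.MSELoss()(recon, x) + nn.CrossEntropyLoss()(
    logits, torch.randint(0, 2, (4,)))    # joint objective
print(recon.shape, logits.shape)
```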

18 pages, 1211 KiB  
Article
Unleashing the Power of AI for Intraoperative Neuromonitoring During Carotid Endarterectomy
by Roaa Hindi and George Pappas
Electronics 2024, 13(22), 4542; https://doi.org/10.3390/electronics13224542 - 19 Nov 2024
Abstract
This research investigates the use of a 1D Convolutional Neural Network (CNN) to classify electroencephalography (EEG) signals into four categories of ischemia severity: normal, mild, moderate, and severe. The model's accuracy was lower for moderate cases (75%) and severe cases (65%) than for normal cases (95%) and mild cases (85%). The preprocessing pipeline incorporates Power Spectral Density (PSD) analysis, and segment lengths of 32, 64, and 128 s are thoroughly examined. The work highlights the model's potential to identify ischemia in real time during carotid endarterectomy (CEA) to prevent perioperative stroke. The 1D-CNN effectively captures both temporal and spatial EEG structure, offering a combination of processing efficiency and accuracy relative to existing approaches. To improve the identification of moderate and severe ischemia, future studies should prioritize integrating more diverse datasets, particularly for severe ischemia, and expanding the current dataset. Our contributions are a novel 1D-CNN model achieving a classification accuracy of over 93%, improved feature extraction using PSD, an automated ischemia detection procedure, and enhanced model performance via a well-balanced dataset.
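The PSD preprocessing step can be illustrated with SciPy's Welch estimator (a sketch; the sampling rate, segment length, and band definitions are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.signal import welch

fs = 128                                  # assumed EEG sampling rate (Hz)
segment = np.random.randn(64 * fs)        # stand-in for a 64 s EEG segment

freqs, psd = welch(segment, fs=fs, nperseg=4 * fs)

# Band powers (delta/theta/alpha/beta) as candidate 1D-CNN features:
bands = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}
features = [psd[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in bands.values()]
print(features)
```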

14 pages, 4021 KiB  
Article
AI-Aided Gait Analysis with a Wearable Device Featuring a Hydrogel Sensor
by Saima Hasan, Brent G. D’auria, M. A. Parvez Mahmud, Scott D. Adams, John M. Long, Lingxue Kong and Abbas Z. Kouzani
Sensors 2024, 24(22), 7370; https://doi.org/10.3390/s24227370 - 19 Nov 2024
Abstract
Wearable devices have revolutionized real-time health monitoring, yet challenges persist in their flexibility, weight, and accuracy. This paper presents a wearable device employing a conductive polyacrylamide–lithium chloride–MXene (PLM) hydrogel sensor, an electronic circuit, and artificial intelligence (AI) for gait monitoring. The PLM sensor includes tribo-negative polydimethylsiloxane (PDMS) and tribo-positive polyurethane (PU) layers, exhibiting extraordinary stretchability (317% strain) and durability (1000 cycles) while consistently delivering stable electrical signals. The device weighs just 23 g and is affixed to a knee brace, harvesting mechanical energy generated during knee motion and converting it into electrical signals. These signals are digitized and then analyzed using a one-dimensional (1D) convolutional neural network (CNN), achieving 100% accuracy in classifying four distinct gait patterns: standing, walking, jogging, and running. The device demonstrates the potential of lightweight, energy-efficient sensing combined with AI analysis for advanced biomechanical monitoring in sports and healthcare applications.
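The classification step can be pictured with a very small model (an illustrative PyTorch sketch; the window length, layer sizes, and single-channel input are assumptions, while the four class names come from the abstract):

```python
import torch
import torch.nn as nn

GAITS = ["standing", "walking", "jogging", "running"]

# Minimal 1D-CNN over a single-channel hydrogel voltage window.
model = nn.Sequential(
    nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(8, 16, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, len(GAITS)),
)

window = torch.randn(1, 1, 256)           # assumed window length
pred = model(window).argmax(dim=1).item()
print(GAITS[pred])
```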

17 pages, 4657 KiB  
Article
Low-Complexity Convolutional Neural Network for Channel Estimation
by Simona Sibio, Cristian Sestito, Souheil Ben Smida, Yuan Ding and George Goussetis
Electronics 2024, 13(22), 4537; https://doi.org/10.3390/electronics13224537 - 19 Nov 2024
Abstract
This paper presents a deep learning algorithm for channel estimation in 5G New Radio (NR). The classical neural-network approach to channel estimation requires more than one stage to obtain the full channel matrix: the channel is first constructed from the received reference signal, and then its precision is improved. In contrast, to reduce computational cost, the proposed neural network generates the channel matrix from information captured on a few subcarriers along the slot. This information is extrapolated by applying the Least Squares technique only on the Demodulation Reference Signal (DMRS). The received DMRS placed in the grid can be seen as a 2D low-resolution image and is processed to generate the full channel matrix. To reduce complexity in the hardware implementation, a convolutional neural network (CNN) structure is selected. This solution is analyzed by comparing the Mean Square Error (MSE) and the computational cost with other deep learning-based channel estimators, as well as traditional channel estimation methods. The proposed network delivers substantial complexity savings with favorable error performance: it reduces the computational cost by an order of magnitude and has a maximum error discrepancy of 0.018 at 5 dB compared to Minimum Mean Square Error (MMSE) channel estimation.
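The Least-Squares-on-DMRS step that produces the low-resolution input can be sketched as follows (a toy NumPy example; the grid size and comb-2 pilot pattern are assumptions, not the paper's exact configuration):

```python
import numpy as np

def ls_dmrs_estimate(rx_grid, tx_pilots, pilot_mask):
    """Least Squares channel estimate at DMRS positions only:
    H_LS = Y / X on the pilot resource elements."""
    h_ls = np.zeros_like(rx_grid)
    h_ls[pilot_mask] = rx_grid[pilot_mask] / tx_pilots[pilot_mask]
    return h_ls

# Toy slot: 72 subcarriers x 14 OFDM symbols, pilots on symbol 2.
rx = np.random.randn(72, 14) + 1j * np.random.randn(72, 14)
tx = np.ones((72, 14), complex)
mask = np.zeros((72, 14), bool)
mask[::2, 2] = True                       # assumed comb-2 DMRS pattern

h_lowres = ls_dmrs_estimate(rx, tx, mask)
# The pilot-only grid is then treated as a low-resolution image and
# passed through the CNN to reconstruct the full channel matrix.
print(h_lowres.shape)
```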
