Search Results (107)

Search Parameters:
Keywords = ultra dense networks

13 pages, 686 KiB  
Article
Energy-Efficient Cooperative Transmission in Ultra-Dense Millimeter-Wave Network: Multi-Agent Q-Learning Approach
by Seung-Yeon Kim and Haneul Ko
Sensors 2024, 24(23), 7750; https://doi.org/10.3390/s24237750 - 4 Dec 2024
Abstract
In beyond fifth-generation networks, millimeter wave (mmWave) is considered a promising technology that can offer high data rates. However, due to inter-cell interference at cell boundaries, it is difficult to achieve a high signal-to-interference-plus-noise ratio (SINR) among users in an ultra-dense mmWave network environment (UDmN). In this paper, we address this problem with a cooperative transmission technique that provides high SINR to users. Using coordinated multi-point transmission (CoMP) with the joint transmission (JT) strategy as a cooperative diversity technique can provide users with higher data rates through multiple desired signals. Nonetheless, cooperative transmission between multiple base stations (BSs) increases energy consumption. Therefore, we propose a multi-agent Q-learning-based power control scheme for UDmN. To satisfy users' quality of service (QoS) requirements and reduce network energy consumption, we define a reward function that accounts for the outage and energy efficiency of each BS. The results show that our scheme achieves optimal transmission power and significantly improves network energy efficiency compared with conventional baselines such as no transmit power control and random control. Additionally, we validate that leveraging channel state information to decide each BS's participation in power control enhances overall performance.
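
A minimal tabular sketch of the per-BS Q-learning loop in the spirit of the scheme above; the action grid, learning constants, and the reward shape (outage penalty versus bits-per-joule energy efficiency) are illustrative assumptions, not the paper's exact formulation.

```python
import random

ACTIONS = [0.1, 0.5, 1.0, 2.0]      # candidate transmit powers in watts (assumed grid)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

class BSAgent:
    """One independent Q-learning agent per base station."""
    def __init__(self):
        self.q = {}                  # (state, action_index) -> estimated value

    def act(self, state):
        if random.random() < EPS:    # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, r, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in range(len(ACTIONS)))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

def reward(outage, bits_delivered, tx_power):
    # Assumed shape: penalize outage, otherwise reward energy efficiency (bits/J).
    return -1.0 if outage else bits_delivered / tx_power

agent = BSAgent()
a = agent.act("high_interference")   # hypothetical discretized channel state
agent.update("high_interference", a, reward(False, 1e6, ACTIONS[a]), "low_interference")
```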

26 pages, 1452 KiB  
Article
Machine Learning-Based Resource Allocation Algorithm to Mitigate Interference in D2D-Enabled Cellular Networks
by Md Kamruzzaman, Nurul I. Sarkar and Jairo Gutierrez
Future Internet 2024, 16(11), 408; https://doi.org/10.3390/fi16110408 - 6 Nov 2024
Abstract
Mobile communications have experienced exponential growth in both connectivity and multimedia traffic in recent years. To support this tremendous growth, device-to-device (D2D) communications play a significant role in 5G and beyond-5G networks. However, enabling D2D communications in an underlay heterogeneous cellular network poses two major challenges. First, interference between D2D and cellular users directly affects system performance and must be managed. Second, an acceptable level of link quality must be achieved for both D2D and cellular links. Optimal resource allocation is required to mitigate the interference and improve system performance. In this paper, we provide a solution to interference management with acceptable quality of service (QoS). To this end, we propose a machine learning-based resource allocation method that maximizes throughput while meeting minimum QoS requirements for all active D2D pairs and cellular users. We first formulate a resource optimization problem that allocates spectrum resources and controls transmission power on demand. Because this is an integer nonlinear programming problem, we solve it with a deep Q-network-based reinforcement learning (DRL) algorithm. The proposed DRL algorithm is trained with a decision-making policy to obtain the best solution in terms of spectrum efficiency, computational time, and throughput. The system performance is validated by simulation. The results show that the proposed method outperforms existing ones.
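
A rough skeleton of the deep Q-network idea the abstract describes: each discrete action is a (channel, power level) pair, and a one-step temporal-difference update trains the Q-network. The state dimension, network width, and reward are placeholder assumptions; PyTorch is required.

```python
import random
import torch
import torch.nn as nn

N_CHANNELS, N_POWER = 10, 4
N_ACTIONS = N_CHANNELS * N_POWER      # one discrete action per (channel, power) pair
STATE_DIM = 8                         # assumed CSI/interference feature vector
GAMMA = 0.95

qnet = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                     nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def select_action(state, eps=0.1):
    """Epsilon-greedy over the joint channel/power action space."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax())

def td_update(state, action, reward, next_state):
    """One-step temporal-difference update toward the bootstrapped target."""
    with torch.no_grad():
        target = reward + GAMMA * qnet(next_state).max()
    loss = (qnet(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a = select_action(s)
td_update(s, a, reward=1.0, next_state=s_next)
```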

14 pages, 3854 KiB  
Article
Concurrent Direct Inter-ONU and Upstream Communications in IMDD PONs Incorporating P2MP Flexible Optical Transceivers and Advanced Passive Remote Nodes
by Wei Jin, Lin Chen, Jiaxiang He, Roger Philip Giddings, Yi Huang, Ming Hao, Md. Saifuddin Faruk, Xingwen Yi, Tingyun Wang and Jianming Tang
Photonics 2024, 11(11), 1021; https://doi.org/10.3390/photonics11111021 - 30 Oct 2024
Abstract
Driven by a large number of emerging diversified services in the 5G and beyond era, concurrent direct inter-ONU and upstream communications inside a PON-based mobile access network are highly desirable, providing dynamic, ultra-dense, and fast ONU-to-ONU (without involving an OLT) and ONU-to-OLT connections. To cost-effectively deliver highly dynamic and low-latency direct inter-ONU communications, this paper proposes and experimentally demonstrates novel concurrent direct inter-ONU and upstream communications in a 27 km, >62.47 Gbit/s upstream IMDD PON. To support inter-ONU communications between a large number of ONUs, an advanced passive remote node is also proposed. Based on different passive optical components, this remote node can be implemented using two approaches, which reduce the inter-ONU signal power losses by >12.2 dB and >16.6 dB, respectively (for 128 ONUs), in comparison with the remote nodes of existing inter-ONU communication techniques. In each ONU and the OLT, a single pair of cascaded IFFT/FFT-based point-to-multipoint (P2MP) flexible optical transceivers is employed to simultaneously and dynamically establish multiple ONU-to-ONU and ONU-to-OLT connections according to actual user requirements. Experimental results show that the proposed network has excellent robustness against various transmission system impairments, including chromatic dispersion, Rayleigh and Brillouin backscattering effects, and channel interference. For each ONU, dynamic channel allocation can be performed without compromising overall performance.
(This article belongs to the Section Optical Communication and Network)
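
A toy numpy illustration of the cascaded IFFT/FFT subcarrier multiplexing such P2MP flexible transceivers build on: each logical channel (ONU-to-ONU or ONU-to-OLT) occupies a disjoint subcarrier group. The subcarrier map and channel names are assumptions for illustration only.

```python
import numpy as np

N_SC = 64                                    # assumed total subcarrier count
groups = {"onu1_to_onu2": range(4, 12),      # hypothetical subcarrier allocation
          "onu1_to_olt": range(20, 28)}

def transmit(symbols_per_channel):
    """Map each channel's symbols onto its subcarriers, then IFFT to time domain."""
    spectrum = np.zeros(N_SC, dtype=complex)
    for name, sc in groups.items():
        spectrum[list(sc)] = symbols_per_channel[name]
    return np.fft.ifft(spectrum)

def receive(waveform, channel):
    """FFT back to the frequency domain and demultiplex one channel's subcarriers."""
    return np.fft.fft(waveform)[list(groups[channel])]

tx = transmit({"onu1_to_onu2": np.ones(8), "onu1_to_olt": -np.ones(8)})
print(np.round(receive(tx, "onu1_to_onu2").real))  # recovers the transmitted ones
```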

11 pages, 677 KiB  
Article
Benchmarking In-Sensor Machine Learning Computing: An Extension to the MLCommons-Tiny Suite
by Fabrizio Maria Aymone and Danilo Pietro Pau
Information 2024, 15(11), 674; https://doi.org/10.3390/info15110674 - 28 Oct 2024
Abstract
This paper proposes a new benchmark specifically designed for in-sensor digital machine learning computing to meet an ultra-low embedded memory requirement. With the exponential growth of edge devices, efficient local processing is essential to mitigate the economic costs, latency, and privacy concerns associated with centralized cloud processing. Emerging intelligent sensors, equipped with computing assets to run neural network inferences and embedded in the same package that hosts the sensing elements, present new challenges due to their limited memory resources and computational capabilities. This benchmark evaluates models trained with Quantization-Aware Training (QAT) and compares their performance with Post-Training Quantization (PTQ) across three use cases: Human Activity Recognition (HAR) on the SHL dataset, Physical Activity Monitoring (PAM) on the PAMAP2 dataset, and surface electromyography (sEMG) regression on the NINAPRO DB8 dataset. The results demonstrate the effectiveness of QAT over PTQ in most scenarios, highlighting the potential for deploying advanced AI models on highly resource-constrained sensors. The INT8 versions of the models consistently outperformed their FP32 counterparts in memory footprint and latency, except for the activations of the CNN. The CNN model exhibited lower memory usage and latency than its Dense counterpart, allowing it to meet the stringent 8 KiB data RAM and 32 KiB program RAM limits of the ISPU. The TCN model proved too large to fit within the memory constraints of the ISPU, primarily due to its larger parameter count, designed for processing more complex signals such as EMG. This benchmark aims to guide the development of efficient AI solutions for in-sensor machine learning computing, fostering innovation in Edge AI benchmarking such as that conducted by the MLCommons-Tiny working group.
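
For readers unfamiliar with the PTQ baseline the benchmark compares against, here is a minimal numpy sketch of post-training INT8 affine quantization using the standard min-max scale/zero-point derivation; this is not the paper's exact tooling.

```python
import numpy as np

def quantize_int8(w):
    """Affine min-max quantization of a float tensor to INT8."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0
    zero_point = np.round(-lo / scale) - 128        # maps lo -> -128, hi -> ~127
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(100).astype(np.float32)
q, s, z = quantize_int8(w)
print(w.nbytes, "->", q.nbytes, "bytes;",                 # 4x memory reduction
      "max error:", np.abs(w - dequantize(q, s, z)).max())
```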

12 pages, 2587 KiB  
Article
Preprocessing for Multi-Dimensional Enhancement and Reconstruction in Neural Video Compression
by Jiajia Wang, Qi Zhang, Haiwu Zhao, Guozhong Wang and Xiwu Shang
Appl. Sci. 2024, 14(19), 8626; https://doi.org/10.3390/app14198626 - 25 Sep 2024
Abstract
The surge in ultra-high-definition video content has intensified the demand for advanced video compression techniques. Video encoding preprocessing can improve coding efficiency while ensuring a high degree of compatibility with existing codecs. Existing preprocessing methods are limited in their ability to fully exploit redundant features in video data and recover high-frequency details, and their network architectures often lack compatibility with neural video encoders. To address these challenges, we propose a Multi-Dimensional Enhancement and Reconstruction (MDER) preprocessing method to improve the efficiency of deep learning-based neural video encoders. Firstly, our approach integrates a degradation compensation module to mitigate encoding noise and boost feature extraction efficiency. Secondly, a lightweight fully convolutional neural network is employed, which utilizes residual learning and knowledge distillation to refine and suppress irrelevant features across the spatial and channel dimensions. Furthermore, to maximize the use of redundant information, we incorporate Dense Blocks, which enhance and reconstruct important features in the video data during preprocessing. Finally, the preprocessed frames are mapped from pixel space to feature space through the Dense Feature-Enhanced Video Compression (DFVC) module, which improves motion estimation and compensation accuracy. The experimental results show that, compared to neural video encoders without the preprocessing, the MDER method reduces bits per pixel (Bpp) by 0.0714 and 0.0536 under equivalent PSNR and MS-SSIM conditions, respectively, demonstrating significant improvements in compression efficiency and reconstruction quality and confirming the method's compatibility with neural video codec workflows.
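
A compact PyTorch-style sketch of the Dense Block building block mentioned above, in which each layer's output is concatenated with all previous features; the channel counts and depth are illustrative, not the MDER configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_ch=32, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth                 # input width grows with each layer

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

y = DenseBlock()(torch.randn(1, 32, 16, 16))
print(y.shape)   # torch.Size([1, 80, 16, 16]): 32 + 3 * 16 channels
```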

13 pages, 2817 KiB  
Article
Structure–Elasticity Relationships in Hybrid-Carrageenan Hydrogels Studied by Image Dynamic Light Scattering, Ultra-Small-Angle Light Scattering and Dynamic Rheometry
by Amine Ben Yahia, Adel Aschi, Bruno Faria and Loic Hilliou
Materials 2024, 17(17), 4331; https://doi.org/10.3390/ma17174331 - 31 Aug 2024
Abstract
Hybrid-carrageenan hydrogels are characterized using novel techniques based on high-resolution speckle imaging, namely image dynamic light scattering (IDLS) and ultra-small-angle light scattering (USALS). These techniques, used to probe the microscopic structure of the system during sol–gel phase separation and at different concentrations in the gel phase, give access to a better understanding of the network's topology on the basis of fractals in the dense phase. USALS was used to observe the architecture, the spatial and size distributions of the gel phase, and the fractal dimension. The pair-distance distribution function, P(r), extracted from the USALS patterns provides a new calculation methodology for precisely determining the network's internal sizes. All structural features are systematically compared with linear and non-linear rheological characterizations of the gels, and structure–elasticity relationships are identified in the framework of diffusion-limited fractal colloid gels.
(This article belongs to the Special Issue Modification and Processing of Biodegradable Polymers (Volume II))
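
A numerical sketch of the P(r) methodology: the standard small-angle-scattering inversion P(r) ∝ r² ∫ I(q) q² [sin(qr)/(qr)] dq evaluated on a toy sphere form factor. The q-range, units, and particle model are assumptions, not the measured USALS data.

```python
import numpy as np

q = np.linspace(1e-3, 2.0, 2000)     # scattering vector grid (assumed range/units)
R = 5.0                              # toy sphere radius for the illustrative I(q)
form = 3 * (np.sin(q * R) - q * R * np.cos(q * R)) / (q * R) ** 3
I = form ** 2                        # homogeneous-sphere form-factor intensity

r = np.linspace(1e-3, 15.0, 300)
dq = q[1] - q[0]
# np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r)
P = np.array([np.sum(I * q**2 * np.sinc(q * ri / np.pi)) * dq for ri in r]) * r**2
P /= P.max()
print("P(r) peak near r =", round(float(r[P.argmax()]), 2))  # ~0.525 * diameter
```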

18 pages, 13002 KiB  
Article
A Robust Handover Optimization Based on Velocity-Aware Fuzzy Logic in 5G Ultra-Dense Small Cell HetNets
by Hamidullah Riaz, Sıtkı Öztürk and Ali Çalhan
Electronics 2024, 13(17), 3349; https://doi.org/10.3390/electronics13173349 - 23 Aug 2024
Abstract
In 5G networks and beyond, managing handovers (HOs) becomes complex because of frequent user transitions through small coverage areas. The abundance of small cells (SCs) also complicates HO decisions, potentially leading to inefficient resource utilization. To optimize this process, we propose an intelligent algorithm based on a fuzzy logic controller (FLC) that leverages prior expertise to dynamically adjust the time-to-trigger (TTT) and handover margin (HOM) in a 5G ultra-dense SC heterogeneous network (HetNet). The FLC refines the TTT based on the user's velocity to improve the response to movement. Simultaneously, it adapts the HOM by considering inputs such as the reference signal received power (RSRP), user equipment (UE) speed, and cell load. The proposed approach enhances HO decisions, thereby improving overall system performance. Evaluation using metrics such as handover rate (HOR), handover failure (HOF), radio link failure (RLF), and handover ping-pong (HOPP) demonstrates the superiority of the proposed algorithm over existing approaches.
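
A toy fuzzy-style controller in the spirit of the velocity-aware TTT adjustment: triangular memberships for slow/medium/fast speeds and weighted-average defuzzification. The membership breakpoints and output TTT values are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ttt_ms(velocity_kmh):
    """Defuzzified TTT in ms: faster UEs should trigger handover sooner."""
    mu = {"slow": tri(velocity_kmh, -1, 0, 30),
          "medium": tri(velocity_kmh, 10, 40, 70),
          "fast": tri(velocity_kmh, 50, 120, 200)}
    ttt_out = {"slow": 480.0, "medium": 160.0, "fast": 40.0}  # assumed rule outputs
    den = sum(mu.values())
    return sum(mu[k] * ttt_out[k] for k in mu) / den if den else 160.0

print(ttt_ms(5), ttt_ms(60), ttt_ms(110))  # TTT shrinks as velocity grows
```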

23 pages, 5651 KiB  
Article
Enhancing Wireless Network Efficiency with the Techniques of Dynamic Distributed Load Balancing: A Distance-Based Approach
by Mustafa Mohammed Hasan Alkalsh and Adrian Kliks
Sensors 2024, 24(16), 5406; https://doi.org/10.3390/s24165406 - 21 Aug 2024
Abstract
The unique combination of the high data rates, ultra-low latency, and massive machine communication capability of 5G networks has facilitated the development of a diverse range of applications with varying connectivity needs. This has led to a surge in data traffic, driven by the ever-increasing number of connected devices, which challenges the load distribution among network cells and degrades wireless network performance. In this context, maintaining network balance during congestion periods requires effective interaction between various network components. This study emphasizes the crucial role that mobility management plays in mitigating uneven load distribution across cells, a significant factor impacting network performance; managing it effectively is essential for ensuring optimal performance in 5G and future networks. To address this challenge, the study investigated the complexities associated with congested cells in wireless networks and proposes a Dynamic Distance-based Load-Balancing (DDLB) algorithm designed to distribute traffic efficiently among contiguous cells and utilize available resources more effectively. The algorithm reacts to congested cells and redistributes their traffic to neighboring cells based on specific network conditions, thereby alleviating congestion and enhancing overall network performance. The results demonstrate that the DDLB algorithm significantly improves key metrics, including load distribution and the rates of handover, radio link failure, handover ping-pong, and failed attach requests.
(This article belongs to the Special Issue Future Wireless Communication Networks (Volume II))
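
A minimal sketch of the distance-based idea behind DDLB: when a cell's load crosses a threshold, its farthest-attached (edge) users are handed over to the nearest neighboring cell with spare capacity. The capacity model and threshold are assumptions, not the algorithm's exact triggers.

```python
import math

CAPACITY, THRESHOLD = 100, 0.8      # assumed per-cell capacity and load threshold

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rebalance(cells, users):
    """cells: {id: (x, y)}; users: list of dicts with 'pos' and 'cell' keys."""
    load = {c: sum(u["cell"] == c for u in users) for c in cells}
    for c, pos in cells.items():
        while load[c] > THRESHOLD * CAPACITY:
            # pick the attached user farthest from this cell (an edge user)
            edge = max((u for u in users if u["cell"] == c),
                       key=lambda u: dist(u["pos"], pos))
            # the nearest other cell with headroom receives the handover
            target = min((t for t in cells
                          if t != c and load[t] < THRESHOLD * CAPACITY),
                         key=lambda t: dist(edge["pos"], cells[t]), default=None)
            if target is None:
                break                # no neighbor can absorb more traffic
            edge["cell"] = target
            load[c] -= 1
            load[target] += 1

cells = {"A": (0, 0), "B": (10, 0)}
users = [{"pos": (i, 0), "cell": "A"} for i in range(90)]
rebalance(cells, users)
print(sum(u["cell"] == "B" for u in users), "users handed over to B")
```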

19 pages, 18726 KiB  
Article
A Small-Object Detection Model Based on Improved YOLOv8s for UAV Image Scenarios
by Jianjun Ni, Shengjie Zhu, Guangyi Tang, Chunyan Ke and Tingting Wang
Remote Sens. 2024, 16(13), 2465; https://doi.org/10.3390/rs16132465 - 5 Jul 2024
Abstract
Small object detection in unmanned aerial vehicle (UAV) image scenarios is a challenging task in the computer vision field. Problems such as dense small objects and background noise in high-altitude aerial photography images require further study. To address these issues, an enhanced YOLOv8s-based model for detecting small objects is presented. The proposed model incorporates a parallel multi-scale feature extraction module (PMSE), which enhances the feature extraction capability for small objects by generating adaptive weights with different receptive fields through parallel dilated convolution and deformable convolution and integrating the generated weight information into shallow feature maps. Then, a scale compensation feature pyramid network (SCFPN) is designed to integrate the spatial feature information derived from the shallow network layers with the semantic information extracted from the higher layers, thereby enhancing the network's feature representation capacity. Furthermore, the largest-object detection layer is removed from the original detection layers and an ultra-small-object detection layer is added, with the objective of improving the network's detection performance for small objects. Finally, the WIoU loss function is employed to balance high- and low-quality samples in the dataset. Experiments conducted on two public datasets show that the proposed model enhances object detection accuracy in UAV image scenarios.
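
For context on the loss term, here is a sketch that computes plain IoU and applies a Wise-IoU-v1-style exponential center-distance weighting; treat the weighting as an approximation of that formulation rather than the authors' exact implementation.

```python
import math

def iou(b1, b2):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-9)

def wiou_like_loss(pred, gt):
    """IoU loss scaled by an exponential center-distance factor (approximation)."""
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # smallest enclosing box dimensions, treated as constants (no gradient)
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r = math.exp(((cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2) / (wg**2 + hg**2))
    return r * (1.0 - iou(pred, gt))

print(wiou_like_loss((0, 0, 10, 10), (2, 2, 12, 12)))
```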

17 pages, 608 KiB  
Article
Optimized Two-Tier Caching with Hybrid Millimeter-Wave and Microwave Communications for 6G Networks
by Muhammad Sheraz, Teong Chee Chuah, Mardeni Bin Roslee, Manzoor Ahmed, Amjad Iqbal and Ala’a Al-Habashna
Appl. Sci. 2024, 14(6), 2589; https://doi.org/10.3390/app14062589 - 20 Mar 2024
Abstract
Data caching is a promising technique to alleviate the data traffic burden on the backhaul and minimize data access delay. However, the cache capacity constraint poses a significant challenge to obtaining content from cache resources, which degrades caching performance. In this paper, we propose a novel two-tier caching mechanism for data caching at the mobile user equipment (UE) and small base station (SBS) levels in ultra-dense 6G heterogeneous networks, reducing data access failures via cache resources. The two-tier caching enables users to retrieve their desired content from cache resources through device-to-device (D2D) communications with neighboring users or from the serving SBS. Cache-enabled UE exploits millimeter-wave (mmWave)-based D2D communications, utilizing line-of-sight (LoS) links for high-speed data transmission to content-demanding mobile UE within a limited connection time. In the event of D2D communication failures, a dual-mode hybrid system combining mmWave and microwave (μWave) technologies is utilized to ensure effective data transmission between the SBS and UE to fulfill users' data demands. In the proposed framework, the data transmission speed is optimized through mmWave signals in LoS conditions; in non-LoS scenarios, the system switches to μWave mode for obstacle-penetrating signal transmission. Subsequently, we propose a reinforcement learning (RL) approach to optimize cache decisions through approximation of the Q action-value function. The proposed technique undergoes iterative learning, adapting to dynamic network conditions to enhance the content placement policy and minimize delay. Extensive simulations demonstrate that our approach significantly reduces network delay compared with benchmark schemes.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
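
An illustrative lookup path for the two-tier scheme: neighboring UEs' caches over D2D are tried first, then the serving SBS cache, then the backhaul. The cache structures and content names are assumptions for illustration.

```python
def fetch(content, neighbor_caches, sbs_cache):
    """Resolve a content request through the two cache tiers, then the backhaul."""
    for neighbor, cache in neighbor_caches.items():
        if content in cache:                 # tier 1: D2D over mmWave LoS link
            return f"D2D hit via {neighbor}"
    if content in sbs_cache:                 # tier 2: serving SBS cache
        return "SBS cache hit (mmWave in LoS, uWave fallback in non-LoS)"
    return "backhaul fetch (cache miss at both tiers)"

print(fetch("video42",
            {"ue7": {"video42", "song3"}, "ue9": {"clip1"}},   # hypothetical caches
            {"video42", "news5"}))
```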

20 pages, 9542 KiB  
Article
Investigation on One-Component Waterborne Epoxy Emulsified Asphalt (OWEEA) Used as Bonding Material
by Hao Wu, Zhezheng Wu, Weimin Song, Dongwei Chen, Mei Yang and Hang Yuan
Buildings 2024, 14(2), 503; https://doi.org/10.3390/buildings14020503 - 12 Feb 2024
Abstract
Due to the issue of weakened adhesion between ultra-thin surface overlays, higher demands have been placed on bonding layer materials in practical engineering. This study proposed a method for preparing a one-component waterborne epoxy resin-modified emulsified asphalt (OWEEA) and explored the impact of different ratios on its performance. The basic physical and mechanical properties of the OWEEA, as well as its rheological characteristics, were investigated through penetration, softening point, ductility, tensile, and dynamic shear rheological tests. Pull-out and shear tests with different substrates were used to evaluate the interfacial bonding performance of the OWEEA as a bonding layer material, with comparative analysis against conventional waterborne epoxy resin-modified emulsified asphalt. Based on microscopic testing and analysis, the physical and chemical changes and secondary curing characteristics of the one-component waterborne epoxy emulsion (OWE) during the modification of emulsified asphalt were elucidated. The results indicated that the OWE prepared in this study significantly enhanced the tensile strength and bonding properties of emulsified asphalt: the tensile strength, bonding strength, and shear strength of the OWEEA increased from 0.15 MPa, 0.36 MPa, and 0.35 MPa to 0.55 MPa, 1.29 MPa, and 2.01 MPa, respectively. The modification effect of the OWEEA surpassed that of conventional waterborne epoxy emulsion, albeit with a reduction in elongation at break, which decreased from 1551% to 98%. Furthermore, the OWEEA showed a distinct secondary curing phenomenon. SEM tests showed that high temperatures accelerated the formation of the crosslinked network structure of the OWE, promoting its integration with emulsified asphalt and resulting in a more uniform and dense structure that significantly enhances bonding strength in a short period. In actual road construction, laying hot-mix asphalt mixtures on the bonding layer can further enhance its curing effect and improve its bonding performance.
(This article belongs to the Section Building Materials, and Repair & Renovation)

16 pages, 2220 KiB  
Article
Assessment of Deep Learning Models for Cutaneous Leishmania Parasite Diagnosis Using Microscopic Images
by Ali Mansour Abdelmula, Omid Mirzaei, Emrah Güler and Kaya Süer
Diagnostics 2024, 14(1), 12; https://doi.org/10.3390/diagnostics14010012 - 20 Dec 2023
Abstract
Cutaneous leishmaniasis (CL) is a common illness that causes skin lesions, principally ulcerations, on exposed regions of the body. Although neglected tropical diseases (NTDs) such as CL are typically found in tropical areas, they have recently become more common along Africa's northern coast, particularly in Libya. The devastation of healthcare infrastructure during the 2011 war and the following conflicts, as well as governmental apathy, may be causal factors associated with this catastrophic event. The main objective of this study is to evaluate alternative diagnostic strategies for recognizing amastigotes of cutaneous leishmaniasis parasites at various stages using convolutional neural networks (CNNs). The research additionally aims to test different classification models employing a dataset of ultra-thin skin smear images from people infected with cutaneous leishmaniasis. The pre-trained deep learning models EfficientNetB0, DenseNet201, ResNet101, MobileNetv2, and Xception are used for the cutaneous leishmania parasite diagnosis task. To assess the models' effectiveness, we employed five-fold cross-validation to guarantee the consistency of the models' outputs across different portions of the full dataset. Following a thorough assessment and comparison of the various models, DenseNet201 proved to be the most suitable choice, attaining a mean accuracy of 0.9914 along with outstanding results for sensitivity, specificity, positive predictive value, negative predictive value, F1-score, Matthews correlation coefficient, and Cohen's kappa coefficient. The DenseNet201 model surpassed the other models based on a comprehensive evaluation of these key classification performance metrics.
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
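
A plain sketch of the five-fold cross-validation protocol used to check model consistency; the model training and scoring steps are placeholders.

```python
import random

def five_fold_indices(n, seed=0):
    """Yield (train, validation) index lists for 5-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]
    for k in range(5):
        val = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train, val

for k, (train_idx, val_idx) in enumerate(five_fold_indices(n=100)):
    # train the classifier on train_idx and score it on val_idx (placeholder)
    print(f"fold {k}: {len(train_idx)} train / {len(val_idx)} validation samples")
```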

2 pages, 173 KiB  
Abstract
Ultra-Processed Food Consumption and Biological Aging in Italian Adults from the Moli-Sani Study Cohort
by Simona Esposito, Alessandro Gialluisi, Augusto Di Castelnuovo, Simona Costanzo, Emilia Ruggiero, Licia Iacoviello and Marialaura Bonaccio
Proceedings 2023, 91(1), 97; https://doi.org/10.3390/proceedings2023091097 - 30 Nov 2023
Abstract
Background and objectives: Chronological age (CA) may not accurately reflect the health status of an individual. Rather, biological age (BA), a hypothetical underlying "functional" age, has been proposed as a relevant indicator of healthy aging. Diets high in polyphenol-rich foods, such as the Mediterranean diet, have been inversely associated with biological aging in several cohorts. However, nutritional content is only one aspect of overall food health potential, and increasing attention should be paid to non-nutrient food characteristics, such as food processing. Ultra-processed foods (UPFs) are mostly industrial formulations designed to maximize palatability and consumption through a combination of calorie-dense ingredients and chemical additives, and they have been consistently associated with increased risk of premature mortality and disease. We therefore examined the association of UPF with biological aging. Methods: We performed a cross-sectional analysis of a sub-cohort of 4510 subjects (aged ≥35 years; 52.0% women) enrolled in the Moli-sani Study (2005–2010). Food intake was assessed using a 188-item food frequency questionnaire. UPF intake was defined according to the Nova classification, calculated as the ratio (%) of UPF (g/d) to total food consumed (g/d), and categorized into sex-specific quintiles. Diet quality was assessed using the Food Standards Agency Nutrient Profiling System (FSAm-NPS) dietary index. A deep neural network approach based on 36 circulating biomarkers was used to compute BA, and the resulting difference (∆age = BA − CA) was tested as the dependent variable in multivariable linear regression analyses including known risk factors. Results: The mean CA (SD) was 55.6 (±11.6) years, BA 54.8 (±8.6) years, and ∆age −0.77 (±7.7). In multivariable-adjusted analyses also including the FSAm-NPS dietary index, higher UPF consumption was directly associated with accelerated biological aging (β = 0.61; 95% CI 0.05 to 1.17 for Q5 vs. Q1). Discussion: High UPF consumption was directly associated with a blood-marker-based measure of biological aging, independent of overall diet quality. These findings suggest that biological aging could be influenced by non-nutrient food characteristics (e.g., altered food matrix, contact materials, and neoformed compounds). Longitudinal studies are warranted to examine whether accelerated biological aging lies on the pathway between UPF consumption and chronic disease onset.
(This article belongs to the Proceedings of The 14th European Nutrition Conference FENS 2023)
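
A synthetic-data sketch of the analysis pipeline: construct ∆age = BA − CA and regress it on UPF quintile dummies (Q1 as reference) plus a covariate via least squares. All data here are synthetic stand-ins, not the Moli-sani cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ca = rng.uniform(35, 80, n)                  # chronological age (synthetic)
ba = ca + rng.normal(0, 7.7, n)              # biological age (synthetic)
delta_age = ba - ca                          # the Delta-age outcome

upf_pct = rng.uniform(0, 30, n)              # UPF share of total food intake (%)
quintile = np.digitize(upf_pct, np.quantile(upf_pct, [0.2, 0.4, 0.6, 0.8]))
covariate = rng.normal(size=n)               # stand-in for known risk factors

# design matrix: intercept, Q2..Q5 dummies (Q1 as reference), covariate
X = np.column_stack([np.ones(n)]
                    + [(quintile == q).astype(float) for q in range(1, 5)]
                    + [covariate])
beta, *_ = np.linalg.lstsq(X, delta_age, rcond=None)
print("Q5 vs Q1 coefficient:", beta[4])      # near zero: synthetic data carry no effect
```
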
31 pages, 3954 KiB  
Review
A Review on Congestion Mitigation Techniques in Ultra-Dense Wireless Sensor Networks: State-of-the-Art Future Emerging Artificial Intelligence-Based Solutions
by Abdullah Umar, Zubair Khalid, Mohammed Ali, Mohammed Abazeed, Ali Alqahtani, Rahat Ullah and Hashim Safdar
Appl. Sci. 2023, 13(22), 12384; https://doi.org/10.3390/app132212384 - 16 Nov 2023
Abstract
The Internet of Things (IoT) and wireless sensor networks (WSNs) have evolved rapidly due to technological breakthroughs. WSNs generate high traffic due to the growing number of sensor nodes. Congestion is one of several problems caused by the huge amount of data in WSNs: when wireless network resources are limited and IoT devices require ever more resources, congestion occurs in extremely dense WSN-based IoT networks. Reduced throughput, reduced network capacity, and reduced energy efficiency are all effects of congestion within WSNs. These consequences eventually lead to network outages through underutilized network resources, increased network operating costs, and significantly degraded quality of service (QoS). It is therefore critical to deal with congestion in WSN-based IoT networks. Researchers have developed a number of approaches to address this problem, with new solutions based on artificial intelligence (AI) standing out. This research examines how new AI-based algorithms contribute to congestion mitigation in WSN-based IoT networks and surveys the various strategies that have helped reduce congestion. The study also highlights the limitations of AI-based solutions, including where and why they are used in WSNs, and offers a comparative study of the current literature, which distinguishes this review from earlier work. The study concludes with a discussion of its significance and potential future research topics. The surveyed state-of-the-art and emerging AI-based solutions for congestion reduction in ultra-dense WSN-based IoT networks contribute to network optimization, throughput enhancement, QoS improvement, network capacity expansion, and overall WSN efficiency.

24 pages, 810 KiB  
Article
Residual Energy-Based Computation Efficiency Maximization in Dense Edge Computing Systems
by Shie Wu, Xiaolin Li, Ningfei Dong and Xia Liu
Electronics 2023, 12(21), 4429; https://doi.org/10.3390/electronics12214429 - 27 Oct 2023
Abstract
With the rapid development of 5G, artificial intelligence, the Internet of Things (IoT), and other technologies, the number of intelligent terminal devices is growing explosively, bringing huge challenges to existing communication networks and the cloud computing service model. The dense edge computing system (DECS), which combines mobile edge computing (MEC) with an ultra-dense network (UDN), has the potential to significantly reduce communication latency and enhance the quality of experience (QoE) of user equipments (UEs). In this paper, to achieve energy-efficient MEC, computation efficiency (CE) is maximized by jointly optimizing computation offloading, subchannel allocation, and power allocation, which yields a challenging non-convex problem. Specifically, due to the heterogeneity of UE battery capacities and residual energy, the residual energy of UEs should be taken into consideration to achieve better QoE. We therefore develop a residual energy-based computation efficiency (RECE) optimization scheme to maximize CE, in which the optimization problem is divided into three subproblems. Firstly, the computation offloading subproblem is addressed by a many-to-one matching strategy. Secondly, the subchannel allocation subproblem is handled with a graph coloring algorithm. Finally, the power allocation subproblem is solved by the concave–convex procedure (CCCP) method. The numerical results illustrate that UEs' CE can be optimized based on their residual energy in the proposed RECE scheme. Additionally, compared with a scheme that ignores UE residual energy, the RECE scheme greatly enhances system CE and significantly reduces UE energy consumption.
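
A greedy graph-coloring sketch for the subchannel-allocation subproblem: UEs joined by an edge of the interference graph must receive different subchannels (colors). The toy graph and degree-ordering heuristic are illustrative, not the paper's exact algorithm.

```python
def greedy_coloring(adj):
    """adj: {node: set(neighbors)} -> {node: color index (subchannel)}."""
    color = {}
    for node in sorted(adj, key=lambda n: -len(adj[n])):  # highest degree first
        used = {color[nb] for nb in adj[node] if nb in color}
        c = 0
        while c in used:             # smallest color unused by any neighbor
            c += 1
        color[node] = c
    return color

interference = {"ue1": {"ue2", "ue3"}, "ue2": {"ue1"},
                "ue3": {"ue1", "ue4"}, "ue4": {"ue3"}}
print(greedy_coloring(interference))  # interfering UEs never share a color
```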
