Search Results (757)

Search Parameters:
Keywords = adaptive graph

32 pages, 943 KiB  
Review
Advancements in Sensor Fusion for Underwater SLAM: A Review on Enhanced Navigation and Environmental Perception
by Fomekong Fomekong Rachel Merveille, Baozhu Jia, Zhizun Xu and Bissih Fred
Sensors 2024, 24(23), 7490; https://doi.org/10.3390/s24237490 (registering DOI) - 24 Nov 2024
Abstract
Underwater simultaneous localization and mapping (SLAM) faces significant challenges due to the complexities of underwater environments, marked by limited visibility, variable conditions, and restricted global positioning system (GPS) availability. This study provides a comprehensive analysis of sensor fusion techniques in underwater SLAM, highlighting the combination of proprioceptive and exteroceptive sensors to improve the navigational accuracy and system resilience of unmanned underwater vehicles (UUVs). Essential sensors, including inertial measurement units (IMUs), Doppler velocity logs (DVLs), cameras, sonar, and LiDAR (light detection and ranging), are examined for their contributions to navigation and perception. Fusion methodologies, such as Kalman filters, particle filters, and graph-based SLAM, are evaluated for their benefits, limitations, and computational demands. Additionally, emerging technologies such as quantum sensors and AI-driven filtering techniques are examined for their potential to enhance SLAM precision and adaptability. Case studies demonstrate practical applications and analyze the trade-offs between accuracy, computational requirements, and adaptability to environmental changes. The paper concludes with future directions, stressing the need for advanced filtering and machine learning to address sensor drift, noise, and environmental unpredictability, thereby improving autonomous underwater navigation through reliable sensor fusion. Full article
(This article belongs to the Section Navigation and Positioning)
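The abstract above surveys fusion back-ends such as Kalman filters without detailing them. As a minimal illustration only (not the authors' implementation), the sketch below shows a 1D Kalman filter fusing noisy velocity readings, such as those a DVL might report, into a smoothed estimate; the noise parameters q and r are arbitrary assumptions.

```python
import numpy as np

def kalman_fuse(z_meas, x0=0.0, p0=1.0, q=0.01, r=0.25):
    """Minimal 1D Kalman filter: fuse a stream of noisy velocity
    measurements (e.g., from a DVL) into a smoothed estimate."""
    x, p = x0, p0
    estimates = []
    for z in z_meas:
        # Predict: constant-velocity model, process noise q
        p = p + q
        # Update: blend prediction with measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Toy usage: a true velocity of 1.2 m/s observed with Gaussian noise
rng = np.random.default_rng(0)
noisy = 1.2 + 0.5 * rng.standard_normal(50)
print(kalman_fuse(noisy)[-1])  # converges toward 1.2
```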
19 pages, 1232 KiB  
Article
Bridge Digital Twin for Practical Bridge Operation and Maintenance by Integrating GIS and BIM
by Yan Gao, Guanyu Xiong, Ziyu Hu, Chengzhang Chai and Haijiang Li
Buildings 2024, 14(12), 3731; https://doi.org/10.3390/buildings14123731 (registering DOI) - 23 Nov 2024
Viewed by 318
Abstract
As an emerging technology, digital twin (DT) is increasingly valued in bridge management for its potential to optimize asset operation and maintenance (O&M). However, traditional bridge management systems (BMS) and existing DT applications typically rely on standalone building information modeling (BIM) or geographic information system (GIS) platforms, with limited integration between BIM and GIS or consideration for their underlying graph structures. This study addresses these limitations by developing an integrated DT system that combines WebGIS, WebBIM, and graph algorithms within a three-layer architecture. The system design includes a common data environment (CDE) to address cross-platform compatibility, enabling real-time monitoring, drone-enabled inspection, maintenance planning, traffic diversion, and logistics optimization. Additionally, it features an adaptive data structure incorporating JSON-based bridge defect information modeling and triple-based roadmap graphs to streamline data management and decision-making. This comprehensive approach demonstrates the potential of DTs to enhance bridge O&M efficiency, safety, and decision-making. Future research will focus on further improving cross-platform interoperability to expand DT applications in infrastructure management. Full article
(This article belongs to the Special Issue Towards More Practical BIM/GIS Integration)
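The abstract mentions JSON-based bridge defect information modeling without giving a schema. The record below is a purely hypothetical example of what such a defect entry could look like; all field names and values are assumptions for illustration, not the paper's data model.

```python
import json

# Hypothetical defect record; the schema is illustrative, not the paper's.
defect = {
    "bridge_id": "B-001",
    "component": "girder G3",
    "defect_type": "crack",
    "severity": "moderate",
    "length_mm": 120,
    "inspected_on": "2024-10-15",
    "inspection_method": "drone",
    "location": {"lat": 51.48, "lon": -3.18},  # WGS84 point linking the BIM element to GIS
}
print(json.dumps(defect, indent=2))
```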
32 pages, 6565 KiB  
Article
Sparse Feature-Weighted Double Laplacian Rank Constraint Non-Negative Matrix Factorization for Image Clustering
by Hu Ma, Ziping Ma, Huirong Li and Jingyu Wang
Mathematics 2024, 12(23), 3656; https://doi.org/10.3390/math12233656 - 22 Nov 2024
Viewed by 221
Abstract
As an extension of non-negative matrix factorization (NMF), graph-regularized non-negative matrix factorization (GNMF) has been widely applied in data mining and machine learning, particularly for tasks such as clustering and feature selection. Traditional GNMF methods typically rely on predefined graph structures to guide the decomposition process, using fixed data graphs and feature graphs to capture relationships between data points and features. However, these fixed graphs may limit the model’s expressiveness. Additionally, many NMF variants face challenges when dealing with complex data distributions and are vulnerable to noise and outliers. To overcome these challenges, we propose a novel method called sparse feature-weighted double Laplacian rank constraint non-negative matrix factorization (SFLRNMF), along with its extended version, SFLRNMTF. These methods adaptively construct more accurate data similarity and feature similarity graphs, while imposing rank constraints on the Laplacian matrices of these graphs. This rank constraint ensures that the resulting matrix ranks reflect the true number of clusters, thereby improving clustering performance. Moreover, we introduce a feature weighting matrix into the original data matrix to reduce the influence of irrelevant features and apply an L2,1/2 norm sparsity constraint in the basis matrix to encourage sparse representations. An orthogonal constraint is also enforced on the coefficient matrix to ensure interpretability of the dimensionality reduction results. In the extended model (SFLRNMTF), we introduce a double orthogonal constraint on the basis matrix and coefficient matrix to enhance the uniqueness and interpretability of the decomposition, thereby facilitating clearer clustering results for both rows and columns. However, enforcing double orthogonal constraints can reduce approximation accuracy, especially with low-rank matrices, as it restricts the model’s flexibility. To address this limitation, we introduce an additional factor matrix R, which acts as an adaptive component that balances the trade-off between constraint enforcement and approximation accuracy. This adjustment allows the model to achieve greater representational flexibility, improving reconstruction accuracy while preserving the interpretability and clustering clarity provided by the double orthogonality constraints. Consequently, the SFLRNMTF approach becomes more robust in capturing data patterns and achieving high-quality clustering results in complex datasets. We also propose an efficient alternating iterative update algorithm to optimize the proposed model and provide a theoretical analysis of its performance. Clustering results on four benchmark datasets demonstrate that our method outperforms competing approaches. Full article
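For readers unfamiliar with the graph-regularized NMF baseline that SFLRNMF extends, the sketch below implements the classical (Cai et al.-style) GNMF multiplicative updates for the objective ||X - U V^T||^2 + lam * tr(V^T L V). It is a simplified baseline with arbitrary toy data, not the proposed SFLRNMF/SFLRNMTF algorithm.

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, eps=1e-9, seed=0):
    """Graph-regularized NMF via multiplicative updates: X ~= U @ V.T with
    a graph penalty lam * tr(V.T @ L @ V), where L = D - W.
    X: (m, n) non-negative data; W: (n, n) non-negative sample affinity."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Toy usage: 10 features, 20 samples, a simple path-graph affinity, 3 factors
X = np.random.default_rng(1).random((10, 20))
W = np.eye(20, k=1) + np.eye(20, k=-1)
U, V = gnmf(X, W, k=3)
print(np.linalg.norm(X - U @ V.T))  # reconstruction error after fitting
```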
16 pages, 7450 KiB  
Article
Latent Graph Attention for Spatial Context in Light-Weight Networks: Multi-Domain Applications in Visual Perception Tasks
by Ayush Singh, Yash Bhambhu, Himanshu Buckchash, Deepak K. Gupta and Dilip K. Prasad
Appl. Sci. 2024, 14(22), 10677; https://doi.org/10.3390/app142210677 - 19 Nov 2024
Viewed by 263
Abstract
Global contexts in images are quite valuable in image-to-image translation problems. Conventional attention-based and graph-based models capture the global context to a large extent; however, they are computationally expensive. Moreover, existing approaches are limited to learning only the pairwise semantic relation between any two points in the image. In this paper, we present Latent Graph Attention (LGA), a computationally inexpensive (linear in the number of nodes) and stable modular framework for incorporating the global context into existing architectures. This framework particularly empowers small-scale architectures to achieve performance closer to that of large architectures, making light-weight architectures more useful for edge devices with lower compute power and lower energy needs. LGA propagates information spatially using a network of locally connected graphs, thereby facilitating the construction of a semantically coherent relation between any two spatially distant points that also takes into account the influence of the intermediate pixels. Moreover, the depth of the graph network can be used to adapt the extent of contextual spread to the target dataset, allowing explicit control of the added computational cost. To enhance the learning mechanism of LGA, we also introduce a novel contrastive loss term that helps the LGA module couple well with the original architecture at the cost of minimal additional computational load. We show that incorporating LGA improves performance in three challenging applications, namely transparent object segmentation, image restoration for dehazing, and optical flow estimation. Full article
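As rough intuition for propagating context through locally connected graphs (not the LGA module itself), the toy sketch below repeatedly averages each pixel's features with its 4-connected neighbors, so distant locations influence each other only through intermediate pixels; the feature map and number of steps are arbitrary assumptions.

```python
import numpy as np

def propagate_local(feat, steps=3):
    """Average each pixel's feature vector with its 4-connected neighbors,
    repeated `steps` times, so information spreads via intermediate pixels."""
    f = feat.astype(float)
    for _ in range(steps):
        padded = np.pad(f, ((1, 1), (1, 1), (0, 0)), mode="edge")
        f = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    return f

img_feat = np.random.default_rng(0).random((32, 32, 16))   # toy feature map
print(propagate_local(img_feat, steps=4).shape)             # (32, 32, 16)
```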
19 pages, 2768 KiB  
Article
Reinforcement-Learning-Based Edge Offloading Orchestration in Computing Continuum
by Ioana Ramona Martin, Gabriel Ioan Arcas and Tudor Cioara
Computers 2024, 13(11), 295; https://doi.org/10.3390/computers13110295 - 14 Nov 2024
Viewed by 420
Abstract
AI-driven applications and the large volumes of data generated by IoT devices connected to large-scale utility infrastructures pose significant operational challenges, including increased latency, communication overhead, and computational imbalances. Addressing these challenges requires shifting workloads from the cloud to the edge and across the entire computing continuum. However, achieving this raises further challenges, particularly in decision making to manage the trade-offs associated with workload offloading. In this paper, we propose a task-offloading solution that uses Reinforcement Learning (RL) to dynamically balance workloads and reduce overloads. We chose the Deep Q-Learning algorithm and adapted it to our workload offloading problem. The reward system considers the node's computational state and type to increase the utilization of computational resources while minimizing latency and bandwidth utilization. A knowledge graph model of the computing continuum infrastructure is used to address environment modeling challenges and facilitate RL. The learning agent's performance was evaluated using different hyperparameter configurations and varying episode lengths and knowledge graph model sizes. The results show that a low, steady learning rate and a large buffer size are important for a good learning experience. The agent also offers strong convergence, with relevant workload task and node pairs identified after each learning episode, and demonstrates good scalability, as the number of offloading pairs and actions increases with the size of the knowledge graph and the episode count. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
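The paper adapts Deep Q-Learning to offloading over a knowledge graph; as a much-simplified stand-in, the sketch below runs tabular Q-learning on a toy offloading decision. The state and action sets and the cost model are invented for illustration and do not reflect the paper's reward design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3          # e.g., discretized load levels x {local, edge, cloud}
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment with a placeholder cost: reward is highest when the
    chosen action 'matches' the node's load level; next state is random."""
    reward = -abs(state - action)
    next_state = rng.integers(n_states)
    return reward, next_state

state = rng.integers(n_states)
for _ in range(5000):
    if rng.random() < epsilon:                   # epsilon-greedy exploration
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    reward, nxt = step(state, action)
    # Q-learning update rule
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(np.argmax(Q, axis=1))  # greedy offloading decision per load level
```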
17 pages, 2994 KiB  
Article
HGNN-BRFE: Heterogeneous Graph Neural Network Model Based on Region Feature Extraction
by Yufei Zhao, Shixiao Xu and Hua Duan
Electronics 2024, 13(22), 4447; https://doi.org/10.3390/electronics13224447 - 13 Nov 2024
Viewed by 393
Abstract
With their strong capability to accurately model various types of nodes and their interactions, heterogeneous graphs have gradually become a research hotspot, promoting the rapid development of heterogeneous graph neural networks (HGNNs). However, most existing HGNN models rely on meta-paths for feature extraction, which use only part of the data in the graph for training and learning. This not only limits the generalization ability of deep learning models but also affects the effectiveness of data-driven adaptive techniques. In response to this challenge, this study proposes a new model, the heterogeneous graph neural network based on regional feature extraction (HGNN-BRFE). This model enhances performance through an "extraction-fusion" strategy in three key aspects: first, it efficiently extracts features of neighboring nodes of the same type within specific regions; second, it effectively fuses information from different regions and hierarchical neighbors using attention mechanisms; third, it uses a specially designed feature extraction and fusion process for heterogeneous-type nodes, retaining the rich semantic and heterogeneity information of the heterogeneous graph while preserving each node's own characteristics during embedding, thereby preventing the loss of node-specific features and potential over-smoothing. Experimental results show that HGNN-BRFE achieves a performance improvement of 1–3% over existing methods on classification tasks across multiple real-world datasets. Full article
22 pages, 1696 KiB  
Article
Learning A-Share Stock Recommendation from Stock Graph and Historical Price Simultaneously
by Hanyang Chen, Tian Wang, Jessada Konpang and Adisorn Sirikham
Electronics 2024, 13(22), 4427; https://doi.org/10.3390/electronics13224427 - 12 Nov 2024
Viewed by 445
Abstract
The Chinese stock market, marked by rapid growth and significant volatility, presents unique challenges for investors and analysts. A-share stocks, traded on the Shanghai and Shenzhen exchanges, are crucial to China’s financial system and offer opportunities for both domestic and international investors. Accurate stock recommendation tools are vital for informed decision making, especially given the ongoing regulatory changes and economic reforms in China. Current stock recommendation methods often fall short, as they typically fail to capture the complex inter-company relationships and rely heavily on financial reports, neglecting the potential of unlabeled data and historical price trends. In response, we propose a novel approach that combines graph-based structures with historical price data to develop self-learned stock embeddings for A-share recommendations. Our method leverages self-supervised learning, bypassing the need for human-generated labels and autonomously uncovering latent relationships and patterns within the data. This dual-input strategy enhances the understanding of market dynamics, leading to more accurate stock predictions. Our contributions include a novel framework for label-free stock recommendations that models stock connections and pricing information, and empirical evidence demonstrating the robustness and adaptability of our approach in the volatile Chinese stock market. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
13 pages, 1246 KiB  
Article
Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning
by Yadira Jazmín Pérez Castillo, Sandra Dinora Orantes Jiménez and Patricio Orlando Letelier Torres
Information 2024, 15(11), 726; https://doi.org/10.3390/info15110726 - 12 Nov 2024
Viewed by 456
Abstract
Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace of work in Sprints is particularly important. In this work, a data model was developed to evaluate the progress and pace of work, based on the visual interpretation of numerical data from certain graphs that allow tracking, such as the Burndown chart. Additionally, experiments with machine learning algorithms were carried out to validate the effectiveness and potential improvements facilitated by this dataset development. Full article
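As a small worked example of the kind of Burndown-chart quantities such a model interprets (the data and formulas here are illustrative, not the paper's dataset), the snippet below computes Sprint velocity and compares the ideal and observed burndown slopes.

```python
# Hypothetical Sprint data: remaining story points at the end of each day.
remaining = [40, 36, 33, 30, 24, 20, 15, 9, 6, 4]   # 10 data points, 9 elapsed days
days = len(remaining) - 1

velocity = remaining[0] - remaining[-1]              # story points completed in the Sprint
ideal_rate = remaining[0] / days                     # ideal burndown slope (finish at zero)
actual_rate = velocity / days                        # observed average burndown slope

print(f"velocity = {velocity} points, ideal = {ideal_rate:.2f}/day, actual = {actual_rate:.2f}/day")
```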
19 pages, 5235 KiB  
Article
Study on Quality Assessment Methods for Enhanced Resolution Graph-Based Reconstructed Images in 3D Capacitance Tomography
by Robert Banasiak, Mateusz Bujnowicz and Anna Fabijańska
Appl. Sci. 2024, 14(22), 10222; https://doi.org/10.3390/app142210222 - 7 Nov 2024
Viewed by 336
Abstract
This paper proposes a novel approach to assessing the quality of 3D Electrical Capacitance Tomography (ECT) images. Such images are typically represented as irregular graphs. Thus, image quality metrics typically used with raster images do not straightforwardly apply to them. However, given the recent advancements in Graph Convolutional Neural Networks (GCNs) for improving ECT image reconstruction, reliable Quality Assessment methods are essential for comparing the performance of different GCN models. To address this need, this paper applied some existing image quality and similarity assessment methods designed for raster images to the graph-based representation of 3D ECT images. Specifically, attention was paid to the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and measures based on image histograms. The proposed adaptations resulted in the development of tailored Graph Quality Assessment (GQA) techniques specifically designed for the graph-based nature of ECT images. The proposed GQA techniques were validated on 1042 phantoms and their corresponding Low-Quality (LQ) and High-Quality (HQ) reconstructions through a robust GQA benchmarking system, enabling a systematic comparison of various GQA methods. The evaluation of the proposed methods’ performances across this diverse dataset, by analyzing overall trends and specific case studies, is presented and discussed. Finally, we present our conclusions regarding the effectiveness of the proposed GQA methods, and we identify the most promising approach for assessing the quality of graph-based ECT images. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
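As an illustration of how a raster metric can be carried over to graph-based images, the sketch below computes PSNR directly over the per-node values of a graph signal. This is the standard PSNR definition applied to node vectors, not necessarily the exact GQA variant proposed in the paper, and the phantom data are synthetic.

```python
import numpy as np

def graph_psnr(x_ref, x_rec, data_range=None):
    """PSNR computed over node values of a graph signal instead of raster pixels.
    x_ref, x_rec: 1-D arrays of per-node values (e.g., permittivity)."""
    x_ref, x_rec = np.asarray(x_ref, float), np.asarray(x_rec, float)
    if data_range is None:
        data_range = x_ref.max() - x_ref.min()
    mse = np.mean((x_ref - x_rec) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

ref = np.linspace(1.0, 3.0, 1000)                                 # synthetic phantom node values
rec = ref + 0.05 * np.random.default_rng(0).standard_normal(1000)  # noisy reconstruction
print(f"{graph_psnr(ref, rec):.1f} dB")
```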
18 pages, 5160 KiB  
Article
DPFANet: Deep Point Feature Aggregation Network for Classification of Irregular Objects in LIDAR Point Clouds
by Shuming Zhang and Dali Xu
Electronics 2024, 13(22), 4355; https://doi.org/10.3390/electronics13224355 - 6 Nov 2024
Viewed by 408
Abstract
Point cloud data acquired by scanning with Light Detection and Ranging (LiDAR) devices typically contain irregular objects, such as trees, which lead to low classification accuracy in existing point cloud classification methods. Consequently, this paper proposes a deep point feature aggregation network (DPFANet) that integrates adaptive graph convolution and space-filling curve sampling modules to effectively address the feature extraction problem for irregular object point clouds. To refine the feature representation, we utilize the affinity matrix to quantify inter-channel relationships and adjust the input feature matrix accordingly, thereby improving the classification accuracy of the object point cloud. To validate the effectiveness of the proposed approach, a TreeNet dataset was created, comprising four categories of tree point clouds derived from publicly available UAV point cloud data. The experimental findings illustrate that the model attains a mean accuracy of 91.4% on the ModelNet40 dataset, comparable to prevailing state-of-the-art techniques. When applied to the more challenging TreeNet dataset, the model achieves a mean accuracy of 88.0%, surpassing existing state-of-the-art methods in all classification metrics. These results underscore the high potential of the model for point cloud classification of irregular objects. Full article
(This article belongs to the Special Issue Point Cloud Data Processing and Applications)
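Graph convolution on point clouds typically starts from a local neighborhood graph. As a minimal, brute-force illustration (not DPFANet's adaptive graph-convolution module), the sketch below builds k-nearest-neighbor indices for a random point cloud; the point count and k are arbitrary.

```python
import numpy as np

def knn_indices(points, k=16):
    """Indices of the k nearest neighbors of every point (brute force).
    points: (N, 3) array; returns (N, k) neighbor indices, excluding each point itself."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

pts = np.random.default_rng(0).random((1024, 3)).astype(np.float32)
nbrs = knn_indices(pts, k=16)
print(nbrs.shape)   # (1024, 16): the local graph a graph-conv layer would aggregate over
```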
22 pages, 2513 KiB  
Article
CURATE: Scaling-Up Differentially Private Causal Graph Discovery
by Payel Bhattacharjee and Ravi Tandon
Entropy 2024, 26(11), 946; https://doi.org/10.3390/e26110946 - 5 Nov 2024
Viewed by 384
Abstract
Causal graph discovery (CGD) is the process of estimating the underlying probabilistic graphical model that represents the joint distribution of features of a dataset. CGD algorithms are broadly classified into two categories: (i) constraint-based algorithms, where the outcome depends on conditional independence (CI) tests, and (ii) score-based algorithms, where the outcome depends on an optimized score function. Because sensitive features of observational data are prone to privacy leakage, differential privacy (DP) has been adopted to ensure user privacy in CGD. Adding the same amount of noise at every step of this sequential estimation process affects the predictive performance of the algorithms. Initial CI tests in constraint-based algorithms and later iterations of the optimization process in score-based algorithms are crucial; thus, they need to be more accurate and less noisy. Based on this key observation, we present CURATE (CaUsal gRaph AdapTivE privacy), a DP-CGD framework with adaptive privacy budgeting. In contrast to existing DP-CGD algorithms with uniform privacy budgeting across all iterations, CURATE allows for adaptive privacy budgeting by minimizing the error probability (constraint-based) and maximizing the number of optimization iterations (score-based) while keeping the cumulative privacy leakage bounded. To validate our framework, we present a comprehensive set of experiments on several datasets and show that CURATE achieves higher utility than existing DP-CGD algorithms with less privacy leakage. Full article
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
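For intuition on adaptive privacy budgeting, the sketch below applies the standard Laplace mechanism with a hypothetical geometric budget schedule that spends more of the total privacy budget on early CI tests; the schedule, sensitivity, and statistics are assumptions for illustration, not CURATE's optimized allocation.

```python
import numpy as np

def laplace_mech(value, sensitivity, epsilon, rng):
    """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + rng.laplace(scale=sensitivity / epsilon)

# Hypothetical adaptive schedule: geometrically decaying weights give early,
# error-critical CI tests a larger share of the total budget.
total_eps, n_tests = 1.0, 8
weights = np.array([2.0 ** -i for i in range(n_tests)])
eps_per_test = total_eps * weights / weights.sum()     # per-test budgets sum to total_eps

rng = np.random.default_rng(0)
true_stats = np.full(n_tests, 0.3)                     # placeholder CI test statistics
noisy = [laplace_mech(s, sensitivity=1.0, epsilon=e, rng=rng)
         for s, e in zip(true_stats, eps_per_test)]
print(np.round(eps_per_test, 3), np.round(noisy, 2))
```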
13 pages, 380 KiB  
Article
TEA-GCN: Transformer-Enhanced Adaptive Graph Convolutional Network for Traffic Flow Forecasting
by Xiaxia He, Wenhui Zhang, Xiaoyu Li and Xiaodan Zhang
Sensors 2024, 24(21), 7086; https://doi.org/10.3390/s24217086 - 4 Nov 2024
Viewed by 615
Abstract
Traffic flow forecasting is crucial for improving urban traffic management and reducing resource consumption. Accurate traffic conditions prediction requires capturing the complex spatial-temporal dependencies inherent in traffic data. Traditional spatial-temporal graph modeling methods often rely on fixed road network structures, failing to account for the dynamic spatial correlations that vary over time. To address this, we propose a Transformer-Enhanced Adaptive Graph Convolutional Network (TEA-GCN) that alternately learns temporal and spatial correlations in traffic data layer-by-layer. Specifically, we design an adaptive graph convolutional module to dynamically capture implicit road dependencies at different time levels and a local-global temporal attention module to simultaneously capture long-term and short-term temporal dependencies. Experimental results on two public traffic datasets demonstrate the effectiveness of the proposed model compared to other state-of-the-art traffic flow prediction methods. Full article
(This article belongs to the Special Issue Data and Network Analytics in Transportation Systems)
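A common way to realize an adaptive adjacency matrix is to derive it from learned node embeddings (Graph WaveNet-style). The sketch below shows that general idea with random embeddings standing in for trained parameters; it illustrates the technique, not TEA-GCN's exact module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_nodes, d = 6, 8
E1 = rng.standard_normal((n_nodes, d))   # source node embeddings (trainable in a real model)
E2 = rng.standard_normal((n_nodes, d))   # target node embeddings (trainable in a real model)

# Self-adaptive adjacency learned from node embeddings, row-normalized by softmax
A_adapt = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)

X = rng.standard_normal((n_nodes, 4))    # node features (e.g., recent traffic readings)
W = rng.standard_normal((4, 4))          # layer weights
H = np.maximum(A_adapt @ X @ W, 0.0)     # one adaptive graph-convolution layer
print(A_adapt.round(2))
```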
19 pages, 7491 KiB  
Article
An Improved Vital Signal Extraction Method Based on Laser Doppler Effect
by Yu Li, Haiyang Zhang, Bowen Zhang, Yujiao Qi and Si Chen
Sensors 2024, 24(21), 7027; https://doi.org/10.3390/s24217027 - 31 Oct 2024
Viewed by 392
Abstract
The mixed respiratory and heartbeat waveform detected by the laser Doppler system is processed with an intermediate-frequency (IF) interference filtering method, an enhanced extraction method, and a waveform-fixing method. To filter the IF interference signals and the noise scatters in the time-frequency graph, a filtering method based on coefficient of variation (CoV) values and an enhanced curve extraction method based on noise-scatter theory are used for vital signal analysis. To decouple the respiratory and heartbeat signals in the time domain, a waveform-fixing method based on second-order difference theory is used. These methods are implemented as an algorithm and applied in computer simulations and laboratory experiments. The results show that the methods can extract the mixed waveforms and identify respiratory and heart rates in real experimental data. The IF interference signal can be filtered adaptively, and the accuracy of the estimated rates is improved to about 95%. Full article
(This article belongs to the Section Optical Sensors)
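The coefficient of variation used for interference filtering is simply the ratio of standard deviation to mean. The sketch below contrasts a stable component with an erratic one; the signals are synthetic stand-ins, not laser Doppler data.

```python
import numpy as np

def coefficient_of_variation(x):
    """CoV = standard deviation / mean; a high CoV can flag unstable,
    interference-like bins in a time-frequency representation."""
    x = np.asarray(x, float)
    return np.std(x) / np.mean(x)

rng = np.random.default_rng(0)
steady_bin = 1.0 + 0.02 * rng.standard_normal(200)   # stable, vital-sign-like component
erratic_bin = np.abs(rng.standard_normal(200))       # erratic, interference-like component
print(coefficient_of_variation(steady_bin), coefficient_of_variation(erratic_bin))
```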
25 pages, 6970 KiB  
Article
Urban Land Use Classification Model Fusing Multimodal Deep Features
by Yougui Ren, Zhiwei Xie and Shuaizhi Zhai
ISPRS Int. J. Geo-Inf. 2024, 13(11), 378; https://doi.org/10.3390/ijgi13110378 - 30 Oct 2024
Viewed by 607
Abstract
Urban land use classification plays a significant role in urban studies and provides key guidance for urban development. However, existing methods predominantly rely on either raster structure deep features through convolutional neural networks (CNNs) or topological structure deep features through graph neural networks (GNNs), making it challenging to comprehensively capture the rich semantic information in remote sensing images. To address this limitation, we propose a novel urban land use classification model by integrating both raster and topological structure deep features to enhance the accuracy and robustness of the classification model. First, we divide the urban area into block units based on road network data and further subdivide these units using the fractal network evolution algorithm (FNEA). Next, the K-nearest neighbors (KNN) graph construction method with adaptive fusion coefficients is employed to generate both global and local graphs of the blocks and sub-units. The spectral features and subgraph features are then constructed, and a graph convolutional network (GCN) is utilized to extract the node relational features from both the global and local graphs, forming the topological structure deep features while aggregating local features into global ones. Subsequently, VGG-16 (Visual Geometry Group 16) is used to extract the image convolutional features of the block units, obtaining the raster structure deep features. Finally, the transformer is used to fuse both topological and raster structure deep features, and land use classification is completed using the softmax function. Experiments were conducted using high-resolution Google images and Open Street Map (OSM) data, with study areas on the third ring road of Shenyang and the fourth ring road of Chengdu. The results demonstrate that the proposed method improves the overall accuracy and Kappa coefficient by 9.32% and 0.17, respectively, compared to single deep learning models. Incorporating subgraph structure features further enhances the overall accuracy and Kappa by 1.13% and 0.1. The adaptive KNN graph construction method achieves accuracy comparable to that of the empirical threshold method. This study enables accurate large-scale urban land use classification with reduced manual intervention, improving urban planning efficiency. The experimental results verify the effectiveness of the proposed method, particularly in terms of classification accuracy and feature representation completeness. Full article
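For readers unfamiliar with the GCN building block used to extract node relational features, the sketch below implements one symmetrically normalized graph-convolution layer on a toy block-adjacency graph; the adjacency matrix and feature sizes are invented for illustration and are not the paper's data.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],                        # toy adjacency of 4 block units
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 5))                    # per-block spectral features
W = rng.standard_normal((5, 3))                    # layer weights
print(gcn_layer(A, H, W).shape)                    # (4, 3) node relational features
```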
12 pages, 2304 KiB  
Article
L-GraphSAGE: A Graph Neural Network-Based Approach for IoV Application Encrypted Traffic Identification
by Shihe Zhang, Ruidong Chen, Jingxue Chen, Yukun Zhu, Manyuan Hua, Jiaying Yuan and Fenghua Xu
Electronics 2024, 13(21), 4222; https://doi.org/10.3390/electronics13214222 - 28 Oct 2024
Viewed by 537
Abstract
The Internet of Vehicles (IoV), which plays a crucial role in smart transportation systems, has advanced significantly through all kinds of in-vehicle devices supporting autonomous driving, in-vehicle infotainment, and related applications. With the development of these IoV devices, the complexity and volume of in-vehicle data flows have increased dramatically. Adapting to these changes for secure and smart transportation requires encrypted communication, real-time decision making, enhanced traffic management, and improved overall transportation efficiency. However, the security of traffic systems under encrypted communication remains inadequate, as attackers can identify in-vehicle devices through fingerprinting attacks, causing potential privacy breaches. Moreover, existing IoV models for encrypted traffic identification are weak and often generalize poorly in dynamic scenarios where route switching and TCP congestion occur frequently. In this paper, we propose LineGraph-GraphSAGE (L-GraphSAGE), a graph neural network (GNN) model designed to improve the generalization ability of IoV application traffic identification in these dynamic scenarios. L-GraphSAGE utilizes node features, including text attributes, node context information, and node degree, to learn hyperparameters that can be transferred to unknown nodes. Our model demonstrates promising results on both the UNSW Sydney public datasets and real-world environments. On public IoV datasets, we achieve an accuracy of 94.23% (↑0.23%). Furthermore, our model achieves an F1 change rate of 0.20% (↑96.92%) for α-train/β-infer and 0.60% (↑75.00%) for β-train/α-infer when evaluated on a dataset consisting of five classes of data collected from real-world environments. These results highlight the effectiveness of our proposed approach in enhancing IoV application identification in dynamic network scenarios. Full article
(This article belongs to the Special Issue Graph-Based Learning Methods in Intelligent Transportation Systems)
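GraphSAGE, which L-GraphSAGE builds on, aggregates each node's neighborhood (here with a mean aggregator) and combines it with the node's own features. The sketch below is a NumPy toy version of that layer with invented feature sizes, not the proposed L-GraphSAGE model.

```python
import numpy as np

def sage_mean_layer(features, neighbors, W_self, W_neigh):
    """One GraphSAGE layer with a mean aggregator:
    h_v' = ReLU(W_self @ h_v + W_neigh @ mean(h_u for u in N(v)))."""
    out = []
    for v, nbrs in enumerate(neighbors):
        h_n = features[nbrs].mean(axis=0) if nbrs else np.zeros(features.shape[1])
        out.append(np.maximum(W_self @ features[v] + W_neigh @ h_n, 0.0))
    return np.stack(out)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))                  # e.g., flow-level node features
nbrs = [[1, 2], [0], [0, 3, 4], [2], [2]]            # adjacency list of a toy traffic graph
W_s, W_n = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
print(sage_mean_layer(feats, nbrs, W_s, W_n).shape)  # (5, 4) updated node embeddings
```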