Explainable Graph Neural Networks for Observation Impact Analysis in Atmospheric State Estimation

HJ Jeon, JH Kang, IH Kwon, O Lee - arXiv preprint arXiv:2403.17384, 2024 - arxiv.org
This paper investigates the impact of observations on atmospheric state estimation in weather forecasting systems using graph neural networks (GNNs) and explainability methods. We integrate observation and Numerical Weather Prediction (NWP) points into a meteorological graph, extracting $k$-hop subgraphs centered on NWP points. Self-supervised GNNs are employed to estimate the atmospheric state by aggregating data within these $k$-hop radii. The study applies gradient-based explainability methods to quantify the significance of different observations in the estimation process. Evaluated with data from 11 satellite and land-based observations, the results highlight the effectiveness of visualizing the importance of observation types, enhancing the understanding and optimization of observational data in weather forecasting.
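To make the workflow in the abstract concrete, the following is a minimal sketch, assuming a PyTorch Geometric graph whose nodes are NWP and observation points. The class and function names (ObsGNN, observation_importance) and the saliency-style attribution are illustrative assumptions, not the authors' actual implementation; the paper's own architecture and explainability method may differ.

```python
# Hypothetical sketch: k-hop subgraph extraction around an NWP point,
# a small GNN that estimates the atmospheric state from the subgraph,
# and a gradient-based importance score for the observation nodes.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph

class ObsGNN(torch.nn.Module):
    """Illustrative GNN aggregating k-hop neighborhood features."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def observation_importance(model, x, edge_index, nwp_idx, k=2):
    """Gradient (saliency) scores for nodes in the k-hop subgraph
    centered on one NWP node; higher score = larger influence."""
    # Extract the k-hop subgraph around the target NWP point.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        nwp_idx, k, edge_index, relabel_nodes=True, num_nodes=x.size(0))
    sub_x = x[subset].clone().requires_grad_(True)

    # Estimate the state at the center node and backpropagate.
    out = model(sub_x, sub_edge_index)
    out[mapping].sum().backward()

    # Per-node importance = L2 norm of the input-feature gradient.
    scores = sub_x.grad.norm(dim=1)
    return subset, scores  # original node ids and their saliency scores
```

Aggregating such per-node scores by observation type (e.g., per satellite instrument or land station network) would yield the kind of importance visualization the abstract describes.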