
Integrating XAI for Predictive Conflict Analytics

Luca Macis; Marco Tagliapietra; Greta Greco; Paola Pisano; Edoardo Carroccetto
2024-01-01

Abstract

Predicting global conflicts through data-driven approaches has the potential to aid political decision-makers in formulating more effective and targeted policies. However, high-performance models that derive patterns from data often become highly complex, making it challenging to extract understandable rationales behind their outcomes. In this paper, we suggest integrating a transformer-based Artificial Intelligence Early Warning System (AI-EWS) with integrated gradients, an eXplainable Artificial Intelligence (XAI) technique attributing model predictions to specific features at a given time in the input data, thereby enhancing interpretability. To validate our methodology, we conduct experiments on a prominent geopolitical dataset: ACLED. This dataset provides comprehensive insights into global conflict events, facilitating effective pattern learning and generalization by our model. Leveraging these explainability techniques, our goal is to bridge the gap between complex, high-performance models and the practical needs of policymakers in conflict prevention and resolution. Predictive analytics algorithms in conjunction with an XAI approach can foresee the impact of decisions on various population segments, fostering equity and inclusion, and supporting a data-driven approach, along with a culture of openness and accountability within the public administration.
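The integrated-gradients attribution the abstract refers to can be illustrated in miniature. The sketch below is an assumption-laden toy, not the paper's implementation: it substitutes a simple logistic scorer for the actual transformer, uses an analytic gradient, and approximates the path integral from a zero baseline with a midpoint Riemann sum. The resulting per-feature attributions satisfy the completeness axiom: they sum to the difference between the model's output at the input and at the baseline.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Toy stand-in for the transformer: a logistic score over input features.
    return sigmoid(np.dot(w, x))

def model_grad(x, w):
    # Analytic gradient of the toy model with respect to the input features.
    s = model(x, w)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, w, steps=200):
    # Midpoint Riemann-sum approximation of the path integral
    # from the baseline to the input.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += model_grad(baseline + a * (x - baseline), w)
    return (x - baseline) * total / steps

w = np.array([0.8, -0.5, 0.3])       # hypothetical model weights
x = np.array([1.0, 2.0, -1.0])       # hypothetical input features
baseline = np.zeros_like(x)          # all-zeros baseline

attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to F(x) - F(baseline).
print(np.allclose(attr.sum(), model(x, w) - model(baseline, w), atol=1e-4))
```

In practice, libraries such as Captum provide integrated gradients for PyTorch models, computing the same path integral via automatic differentiation rather than an analytic gradient.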
The 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024)
La Valletta, Malta, 17-19/07/2024
xAI-2024 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings
CEUR Workshop Proceedings, Vol. 3793, pp. 105-112
https://ceur-ws.org/Vol-3793/paper_14.pdf
Keywords: eXplainable Artificial Intelligence, Transformers, Time Series Forecasting, Integrated Gradients, Conflict Prediction, Early Warning System, Public Policy
Full author list: Luca Macis, Marco Tagliapietra, Alessandro Castelnovo, Daniele Regoli, Greta Greco, Andrea Claudio Cosentini, Paola Pisano, Edoardo Carroccetto
Files in this product:
paper_14-3.pdf — open access, publisher's PDF, Adobe PDF, 1.52 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2028830