Enhancing Perception: Refining Explanations of News Claims with LLM Conversations

YL Hsu, JN Chen, YF Chiang, SC Liu, A Xiong, LW Ku
Findings of the Association for Computational Linguistics: NAACL 2024. aclanthology.org
Abstract
We introduce Enhancing Perception, a framework for Large Language Models (LLMs) designed to streamline the time-intensive task, typically undertaken by professional fact-checkers, of crafting explanations for fake news. This study investigates the effectiveness of enhancing LLM explanations through conversational refinement. We compare various questioner agents, including state-of-the-art LLMs such as GPT-4, Claude 2, and PaLM 2, as well as 193 American participants acting as human questioners. Based on the histories of these refinement conversations, we further generate comprehensive summary explanations. We evaluate the effectiveness of these initial, refined, and summary explanations across 40 news claims with 2,797 American participants, measuring their self-reported belief change regarding both real and fake claims after receiving the explanations. Our findings reveal that, in the context of fake news, explanations that have undergone conversational refinement—whether by GPT-4 or by human questioners, who ask more diverse and detail-oriented questions—were significantly more effective than both the initial unrefined explanations and the summary explanations. Moreover, these refined explanations matched the effectiveness of expert-written explanations. The results highlight the potential of automatic explanation refinement by LLMs in debunking fake news claims.
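
The paper itself presents no pseudocode here, but the pipeline the abstract describes (an explainer LLM drafts an explanation, a questioner agent probes it over several turns, and the conversation history is optionally condensed into a summary) can be sketched roughly as below. This is a minimal illustration only: the call_llm stub, the prompts, and the round count are all assumptions for readability, not the authors' implementation.

# Minimal sketch of the conversational-refinement loop described in the abstract.
# call_llm, the prompt wording, and the number of rounds are hypothetical
# placeholders, not the authors' actual system.

def call_llm(instruction: str, history: list[str]) -> str:
    """Stub for a real LLM API call (e.g., to GPT-4, Claude 2, or PaLM 2).
    Replace with an actual client; here it just returns a placeholder string."""
    return f"[response to '{instruction[:40]}...' given {len(history)} prior turns]"

def refine_explanation(claim: str, rounds: int = 3) -> dict:
    history: list[str] = []
    # 1. The explainer LLM drafts an initial explanation of the news claim.
    explanation = call_llm(f"Explain why this claim is true or false: {claim}", history)
    history.append(f"EXPLANATION: {explanation}")
    # 2. A questioner agent (an LLM or a human) probes the explanation,
    #    and the explainer refines its answer each round.
    for _ in range(rounds):
        question = call_llm("Ask one detail-oriented question about the latest explanation.", history)
        history.append(f"QUESTION: {question}")
        explanation = call_llm(f"Revise the explanation to also answer: {question}", history)
        history.append(f"EXPLANATION: {explanation}")
    # 3. Condense the full conversation into a single summary explanation.
    summary = call_llm("Summarize this conversation into one comprehensive explanation.", history)
    return {"refined": explanation, "summary": summary, "history": history}

Note that, per the abstract's findings, the refined explanation produced inside the loop (not the condensed summary) was the more effective output for fake-news claims.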