WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing

Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao


Abstract
Knowledge editing aims to rectify outdated or erroneous knowledge in large language models (LLMs) without costly retraining. However, current knowledge editing methods focus primarily on single edits and fail to meet the requirements of lifelong editing. This study reveals the performance degradation that knowledge editing suffers under lifelong editing, characterized by toxicity buildup and toxicity flash, and identifies pattern mismatch as its primary cause. We introduce a knowledge editing approach named Wise-Layer Knowledge Editor (WilKE), which selects the editing layer according to how well the pattern of the editing knowledge matches each layer of the language model. Experimental results demonstrate that, in lifelong editing, WilKE achieves average improvements of 46.2% and 67.8% over state-of-the-art knowledge editing methods when editing GPT2-XL and GPT-J, respectively.
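
A rough sketch may help make the layer-selection idea concrete. The Python snippet below is a minimal illustration under assumed details, not the paper's implementation: pattern_match_degree, its cosine-similarity scoring, and select_wise_layer are hypothetical stand-ins for WilKE's actual matching criterion.

import torch
import torch.nn.functional as F

def pattern_match_degree(hidden_state: torch.Tensor,
                         key_vector: torch.Tensor) -> float:
    # Toy matching score (an assumption, not the paper's definition):
    # cosine similarity between a layer's hidden state at the subject
    # token and a key vector representing the new fact.
    return F.cosine_similarity(hidden_state, key_vector, dim=-1).item()

def select_wise_layer(hidden_states: list[torch.Tensor],
                      key_vector: torch.Tensor) -> int:
    # Score every layer and return the index of the best match; WilKE
    # would then apply its edit (e.g., a weight update) at that layer.
    scores = [pattern_match_degree(h, key_vector) for h in hidden_states]
    return max(range(len(scores)), key=scores.__getitem__)

# Usage with dummy activations for a 12-layer, 64-dimensional model:
torch.manual_seed(0)
layers = [torch.randn(64) for _ in range(12)]
key = torch.randn(64)
print("edit layer:", select_wise_layer(layers, key))
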
Anthology ID:
2024.findings-acl.207
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3476–3503
URL:
https://aclanthology.org/2024.findings-acl.207
DOI:
10.18653/v1/2024.findings-acl.207
Cite (ACL):
Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, and Jun Zhao. 2024. WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing. In Findings of the Association for Computational Linguistics: ACL 2024, pages 3476–3503, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing (Hu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.207.pdf