Prompts have evil twins

Rimon Melamed, Lucas Hurley McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, Enric Boix-Adserà


Abstract
We discover that many natural-language prompts can be replaced by corresponding prompts that are unintelligible to humans but that provably elicit similar behavior in language models. We call these prompts “evil twins” because they are obfuscated and uninterpretable (evil), but at the same time mimic the functionality of the original natural-language prompts (twins). Remarkably, evil twins transfer between models. We find these prompts by solving a maximum-likelihood problem which has applications of independent interest.
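The maximum-likelihood formulation mentioned in the abstract can be made concrete with a small sketch: sample continuations from the model under the original prompt, then score any candidate prompt by the total log-likelihood it assigns to those continuations; an "evil twin" is a (typically unintelligible) prompt found by searching for high scores. The model name, example strings, and helper functions below are illustrative assumptions, not the authors' released code (linked under Software below), and the paper's actual discrete optimization over prompt tokens is not shown here.

```python
# Sketch of the maximum-likelihood objective behind "evil twin" prompts.
# Assumptions: any causal LM works for illustration; continuations would in
# practice be sampled from the model conditioned on the original prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def continuation_log_likelihood(prompt_ids, continuation_ids):
    """Sum of log P(continuation | prompt) under the model."""
    input_ids = torch.cat([prompt_ids, continuation_ids], dim=0).unsqueeze(0)
    with torch.no_grad():
        logits = model(input_ids).logits[0]
    # Logits at position t predict token t+1; score only the continuation tokens.
    logprobs = torch.log_softmax(logits[:-1], dim=-1)
    targets = input_ids[0, 1:]
    start = prompt_ids.shape[0] - 1  # first continuation token's prediction index
    return logprobs[start:].gather(1, targets[start:, None]).sum().item()

def score_prompt(prompt_text, continuations):
    """Objective a prompt search would maximize: total log-likelihood of the continuations."""
    prompt_ids = tok(prompt_text, return_tensors="pt").input_ids[0]
    return sum(
        continuation_log_likelihood(prompt_ids, tok(c, return_tensors="pt").input_ids[0])
        for c in continuations
    )

# Hypothetical usage: compare the original prompt with a candidate twin.
original = "Write a short poem about the ocean."
candidate = "oce poem ~~ write shores brief"  # made-up garbled candidate
continuations = ["The waves roll in beneath a silver sky."]  # would be sampled from the model
print(score_prompt(original, continuations), score_prompt(candidate, continuations))
```

The sketch only computes the objective; finding a twin would require a discrete search over prompt tokens that repeatedly evaluates this score.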
Anthology ID:
2024.emnlp-main.4
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
46–74
URL:
https://aclanthology.org/2024.emnlp-main.4/
DOI:
10.18653/v1/2024.emnlp-main.4
Cite (ACL):
Rimon Melamed, Lucas Hurley McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, and Enric Boix-Adserà. 2024. Prompts have evil twins. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 46–74, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Prompts have evil twins (Melamed et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.4.pdf
Software:
2024.emnlp-main.4.software.zip
Data:
2024.emnlp-main.4.data.zip