May 24, 2024 · Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models. This work aims to enhance the robustness of concept erasing by integrating the principle of adversarial training (AT) into machine unlearning. The proposed robust unlearning framework, AdvUnlearn, enhances diffusion models' safety by robustly erasing unwanted concepts through adversarial training.
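Integrating AT into unlearning amounts to a bilevel loop: an inner maximization that searches for a worst-case (adversarial) input reactivating the concept, and an outer minimization that erases the concept even under that input while a retain term anchors the model to its original behavior. The following is a toy numeric sketch of this idea only, not the paper's implementation: the linear "concept score", the FGSM-style closed-form inner step, and all parameter names are illustrative assumptions.

```python
import random

def sign(v):
    return [1.0 if vi >= 0.0 else -1.0 for vi in v]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def worst_case_input(w, x, eps):
    """Inner maximization: for a linear score <w, x>, the perturbation
    eps * sign(w) maximizes the score over the L_inf ball of radius eps
    (FGSM-style closed form)."""
    return [xi + eps * si for xi, si in zip(x, sign(w))]

def adv_unlearn_step(w, x, w_ref, eps=0.1, lr=0.05, lam=0.01):
    """Outer minimization: one gradient step that drives the concept
    score toward zero *under the worst-case input*, while a retain term
    anchors the weights to the frozen reference model w_ref."""
    x_adv = worst_case_input(w, x, eps)
    s = dot(w, x_adv)  # concept score under attack
    # gradient of the objective s^2 + lam * ||w - w_ref||^2
    return [wi - lr * (2.0 * s * xa + 2.0 * lam * (wi - wr))
            for wi, xa, wr in zip(w, x_adv, w_ref)]

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(6)]   # toy "model" weights
w_ref = list(w)                                   # frozen reference copy
x = [random.gauss(0.0, 1.0) for _ in range(6)]   # input carrying the concept

before = abs(dot(w, worst_case_input(w, x, 0.1)))
for _ in range(300):
    w = adv_unlearn_step(w, x, w_ref)
after = abs(dot(w, worst_case_input(w, x, 0.1)))
print(f"adversarial concept score: before={before:.3f} after={after:.4f}")
```

After training, the concept score stays near zero even for the adversarially perturbed input, which is the robustness property that plain (non-adversarial) unlearning lacks.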
May 27, 2024 · GitHub - OPTML-Group/AdvUnlearn: Official implementation of "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models".
Oct 11, 2024 · Warning: This paper contains model outputs that may be offensive in nature. 1 Introduction. Recent rapid advancements in diffusion models (DMs) ...
In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask if an adversarial text prompt can be obtained even in the ...
From a list of recent works on machine unlearning (MU): Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models, NeurIPS'24, OPTML-Group.