Mar 13, 2023 · We propose the Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
In this paper, we first construct a causal graph to model the generation process of poisoned data and find that the backdoor attack acts as the confounder.
May 18, 2023 · Our work opens up an interesting research direction to leverage causal inference to analyze and mitigate backdoor attacks in machine learning.
Mar 13, 2023 · In this paper, we propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
Aug 18, 2024 · Machine learning models are susceptible to adversarial attacks which dramatically reduce their performance. Reliable defenses to these attacks ...
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to backdoor attacks during the training process.
Jan 28, 2022 · We propose a novel backdoor defense via decoupling the original end-to-end training process into three stages.
This repository contains a collection of papers and resources on backdoor attacks and backdoor defense in deep learning.