We demonstrate that it is possible to generate any desired explanation, while leaving the model's accuracy essentially unchanged.
This work investigates the explanation method GradCAM, and considers an adversary that manipulates the model itself to attack the explanation, ...
Instead of manipulating the input, we consider an adversary that manipulates the model itself to attack the explanation. By changing weights and architecture, ...
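For context, Grad-CAM weights the activation maps of a chosen convolutional layer by the spatially averaged gradients of the class score and passes the weighted sum through a ReLU; this heatmap is what the adversary's weight and architecture changes target. A minimal PyTorch sketch of that computation (the hook-based plumbing and the `feature_layer` argument are assumptions for illustration, not the authors' code):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, feature_layer):
    """Grad-CAM: ReLU of the gradient-weighted sum of activation maps."""
    acts, grads = {}, {}
    h1 = feature_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out))
    h2 = feature_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(g=gout[0]))
    model.zero_grad()
    logits = model(x)                         # x: [1, C, H, W]
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    A, G = acts["a"], grads["g"]              # both [1, K, h, w]
    alpha = G.mean(dim=(2, 3), keepdim=True)  # channel importance weights
    cam = F.relu((alpha * A).sum(dim=1))      # [1, h, w]
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1]
```

For a torchvision ResNet, `feature_layer` would typically be the last convolutional stage, e.g. `model.layer4`.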
How to Manipulate CNNs to Make Them Lie: the GradCAM Case (slides).
1) The authors show that, even when the GradCAM explanation is modified through four different manipulation methods, the model's accuracy barely changes, and they propose this as a sanity check.
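Such a sanity check is straightforward to express in code: the manipulated model should match the original's accuracy on held-out data even though its explanations differ. A minimal sketch, assuming PyTorch classifiers and a standard DataLoader; `original`, `manipulated`, and `val_loader` are hypothetical placeholders:

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of a classifier over a DataLoader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage, with `original`, `manipulated`, `val_loader` defined elsewhere:
# drop = accuracy(original, val_loader) - accuracy(manipulated, val_loader)
# assert abs(drop) < 0.01, "manipulation noticeably changed accuracy"
```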
How to Manipulate CNNs to Make Them Lie: the GradCAM Case. T. Viering, Z. Wang, M. Loog, E. Eisemann. BMVC 2019 Workshop, 2019.
In the case of face recognition, an attacker who can manipulate the output of a model could impersonate someone and steal, for instance, the ...
In this review, we present state-of-the-art explanation techniques in detail. We focus our presentation and critical discussion on visualisation methods.
In our paper, we analyzed modified backpropagation (BP) methods, which aim to explain the predictions of deep neural networks. Our analysis revealed that most of these ...
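One simple probe in this spirit is a class-sensitivity check: if an attribution map barely changes when the requested class changes, the method cannot be explaining class-specific evidence. A minimal sketch using plain gradient saliency as a stand-in for a modified BP method (the function names and the cosine-similarity criterion are illustrative assumptions, not the paper's exact analysis):

```python
import torch
import torch.nn.functional as F

def input_saliency(model, x, target_class):
    """Plain gradient of the class score w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.detach().flatten()

def class_sensitivity(model, x, class_a, class_b):
    """Cosine similarity of saliency maps for two classes.
    Values near 1.0 mean the explanation barely depends on the class."""
    sa = input_saliency(model, x, class_a)
    sb = input_saliency(model, x, class_b)
    return F.cosine_similarity(sa, sb, dim=0).item()
```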