Adversarial defense via data dependent activation function and total variation minimization
arXiv preprint arXiv:1809.08516, 2018
We improve the robustness of Deep Neural Nets (DNNs) to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both the generalization and robustness of DNNs. On the CIFAR10 benchmark, we raise the robust accuracy of the adversarially trained ResNet20 from $\sim 46\%$ to $\sim 69\%$ under the state-of-the-art Iterative Fast Gradient Sign Method (IFGSM) adversarial attack. When we combine this data-dependent activation with total variation minimization on adversarial images and training data augmentation, we achieve an improvement in robust accuracy of 38.9% for ResNet56 under the strongest IFGSM attack. Furthermore, we provide an intuitive explanation of our defense by analyzing the geometry of the feature space.
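The abstract names two standard building blocks, the IFGSM attack and total variation (TV) minimization applied to adversarial images. The sketch below is not the authors' code; it is a minimal PyTorch illustration of how these two components are commonly implemented. The `model` argument, step sizes, iteration counts, and the TV weight are assumptions chosen for illustration, and the paper's data-dependent (interpolating) output activation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ifgsm_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative FGSM: repeatedly step along the sign of the loss gradient,
    projecting back into the L-infinity ball of radius eps around x.
    (eps, alpha, steps are illustrative choices, not the paper's settings.)"""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the eps-ball around the clean image, then onto [0, 1].
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv

def tv_denoise(x_adv, weight=0.1, steps=50, lr=0.1):
    """Anisotropic total variation minimization by gradient descent:
    minimize ||u - x_adv||^2 + weight * TV(u), which smooths out the
    high-frequency adversarial perturbation before classification."""
    u = x_adv.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fidelity = ((u - x_adv) ** 2).sum()
        tv = (u[..., 1:, :] - u[..., :-1, :]).abs().sum() \
           + (u[..., :, 1:] - u[..., :, :-1]).abs().sum()
        (fidelity + weight * tv).backward()
        opt.step()
    return u.detach().clamp(0.0, 1.0)
```

In the defense described by the abstract, the TV-denoised image would then be passed to a network whose output activation is the data-dependent interpolating function; this sketch only covers the attack and the denoising step.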