Sequence squeezing: A defense method against adversarial examples for API call-based RNN variants

I Rosenberg, A Shabtai, Y Elovici… - 2021 International Joint Conference on Neural Networks (IJCNN), 2021 - ieeexplore.ieee.org
Adversarial examples are known to mislead deep learning models, causing them to classify inputs incorrectly, even in domains where such models have achieved state-of-the-art performance. Until recently, research on both adversarial attack and defense methods focused on computer vision, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, referred to as sequence squeezing, aimed at making RNN variant (e.g., LSTM) classifiers more robust against such attacks. Our method differs from existing defense methods, which were designed only for non-sequence-based models. We also implement three additional defense methods, inspired by recently published CNN defenses, as baselines for our method. Using sequence squeezing, we were able to decrease the effectiveness of such adversarial attacks from 99.9% to 15%, outperforming all of the baseline defense methods.
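The abstract does not spell out the mechanism, but by analogy with feature squeezing for CNNs, a squeezing-style defense reduces the input space (here, of API-call sequences) and compares the classifier's prediction on the original and squeezed inputs, flagging large disagreements as adversarial. The sketch below illustrates only that general idea, not the paper's actual algorithm; the squeezing rules, the `coarse_group` mapping, `classify_fn`, and the `threshold` value are all assumptions made for illustration.

```python
# Hedged sketch of a squeezing-style detection defense for API-call sequences,
# in the spirit of feature squeezing. NOT the paper's exact method; all names
# (coarse_group, classify_fn, threshold) are illustrative assumptions.

from typing import Callable, Dict, List


def squeeze_sequence(api_calls: List[str], coarse_group: Dict[str, str]) -> List[str]:
    """Reduce the input space of an API-call sequence.

    Two simple reductions (assumed for illustration):
      1. map each call to a coarser semantic group, and
      2. collapse consecutive duplicate calls.
    """
    grouped = [coarse_group.get(call, call) for call in api_calls]
    squeezed: List[str] = []
    for call in grouped:
        if not squeezed or squeezed[-1] != call:
            squeezed.append(call)
    return squeezed


def detect_adversarial(
    api_calls: List[str],
    classify_fn: Callable[[List[str]], float],  # returns P(malicious) in [0, 1]
    coarse_group: Dict[str, str],
    threshold: float = 0.5,
) -> bool:
    """Flag the input when original and squeezed predictions disagree strongly."""
    original_score = classify_fn(api_calls)
    squeezed_score = classify_fn(squeeze_sequence(api_calls, coarse_group))
    return abs(original_score - squeezed_score) > threshold


if __name__ == "__main__":
    # Toy classifier standing in for an RNN/LSTM: scores a trace by the
    # fraction of "suspicious" API calls it contains.
    suspicious = {"WriteProcessMemory", "CreateRemoteThread"}
    toy_classifier = lambda seq: sum(c in suspicious for c in seq) / max(len(seq), 1)

    groups = {"NtWriteFile": "FileWrite", "WriteFile": "FileWrite"}
    trace = ["WriteFile", "NtWriteFile", "WriteProcessMemory", "CreateRemoteThread"]
    print(detect_adversarial(trace, toy_classifier, groups))
```

In this kind of scheme, the squeezed view discards the fine-grained perturbations an attacker can insert (e.g., redundant or semantically equivalent API calls), so a benign input yields similar scores on both views while an adversarial one tends to diverge.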