Deception, robustness and trust in big data fueled deep learning systems

L Liu, 2019 IEEE International Conference on Big Data (Big Data), 2019. ieeexplore.ieee.org
We are entering an exciting era where human intelligence is being enhanced by machine intelligence through big-data-fueled artificial intelligence (AI) and machine learning (ML). However, recent work shows that privately trained DNN models are vulnerable to adversarial inputs. Such adversarial inputs inject a small amount of perturbation into the input data to fool machine learning models into misbehaving, turning a deep neural network against itself. As new defense methods are proposed, more sophisticated attack algorithms surface; this arms race has been ongoing since the rise of adversarial machine learning. This keynote provides a comprehensive analysis and characterization of the most representative attacks and their defenses.

As more and more mission-critical systems incorporate machine learning and AI as essential components of their real-world big data applications and their big data service-provisioning platforms or products, understanding and ensuring the verifiable robustness of deep learning becomes a pressing challenge in the presence of adversarial attacks. This includes (1) the development of formal metrics to quantitatively evaluate and measure the robustness of a DNN prediction with respect to intentional and unintentional artifacts and deceptions, (2) a comprehensive understanding of the blind spots and the invariants in trained DNN models and in the DNN training process, and (3) the statistical measurement of the trust and distrust we can place in a deep learning algorithm to perform reliably and truthfully. In this keynote talk, I will use empirical analysis and evaluation of our cross-layer strategic teaming defense framework and techniques to illustrate the feasibility of ensuring robust deep learning.
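The "small perturbation" attack and the robustness metric mentioned above can be illustrated with a minimal FGSM-style sketch on a toy linear classifier. This is not the framework from the talk (which concerns deep networks); the weights, inputs, and step size below are invented purely for illustration. For a linear model the loss gradient with respect to the input is proportional to the weight vector, so stepping the input against the sign of the weights flips the prediction, and the exact L2 distance to the decision boundary serves as a simple robustness certificate in the spirit of point (1):

```python
import numpy as np

# Toy linear classifier (illustrative weights, not from the talk):
# predict class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 1.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input, correctly classified as class 1.
x = np.array([0.5, -0.3, 0.2])

# FGSM-style perturbation: for a linear model the loss gradient
# w.r.t. the input is proportional to w, so shifting each coordinate
# by -eps * sign(w) pushes the score toward (and past) the boundary.
eps = 0.5
x_adv = x - eps * np.sign(w)

# A simple robustness certificate for the linear case: the exact
# L2 distance from x to the decision boundary w.x + b = 0.
margin = abs(w @ x + b) / np.linalg.norm(w)

print(predict(x), predict(x_adv), round(margin, 3))  # prediction flips from 1 to 0
```

For a deep network the gradient must be computed by backpropagation rather than read off the weights, but the structure of the attack, perturbing each input coordinate by a bounded step in the adversarial gradient direction, is the same.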