Oct 3, 2020 · Recent studies show that it is possible to achieve fast Adversarial Training by performing a single-step attack with random initialization.
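The single-step scheme these results describe (a one-shot FGSM attack from a random starting point) is compact enough to sketch. Below is a minimal PyTorch-style illustration, not any paper's exact code: `model`, `epsilon`, and `alpha` are illustrative assumptions, with the L-inf budget and step size picked in the style of common fast adversarial training setups.

```python
import torch
import torch.nn.functional as F

def fgsm_random_start_loss(model, x, y, epsilon=8/255, alpha=10/255):
    """Training loss from a single-step attack with random initialization.

    Sketch only: `epsilon` (L-inf budget) and `alpha` (step size) are
    assumed values, not taken from the cited papers.
    """
    # Random initialization: uniform start inside the epsilon-ball.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)

    # One forward/backward pass for the gradient w.r.t. the perturbation.
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]

    # Single FGSM step, then project back onto the epsilon-ball
    # and the valid pixel range.
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
    x_adv = (x + delta).clamp(0.0, 1.0).detach()

    # The model is then updated on the adversarial example.
    return F.cross_entropy(model(x_adv), y)
```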
A new initialization strategy, backward smoothing, is proposed to address this issue and significantly improves both stability and model robustness over ...
Jun 28, 2022 · In order to further improve the robustness-efficiency trade-off of fast robust training techniques, we propose a new initialization strategy, ...
Sep 28, 2020 · We develop a new understanding towards Fast Adversarial Training, by viewing random initialization as performing randomized smoothing for better optimization.
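On that reading, the random start implicitly optimizes a smoothed version of the training loss. A one-line sketch of the smoothed objective, assuming a uniform distribution over the epsilon-ball to match the random start above:

$$\tilde{\mathcal{L}}(\theta, x, y) \;=\; \mathbb{E}_{\eta \sim \mathcal{U}([-\epsilon,\epsilon]^d)}\big[\mathcal{L}\big(f_\theta(x + \eta),\, y\big)\big]$$

Drawing one $\eta$ per example and taking a single gradient step then acts as a one-sample Monte Carlo estimate of the smoothed gradient.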
jinghuichen/BackwardSmoothing: GitHub repo for AAAI'22 paper
This is the official code for our paper Efficient Robust Training via Backward Smoothing (accepted by AAAI'2022) by Jinghui Chen (PSU), Yu Cheng (Microsoft) ...
Oct 6, 2020 · Paper: Efficient Robust Training via Backward Smoothing https://arxiv.org/abs/2010.01278 Venue: AAAI 2022
We propose a new method called bridged adversarial training that mitigates the negative effect by bridging the gap between clean and adversarial examples.
May 1, 2024 · This chapter explores the foundational concept of robustness in Machine Learning (ML) and its integral role in establishing trustworthiness in Artificial ...