Exploring structure consistency for deep model watermarking
arXiv preprint arXiv:2108.02360, 2021
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks. There has been significant progress in solutions to protect the IP of DNN models in classification tasks, but little attention has been devoted to the protection of DNNs in image processing tasks. By utilizing consistent invisible spatial watermarks, one recent work first considered model watermarking for deep image processing networks and demonstrated its efficacy in many downstream tasks. Nevertheless, that approach relies heavily on the assumption that the watermarks embedded in the network outputs are consistent. When the attacker applies common data augmentation attacks (e.g., rotation, cropping, and resizing) during surrogate model training, it fails completely because the underlying watermark consistency is destroyed. To mitigate this issue, we propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed. Specifically, the embedded watermarks are designed to be aligned with physically consistent image structures, such as edges or semantic regions. Experiments demonstrate that our method is much more robust than the baseline method in resisting data augmentation attacks for model IP protection. In addition, we test the generalization ability and robustness of our method against a broader range of circumvention attacks.
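To make the core idea concrete, below is a minimal illustrative sketch (not the paper's actual algorithm) of what "structure-aligned" embedding could look like: a low-amplitude watermark pattern is modulated by an edge map, so that geometric augmentations such as rotation, cropping, or resizing transform the image content and the watermark support together. The function name, the strength parameter, the Sobel-based edge map, and the fixed pseudo-random payload are all assumptions made for illustration only.

# Illustrative sketch of structure-aligned watermarking (assumed, not the
# authors' method): the watermark is concentrated on image edges, so spatial
# augmentations move the watermark consistently with the image structure.
import numpy as np
from scipy import ndimage

def structure_aligned_watermark(image: np.ndarray, strength: float = 0.02) -> np.ndarray:
    """Embed a low-amplitude signal whose spatial support follows the edges.

    `image` is a float array in [0, 1], shape (H, W) for simplicity.
    """
    # Edge map from gradient magnitude (stand-in for "image structure").
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-8

    # A fixed pseudo-random pattern acts as the watermark payload.
    rng = np.random.default_rng(seed=0)
    pattern = rng.standard_normal(image.shape)

    # Modulate the payload by the edge map: the watermark "lives" on the
    # structures, so rotating/cropping/resizing the output keeps the
    # watermark aligned with the transformed content.
    watermarked = image + strength * edges * pattern
    return np.clip(watermarked, 0.0, 1.0)

By contrast, a fixed global spatial watermark (the baseline assumption of consistent output watermarks) stays at the same pixel locations regardless of how the attacker augments the surrogate training data, which is why such augmentations can destroy its consistency.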