Black-box dissector: Towards erasing-based hard-label model stealing attack

Y Wang, J Li, H Liu, Y Wang, Y Wu, F Huang… - European conference on …, 2022 - Springer
Previous studies have verified that the functionality of black-box models can be stolen with
full probability outputs. However, under the more practical hard-label setting, we observe
that existing methods suffer from catastrophic performance degradation. We argue this is
due to the loss of the rich information carried by probability predictions and to
overfitting on hard labels. To this end, we propose a novel hard-label model stealing method termed black-
box dissector, which consists of two erasing-based modules. One is a CAM-driven erasing …
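The snippet truncates before describing the CAM-driven erasing module, but the general idea behind CAM-based erasing can be sketched as follows: compute a class activation map from the convolutional feature maps and the classifier weights, then mask out the highest-activation image region. This is a minimal numpy-only illustration of that generic technique, not the paper's actual implementation; all function names, the nearest-neighbour upsampling, and the fixed erasing threshold are assumptions for illustration.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Weighted sum of feature maps (generic CAM), normalized to [0, 1].

    features:   (C, H, W) conv feature maps from the final conv layer.
    fc_weights: (num_classes, C) weights of the final linear classifier.
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def erase_top_region(image, cam, threshold=0.7, fill=0.0):
    """Erase image pixels whose (upsampled) CAM value exceeds the threshold.

    image: (H_img, W_img, 3); the threshold of 0.7 is an arbitrary choice here.
    """
    h, w = image.shape[:2]
    # Nearest-neighbour upsampling of the CAM to image resolution.
    ys = np.arange(h) * cam.shape[0] // h
    xs = np.arange(w) * cam.shape[1] // w
    cam_up = cam[np.ix_(ys, xs)]
    erased = image.copy()
    erased[cam_up >= threshold] = fill  # zero out the most attended region
    return erased
```

In a hard-label stealing context, such erased images could serve as additional queries whose answers probe what the victim attends to, though how the paper actually uses them is not recoverable from this snippet.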

[PDF][PDF] Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack (Supplementary Material)

Y Wang, J Li, H Liu, Y Wang, Y Wu, F Huang, R Ji - ecva.net
… Here, we report the numerical results of previous methods under both the soft-label setting
and the hard-label setting as a supplement to Fig. 1. To be consistent with the
experiment section, the victim models we use are trained with a ResNet-34 [2] architecture
on four datasets: CIFAR10 [4], SVHN [5], Caltech256 [1], and CUBS200 [8]. Their test
accuracies are 91.56%, 96.45%, 78.40%, and 77.10%, respectively. We use the 1.2M unlabeled
images provided in the ILSVRC-2012 challenge [6] as the … The above results show …