Hacking The Waveform: Generalized Wireless Adversarial Deep Learning
Abstract
Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with
accuracy levels that were once thought impossible. Although we have recently seen many advances
in this field, extensive work in computer vision has demonstrated that adversarial machine learning
(AML) can seriously decrease the accuracy of a classifier. This is done by designing inputs that are
close to a legitimate one but interpreted by the classifier as being of a completely different class. On the
other hand, it is unclear if, when, and how AML is concretely possible in practical wireless scenarios,
where (i) the highly time-varying nature of the channel could compromise adversarial attempts; and (ii)
the received waveforms still need to be decodable and thus cannot be extensively modified. This paper
advances the state of the art by proposing the first comprehensive analysis and experimental evaluation
of adversarial learning attacks to wireless deep learning systems. We postulate a series of adversarial
attacks, and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP) where
we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy
of the attacks. We propose a new neural network architecture called FIRNet, which can be trained to
“hack” a classifier based only on its output. We extensively evaluate the performance on (i) a 1,000-
device radio fingerprinting dataset, and (ii) a 24-class modulation dataset. Results obtained with several
channel conditions show that our algorithms can decrease the classifier accuracy by up to 3x. We also
experimentally evaluate FIRNet on a radio testbed, and show that our data-driven blackbox approach
can confuse the classifier with a fooling rate of up to 97%, while keeping the waveform distortion to a minimum.
I. INTRODUCTION
The Internet of Things (IoT) will bring 75.44B devices on the market by 2025, a 5x increase
in ten years [1]. Due to the sheer number of IoT devices soon to be deployed worldwide, the
design of practical spectrum knowledge extraction techniques has now become a compelling
necessity – not only to understand in real time the wireless environment, but also to design
reactive, intelligent, and more secure wireless protocols, systems, and architectures [2].

This paper has been submitted for possible publication to IEEE Transactions on Wireless Communications. The authors
are with the Institute for the Wireless Internet of Things, Department of Electrical and Computer Engineering, Northeastern
University, Boston, MA, 02215 USA. Corresponding author e-mail: [email protected].
Arguably, the radio frequency (RF) spectrum is one of nature’s most complex phenomena.
For this reason, the wireless community has started to move toward data-driven solutions based
on deep learning [3] – well-known to be exceptionally suited to solve classification problems
where a mathematical model is impossible to obtain. Extensively applied since the 1980s, neural
networks are now being used to address notoriously hard problems such as radio fingerprinting
[4], signal/traffic classification [2, 5, 6] and resource allocation [7], among many others [8].
Recent advances in wireless deep learning have now clearly demonstrated its great potential.
For example, O'Shea et al. [5] have demonstrated that models based on deep learning can achieve
about 20% higher modulation classification accuracy than legacy learning models under noisy
channel conditions. However, it has been extensively proven that neural networks are prone to
be “hacked” by carefully crafting small-scale perturbations to the input – which keep the input
similar to the original one, but are ultimately able to “steer” the neural network away from the
ground truth. This activity is known [9–13] as adversarial machine learning (AML). The degree
to which malicious agents can find adversarial examples is strongly correlated with the applicability
of neural networks to address problems in the wireless domain [14].
Technical Challenges. We believe the above reasons clearly show the timeliness and urgency
of a rigorous investigation into the robustness of wireless deep learning systems. Prior work
[15, 16] – which is discussed in great detail in Section II – is severely limited by small-scale
simulation-based scenarios, which has left several fundamental questions unanswered. The key
reason that sets wireless AML apart is that a wireless deep learning system is affected by the
stochastic nature of the channel [17]. This implies that the channel action must be factored into
the crafting process of the AML attack.
To further confirm this critical aspect, Figure 1 reports a series of experimental results obtained
with our software-defined radio testbed (see Section VIII-D). In our setup shown in Figure 1(d),
we collect a series of waveforms coming from 5 legitimate transmitters (L1 to L5) through
a legitimate receiver (R). Then, we train a neural network (see Section VIII-D) to recognize
the legitimate devices by learning the unique impairments imposed by the radio circuitry on
the transmitted waveforms, also called radio fingerprinting [4]. The neural network obtains
59% accuracy, as shown in Figure 1(a). We also use an adversarial eavesdropper radio (AE) to record
the waveforms transmitted by the legitimate transmitters. We show the fooling rate obtained
by 5 adversarial devices A1 to A5 which transmit RF waveforms trying to fool the classifier
by imitating respectively L1 to L5. A high fooling rate means that adversaries can generate
waveforms that are classified as belonging to legitimate devices. On the contrary, a low fooling
rate indicates that the attack is unsuccessful as the classifier is not able to identify received
waveforms. In this experiment, we consider two substantially different attacks where adversaries
(i) transmit their own waveforms – shown in Figure 1(b); and (ii) "replay" the recorded waveforms
from L1 to L5 (i.e., by simply retransmitting the I/Q samples recorded by the eavesdropper
AE) – shown in Figure 1(c). Figure 1(b) shows that when A1 to A5 transmit their own waveforms,
the fooling rate is 20%, far lower than the original accuracy of 59%. In principle, we would
expect the adversary to obtain a significant increase in fooling rate by performing the replay
attack. However, Figure 1(c) indicates that the fooling rate is only 30% when A1 to A5 replay the
eavesdropped waveforms. This strongly suggests that even if the adversary is successful in
replaying the waveforms, the channel will inevitably make the attack less effective. Thus, more
complex attacks have to be designed and tested to validate whether AML is effectively a threat
in the wireless domain.
Novel Contributions. The key contribution of this paper is to provide the first comprehensive
modeling and experimental evaluation of adversarial machine learning (AML) attacks to state-
of-the-art wireless deep learning systems. To this end, our study bridges together concepts from
both the wireless and the adversarial learning domains, which have been so far kept separated.
We summarize our core technical contributions as follows:
• We propose a novel AML threat model (Section IV) where we consider (i) a “whitebox”
scenario, where the adversary has complete access to the neural network; and (ii) a “blackbox”
scenario, where the neural network is not available to the adversary. The primary advance of our
model is that our attacks are derived for arbitrary channels, waveforms, and neural networks,
and thus generalizable to any state-of-the-art wireless deep learning system;
• Based on the proposed model, we formulate an AML Waveform Jamming (Section V-A)
and an AML Waveform Synthesis (Section V-B) attack. Next, we propose a Generalized Wireless
Adversarial Machine Learning Problem (GWAP) where an adversary tries to steer the neural
network away from the ground truth while keeping relevant metrics such as bit error rate and
radiated power below given thresholds (Section VI). Next, we propose in Section
VI-B a gradient-based algorithm to solve the GWAP in a whitebox scenario. For the blackbox
scenario, we design a novel neural network architecture called FIRNet. Our approach mixes
together concepts from generative adversarial learning and signal processing to train a neural
network composed of finite impulse response layers (FIRLayers), which are trained to impose
small-scale modifications to the input and at the same time decrease the classifier’s accuracy;
• We extensively evaluate the proposed algorithms on (i) a deep learning model for radio
fingerprinting [4] trained on a 1,000-device dataset of WiFi and ADS-B transmissions collected
in the wild; and (ii) a modulation recognition model [5] trained on the widely-available RadioML
2018.01A dataset, which includes 24 different analog and digital modulations with different levels
of signal-to-noise ratio (SNR). Our algorithms are shown to decrease the accuracy of the models
by up to 3x in the case of whitebox attacks, while keeping the waveform distortion to a minimum.
Moreover, we evaluate our FIRNet approach on the software-defined radio testbed, and show
that our approach confuses the 5-device radio fingerprinting classifier with a fooling rate of up to 97%.
II. RELATED WORK

Adversarial machine learning (AML) has been extensively investigated in computer vision.
Szegedy et al. [18] first pointed out the existence of targeted adversarial examples: given a
valid input x, a classifier C and a target t, it is possible to find x′ ≈ x such that C(x′) = t.
More recently, Moosavi-Dezfooli et al. [11] have further demonstrated the existence of so-called
universal perturbation vectors, such that for the majority of inputs x, it holds that C(x + v) ≠
C(x). Carlini and Wagner [13] evaluated a series of adversarial attacks that are shown to be
effective against defensive neural network distillation [19]. Although the above papers have
made significant advances in our understanding of AML, their findings can only be applied to stationary
learning contexts such as computer vision. The presence of non-stationarity makes wireless AML
significantly more challenging and thus worthy of additional investigation.
Only very recently has AML been approached by the wireless community. Bair et al. [16]
propose to apply a variation of the MI-FGSM attack [20] to create adversarial examples to
modulation classification systems. Shi et al. [15] propose the usage of a generative adversarial
network (GAN) to spoof a targeted device. However, the evaluation is only conducted through
simulation, without a real dataset. Sadeghi et al. [21] proposed two AML algorithms based on a
variation of the fast gradient methods (FGMs) [10] and tested them on the 11-class RadioML 2016.10A
dataset [22] with the architecture in [23]. In this paper, we instead consider the much larger
RadioML 2018.01A dataset [5], which has 24 classes.
The key target of adversarial machine learning (AML) is to compromise the robustness of
classifiers based on neural networks [11]. Broadly speaking, there are two types of AML attacks
studied in the literature, which are often referred to as targeted [18] and untargeted [11]. The
former type attempts to find perturbation vectors v that, applied to a given input x, make
the classifier "steer" toward a different class than the ground truth g. More formally, given a
classifier C and a target t, the adversary tries to find x + v ≈ x such that C(x + v) = t ≠ g.
Conversely, untargeted AML attempts to find universal perturbation vectors v, through which
C(x + v) ≠ C(x) for most inputs x. To keep the notation consistent with previous work, we
will keep the same nomenclature throughout the paper.
Figure 2 summarizes the differences between AML for Computer Vision (CV) and wireless
networking applications. Although very similar in scope and target, there are unique characteristics
that make AML in the wireless domain fundamentally different than AML in CV systems.
First, CV-based algorithms assume that adversarial and legitimate inputs are received “as-is” by
the classifier. In other words, if x is an image and x + v is the adversarial input, the classifier
will always attempt to classify x + v as input. However, due to the wireless channel, we cannot
make this assumption in the wireless domain. In short, any adversarial waveform w + v will
be subject to the additive and multiplicative action of the channel, which can be expressed via
a perturbation matrix C = (ca, cm) given by the wireless channel, which ultimately makes the
classifier attempt to classify the waveform cm(w + v) + ca instead of the w + v waveform. The
second key difference is that wireless AML has to assume that waveforms cannot be arbitrarily
modified, since they have to be decodable at the receiver’s side (i.e., if not decodable, the receiver
will discard received packets, thus making the attack ineffective). Therefore, the adversary has a
critical constraint on the maximum distortion that the joint action C of the channel and its own
perturbation v can impose on a waveform. In other words, cm(w + v) + ca still has to be decodable. As
we will show in the rest of the paper, an adversary’s capability of launching a successful AML
attack will depend on the signal-to-noise ratio (SNR) between the adversary and the receiver.
We use boldface upper and lower-case letters to denote matrices and column vectors, respectively.
For a vector x, xi denotes the i-th element, ‖x‖p indicates the lp-norm of x, x⊤
its transpose, and x · y the inner product of x and y. For a matrix H, Hij will indicate the
(i,j)-th element of H. The notation R and C will indicate the set of real and complex numbers,
respectively.
System Model. The top portion of Figure 3 summarizes our system model, where we consider
a receiving node R, an attacker node A, and a set L of N legitimate nodes communicating with
R. We assume that R hosts a target neural network (TNN) used to classify waveforms coming
from nodes in L.
Let Λ > 1 be the number of layers of the TNN, and C be the set of its classes. We model
the TNN as a function F that maps the relation between an input x and an output y through a
Λ-layer mapping F(x; θ) : R^i → R^o of an input vector x ∈ R^i to an output vector y ∈ R^o. The
mapping happens through Λ transformations:

$$\mathbf{r}_j = F_j(\mathbf{r}_{j-1}, \theta_j), \quad 0 \le j \le \Lambda, \qquad (1)$$
Fig. 3: Overview of AML Waveform Jamming (AWJ) and AML Waveform Synthesis (AWS).
where Fj (rj−1 , θj ) is the mapping carried out by the j-th layer. The vector θ = {θ1 , . . . , θΛ }
defines the whole set of parameters of the TNN. We assume the last layer of the TNN is dense,
meaning that FΛ(rΛ−1, θΛ) = σ(WΛ · rΛ−1 + bΛ), where σ is a softmax activation function, WΛ
is the weight matrix and bΛ is the bias vector.
We evaluate the activation probabilities of the neurons at the last layer of the TNN. Let
c ∈ C be a generic class in the classification set of the TNN. We denote fc (x) as the activation
probability of the neuron corresponding to class c at the output layer of the TNN when input x
is fed to the TNN. From (1), it follows that fc(x) is the c-th component of the TNN output F(x; θ).
Notice that the mapping F(x; θ) : R^i → R^o can be any differentiable function, including recurrent
networks. Taking [5] as a reference, we assume that the input of the TNN is a series of I/Q
samples received from the radio interface. We assume that the I/Q samples may be processed
through a processing function P () before feeding the I/Q samples to the TNN. Common examples
of processing functions P () are equalization, demodulation or packet detection.
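To make the notation above concrete, the following is a minimal sketch (not the authors' code) of how fc(x) can be obtained by querying a trained Keras TNN on a slice of I/Q samples; the model file name, the (288, 2) input layout and the identity processing function P are our assumptions.

```python
# Minimal sketch: querying a trained TNN for the activation probabilities f_c(x).
# The saved model path, input length NI = 288 and the identity P() are illustrative.
import numpy as np
import tensorflow as tf

tnn = tf.keras.models.load_model("tnn.h5")      # hypothetical saved TNN

def P(iq):
    """Optional pre-processing (e.g., equalization); identity here."""
    return iq

def activation_probabilities(iq_slice):
    """Return the softmax vector (f_c(x))_{c in C} for one input slice."""
    x = P(iq_slice)[np.newaxis, ...]            # add batch dimension
    return tnn.predict(x, verbose=0)[0]         # softmax output of last layer

# Example: a random slice of NI = 288 I/Q samples, stored as (288, 2) floats
slice_iq = np.random.randn(288, 2).astype(np.float32)
f = activation_probabilities(slice_iq)
print("predicted class:", int(np.argmax(f)))
```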
Threat Model. We assume the adversary A may or may not be part of the legitimate set of
nodes in L. We call the adversary respectively rogue and external in these cases. We further
classify adversarial action based on the knowledge that the adversary possesses regarding the
TNN. In the first scenario, called whitebox in the literature, the adversary A has perfect knowledge of the
TNN activation functions Fj , meaning that A has access not only to the output layer FΛ but
also to the weight vector θ (and thus, its gradient as a function of the input).
In the second scenario, also called blackbox, the adversary does not have full knowledge of
the TNN, and therefore cannot access gradients. We do assume, however, that the adversary
has access to the output of the TNN. Specifically, for any arbitrarily chosen waveform x, the
adversary can obtain its label C(x) = y by querying the TNN. Obtaining the output of the TNN
is an issue known as 1-bit feedback learning, and was studied by Zhang et al. in [24]. In our
scenario, the adversary could use ACKs or REQs as 1-bit feedback. Specifically, for a given
batch B of size M, the loss function L(B) can be approximated by observing the number A of
ACKs or REQs received for the current batch and then assigning L(B) = (M − A)/M.
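A minimal sketch of this 1-bit feedback approximation follows; the function and variable names are illustrative.

```python
# Sketch of the 1-bit feedback loss approximation: for a batch of M transmitted
# waveforms, the adversary only observes how many were acknowledged (A) and
# assigns L(B) = (M - A) / M.
def approximate_batch_loss(num_acks: int, batch_size: int) -> float:
    """Fraction of the batch that was *not* acknowledged by the receiver."""
    if not 0 <= num_acks <= batch_size:
        raise ValueError("num_acks must be between 0 and batch_size")
    return (batch_size - num_acks) / batch_size

# Example: 37 ACKs observed for a batch of 100 adversarial waveforms
print(approximate_batch_loss(num_acks=37, batch_size=100))  # 0.63
```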
The adversary then may choose different strategies to craft adversarial samples over tuples
(x, y) obtained from querying the TNN. In line with prior work, we consider both targeted
[18] and untargeted [11] attacks.
Wireless Model. To be effective, the attacker must be within the transmission range of R,
meaning that A should be sufficiently close to R to emit waveforms that compromise (to some
extent) ongoing transmissions between any node l ∈ L and R. This scenario is particularly
compelling, since not only can A eavesdrop wireless transmissions generated by R (e.g., feedback
information such as ACKs or REQs), but can also emit waveforms that can be received by R – and
thus, compromise the TNN.
We illustrate the effect of channel action in Figure 3, which can be expressed through
well-established models for wireless networks. Specifically, the waveform transmitted by any
legitimate node L ∈ L and received by R can be modeled as
$$z_L = x_L \circledast h_L + w_L, \qquad (3)$$
Similarly, let xA be the waveform transmitted by node A, and let φ be an attack strategy of
A. The attacker utilizes φ to transform the waveform xA and its I/Q samples. For this reason,
the waveform transmitted by A can be written as xA (φ). For the sake of generality, in this
section we do not make any assumption on φ. However, in Section V we present two examples
of practical relevance (i.e., jamming and waveform synthesis) where closed-form expressions for
the attack strategy φ and xA (φ) are derived. The waveform zA can be written as
$$z_A = x_A(\phi) \circledast h_A + w_A. \qquad (4)$$
Notice that (3) and (4) do not assume any particular channel model, nor any particular attack
strategy. Therefore, our formulation is very general in nature and able to model a rich set of
real-world wireless scenarios.
In most wireless applications, the noise wi can be modeled as additive white Gaussian noise (AWGN).
On the contrary, hi depends on mobility, multi-path and interference. Although these aspects
strongly depend on the application and network scenarios, they are usually assumed to be constant
within the coherence time of the channel, thus allowing us to model hi as a Finite Impulse
Response (FIR) filter with K > 0 complex-valued taps.
By leveraging the above properties, the n-th component zi [n] of the waveform zi received
from node i can be written as follows:
$$z_i[n] = \sum_{k=0}^{K-1} h_{ik}[n]\, x_i[n-k] + w_i[n], \qquad (5)$$
where xi [n] is the n-th I/Q symbol transmitted by node i; hik [n] and wi [n] are respectively the
k-th complex-valued FIR tap and noise coefficients representing the channel effect at time n.
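The following sketch illustrates Eq. (5) numerically, under the assumption of taps that are constant over the slice and drawn as complex Gaussians (i.e., Rayleigh-fading amplitudes); the tap count and noise power are arbitrary choices.

```python
# Sketch of the channel model in Eq. (5): z[n] = sum_k h[k] x[n-k] + w[n],
# with time-invariant taps within the coherence time and AWGN noise.
import numpy as np

def apply_channel(x, K=4, noise_std=0.05, seed=0):
    """Convolve complex baseband samples x with K random FIR taps and add AWGN."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2 * K)
    z = np.convolve(x, h, mode="full")[: len(x)]          # causal K-tap FIR
    w = noise_std * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))
    return z + w

x = np.exp(1j * 2 * np.pi * 0.01 * np.arange(1024))        # toy transmitted waveform
z = apply_channel(x)
```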
With the help of Figure 3, we now introduce the AML Waveform Jamming (Section V-A), and
AML Waveform Synthesis (Section V-B).
A. AML Waveform Jamming (AWJ)

In AWJ, an adversary carefully jams the waveform of a legitimate device to confuse the TNN.
Since the TNN takes as input I/Q samples, the adversary may craft a jamming waveform that,
at the receiver side, causes a slight displacement of I/Q samples transmitted by the legitimate
device, thus pushing the TNN towards a misclassification.
9
As shown in Figure 3, the waveform xA generated by the attacker node A is aimed at jamming
already ongoing transmissions between a legitimate node L and the receiver R. In this case, the
signal received by R can be written as
$$z = z_A + z_L, \qquad (6)$$

where zA and zL are modeled as in (3) and (4). In the AWJ, the attack strategy φ directly defines the I/Q samples of the jamming waveform, i.e.,

$$x_A(\phi) = \left(\phi_n^{\Re} + j\phi_n^{\Im}\right)_{n=1,\ldots,N_J}, \qquad (7)$$

where (i) a^ℑ = Im(a) and a^ℜ = Re(a) for any complex number a; and (ii) NJ > 1 represents the
length of the jamming signal in terms of I/Q samples. Since NJ might be smaller than the TNN
input NI, without loss of generality we assume that the adversary periodically transmits the
sequence of NJ I/Q samples so that they completely overlap with legitimate waveforms and have
the same length. However, it is worth noticing that we do not assume perfect superimposition
of the jamming signal with the legitimate signal, and thus, adversarial signals are not added in
a precise way to the legitimate waveform.
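The following sketch illustrates this periodic superposition; the slice length, jamming length, and offset are arbitrary.

```python
# Sketch of the jamming superposition: an NJ-sample jamming waveform x_A(phi) is
# repeated periodically to cover the NI-sample legitimate slice, then added to it
# (Eq. (6)). The alignment offset is arbitrary, since no perfect superimposition
# of the jamming and legitimate signals is assumed.
import numpy as np

def jam_slice(z_legit, phi, offset=0):
    """Return z = z_A + z_L with the jamming taps phi tiled over the slice."""
    n_i = len(z_legit)
    reps = int(np.ceil((n_i + offset) / len(phi)))
    jam = np.tile(phi, reps)[offset : offset + n_i]   # periodic NJ-sample signal
    return z_legit + jam

z_l = np.random.randn(1024) + 1j * np.random.randn(1024)   # legitimate slice
phi = 0.1 * (np.random.randn(8) + 1j * np.random.randn(8)) # NJ = 8 jamming samples
z = jam_slice(z_l, phi, offset=3)
```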
Undetectability aspects. Recall that any invasive attack might reveal the presence of the
adversary to the legitimate nodes, which will promptly implement defense strategies [25]. For
this reason, the adversary aims at generating misclassifications while masking the very
existence of the attack by computing φ such that the signal z can still be decoded successfully
by the receiver (e.g., by keeping the bit-error-rate (BER) lower than a desirable threshold) and
yet be misclassified. This is because the attacker aims to conceal its presence. If exposed, the
receiver might switch to another frequency, or change location, thus making attacks less effective.
However, we remark that this constraint can be relaxed if the jammer is not concerned about
concealing its presence. We further assume the attacker has no control over channel conditions
(i.e., hA and wA ) and legitimate signals (i.e., zL ), meaning that the attacker can control xA (φ)
only by computing effective strategies φ.
Addressing non-stationarity. An adversary cannot evaluate the channel hL in (3) – which is
node-specific and time-varying. Also, waveforms transmitted by legitimate nodes vary according
to the encoded information, which is usually a non-stationary process. It follows that jamming
waveforms that work well for a given legitimate waveform zL might not be equally effective
for any other z′L ≠ zL. Thus, rather than computing the optimal jamming waveform for each
zL , we compute it over a set of consecutive S legitimate input waveforms, also called slices.
Let ρ ∈ {0, 1} be a binary variable to indicate whether or not the attacker node belongs to
the legitimate node set L (i.e., a rogue node). Specifically, ρ = 1 if the attacker node is a rogue
device and A ∈ L, and ρ = 0 if the attacker is external (i.e., A ∉ L). Also, let cL and cA be the
correct classes of the waveforms transmitted by nodes L and A, respectively.
Untargeted AWJ. The adversary aims at jamming legitimate waveforms such that (i) these are
misclassified by the TNN; (ii) malicious activities are not detected by the TNN; and (iii) attacks
satisfy hardware limitations (e.g., energy should be limited). These objectives and constraints
can be formulated through the following untargeted AWJ problem (AWJ-U):
$$\underset{\phi}{\text{minimize}}\;\; \frac{1}{S}\sum_{s=1}^{S}\left[f_{c_L}(z_s) + \rho \cdot f_{c_A}(z_s)\right] \qquad \text{(AWJ-U)}$$
where zs = zA +zLs , zLs represents the s-th slice (or input) of the TNN; Constraint (C1) ensures
that the BER experienced by the legitimate node is lower than the maximum tolerable BER
threshold BERmax ; while (C2) guarantees that the energy of the jamming waveform does not
exceed a maximum threshold Emax. In practice, Constraints (C1) and (C2) ensure that jamming
waveforms do not excessively alter the position of legitimate I/Q samples. This is crucial to avoid
anti-jamming strategies such as modulation and frequency hopping, among others. Although
Problem (AWJ-U) takes into account Constraints (C1) and (C2) only, in Section VI we extend
the formulation to a larger set of constraints.
Targeted AWJ. By defining cT ∈ C as the target class, we formulate the targeted AWJ as
$$\underset{\phi}{\text{maximize}}\;\; \frac{1}{S}\sum_{s=1}^{S}\left[f_{c_T}(z_s) - \left(f_{c_L}(z_s) + \rho \cdot f_{c_A}(z_s)\right)\right] \qquad \text{(AWJ-T)}$$
When compared to Problem (AWJ-U), Problem (AWJ-T) differs in terms of the objective
function. One naive approach would see the adversary maximize the term (1/S) Σ_{s=1}^{S} f_{cT}(z_s) only.
However, the objective of the adversary is to produce misclassifications, so the adversary should
try to reduce the activation probability of the jammed class cL and the adversarial class cA, while
maximizing the activation probability for the target class cT. Since the TNN is expected to have
high accuracy, simply maximizing (1/S) Σ_{s=1}^{S} f_{cT}(z_s) does not necessarily prevent the
TNN from still correctly classifying transmissions from the legitimate device L
(i.e., the activation probability f_{cL} might still be high).
Let us provide a simple yet effective example. Assume that the attacker is external (ρ = 0),
(1/S) Σ_{s=1}^{S} f_{cT}(z_{Ls}) = 0.1 and (1/S) Σ_{s=1}^{S} f_{cL}(z_{Ls}) = 0.9. Let us consider the case where the adversary
computes φ such that the term (1/S) Σ_{s=1}^{S} f_{cT}(z_s) only is maximized. A reasonable outcome of this
optimization problem is that φ is such that (1/S) Σ_{s=1}^{S} f_{cT}(z_s) = 0.4 and (1/S) Σ_{s=1}^{S} f_{cL}(z_s) = 0.6. In
this case, it is easy to notice that input waveforms are still classified as belonging to class cL. A
similar argument can be made for the term ρ·f_{cA}(z_s) when ρ = 1 (i.e., the attacker is a rogue node).
In other words, to effectively fool the TNN, the attacker must generate waveforms that (i)
suppress features of class cL ; (ii) mimic those of class cT ; and (iii) hide features of the attacker’s
class cA . These objectives can be formulated via the objective function in Problem (AWJ-T).
B. AML Waveform Synthesis (AWS)

The AWS attack maps well to scenarios such as radio fingerprinting, where a malicious device aims
at generating a waveform embedding impairments that are unique to the target legitimate device
[4]. In other words, the attacker cannot generate random waveforms as in the AWJ, but should
transmit waveforms that contain decodable information. To this end, FIR filters are uniquely
positioned to address this issue. More formally, a FIR is described by a finite sequence φ of M
filter taps, i.e., φ = (φ1 , φ2 , . . . , φM ). For any input x ∈ X , the filtered n-th element x̂[n] ∈ x̂
can be written as
$$\hat{x}[n] = \sum_{m=0}^{M-1} \phi_m\, x[n-m], \qquad (8)$$
It is easy to observe that by using FIRs, the adversary can manipulate the position in the
complex plane of the transmitted I/Q symbols. By using complex-valued filter taps, i.e., φm ∈ C
for all m = 0, 1, . . . , M − 1, Eq. (8) becomes:
$$\hat{x}[n] = \sum_{m=0}^{M-1} \left(\phi_m^{\Re} + j\phi_m^{\Im}\right)\left(x^{\Re}[n-m] + j x^{\Im}[n-m]\right). \qquad (9)$$
For example, to rotate all I/Q samples by θ = π/2 radians and halve their amplitude, we
may set φ1 = (1/2)e^{jπ/2} and φk = 0 for all k > 1. Similarly, other complex manipulations can be
obtained by fine-tuning the filter taps. It is clear that complex FIRs can be effectively used by the
attacker node to fool the TNN through AWS attacks.
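The single-tap example above can be verified numerically with the following sketch.

```python
# Sketch of the single-tap example: a complex FIR with phi_1 = 0.5*exp(j*pi/2)
# and all other taps zero rotates every I/Q sample by pi/2 and halves its amplitude.
import numpy as np

def fir_filter(x, taps):
    """Apply a complex-valued FIR (Eqs. (8)-(9)) to complex baseband samples x."""
    return np.convolve(x, taps, mode="full")[: len(x)]

x = np.array([1 + 1j, 1 - 1j, -1 + 1j], dtype=complex)      # toy I/Q symbols
taps = np.array([0.5 * np.exp(1j * np.pi / 2)])             # phi_1 only
y = fir_filter(x, taps)
print(y)        # each symbol rotated by 90 degrees and scaled by 0.5
```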
By using a FIR φ with M complex-valued taps, the waveform xA (φ) transmitted by the
attacker can be written as
$$x_A(\phi) = x_{BB} \circledast \phi, \qquad (10)$$

where xA(φ) = (xA[n](φ))n=1,...,NI, xA[n](φ) is computed as in (9), xBB = (xBB[n])n=1,...,NI
is an intelligible signal (e.g., a portion of a WiFi packet), and φ = (φ^ℜ_m + jφ^ℑ_m)m=1,...,M is the adversarial FIR.
Notice that Problems (AWJ-U), (AWJ-T) and (AWS) are similar in target. Thus, we propose
the following Generalized Wireless AML problem (GWAP) formulation
$$\underset{\phi}{\text{maximize}}\;\; \sum_{s=1}^{S}\sum_{c \in \mathcal{C}} \omega_c\, f_c(z_s) \qquad \text{(GWAP)}$$
$$\text{subject to}\;\; \mathbf{g}(z_s) \le 0, \quad s = 1, \ldots, S,$$
where g(z) = (g1(z), . . . , gG(z))⊤ is a generic set of constraints that reflect BER, energy and
any other constraint that the attack strategy φ must satisfy (e.g., upper and lower bounds); and
ωc takes values in {−ρ, −1, 0, 1, ρ} depending on the considered attack. As an example, Problem
(AWJ-T) has ωcT = 1, ωcL = −1, ωcA = −ρ and ωc = 0 for all c ≠ cL, cT, cA.
Problem (GWAP) is non-trivial since (i) the functions fc have no closed form and depend on
millions of parameters; (ii) both the objective and the constraints are highly non-linear and
non-convex; (iii) it is not possible to determine the convexity of the problem. Despite the
above challenges, in whitebox attacks the adversary has access to the gradients of the TNN
(Figure 3). In the following, we show how an attacker can effectively use gradients to efficiently
compute AML attack strategies. It is worth mentioning that our whitebox algorithms, similar
to the fast gradient sign method (FGSM) [26], use gradients to generate adversarial outputs.
Despite being similar, FGSM can compute adversarial examples tailored for a specific input and
a specific channel condition only. Conversely, as explained in Section V-A under "Addressing
non-stationarity", our algorithms take into account multiple inputs to find a single FIR filter that
can synthesize adversarial inputs for multiple channel conditions, thus being more general
and practical than FGSM-based approaches.
From (6), the input of the TNN is z = zA + zL . Since zL cannot be controlled by the attacker
node, we have fc (z) = fc (zA ). Figure 3 shows that the TNN provides the gradients ∇z fc (z),
hence the attacker can compute the gradients ∇φ fc (z) of the activation probability corresponding
to the c-th class of the TNN with respect to the attacker’s strategy φ by using the well-known
chain rule of derivatives. Specifically, the gradients are

$$\nabla_\phi f_c(\mathbf{z}) = \mathbf{J}_\phi^{\top}(\mathbf{z}) \cdot \nabla_{\mathbf{z}} f_c(\mathbf{z}), \qquad (12)$$

where Jφ(z) is the NI × M Jacobian matrix of the input z with respect to the attacker's strategy
φ, ⊤ is the transposition operator, and · stands for the matrix dot product.
We define the input of the TNN as a set of NI consecutive I/Q samples, i.e., z = (z[n])n=0,...,NI −1 ,
where zn ∈ C for all n = 0, . . . , NI − 1. The attacker’s waveform is defined as a sequence of
M complex numbers, i.e., xA (φ) = (xA [m](φ))m=0,...,M −1 whose values depend on the attack
strategy φ. With this information at hand, we observe the gradient ∇φ fc (z) has dimension
2M × 1, while the gradients with respect to the real and imaginary parts of the m-th component are
respectively
$$\frac{\partial f_c(\mathbf{z})}{\partial \phi_m^{\Re}} = \sum_{n=1}^{N_I} \frac{\partial f_c(\mathbf{z})}{\partial z^{\Re}[n]} \frac{\partial z^{\Re}[n]}{\partial \phi_m^{\Re}} + \frac{\partial f_c(\mathbf{z})}{\partial z^{\Im}[n]} \frac{\partial z^{\Im}[n]}{\partial \phi_m^{\Re}}, \qquad (13)$$

$$\frac{\partial f_c(\mathbf{z})}{\partial \phi_m^{\Im}} = \sum_{n=1}^{N_I} \frac{\partial f_c(\mathbf{z})}{\partial z^{\Re}[n]} \frac{\partial z^{\Re}[n]}{\partial \phi_m^{\Im}} + \frac{\partial f_c(\mathbf{z})}{\partial z^{\Im}[n]} \frac{\partial z^{\Im}[n]}{\partial \phi_m^{\Im}}. \qquad (14)$$
A. Gradients Computation
We remark that while the AWJ generates waveforms that mimic noise on the channel and target
already ongoing transmissions between legitimate nodes, the AWS aims at creating synthetic
waveforms when no other node is occupying the wireless channel. Therefore, the two attacks
require different attack strategies φ, which will inevitably result in different values of (13) and
(14). Thus, we discuss the implementation details of AWJ and AWS attacks and derive the
corresponding closed-form expressions for the partial derivatives in (13) and (14).
AML Waveform Jamming. Here, the adversary is not required to transmit intelligible or
standard-compliant waveforms. Therefore, xA(φ) is defined as in (7). Since φ is the only variable
the attacker can control, ∂z^{Z′}[n]/∂φ^{Z″}_m = ∂z_A^{Z′}[n]/∂φ^{Z″}_m, where Z′ and Z″ can be either ℜ or ℑ to identify real
and imaginary parts, respectively.
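For the AWJ case, the same gradients can also be obtained by automatic differentiation instead of closed-form expressions, as in the following sketch (not the authors' implementation); the Keras model interface and the real-valued (NI, 2) input layout are assumptions, and NJ is assumed to divide NI for brevity.

```python
# Whitebox sketch: Eqs. (13)-(14) for the AWJ via autodiff, since z = z_L + x_A(phi)
# and x_A(phi) simply tiles the NJ jamming taps over the NI-sample slice.
import tensorflow as tf

def awj_gradients(tnn, z_legit, phi_re, phi_im, class_idx):
    """Return d f_c / d phi^Re and d f_c / d phi^Im for one legitimate slice."""
    n_i = z_legit.shape[0]                       # NI: slice length
    phi_re = tf.Variable(phi_re, dtype=tf.float32)
    phi_im = tf.Variable(phi_im, dtype=tf.float32)
    with tf.GradientTape() as tape:
        reps = n_i // int(phi_re.shape[0])       # assume NJ divides NI for brevity
        jam = tf.stack([tf.tile(phi_re, [reps]), tf.tile(phi_im, [reps])], axis=-1)
        z = tf.cast(z_legit, tf.float32) + jam   # Eq. (6): z = z_A + z_L
        f_c = tnn(z[tf.newaxis, ...], training=False)[0, class_idx]
    return tape.gradient(f_c, [phi_re, phi_im])
```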
Now we present a general solution to Problem GWAP which leverages the availability of
gradients (13), (14), (15) and (16) to compute an effective attack strategy φ.
First, we relax the constraints gi (·) through Lagrangian Relaxation [27]. Specifically, we define
the augmented Lagrangian
$$\mathcal{L}(\phi, \boldsymbol{\lambda}) = \sum_{s=1}^{S}\left(\sum_{c \in \mathcal{C}} \omega_c f_c(z_s) - \boldsymbol{\lambda}_s^{\top} \mathbf{g}(z_s) - \frac{\rho}{2}\,\|\mathbf{g}(z_s)\|_2^2\right), \qquad (17)$$
where λs = (λ0,s, . . . , λG,s)⊤, λG,s ≥ 0, λ = (λ1, . . . , λS), and ρ > 0 is a fixed step size to regulate
the convergence speed of the algorithm [27]. By using Lagrangian duality, an approximated
solution to Problem (GWAP) can be found by the following iterative process
$$\phi^{(t)} \in \underset{\phi}{\arg\max}\; \mathcal{L}\left(\phi, \boldsymbol{\lambda}^{(t-1)}\right), \qquad (18)$$

$$\boldsymbol{\lambda}_s^{(t)} = \max\left\{0,\, \boldsymbol{\lambda}_s^{(t-1)} + \gamma_t\, \mathbf{g}(z_s)\right\}, \qquad (19)$$

where t represents the iteration counter and γt is a decreasing step size such that Σ_t γt = ∞ and Σ_t γt² < ∞ [27].
We solve (18) via the Non-linear Conjugate Gradient (NCG) method [28]. To compute a
solution at each iteration t, we define the gradient of L(φ, λ(t−1) ) as a function of the attack
strategy φ:
$$\nabla_\phi \mathcal{L}\left(\phi, \boldsymbol{\lambda}^{(t-1)}\right) = \sum_{s=1}^{S}\left(\sum_{c \in \mathcal{C}} \omega_c \nabla_\phi f_c(z_s) - \boldsymbol{\lambda}_s^{(t-1)\top} \nabla_\phi \mathbf{g}(z_s) - \rho\, \mathbf{J}_g^{\top}(\phi) \cdot \mathbf{g}(z_s)\right), \qquad (20)$$
with ∇φ fc(zs) being computed as in (12), and ∇φ g(zs) and J⊤g(φ) being the gradient and Jacobian
matrix of the functions g with respect to φ, respectively. We omit the NCG-based solution, and
refer the interested reader to [27, 28] for a theoretical background of the algorithm.
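A minimal sketch of the primal-dual iteration (18)-(19) follows, with a plain gradient-ascent step standing in for the NCG update used in the algorithm; `grad_lagrangian` and `constraints` are assumed callables (the former could be obtained via (12)-(14) or by automatic differentiation), and φ is represented as a real vector stacking the real and imaginary tap components.

```python
# Minimal sketch of the iterative scheme in (17)-(19), with gradient ascent in
# place of the NCG step. constraints(phi) is assumed to return the vector g(z),
# and grad_lagrangian(phi, lam) the gradient of the augmented Lagrangian (20).
import numpy as np

def solve_gwap(phi0, grad_lagrangian, constraints, n_iter=200, step=0.05):
    """Approximate primal-dual solution of Problem (GWAP)."""
    phi = np.array(phi0, dtype=float)
    lam = np.zeros(len(constraints(phi)))            # one multiplier per constraint
    for t in range(1, n_iter + 1):
        phi = phi + step * grad_lagrangian(phi, lam)     # ascent step on L(phi, lambda)
        gamma_t = 1.0 / t                                # step sizes: sum = inf, sum of squares < inf
        lam = np.maximum(0.0, lam + gamma_t * constraints(phi))   # Eq. (19)
    return phi, lam
```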
The core objective of FIRNet is to hack the TNN without requiring a copy of the
TNN. To this end, we leverage the feedback from the TNN to carefully transform the input via a
series of finite impulse response (FIR) convolutional layers, which to the best of our knowledge
are conceived for the first time in this paper.
Figure 4 shows at a high level the architecture of FIRNet. In a nutshell, the ultimate target
of FIRNet is to take as input a number of I/Q samples generated by the adversary’s wireless
application and a target class that the adversary wants to imitate, and "perturb" them through a series of consecutive
FIRLayers. The key intuition is that FIR operations are easily implementable in software and
in hardware, making the complexity of FIRNet scalable. Moreover, an FIR can be implemented
using one-dimensional (1D) layers in Keras. Thus, FIRNet is fully GPU-trainable and applicable
to many different applications beside the ones described in this paper. More formally, by defining
Fig. 4: The FIRNet Architecture.
xR , xI the real and imaginary components of an I/Q signal, and φR , φI the real and imaginary
components of the FIR, a FIRLayer manipulates an input as follows:
$$y[n] = \sum_{i=0}^{N-1}\left(\phi_i^{R} + j\phi_i^{I}\right)\left(x^{R}[n-i] + j x^{I}[n-i]\right), \qquad (21)$$
Before training, the FIRLayer's weights are initialized such that φ0 = 1 and φi = 0 for all i > 0.
This initialization in essence represents an identity filter, which returns the input values unchanged.
The reason why we consider this particular initialization rule is to preserve the shape and content
of input waveforms in the first few training epochs. This way FIRNet updates weights iteratively
without irremediably distorting input waveforms.
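A possible Keras implementation of a FIRLayer with the identity initialization described above is sketched below; it is our own rendering (a causal 1D convolution over inputs of shape (batch, N, 2)), not the authors' released code.

```python
# Sketch of a FIRLayer as a custom Keras layer. Inputs have shape (batch, N, 2)
# with real/imaginary parts on the last axis. Taps are initialized to the identity
# FIR (phi_0 = 1, phi_i = 0 for i > 0), so the layer initially returns its input.
import tensorflow as tf

class FIRLayer(tf.keras.layers.Layer):
    def __init__(self, num_taps=10, **kwargs):
        super().__init__(**kwargs)
        self.num_taps = num_taps

    def build(self, input_shape):
        init = tf.constant_initializer([1.0] + [0.0] * (self.num_taps - 1))
        self.phi_re = self.add_weight(name="phi_re", shape=(self.num_taps,), initializer=init)
        self.phi_im = self.add_weight(name="phi_im", shape=(self.num_taps,), initializer="zeros")

    def call(self, x):
        xr, xi = x[..., 0:1], x[..., 1:2]                    # (batch, N, 1) each
        # causal padding so that y[n] = sum_i phi_i * x[n - i]  (Eq. (21))
        pad = [[0, 0], [self.num_taps - 1, 0], [0, 0]]
        xr, xi = tf.pad(xr, pad), tf.pad(xi, pad)
        kr = tf.reshape(tf.reverse(self.phi_re, [0]), (-1, 1, 1))
        ki = tf.reshape(tf.reverse(self.phi_im, [0]), (-1, 1, 1))
        yr = tf.nn.conv1d(xr, kr, 1, "VALID") - tf.nn.conv1d(xi, ki, 1, "VALID")
        yi = tf.nn.conv1d(xr, ki, 1, "VALID") + tf.nn.conv1d(xi, kr, 1, "VALID")
        return tf.concat([yr, yi], axis=-1)
```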
We first provide a brief background on traditional adversarial networks, and then we formalize
the FIRNet training process.
Generative adversarial networks (GANs) are composed of a generator G and a discriminator
D. G and D are trained respectively (i) to learn the data distribution and (ii) to distinguish
samples that come from the training data rather than from G. To this end, the generator builds a
mapping function parametrized with θg from a prior noise distribution pz as G(z; θg ), while
the discriminator D(x; θd ), parametrized with θd parameters, outputs a single scalar representing
the probability that x came from the training data distribution px rather than the generator G.
Therefore, G and D are trained simultaneously in a minmax problem, where the target is to
find the G that minimizes log(1 − D(G(z))) and the D that maximizes log D(x) + log(1 − D(G(z))). More formally:

$$\min_G \max_D \;\; \mathbb{E}_{x \sim p_x}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]. \qquad (22)$$
Although FIRNet is at its core an adversarial network, there are a number of key aspects that
set FIRNet apart from existing GANs. First, in our scenario D has already been trained and
thus is not subject to any modification during the G training process. Second, GANs assume
that D is a binary discriminator (i.e., "fake" vs. "authentic" response). This is not the case in our
problem, since D has a softmax output (i.e., multiclass). Third, GANs take as input a noise vector,
whereas here we need to take baseband I/Q samples as inputs. Fourth, as shown in (22),
the minmax problem solved by GANs is unconstrained, while the GWAP problem in Section
VI is instead constrained. Fifth, GANs assume stationarity, which is not entirely the case in the
wireless domain. Finally, to really implement a “blackbox” attack, we cannot assume that the
waveform produced by FIRNet will be used by the target network without further processing
(e.g., demodulation), which is instead assumed in traditional GANs.
For the above reasons, we devise a brand-new training strategy shown in Figure 5. In a nutshell,
we aim to train a generator function G able to imitate any device the target network D has been
trained to discriminate, using any baseband waveform of interest. As in previous work [4],
to limit the FIR action to a given scope we model the constraint (C1) in Problem (AWJ-U) as
a box constraint where each I/Q component of the FIR is constrained within [−ε, ε]², for any
small ε > 0.
First, the adversary generates a waveform training batch B (step 1), where waveforms are
generated according to the wireless protocol being used. For example, if WiFi is the wireless
protocol of choice, each waveform could be the baseband I/Q samples of a WiFi packet that
the adversary wants to transmit. To each waveform z in the batch, the adversary assigns an
embedded label y, which is selected randomly among the set of devices that the adversary wants
to imitate. Notice that the adversary does not need to know exactly the number of devices in
the network. This set is then fed to FIRNet, which generates a training output G(z, y, ε) (step 2),
where ε is the constraint on the weights of the FIRLayers as explained earlier.
The waveform produced by FIRNet is then transmitted over the air and then received as a
waveform H(G(z, y, ε)) (step 3). It is realistic to assume that the device could pre-process the
waveform before feeding it to the target network, e.g., to extract features in the frequency
domain [4, 29]. Thus, the softmax output of the target network is modeled as O(z, y) =
D(P(H(G(z, y, ε)))). We assume that the adversary does not have access in any way to D
and P , but only to the softmax output. The adversary can thus minimize the following loss:
$$\mathcal{L}(B) = -\sum_{(z,y) \in B}\,\sum_{t=1}^{M} \mathbb{1}\{t = y\} \cdot \log\left(O_t(z, y)\right), \qquad (23)$$
where M is the number of devices, I{·} is a binary indicator function, and Ot is the softmax
output for target class t. The adversary can then minimize L(B) using stochastic gradient descent
(SGD) or similar algorithms.
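The loss in (23) reduces to a cross-entropy over the queried softmax outputs, as in the following sketch; `query_target`, which stands for the over-the-air pipeline D(P(H(·))), is an assumed interface.

```python
# Sketch of the FIRNet loss in Eq. (23): categorical cross-entropy between the
# embedded target labels y and the softmax output O(z, y) queried from the target
# network. Only the softmax output is used, as in the blackbox setting.
import numpy as np

def firnet_batch_loss(batch, query_target, eps=1e-12):
    """batch: iterable of (waveform, target_label) pairs; returns L(B)."""
    loss = 0.0
    for waveform, y in batch:
        softmax_out = query_target(waveform)          # O(z, y): probabilities over devices
        loss -= np.log(softmax_out[y] + eps)          # only the t = y term survives
    return loss
```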
We first describe the datasets and learning architectures in Section VIII-A, followed by the
results for AWJ (Section VIII-B), AWS (Section VIII-C) and FIRNet (Section VIII-D).
1) Radio Fingerprinting: We consider (i) a dataset of 500 devices emitting IEEE 802.11a/g
(WiFi) transmissions; and (ii) a dataset of 500 airplanes emitting Automatic Dependent Surveil-
lance – Broadcast (ADS-B) beacons. ADS-B is a surveillance transmission where an aircraft
determines its position via satellite navigation. (Due to stringent contract obligations, we cannot
release these datasets to the community. We hope this will change in the future.) For the WiFi dataset, we demodulated the trans-
missions and trained our models on the derived I/Q samples. To demonstrate the generality of
our AML algorithms, the ADS-B model was instead trained on the unprocessed I/Q samples. We
use the CNN architecture in [30], where the input is an I/Q sequence of length 288, followed by
two convolutional layers (with ReLU and 2x2 MaxPool) and two dense layers of size 256 and 80.
We refer to the above CNN models as RF-W (WiFi) and RF-A (ADS-B) TNN architectures.
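A minimal Keras sketch of a CNN with this shape is given below; the filter counts, kernel sizes and the (288, 2) input layout are our assumptions rather than the exact choices of [30].

```python
# Minimal Keras sketch of a radio-fingerprinting CNN: input of 288 I/Q samples,
# two convolutional layers with ReLU and max-pooling, dense layers of 256 and 80
# neurons, and a softmax output with one neuron per device.
import tensorflow as tf
from tensorflow.keras import layers

def build_rf_tnn(num_devices=500):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(288, 2)),                      # 288 I/Q samples
        layers.Conv1D(50, 7, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(50, 7, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(80, activation="relu"),
        layers.Dense(num_devices, activation="softmax"),     # one neuron per device
    ])

model = build_rf_tnn()
model.summary()
```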
2) Modulation Classification (MC): We use the RadioML 2018.01A dataset, publicly avail-
able for download at http://deepsig.io/datasets. The dataset is to the best of our knowledge
the largest modulation dataset available, and includes 24 different analog and digital modulations
generated with different levels of signal-to-noise ratio (SNR). Details can be found in [5]. For
the sake of consistency, we also consider the neural network introduced in Table III of [5], which
presents 7 convolutional layers each followed by a MaxPool-2 layer, finally followed by 2 dense
layers and 1 softmax layer. The dataset contains 2M examples, each 1024 I/Q samples long. In
the following, this model will be referred to as the MC TNN architecture. We considered the
same classes shown in Figure 13 of [5]. The confused classes in Fig. 7 (ε = 0.2) of our paper and
in Figure 13 of [5] are the same (i.e., mostly M-QAM modulations). Notice that ε = 0 corresponds to
zero transmission power (i.e., no attack).
3) Data and Model Setup: For each architecture and experiment, we have extracted two
distinct datasets for testing and optimization purposes. The optimization set is used to compute
the attack strategies φ as shown in Sections V and VI. The computed φ are then applied to
the testing set and then fed to the TNN. To understand the impact of channel conditions, we
simulate a Rayleigh fading channel with AWGN noise hA that affects all waveforms that node
A transmits to node R. We consider high and low SNR scenarios with path loss equal to 0dB
and 20dB, respectively. Moreover, we also consider a baseline case with no fading.
4) Model Training: To train our neural networks, we use an ℓ2 regularization parameter
λ = 0.0001. We also use an Adam optimizer with a learning rate of l = 10^-4 and categorical
cross-entropy as a loss function. All architectures are implemented in Keras. The source code
used to train the models is free and available to the community for download at
https://github.com/neu-spiral/RFMLS-NEU.
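For reference, this training configuration corresponds to a Keras setup along the following lines; `build_rf_tnn` is a placeholder for the actual model builder, and we assume its layers are constructed with kernel_regularizer=tf.keras.regularizers.l2(1e-4) to match the ℓ2 penalty.

```python
# Sketch of the training setup: Adam with learning rate 1e-4 and categorical
# cross-entropy. The model builder and the dataset tensors are placeholders.
import tensorflow as tf

model = build_rf_tnn()        # hypothetical builder, e.g., the sketch shown earlier
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=256, epochs=25)
```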
B. AML Waveform Jamming (AWJ) Results
In AWJ, the adversary aims at disrupting the accuracy of the TNN by transmitting waveforms
of length NJ and of maximum amplitude ε > 0, to satisfy Constraint (C2) and keep the energy
of the waveform limited.
1) Untargeted AWJ (AWJ-U): Figure 6(a) shows the accuracy of the MC TNN (original accuracy
of 60%) under the AWJ-U attack, for different channel conditions hA, jamming waveform
lengths NJ and ε values. Figure 6 shows that the adversary always reduces the accuracy of
the TNN even when NJ and ε are small. We notice that high SNR fading conditions allow the
adversary to halve the accuracy of the TNN, while the best performance is achieved in no-fading
conditions where the attacker can reduce the accuracy of the TNN by a 3x factor.
Fig. 6: Accuracy of (a) MC TNN (originally 60%) and (b) RF-W TNN (originally 40%) under
the AWJ-U attack for different jamming lengths and ε values.
Figures 7 and 8 show the confusion matrices and the corresponding accuracy levels of the
AWJ-U attack to the MC TNN model in the low SNR regime. Here, increasing ε also increases
the effectiveness of the attack, demonstrated by the presence of high values outside the main
diagonal of the confusion matrix.
Figure 6(b) shows the accuracy of the RF-W TNN for different attack strategies, constraints
and fading conditions. To better understand the impact of AWJ-U, we have extracted the 10
least accurately classified (i.e., Bottom 10) and 10 most accurately classified (i.e., Top 10) devices out of the 500 devices included
in the WiFi dataset. Interestingly, AWJ-U attacks are extremely effective when targeting the top
devices. In some cases, the attacker can drop the accuracy of these devices from 70% to a mere
20% in the high SNR regime. Since the bottom 10 devices are classified with a low accuracy
already, it takes minimal effort to alter legitimate waveforms and activate other classes.

Fig. 7: Confusion matrix of MC TNN under the AWJ-U attack in low SNR regime for different ε values.

Fig. 8: Accuracy of MC TNN in Fig. 7 (originally 60%).
2) Targeted AWJ (AWJ-T): Compared to untargeted jamming, AWJ-T requires smarter attack
strategies as the adversary needs to (i) jam an already transmitted waveform, (ii) hide the
underlying features of the jammed waveform and (iii) mimic those of another class. The top
portion of Figure 9 shows the fooling matrices of AWJ-T attacks against the MC TNN. Notice that
the higher the fooling rate, the more successful the attack is. We notice that the adversary is
able to effectively target a large set of modulations from 1 to 17 and 24 (i.e., OOK, M-QAM,
M-PSK, ASK). However, classes 18-23 (i.e., AM, FM and GMSK) are hard to target
and show low fooling rate values. The bottom portion of Figure 9 shows the results concerning
the AWJ-T attack against RF-W TNN. In this case, the adversary achieves higher fooling rates
with higher energy.
Fig. 9: (top) Fooling matrix of MC TNN under AWJ-T for different NJ and ε values; (bottom) Fooling
matrix of RF-W TNN under AWJ-T for different ε values and no fading.
C. AML Waveform Synthesis (AWS) Results

Let us now evaluate the performance of AWS attacks in the case of rogue nodes. In this
case, the attacker strategy φ consists of M complex-valued FIR taps (Section V-B) that are
convolved with a baseband waveform xBB. To simulate a rogue device, we extract xBB from
the optimization set of the rogue class. This way we can effectively emulate a rogue class that
needs to hide its own features and imitate those of the target classes.
Fig. 10: Fooling matrix of MC TNN under AWS with different M (M = 4: top; M = 8: bottom).
Figure 10 shows the fooling matrix of AWS attacks against the MC TNN for different channel
conditions and values of M when ε = 0.2. First, note that the main diagonal shows close-to-
zero accuracy, meaning that the attacker can successfully hide its own features. Second, in the
no-fading regime, rogue classes can effectively imitate a large set of target classes. Figure 11
depicts the fooling matrices of AWS attacks against the RF-W TNN. We notice that (i) increasing
the number of FIR taps increases the fooling rate; and (ii) the bottom classes (1-10) are the
ones that the attacker is not able to imitate. However, the same does not hold for the top 10
classes (11 to 20), which can be imitated with high probability (i.e., 28%, 35%, 62% for classes
11,15,20, respectively). Figure 11 gives us an interesting insight on AWS attacks as it shows
that the attacker is unlikely to successfully imitate those classes that are already misclassified by the TNN.
The same behavior is also exhibited by the RF-A TNN. Figure 12 shows the fooling matrix
when ε = 0.5 and M = 4. Our results clearly show that the attacker is not able to properly
Fig. 11: Fooling matrix of RF-W TNN under AWS for different values of M (M = 4: top;
M = 8: bottom).
imitate classes 1-10 (i.e., the bottom classes). Classes 11-20 (i.e., the top classes) can instead be
imitated to some extent. This is because it is unlikely that a unique setup of ε and M will
work for all classes (both rogue and target).
To further demonstrate this critical point, Figure 13 shows how rogue classes can actually
imitate other classes by utilizing different values of M and ε. We define two cases: Case A,
where A=11 and T=14, and Case B, where A=15 and T=17. As shown in Figure 12, Case A
and B both yield a low fooling rate when M = 4 and ε = 0.5. Figure 13 shows two ADS-B
waveforms generated through AWS attacks in Case A and Case B, where solid lines show the
original waveform transmitted by the rogue node without any modification in Case A and B. At
first, the unmodified blue waveforms are classified by the RF-A TNN as belonging to the rogue
class (11 and 15, respectively) with probabilities 97% and 88%. However, by applying AWS
with different M and ε parameters than the ones in Figure 12, the adversary is successful in
imitating the target class in both Case A and B by increasing the activation probability to 20%
and 28%, which are considerably larger than the activation probability of all other 500 classes
in the dataset. This demonstrates that the choice of M and ε is critical to the success of the AWS.

Fig. 12: Fooling matrix of RF-A TNN (original accuracy 60%) under AWS with M = 4 and ε = 0.5.
Finally, the waveforms in Figure 13 give precious insights on how AWS actually operates.
Interestingly, we notice that the phase of the waveforms does not change significantly, unlike
the amplitude. Since ADS-B uses an on-off keying (OOK) modulation, we verified that the
modifications made to the waveform did not increase the BER of those transmissions. Moreover,
Figure 13 shows that AWS attempts to change the path loss between A and R, as the amplitude
respectively increases and decreases in Case A and B.
Fig. 13: Comparison of waveforms generated through AWS attacks to RF-A TNN. Case A (A=11, T=14): without AWS vs. AWS with ε = 0.2, M = 5; Case B (A=15, T=17): without AWS vs. AWS with ε = 0.2, M = 9.
D. FIRNet Results

We evaluate FIRNet on the software-defined radio testbed of Figure 1(d), where the adversarial
devices use FIRNet to fool the neural network. The receiver SDR samples the incoming signals at 20 MS/s and
equalizes them using WiFi pilots and training sequences. The resulting data is used to train a TNN
(see Figure 7 of [4]) which takes as input 6 equalized OFDM symbols, thus 48*6 = 288 I/Q
samples. It is composed by two 1D Conv/ReLU with dropout rate of 0.5 and 50 filters of size
1x7 and 2x7, respectively. The output is then fed to two dense layers of 256, and 80 neurons,
respectively. We trained our network using the procedure in Section VIII-A4. The resulting
confusion matrix of the classifier, which obtains 59% accuracy, is shown in Figure 1(a).
We trained FIRNet using baseband WiFi I/Q samples, thus without any impairment, with 1
FIRLayer and with a batch of 100 slices. Figure 14(a) shows that when ε has a low value of 0.1,
FIRNet-generated I/Q sequences always collapse onto a single class, and therefore are not able to
hack the TNN. However, Figure 14(b) shows that when ε increases to 1 the fooling rate jumps
to 79%, which further increases to 97% with 20 FIR taps and ε = 10, improving by over 60%
Fig. 14: FIRNet fooling matrices with 1 FIRLayer and different numbers of taps and ε values.
with respect to the replay attack that could achieve only 30% fooling rate (see Figure 1(c)).
Finally, Figure 15(a) and (b) show respectively the displacement caused by FIRNet on an
input slice with ε = 10 and the average values of the 5 FIR taps obtained after training. We do
not plot the remaining 15 taps since they are very close to zero. We notice that the distortion
imposed on the I/Q samples is kept to a minimum, which is confirmed by the average FIR tap
value, which always remains below one.
IX. CONCLUSIONS
In this paper, we have provided a comprehensive modeling and experimental evaluation of adversarial
machine learning attacks to wireless deep learning systems, under both whitebox and blackbox
settings. Finally, we have extensively evaluated the performance of our algorithms on existing
state-of-the-art neural networks and datasets. Results demonstrate that our algorithms are effective
in confusing the classifiers to a significant extent.
REFERENCES
[1] Statista.com, “Internet of Things - Number of Connected Devices Worldwide 2015-2025.” https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/, 2019.
[2] F. Restuccia and T. Melodia, “Big Data Goes Small: Real-Time Spectrum-Driven Embedded Wireless
Networking through Deep Learning in the RF Loop,” in Proceedings of IEEE INFOCOM, 2019.
[3] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
[4] F. Restuccia, S. D’Oro, A. Al-Shawabka, M. Belgiovine, L. Angioloni, S. Ioannidis, K. Chowdhury,
and T. Melodia, “DeepRadioID: Real-Time Channel-Resilient Optimization of Deep Learning-based Radio
Fingerprinting Algorithms,” Proceedings of ACM MobiHoc, 2019.
[5] T. J. O’Shea, T. Roy, and T. C. Clancy, “Over-the-Air Deep Learning Based Radio Signal Classification,”
IEEE Journal of Selected Topics in Signal Processing, vol. 12, pp. 168–179, Feb 2018.
[6] F. Restuccia and T. Melodia, “PolymoRF: Polymorphic Wireless Receivers Through Physical-Layer Deep
Learning,” Proceedings of ACM MobiHoc, 2020.
[7] H. Zhang, W. Li, S. Gao, X. Wang, and B. Ye, “ReLeS: A Neural Adaptive Multipath Scheduler based on
Deep Reinforcement Learning,” Proceedings of IEEE INFOCOM, 2019.
[8] J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, and T. Melodia, “Machine Learning for Wireless
Communications in the Internet of Things: A Comprehensive Survey,” Ad Hoc Networks, vol. 93, 2019.
[9] F. Restuccia, S. D’Oro, A. Al-Shawabka, B. Costa Rendon, K. Chowdhury, S. Ioannidis, and T. Melodia,
“Generalized Wireless Adversarial Deep Learning,” submitted for publication, ACM Workshop on Wireless
Security and Machine Learning (WiseML), 2020.
[10] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” in Proceedings
of the International Conference on Learning Representations (ICLR), 2015.
[11] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal Adversarial Perturbations,” in
Proceedings of IEEE CVPR, pp. 1765–1773, 2017.
[12] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical Black-Box Attacks
Against Machine Learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and
Communications Security, ASIA CCS ’17, (New York, NY, USA), pp. 506–519, ACM, 2017.
[13] N. Carlini and D. Wagner, “Towards Evaluating the Robustness of Neural Networks,” in IEEE Symposium on
Security and Privacy (S&P), pp. 39–57, May 2017.
[14] F. Restuccia, S. D’Oro, and T. Melodia, “Securing the Internet of Things in the Age of Machine Learning
and Software-Defined Networking,” IEEE Internet of Things Journal, vol. 5, pp. 4829–4842, Dec 2018.
[15] Y. Shi, K. Davaslioglu, and Y. E. Sagduyu, “Generative Adversarial Network for Wireless Signal Spoofing,”
in Proceedings of the ACM Workshop on Wireless Security and Machine Learning, WiseML 2019, pp. 55–60,
ACM, 2019.
[16] S. Bair, M. DelVecchio, B. Flowers, A. J. Michaels, and W. C. Headley, “On the Limitations of Targeted
Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition,” in Proceedings of the
ACM Workshop on Wireless Security and Machine Learning, WiseML 2019, pp. 25–30, ACM, 2019.
[17] A. Goldsmith, Wireless Communications. Cambridge university press, 2005.
[18] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing Properties
of Neural Networks,” Proceedings of the International Conference on Learning Representations (ICLR), 2013.
[19] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a Defense to Adversarial Perturbations
Against Deep Neural Networks,” in Proceedings of IEEE Symposium on Security and Privacy (S&P), pp. 582–
597, IEEE, 2016.
[20] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting Adversarial Attacks with Momentum,”
in Proceedings of IEEE CVPR, pp. 9185–9193, 2018.
[21] M. Sadeghi and E. G. Larsson, “Adversarial Attacks on Deep-Learning Based Radio Signal Classification,”
IEEE Wireless Communications Letters, vol. 8, pp. 213–216, Feb 2019.
[22] T. J. O’Shea and N. West, “Radio Machine Learning Dataset Generation with Gnu Radio,” in Proceedings of
the GNU Radio Conference, vol. 1, 2016.
[23] T. J. O’Shea, J. Corgan, and T. C. Clancy, “Convolutional Radio Modulation Recognition Networks,” in
International Conference on Engineering Applications of Neural networks, pp. 213–226, Springer, 2016.
[24] L. Zhang, T. Yang, R. Jin, Y. Xiao, and Z.-H. Zhou, “Online Stochastic Linear Optimization under One-bit
Feedback,” in International Conference on Machine Learning, pp. 392–401, 2016.
[25] S. D’Oro, E. Ekici, and S. Palazzo, “Optimal Power Allocation and Scheduling Under Jamming Attacks,”
IEEE/ACM Transactions on Networking, vol. 25, no. 3, pp. 1310–1323, 2016.
[26] S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel, “Adversarial Attacks on Neural Network
Policies,” arXiv preprint arXiv:1702.02284, 2017.
[27] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic press, 2014.
[28] M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, vol. 49. NBS
Washington, DC, 1952.
[29] T. D. Vo-Huu, T. D. Vo-Huu, and G. Noubir, “Fingerprinting Wi-Fi Devices using Software Defined Radios,”
in Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks, pp. 3–14,
ACM, 2016.
[30] S. Riyaz, K. Sankhe, S. Ioannidis, and K. Chowdhury, “Deep Learning Convolutional Neural Networks for
Radio Identification,” IEEE Communications Magazine, vol. 56, pp. 146–152, Sept 2018.