MMB & DFT 2012 Workshop Proceedings



Workshop on Network Calculus
Workshop on Physically-augmented Security for Wireless Networks
Workshop on Modeling and Analysis of Complex Reaction Networks

16th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems and Dependability and Fault Tolerance

Kaiserslautern, March 21, 2012

Jens B. Schmitt, Michael A. Beck (Editors)

Workshop Proceedings of the 16th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems and Dependability and Fault Tolerance. Includes the proceedings of the Workshop on Network Calculus (WoNeCa), the Workshop on Physically-augmented Security for Wireless Networks (PILATES) and the Workshop on Modeling and Analysis of Complex Reaction Networks (MACoRN).

Kaiserslautern, Germany March 2012 ISBN: 978-3-00-037728-0 Technical Report Number: 388/12

Table of Contents
Preface ................................................................. 3

WoNeCa
Using Network Calculus to Model Energy Harvesting Wireless Sensor Networks ................... 7
Network Calculus: Application to an Industrial Automation Network ............................ 9
Estimation of Statistical Bandwidth through Backlog Measurement ............................. 11
ContainerMinMaxGD: a Toolbox for (Min,+)-Linear Systems ..................................... 15

PILATES
Joint Physical Layer Security, Primary User Authentication and Interleaving for OFDM ........ 21
On Enhancing the Reliability of Key Extraction Mechanisms from Wireless Channels ............ 25
Practical Message Manipulation Attacks in IEEE 802.15.4 Wireless Networks ................... 29

MACoRN
Kronecker-based Infinite Level-dependent QBDs: Matrix Analytic Solution versus Simulation ... 35
The Monte Carlo EM Method for the Parameter Estimation of Biological Models ................. 37
Robustness Analysis for Biological Systems - from Qualitative to Quantitative Models ........ 39

Preface
This report contains the extended abstracts of workshop presentations at the 16th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems and Dependability and Fault Tolerance (MMB & DFT 2012), held on March 21, 2012 in Kaiserslautern and hosted by the University of Kaiserslautern. A speciality of this edition of MMB & DFT were integrated workshops on selected topics, each with its own call for papers:

Workshop on Network Calculus (WoNeCa), organized by Anne Bouillard (ENS, France), Markus Fidler (Leibniz University of Hannover), and Florin Ciucu (TU Berlin / Deutsche Telekom Laboratories);

Workshop on Physically-augmented Security for Wireless Networks (PILATES), organized by Matthias Hollick (TU Darmstadt), Ivan Martinovic (University of Oxford), and Dirk Westhoff (HAW Hamburg);

Workshop on Modeling and Analysis of Complex Reaction Networks (MACoRN), organized by Werner Sandmann (TU Clausthal) and Verena Wolf (Saarland University).

We would like to thank the workshop chairs for making a great effort to put together very interesting programs at the respective workshops. We are also grateful to the workshop presenters for sharing their latest research results.

March 2012 Kaiserslautern

Jens B. Schmitt, Michael A. Beck

WoNeCa

Using Network Calculus to Model Energy Harvesting Wireless Sensor Networks (Extended Abstract)
Steffen Bondorf
[email protected]
University of Kaiserslautern, Computer Science Department
disco | Distributed Computer Systems Lab

Nowadays sensor nodes are commonly used for surveillance purposes. A node's sensing unit usually covers a specific area in order to monitor characteristics like temperature, humidity, or movement. The resulting measurements are then reported to a central entity for evaluation. However, the area sensed by a single device typically does not cover the whole region of interest, so multiple sensors are needed. Their composition, in conjunction with the ability to wirelessly communicate and thus collaborate, even allows for the surveillance of a generally inaccessible region like a forest or a glacier where it is infeasible to install an infrastructure or maintain the devices at all. Yet, wireless sensor networks often still have to meet predefined requirements like a maximum reporting delay of an event or a sufficiently long network lifetime in order to achieve their designated task.

The network lifetime is commonly defined to end as soon as the first node's battery is depleted. Consequently, there has been plenty of work concentrating on increasing the operation time of sensors. On the one hand, there are node-local efforts to conserve energy such as power management, which may shut down parts of the device during a period of time; duty cycling, which may shut down the whole device; voltage scaling and transmission speed adjustment, which adapt the power consumption to the workload; and transmission range adaptation, which optimizes the energy used for the expensive wireless communication. On the other hand, there are network-wide concepts pursuing the same objective by preventing the so-called hot spot problem, namely that the nodes near the sink are depleted earlier because they relay more traffic. Among these concepts are routing protocols as well as multiple- and mobile-sink solutions that aim to evenly distribute network traffic without unacceptably degrading the delay performance.
These efforts shift the network's end of life into the future, but they do not solve the underlying problem of a decreasing level of available energy at a node's disposal. Thus lifetime remains finite, as exhausted batteries are inevitable if it is impracticable to recharge them. However, in recent times technological advancements like energy harvesting have made it feasible to recharge devices without the need for a sophisticated infrastructure. With energy harvesting, sensor nodes can constantly replenish their battery with environmental energy such as electromagnetic radiation or solar power. Thus this technique allows nodes to operate past the time their energy consumption exceeds their initial energy budget.

We aim to use network calculus to model energy-replenishing sensor nodes in order to derive their service curves. By concatenating the individual service curves we are then able to define the end-to-end service for a specific data flow and thus bound the time it takes until a measurement or event is reported to the sink. This allows checking whether requirements are met and enables taking appropriate action in the network design before deployment. As mentioned above, harvesting defines the energy replenishment of a sensor node and therefore restricts the possible service by upper bounding the energy that can be spent. On the other hand, the service that is required by a flow defines how much energy is spent by the sensor node and how much could be preserved to increase performance in subsequent periods of operation. Our main effort lies in modeling the mutual dependency between performance and energy consumption in order to derive an accurate service curve representation.

During the time the sensor is running on its initial energy budget, it can offer forwarding services according to a fixed service curve β. The delay that an arriving data flow, characterized by the input process A with arrival curve α, suffers when crossing a sensor node on its path to the sink is then upper bounded by the horizontal deviation of its arrival curve and the sensor's service curve, i.e., h(α, β). However, as soon as the battery is first depleted, the service additionally depends on the sensor node's energy replenishment, and we cannot simply use the ordinary network calculus descriptions for service and arrival to derive the delay bound. The service demand, and thus the energy consumption, of the data flow A must be characterized in order to ensure for any time instance that the energy budget is not exceeded, and the service curve has to be adapted accordingly. Energy consumption can be characterized by the output of the sensor node, A′ = A ⊗ β.
If the sensor's service would cause an output that is too energy-exhaustive, we aim to limit the amount of data that enters the system, and thus can consume energy, in order to prevent the depletion of the sensor's battery. We call this arrival restriction A_restr. For any time instance the effective input served by a node's forwarding capabilities is then bounded by A ∧ A_restr and the output accordingly by β ⊗ (A ∧ A_restr), which is guaranteed to drain less energy than available. Our model for the mutual dependency therefore resembles a feedback network [2], where the input depends on the output and vice versa. The problem of a service restriction according to the behavior of a replenishing energy source resembles a window flow control mechanism [1]. Just like a window flow controller (WFC), we want to limit the amount of data that enters the system, but in contrast to a WFC we do not intend to do so according to a predefined and well-known artificial restriction. Thus we aim to model a more general setting of external restrictions to a sensor node's service.
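The delay bound h(α, β) used for the initial-budget phase can be illustrated numerically. The following sketch (all parameter values are hypothetical, chosen only for illustration) evaluates the horizontal deviation on a sampled time grid for a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R·max(t − T, 0); for this pair the bound is known in closed form as T + b/R when r ≤ R, which the sampled search reproduces.

```python
# Sketch: horizontal deviation h(alpha, beta) between an arrival curve
# and a service curve, evaluated on a sampled time grid.

def token_bucket(b, r):
    """Arrival curve alpha(t) = b + r*t for t > 0 (burst b, rate r)."""
    return lambda t: b + r * t if t > 0 else 0.0

def rate_latency(R, T):
    """Service curve beta(t) = R * max(t - T, 0)."""
    return lambda t: R * max(t - T, 0.0)

def horizontal_deviation(alpha, beta, horizon, step=0.001):
    """Max over t of the time until beta catches up with alpha(t)."""
    worst = 0.0
    t = step
    while t <= horizon:
        a = alpha(t)
        d = 0.0
        # smallest d >= 0 with beta(t + d) >= alpha(t)
        while beta(t + d) < a:
            d += step
        worst = max(worst, d)
        t += step
    return worst

alpha = token_bucket(b=2.0, r=1.0)   # 2 units burst, 1 unit/s
beta = rate_latency(R=4.0, T=0.5)    # 4 units/s after 0.5 s latency

h = horizontal_deviation(alpha, beta, horizon=10.0)
print(round(h, 2))  # closed form: T + b/R = 0.5 + 2/4 = 1.0
```

The same routine works for any non-decreasing curve pair, which is what makes it a useful sanity check once the harvesting-adapted service curve is no longer rate-latency.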

References
1. Rajeev Agrawal and Rajendran Rajan. Performance bounds for guaranteed and adaptive services. IBM Research Report, 1996.
2. François Baccelli, Guy Cohen, Geert Jan Olsder, and Jean-Pierre Quadrat. Synchronization and Linearity: An Algebra for Discrete Event Systems. Wiley, 1992.

Network Calculus: Application to an Industrial Automation Network


Sven Kerschbaum (1), Kai-Steffen Hielscher (2), Ulrich Klehmet (2) and Reinhard German (2)

(1) Siemens AG, Industry Sector (Industry Automation Division), Nürnberg, Germany
Email: [email protected]
(2) Department of Computer Science 7 (Computer Networks and Communication Systems), University of Erlangen-Nürnberg
Email: {klehmet, ksjh, german}@informatik.uni-erlangen.de

Introduction

Due to company-wide information processing, in recent years the office and automation worlds have increasingly merged ("office meets factory"). Processes in industrial automation plants often require hard deadlines. Classical performance modeling approaches, e.g. simulation and queueing theory, are not able to provide such bounds. On the contrary, Network Calculus (NC) is a formal modeling method to obtain worst-case bounds [1]. Its results allow to plan and dimension industrial plants which fulfill the required deadlines even when using non-real-time-capable network components.

Industrial Automation

Historically, industrial automation networks were mainly based on specific networks called fieldbuses, e.g., Profibus and Modbus, that interconnected programmable logic controllers, robot controllers, I/O devices, etc. to exchange data for monitoring, controlling and synchronizing industrial processes. The fieldbus protocols ensured that the end-to-end message delays remained within specific limits and met the requirements of industrial processes. As a consequence, industrial automation networks were deterministic and allowed their end-to-end delays to be determined. Nowadays, automation systems are connected via Ethernet. The use of Ethernet is becoming more and more common for connecting the devices at the field level. Despite all efforts to include quality-of-service aspects into the Ethernet standard, e.g. priority tagging, Ethernet remains non-deterministic, whereas determinism is the main requirement of industrial networks.

The Network Calculus Engine

After evaluating different NC tools ([2,4]), we decided to implement a specific tool for industrial automation networks. Our Network Calculus Engine (NCE) provides a framework to model and analyze networks. Until now, the total flow analysis (TFA) and separated flow analysis (SFA) [2,3] methods have been employed. The NCE is highly structured and consists of various modules. The most important modules are the network, curve and analysis modules. The network module provides basic elements, e.g. nodes and links, that can be used to build up the entire network. The curve module provides the basic NC arrival and service curves like token bucket and rate-latency. All elements, that means nodes, ports, curves, etc., can be parameterized as needed; e.g. for a wired link, the length and the cable type can be specified, so that the link propagation delay is also taken into account. The NCE analyzes the model using one of the supported analysis methods, TFA or SFA, resulting in the worst-case end-to-end delays for all flows, and the per-node worst-case delay and maximum buffer bounds.

Application to the Smart Automation (SmA)

The SmA is an industrial research facility (prototype) used by Siemens AG in Nürnberg to test new technologies, products and automation solutions and to check new development methods or strategies in a real plant environment under real conditions. The SmA consists of the factory management system and several stations interconnected by PROFINET for filling, quality check, transport, storing, capping/uncapping and emptying bottles. The physical network topology is a ring of industrial switches that builds a logical tree structure using the RSTP protocol. All network components offer their flows FIFO service at a rate of 100 Mbit/s and use the SIMATIC S7 communication protocol based on TCP/IP. The communication of the SmA can be split into cyclic and acyclic traffic.

Results and Future Work

The calculated delay bounds are upper bounds, which we compared to measurements. All measured delays keep their calculated deadlines. The calculated bounds are 2-3 times larger than the measured delays. The SmA was engineered without the help of any tools that could verify the deadlines of its network flows. With the application of NC, its operation can now be ensured, since the worst-case delays of all network flows meet their deadlines. Furthermore, SFA outperforms the TFA method with respect to the obtained bounds.
Hence, whenever possible, SFA should be used instead of TFA. In industry, most users do not have detailed knowledge about the traffic of their industrial applications, but exactly this knowledge is indispensable when it comes to modeling the corresponding arrival curves. Generally, industrial automation networks and their flows are described in detail in their specification sheets. Consequently, it is desirable that the corresponding arrival curves be generated automatically from these specifications.
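The gap between TFA and SFA comes from SFA's "pay bursts only once" property: concatenating the per-node service curves before applying the delay bound avoids charging the flow's burst at every hop. The following is a minimal sketch of that effect (not the NCE implementation), assuming a single token-bucket flow without cross-traffic, rate-latency nodes, and hypothetical parameter values with r ≤ min R_i:

```python
# Sketch: TFA vs. SFA end-to-end delay bounds for one token-bucket flow
# (burst b, rate r) crossing rate-latency nodes (R_i, T_i).

def tfa_delay(b, r, nodes):
    """Sum of per-node bounds; the burst term is paid at every hop."""
    total = 0.0
    burst = b
    for R, T in nodes:
        total += T + burst / R   # per-node delay bound T_i + b_i / R_i
        burst += r * T           # output burst grows by r * T_i per hop
    return total

def sfa_delay(b, r, nodes):
    """Bound from the concatenated service curve: the burst is paid once.

    The min-plus convolution of rate-latency curves (R_i, T_i) is again
    rate-latency, with rate min(R_i) and latency sum(T_i)."""
    R_e2e = min(R for R, _ in nodes)
    T_e2e = sum(T for _, T in nodes)
    return T_e2e + b / R_e2e

nodes = [(100.0, 0.002), (100.0, 0.002), (100.0, 0.002)]  # 3 hops
b, r = 5.0, 50.0

print(tfa_delay(b, r, nodes))  # burst charged three times
print(sfa_delay(b, r, nodes))  # tighter: burst charged once
```

With these toy numbers TFA yields 0.159 while SFA yields 0.056, illustrating why SFA is preferred whenever it is applicable.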

References
1. Le Boudec, J.-Y., Thiran, P.: Network Calculus. Springer Verlag, LNCS 2050 (2001)
2. Schmitt, J.B., Zdarsky, F.A.: The DISCO Network Calculator - A Toolbox for Worst Case Analysis. Proceedings of the First International Conference on Performance Evaluation Methodologies and Tools (2006)
3. Schmitt, J.B., Zdarsky, F.A., Fidler, M.: Delay Bounds under Arbitrary Multiplexing: When Network Calculus Leaves You in the Lurch... 27th IEEE International Conference on Computer Communications (2008)
4. Wandeler, E., Thiele, L.: Real-Time Calculus (RTC) Toolbox. http://www.mpa.ethz.ch/Rtctoolbox (2006)


Estimation of Statistical Bandwidth through Backlog Measurement


Huimin She, Zhonghai Lu, Axel Jantsch, Li-Rong Zheng
Email: [email protected]
KTH Royal Institute of Technology, Stockholm, Sweden

Abstract. Bandwidth estimation in wireless networks is difficult due to the intrinsic randomness of wireless links. In this paper, we propose a network calculus based method for statistical bandwidth estimation in wireless networks with random service, where the bandwidth is expressed in terms of a statistical service curve with a violation probability. By injecting probing packet trains, the statistical bandwidth can be estimated through the measurement of backlogs in the system.

Introduction

Network calculus is a theory for service guarantee analysis of computer and communication networks. Recently, it has been applied to estimating available bandwidth based on traffic measurements [1], [2]. In [1], Liebeherr et al. proposed a systematic approach for available service estimation of time-invariant systems through the measurement of the deterministic backlog. In [2], the authors extended the method to networks with random traffic load or link capacities; there, the bandwidth is estimated through the measurement of time stamps of probing packet trains. In this paper, we extend the work in [1], [2] and develop a network calculus based method for bandwidth estimation of systems with random service, where the bandwidth is estimated through the measurement of the statistical backlog based on probing packet trains. The bandwidth is expressed by statistical service curves that are allowed to violate a service guarantee with a certain probability [3]. In contrast to time-stamp-based estimation methods, our method does not require a common timing reference for the nodes in the network.

Statistical Bandwidth Estimation

Consider a system with the arrival process, service process, and departure process denoted by R(t), S(t), and D(t), respectively. Let S_ε(t) represent the statistical service curve, which is defined as follows.

Definition (statistical service curve): Consider a non-decreasing function S_ε(t). It is a statistical service curve of the system if the following inequality holds [3]:

    Pr{ D(t) ≥ (R ⊗ S_ε)(t) } > 1 − ε,    (1)

where (R ⊗ S_ε)(t) = inf_{0≤τ≤t} { R(τ) + S_ε(t − τ) } denotes the min-plus convolution, and ε denotes the violation probability, which satisfies 0 < ε < 1.

The objective of bandwidth estimation is to derive the statistical service curve S_ε(t) from B(t), R(t) and D(t), where R(t) is the arrival process, and B(t) and D(t) are the backlog and output, respectively. We adopt the rate scanning probe scheme proposed in [1], where the packet trains are transmitted with increasing rates. The arrival process can be expressed as R(t) = rt, where r is the transmission rate. Since it is very difficult to derive the exact service process S(t), we try to estimate the statistical service curve S_ε(t). Their relation is given by the following lemma, whose proof can be found in [2].

Lemma: Consider a system with service process S(t). Any S_ε(t) that satisfies

    Pr{ S_ε(t) ≤ S(t) } > 1 − ε    (2)

for t ≥ 0 is a statistical service curve of the system.

The input of the system consists of constant-rate packet trains, so the arrival process can be expressed by R(t) = rt, where r is the arrival rate of the probing trains. We define the statistical steady-state backlog B_ε(r) as

    Pr{ B(r) ≤ B_ε(r) } > 1 − ε,    (3)

where B(r) denotes the steady-state backlog when the probing rate is r. In practice, the statistical backlog bound can be obtained from percentiles. We formalize the derivation of the statistical service curve from backlog measurements in the following theorem.

Theorem: Consider a system with probing packet trains constrained by the arrival curve R(t) = rt. Based on the measurement of the statistical steady-state backlog B_ε(r), the statistical service curve of the system can be derived as

    S_ε(t) = sup_r { rt − B_ε(r) },    (4)

where the violation probability of the statistical service curve is ε = Σ_r ε_r. The detailed proof of this theorem can be found in [6]. The theorem relates the statistical backlog bound to the statistical service curve via the Legendre transform. It makes it possible to estimate the service curve of random wireless channels using probe packet trains transmitted at different rates. To estimate the bandwidth, tens or hundreds of different probe rates may be applied. However, in the calculation of Σ_r ε_r, we only need to consider the probe rates that contribute to the derivation of S_ε(t).
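Equation (4) is a discrete Legendre transform of the measured backlog percentiles, so the estimation step itself is only a few lines. The sketch below uses synthetic backlog values for an idealized constant-rate server of capacity C, measured over a finite horizon, purely to make the sup computation self-contained; the capacity, probe rates and horizon are hypothetical.

```python
# Sketch: derive the statistical service curve estimate
#     S_eps(t) = sup_r { r*t - B_eps(r) }
# from one measured backlog percentile B_eps(r) per probe rate r.

C = 100.0                                  # hypothetical capacity (kbps)
HORIZON = 10.0                             # measurement horizon (s)
probe_rates = [30.0, 60.0, 90.0, 120.0, 150.0]

def synthetic_backlog(r):
    """Backlog of an ideal constant-rate server probed at rate r:
    it grows only while r exceeds the capacity C."""
    return max(r - C, 0.0) * HORIZON

backlog = {r: synthetic_backlog(r) for r in probe_rates}

def service_estimate(t):
    """Discrete Legendre transform over the measured probe rates.
    Valid for t up to the measurement horizon."""
    return max(r * t - backlog[r] for r in probe_rates)

# The estimate approaches the true curve C*t toward the horizon:
for t in (1.0, 5.0, 10.0):
    print(t, service_estimate(t))  # at t = 10 the estimate is C*t = 1000
```

As in the paper's simulations, the estimate is a lower approximation of the true service curve that tightens as the probe rates bracket the capacity.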

Results and Conclusions

Simulations are conducted to validate the proposed estimation method. The system consists of one sender and one receiver. Packet trains are periodically injected into the buffers of the sender. A packet train contains 1000 packets, and the arrival interval between two adjacent packets is 10 ms. The link between the sender and receiver is time-variant, with a capacity varying uniformly at random in the range (20 kbps, 200 kbps). We assume the link status does not change during the transmission of a packet. The length of a packet changes from 300 bit to 3000 bit with an increment of 50 bit in each step. Hence, the corresponding probing data rate varies from 30 kbps to 300 kbps with an increment of 5 kbps. For each probing rate, the simulation runs 1000 times. The values of the backlog are recorded every millisecond until the last packet has been sent. In the simulations, the statistical service curve and backlog can be obtained from their percentiles.

Fig. 1 shows the percentiles of the link capacity and their corresponding statistical service curves. The deterministic service curve is the upper bound of the service curves we generated in the simulation. The left part of Fig. 2 shows the statistical results of the measured backlogs for varying probing rates. From these backlog values, the statistical service curve can be derived according to the theorem. In the right part of Fig. 2, we compare the statistical service curve estimated by our method with the actual statistical service curve. As we can see, the smaller the violation probability, the smaller the difference between the two. This means that our method can accurately estimate the service capacity with a small violation probability. In our future work, we will study which parameters impact the estimation accuracy.

Fig. 1. Left: percentiles of the random link capacity; Right: the statistical service curves of the random link.

Fig. 2. Left: statistical backlog measurements; Right: comparison between the reference service curve and the estimated service curve (the solid line denotes the estimated service curve and the dashed line denotes the reference service curve).

In this paper, we proposed a network calculus based method of statistical bandwidth estimation for networks with random service. The statistical bandwidth is estimated from the measurement of the statistical steady-state backlog with probing packet trains. Our method does not rely on a common timing reference for the sender and receiver.

References
1. Jörg Liebeherr, Markus Fidler, and Shahrokh Valaee. A system-theoretic approach to bandwidth estimation. IEEE/ACM Trans. Netw., 18(4):1040-1053, 2010.
2. Ralf Lübben, Markus Fidler, and Jörg Liebeherr. A foundation for stochastic bandwidth estimation of networks with random service. In INFOCOM, 2011.
3. Almut Burchard, Jörg Liebeherr, and Stephen D. Patek. A min-plus calculus for end-to-end statistical service guarantees. IEEE Transactions on Information Theory, 52(9):4105-4114, 2006.
4. J.-Y. Le Boudec and P. Thiran. Network Calculus: A Theory of Deterministic Queuing Systems for the Internet. Springer, LNCS 2050, 2004.
5. Yuming Jiang and Yong Liu. Stochastic Network Calculus. Springer Publishing Company, 2008.
6. Ralf Lübben, Markus Fidler, and Jörg Liebeherr. A foundation for stochastic bandwidth estimation of networks with random service. Online, http://arxiv.org/abs/1008.0050, 2010.


ContainerMinMaxGD: a Toolbox for (Min,+)-Linear Systems


Euriell Le Corronc, Bertrand Cottenceau, and Laurent Hardouin

Laboratoire d'Ingénierie des Systèmes Automatisés, Université d'Angers,
62, Avenue Notre Dame du Lac, 49000 Angers, France
{euriell.lecorronc,bertrand.cottenceau,laurent.hardouin}@univ-angers.fr
WWW home page: http://www.istia.univ-angers.fr/LISA/

Introduction

According to the theory of Network Calculus based on the (min,+) algebra (see [2] and [5]), the analysis and measurement of worst-case performance in communication networks can be carried out easily, and several toolboxes such as COINC [1] or DISCO [6] are available for this purpose. However, the exact computations (sum, inf-convolution, subadditive closure) on such systems are often memory-consuming and time-costly (see [1] and [4]). That is why we developed a toolbox called ContainerMinMaxGD, which handles containers of ultimately pseudo-periodic functions and performs approximated computations. The convexity properties of the bounds of a container provide efficient algorithms (of linear and quasi-linear complexity) for sum, inf-convolution and subadditive closure. The ContainerMinMaxGD toolbox¹ is a set of C++ classes which can be found at the following address: http://www.istia.univ-angers.fr/~euriell.lecorronc/Recherche/softwares.php.

ContainerMinMaxGD Toolbox

The elementary object handled by the toolbox is called a container and is defined as the following intersection, illustrated by the grey zone of Fig. 1:

    [f̲, f̄]_L ≜ [f̲, f̄] ∩ [f̄]_L,

where [f̲, f̄] is an interval of functions and [f̄]_L is the equivalence class of f̄ modulo the Legendre-Fenchel transform² L.

Fig. 1: Container [f̲, f̄]_L ∈ F.

¹ It is important to note that this toolbox is an extension of the library MinMaxGD, which handles increasing periodic series of the idempotent semiring M_in^ax[[γ,δ]] (see [3]).
² A non-injective mapping defined by L(f)(s) ≜ sup_t { s·t − f(t) } from the set of increasing and positive functions F to the set of convex functions F_cvx.

A function f is approximated by a container [f̲, f̄]_L if f̲ ≤ f ≤ f̄ and [f]_L = [f̄]_L. This means that f necessarily belongs to the grey zone of the figure and, denoting by Cvx the convex hull of a function, that ∀f ∈ [f̲, f̄]_L, f̄ = Cvx(f). Handling such containers amounts to doing computations modulo L: we thus obtain the equivalence class of the non-approximated result f. Therefore, even throughout the computations, the extremal points of f̄ truly belong to the exact function f, and the asymptotic slope of f̄ is that of f. Such a container belongs to the following set:

    F ≜ { [f̲, f̄]_L | f̲ ∈ F_cav, f̄ ∈ F_cvx, σ(f̲) = σ(f̄) }.

Its bounds f̲ and f̄ are non-decreasing, piecewise affine and ultimately affine functions. In addition, the lower bound is concave (set F_cav) and the upper bound is convex (set F_cvx). Moreover, their asymptotic slopes σ(f̲) and σ(f̄) are equal, and so are the slopes of their ultimately affine parts.

Concerning the computations, let us first recall the elementary operations of the Network Calculus:

sum: (f ⊕ g)(t) = min{ f(t), g(t) };
inf-convolution: (f ⊗ g)(t) = min_{0≤τ≤t} { f(τ) + g(t − τ) };
subadditive closure: f*(t) = min_{n≥0} f^n(t), with f^0(t) = e.

On the set F of containers, these operations are denoted [⊕], [⊗], [*] and redefined as inclusion functions such that, for 𝐟 = [f̲, f̄]_L ∈ F, 𝐠 = [g̲, ḡ]_L ∈ F, f ∈ 𝐟, and g ∈ 𝐠: 𝐟[⊕]𝐠 ∈ F and f ⊕ g ∈ 𝐟[⊕]𝐠 (and similarly for the other operations). Thanks to the convexity characteristics of the bounds of a container, the computation algorithms of these inclusion functions are of linear complexity in the input size for the sum [⊕], the inf-convolution [⊗] and the upper bound of the subadditive closure [*], whereas the algorithm for the computation of the lower bound of [*] is of quasi-linear complexity in the input size.

Finally, it is interesting to assess the accuracy of this toolbox by the following method. First, an exact system A is approximated by a container 𝐀 (A ∈ 𝐀). Then, the subadditive closures of both the exact system (A*) and the container (𝐀[*]) are computed, and the result obtained with the exact system is approximated by another container: A* ∈ 𝐁. At last, the pessimism of the toolbox is given by comparing 𝐁 (obtained from the exact system) and 𝐀[*] (obtained from the approximated system). In our experiments, we reach a pessimism of about 30%.
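On finitely sampled functions the three elementary operations recalled above are direct to implement; the toolbox's contribution is doing this symbolically on ultimately affine bounds with (quasi-)linear complexity. The naive sampled version below is only an O(n²) illustration of the operations themselves, not the toolbox's algorithms, and the sample values are arbitrary.

```python
# Sketch: the three elementary (min,+) operations on functions sampled
# at integer time points 0..n-1.

def minplus_sum(f, g):
    """(f (+) g)(t) = min(f(t), g(t))."""
    return [min(a, b) for a, b in zip(f, g)]

def minplus_conv(f, g):
    """(f (x) g)(t) = min over 0 <= tau <= t of f(tau) + g(t - tau)."""
    n = len(f)
    return [min(f[tau] + g[t - tau] for tau in range(t + 1))
            for t in range(n)]

def subadditive_closure(f):
    """f*(t) = min over k >= 0 of f^k(t), with f^0 = e (0 at t=0, else inf)."""
    n = len(f)
    inf = float("inf")
    closure = [0.0] + [inf] * (n - 1)    # neutral element e
    power = closure
    for _ in range(n):                   # powers f^1 .. f^n suffice here
        power = minplus_conv(power, f)
        closure = minplus_sum(closure, power)
    return closure

f = [0, 3, 4, 6, 8]      # sampled, non-decreasing functions
g = [0, 1, 5, 7, 9]

print(minplus_sum(f, g))   # [0, 1, 4, 6, 8]
print(minplus_conv(f, g))  # [0, 1, 4, 5, 7]
```

A symbolic implementation such as ContainerMinMaxGD avoids this quadratic blow-up by operating directly on the breakpoints of the concave lower and convex upper bounds.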

References
1. Bouillard, A., Thierry, E.: An Algorithmic Toolbox for Network Calculus. Journal of Discrete Event Dynamic Systems, Springer 18(1):3-49 (2008)

2. Chang, C.-S.: Performance Guarantees in Communication Networks. Springer Verlag (2000)
3. Cottenceau, B., Lhommeau, M., Hardouin, L., Boimond, J.-L.: Data Processing Tool for Calculation in Dioid. In: 5th International Workshop on Discrete Event Systems, WODES'00 (2000)
4. Cottenceau, B., Lhommeau, M., Hardouin, L.: MinMaxGD, a Library for Computations in the Semiring M_in^ax[[γ,δ]] (1998-2006)
5. Le Boudec, J.-Y., Thiran, P.: Network Calculus: a Theory of Deterministic Queuing Systems for the Internet. Springer (2001)
6. Schmitt, J.B., Zdarsky, F.A.: The Disco Network Calculator: a Toolbox for Worst Case Analysis. In: Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools, ValueTools'06 (2006)

PILATES

Joint Physical Layer Security, Primary User Authentication and Interleaving for OFDM
Hossein Khoshnevis [email protected]
International Max Planck Research School, Campus E1 4, Saarland University, 66123 Saarbrücken, Germany

Abstract. Traditionally, security is implemented above the physical layer of telecommunication systems. However, recently, with the emergence of ad-hoc and decentralized networks, a lot of attention has been paid to implementing security in the physical layer. This work proposes a joint physical layer security and primary user authentication scheme that is merged with interleaving for OFDM modulation.

Keywords: Physical Layer Security, Primary User Emulation Attack, Primary User Authentication, Cyclostationary Signatures

Introduction

Security or scrambling, as an important part of telecommunication systems, is traditionally implemented above the physical layer. However, future telecommunication systems will be decentralized and ad-hoc; therefore, their higher-layer encryption is complex [1]. Recently, there has been emerging research activity on implementing security in the physical layer, which can decrease the complexity of systems and can prevent some types of attacks such as data forgery and denial of service (DoS) [2]. As one of the most efficient modulation schemes, OFDM has been widely used in telecommunication systems, e.g. DVB and 4G. Therefore, implementing security in the physical layer of this modulation decreases the complexity of widely used devices. The primary user authentication service is used to provide frequency rendezvous, one of the issues of cognitive radio, to distinguish between the primary user (PU) and secondary users. One of the schemes that can be used to add authentication to an OFDM signal is the cyclostationary signatures proposed in [3]. However, an attacker can detect and regenerate them by monitoring the spectral correlation density (SCD). Consequently, secondary users detect the attacker as the primary user and do not use the channel. This attack is called primary user emulation [4]. Fortunately, by changing the conventional structure of OFDM modulation, these signatures can be hidden.

The System Model and Proposed Algorithm

The physical layer of an OFDM-based system is shown in Fig. 1. The first step in the block diagram is forward error correction (FEC) coding, which is applied for the detection and correction of errors in the receiver. Afterwards, the data are modulated using constellation mapping (CM), in which QPSK, 16-QAM or higher constellations are employed. Time interleaving is the next step, used to spread the samples in time to reduce the effect of burst errors in the channel [6]. Then, the M-point IFFT step is applied, and the resulting signal is further processed by adding the cyclic prefix, pulse shaping (PS) and D/A conversion by the IQ modulator before transmission over the channel. The cyclic prefix (CP), an important part of the mentioned process, is the repetition of the last part of each OFDM symbol, which is added to the start of the symbol; it prevents inter-symbol interference (ISI) and is used for frequency domain equalization (FDE).
FEC -> CM -> Time Interleaving -> IFFT(M) -> CP & PS -> IQ

Fig. 1. OFDM transmitter model

As explained in section 1, scrambling or security is traditionally applied in higher layers. However, the interleaving that is applied in the physical layer is a special case of scrambling [5]. To use interleaving as a joint block for scrambling and interleaving, samples should be spread and rearranged based on a proper key. One of the best algorithms for generating a proper key is the Advanced Encryption Standard (AES): despite its simple algebraic description, no efficient attack has been introduced to break it, especially when the number of samples is large enough, which is achieved by a deep interleaver. The block diagram of the proposed scheme is shown in Fig. 2. In the first step, samples are written column-wise into the interleaver of size B x N, where B and N are powers of 2, e.g. B = N = 128. Then, for each row and each column, the AES-based row and column code vectors are computed, and based on these code vectors, the row and column permutations are applied. Afterwards, the data are read out column-wise. The receiver needs to apply the inverse permutation step. Therefore, for the generation of the row and column code vectors, both the transmitter and receiver need to know the shared key and the original plaintext. If the AES key length is 256, the number of keys that would have to be checked by an exhaustive key search to break the algorithm is in the order of 2^200. Despite the attacks introduced for breaking AES, e.g. [7], it is still considered reliable and can be used for security enhancement in OFDM.
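The write/permute/read procedure described above can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: the `keyed_permutation` helper is our own name, and SHA-256 stands in for the AES-based code vector generation so the example is self-contained; the round-trip property is what matters.

```python
import hashlib

def keyed_permutation(n, key, label):
    # Derive a pseudo-random permutation of range(n) from a shared key.
    # (SHA-256 is a stand-in here; the paper derives code vectors from AES.)
    scores = [hashlib.sha256(key + label + i.to_bytes(4, "big")).digest()
              for i in range(n)]
    return sorted(range(n), key=lambda i: scores[i])

def interleave(samples, B, N, key):
    # Write column-wise into a B x N block, permute rows and columns,
    # then read out column-wise.
    assert len(samples) == B * N
    block = [[samples[c * B + r] for c in range(N)] for r in range(B)]
    rperm = keyed_permutation(B, key, b"row")
    cperm = keyed_permutation(N, key, b"col")
    permuted = [[block[rperm[r]][cperm[c]] for c in range(N)] for r in range(B)]
    return [permuted[r][c] for c in range(N) for r in range(B)]

def deinterleave(samples, B, N, key):
    # The receiver, knowing the shared key, inverts both permutations.
    block = [[samples[c * B + r] for c in range(N)] for r in range(B)]
    rperm = keyed_permutation(B, key, b"row")
    cperm = keyed_permutation(N, key, b"col")
    restored = [[None] * N for _ in range(B)]
    for r in range(B):
        for c in range(N):
            restored[rperm[r]][cperm[c]] = block[r][c]
    return [restored[r][c] for c in range(N) for r in range(B)]
```

Without the key, an observer sees only a keyed shuffle of the samples; with it, `deinterleave` restores the original order exactly.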

CM -> Signature -> IFFT(D) -> AES Based Interleaving -> IFFT(M) -> CP & PS

Fig. 2. OFDM transmitter model with AES based interleaving and authentication

As mentioned in section 1, cyclostationary signatures are used to authenticate the primary user. To achieve secure frequency rendezvous, cyclostationary signatures [3] can be used before the AES-based interleaving. These signatures are generated by repeating some subcarriers at a specific distance. However, the recurrence of specific known subcarriers in these signatures can be the starting point for an attack. The reliability of this scheme against this type of attack can be increased by using an additional D-point IFFT block before interleaving to eliminate the similarity of samples. This additional IFFT also addresses one of the shortcomings of conventional OFDM, namely its sensitivity to very deep fades and to impulsive noise in the form of wideband noise bursts [8]. The scheme is illustrated in Fig. 2. In this case, by monitoring the channel, the signature is not detectable, and only the legitimate user who knows the key can detect the signature. Indeed, the secure interleaving step changes the samples enough that, by sketching the SCD, no signature can be detected.

Performance Evaluation

The performance of the signature detector is evaluated by simulation while varying the observation interval. The signature results from the recurrence of eight subcarriers at a specific distance, similar to [3]. The constellation mapping is QPSK. The lengths of the first and the second IFFT are 128 and 256 points, respectively. The useful symbol duration Tu is 4.16 µs, and the CP length is 1.04 µs. The channel is AWGN, and the probability of false alarm is set to 0.01. Fig. 3 illustrates the performance of the signature detector for observation intervals of 10, 20 and 30 OFDM symbols. As in all cyclostationarity-based detectors, increasing the number of symbols improves the performance of the detector.
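To make the signature mechanism concrete, the following sketch embeds a signature by repeating eight subcarriers at a fixed spectral distance and detects it with a simple frequency-domain correlation statistic. The bin indices, the distance `p`, and the correlation detector are illustrative assumptions of ours; the paper's detector operates on the SCD over many noisy symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 128                       # first IFFT size, as in the evaluation
sig_bins = np.arange(10, 18)  # 8 subcarriers carrying the signature (illustrative)
p = 40                        # spectral distance of the repetition (illustrative)

def symbol_with_signature(rng):
    # Random QPSK on all bins, then repeat the signature bins at distance p.
    c = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, M) + np.pi / 4))
    c[sig_bins + p] = c[sig_bins]   # the repetition creates the cyclostationary feature
    return np.fft.ifft(c)

def detect(x):
    # Normalized correlation between the two subcarrier sets (noise-free sketch;
    # a real detector integrates the SCD over many symbols).
    C = np.fft.fft(x)
    return abs(np.vdot(C[sig_bins], C[sig_bins + p])) / (
        np.linalg.norm(C[sig_bins]) * np.linalg.norm(C[sig_bins + p]))

print(detect(symbol_with_signature(rng)))  # 1.0 for a signed symbol
```

A plain QPSK symbol without the repetition yields a statistic well below 1, which is exactly the gap the detector exploits; applying the secure interleaving before the second IFFT destroys this correlation for anyone without the key.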

Fig. 3. Probability of detection of the proposed algorithm for varying observation intervals


Conclusions

In this work, an algorithm for joint security and interleaving in OFDM modulation was proposed, and the efficiency of the algorithm was discussed. A solution to hide the traditional cyclostationary signatures was provided, which also improves the robustness of OFDM against impulsive noise. The final scheme provides physical layer security and hides the signatures for secure frequency rendezvous. The performance of cyclostationary-signature-based authentication was evaluated. The results imply that a longer observation interval improves the probability of detection of the signature.

Acknowledgment

I would like to express my sincere gratitude to Prof. Dr.-Ing. Thorsten Herfet and Kim Pecina for their invaluable guidance and insight.

References
1. Han, Z., Marina, N., Debbah, M., Hjørungnes, A.: Physical Layer Security Game: Interaction between Source, Eavesdropper, and Friendly Jammer. EURASIP Journal on Wireless Communications and Networking, 2009.
2. Khan, M. A., Asim, M., Jeoti, V., Manzoor, R. S.: Chaos based constellation scrambling in OFDM systems: Security and interleaving issues. International Symposium on Information Technology, vol. 1, pp. 1-7, 2008.
3. Sutton, P. D., Nolan, K. E., Doyle, L. E.: Cyclostationary Signatures for Rendezvous in OFDM-Based Dynamic Spectrum Access Networks. 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pp. 220-231, 2007.
4. Chen, R., Park, J., Reed, J. H.: Defense against primary user emulation attacks in cognitive radio networks. IEEE Journal on Selected Areas in Communications, vol. 26, issue 1, pp. 25-37, 2008.
5. Ling, Q., Li, T., Ren, J.: Physical layer built-in security enhancement of DS-CDMA systems using secure block interleaving. Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, pp. 1105-1109, 2004.
6. Herfet, T.: Future Media Internet: Video and Audio Transport - A New Paradigm. Saarland University, Telecommunications Lab, 2009-2010.
7. Bogdanov, A., Khovratovich, D., Rechberger, C.: Biclique Cryptanalysis of the Full AES. ASIACRYPT 2011, Lecture Notes in Computer Science (LNCS), vol. 7073, pp. 344-371, Springer-Verlag, 2011.
8. Stolfi, G., Baccala, L. A.: Fourier Transform Time Interleaving in OFDM Modulation. IEEE Ninth International Symposium on Spread Spectrum Techniques and Applications, pp. 158-162, 2006.


On Enhancing the Reliability of Key Extraction Mechanisms from Wireless Channels


Youssef El Hajj Shehadeh¹, Ammar El Falou², and Dieter Hogrefe¹

¹ Institute of Computer Science, University of Goettingen, Germany
  [email protected]
² Telecom Bretagne, Brest, France

Abstract. We investigate applying an error correcting code of small block size to enhance the performance of key generation from wireless channels. A trade-off between performance and secrecy is then studied. Preliminary results show that using a simple lower quantization approach achieves better performance than applying a small block size BCH code.

Introduction

Recently, much attention has been given to the wireless channel in the realm of generating secret keys. Indeed, it has been found that the multipath wireless channel forms by its nature a source of randomness which can be leveraged to derive a shared secret key between two wireless devices. In our previous work [1], we have proposed intelligent mechanisms for channel quantization and key extraction, achieving a high extraction rate of secret bits with a low probability of disagreement. In some other related works (e.g., [2], [3]), a direct quantization approach has been considered, accompanied by Error Correcting Codes (ECC) to enhance the performance at the cost of losing some secrecy. However, it is still unclear whether using ECC achieves a good trade-off between performance and secrecy loss, since the transmission of syndromes and/or parity check bits causes a loss of secrecy dependent on the code rate. Therefore, it is reasonable to question whether ECC is the right choice to enhance the performance, particularly for relatively small block sizes. In this paper, we investigate the application of a BCH code to increase the performance (measured as probability of error, Pe) of the Phase Shifting (PS) mechanism presented in [1]. This mechanism mainly targets lowering Pe through a phase correction mechanism, and increasing the efficiency by optimizing the quantization precision for Pe < 10^-3. More precisely, we compare the performance/secrecy-loss trade-off between two approaches. In the first approach, we simply apply the PS mechanism with a 1-bit lower quantization precision than that elaborated in [1]. In the second approach, we consider enhancing the PS quantization mechanism by including an error correction step. As a result, a lower number of secret bits is expected, but with a lower probability of error.

The rest of this paper is organized as follows. In section II, we present the system model and give an overview of the channel quantization and key extraction procedure. Finally, in section III, we show some simulation results and draw conclusions.

Channel Quantization and Key Extraction

The wireless multipath channel can be modeled as a vector of independent channel taps following, without loss of generality, a Rayleigh distribution. Thus, representing each channel tap as a complex Gaussian term, the channel can be expressed as:

    h = (h_0, h_1, ..., h_{L-1}),    (1)

where L is the number of taps, also called the length of the channel. We consider that the multipath wireless channel is reciprocal and common between the two communicating nodes, called Alice and Bob, and uncorrelated from an eavesdropper located sufficiently far away in space. Each of the legitimate nodes will then observe a noisy estimate of the channel:

    h_A = h + z_A,  and  h_B = h + z_B,    (2)

where z_A and z_B are additive white Gaussian noise at the two nodes. In this case, the theoretical bound on the maximum number of secret bits that can be generated is [2]:

    N_k = I(h_A; h_B) = sum_{i=0}^{L-1} log2( 1 + TNR_i / (2 + 1/TNR_i) ),    (3)

where TNR_i is the tap-power-to-noise ratio for channel tap i. In our previous work, we have proposed a quantization mechanism, called Phase Shifting (PS) [1], achieving high efficiency in secret bit extraction and a low probability of disagreement (less than 10^-2). In Fig. 1, we show the probability of error using this quantization mechanism as a function of the quantization precision for TNR = 36 dB. It is clear that using a lower quantization precision leads to a lower probability of error. Therefore, a trivial approach to enhance the performance and reliability of the key generation mechanism is to use a lower quantization precision. However, this leads to a lower number of extracted secret bits. Another approach to enhance the performance is to use ECCs. But using ECCs would also lead to a loss in secrecy due to the need of sending syndromes and/or parity check bits. There is therefore a performance-efficiency trade-off. The aim of this paper is to compare these two approaches in terms of performance at a certain cost of secrecy loss.
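As a quick numerical sanity check, bound (3) is straightforward to evaluate; the function below is a direct transcription of the formula, and the 21-tap, 36 dB setting mirrors the simulation parameters used later in the paper.

```python
import math

def secret_bit_bound(tnrs):
    # Bound (3): N_k = sum_i log2(1 + TNR_i / (2 + 1/TNR_i)),
    # with per-tap power-to-noise ratios given on a linear scale.
    return sum(math.log2(1 + t / (2 + 1 / t)) for t in tnrs)

tnr = 10 ** (36 / 10)                 # 36 dB on a linear scale
print(secret_bit_bound([tnr] * 21))   # ~230 bits for 21 i.i.d. taps
```

The bound of roughly 230 bits sits comfortably above the 126 bits extracted at 6 bits/sample, consistent with the mechanism operating below the information-theoretic limit.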

Simulation Results and Discussions

In this section, we compare the performance of the two proposed approaches. In particular, we consider using a 1-bit lower quantization for the first approach,
Fig. 1: Probability of error as a function of the quantization precision (7-13 bits/sample) for TNR = 36 dB, PS mechanism.

Fig. 2: Probability of disagreement as a function of TNR (36-38 dB) for the higher quantization, higher quantization + BCH, and lower quantization schemes.
while we use a BCH(127,106) ECC for the second approach. We consider quantizing a channel vector of 21 taps. At a TNR greater than 36 dB, a 6-bit quantization level is used. Following the first approach, we therefore obtain 126 secret bits at a higher probability of disagreement, or 105 secret bits at a lower probability of disagreement. On the other hand, using the second approach with a BCH(127,106) code, we obtain 106 secret bits. In Fig. 2, we plot the probability of disagreement as a function of TNR using these two approaches in addition to the main PS mechanism. We can clearly observe that both approaches provide better performance, but the lower quantization precision approach provides a much larger performance enhancement than the second approach. This is mainly due to the fact that the BCH(127,106) ECC corrects only up to 3 bit errors, while using a 1-bit lower quantization dramatically decreases the bit error rate, as can be seen in Fig. 1. Finally, we note that we chose small block size ECCs because only a small number of secret bits is expected to be extracted from a single channel observation. However, it is interesting to study more powerful ECCs and larger block sizes, and compare their performance against using lower quantization precision for an equal secrecy loss. This will be the subject of our future research.
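To see why three correctable errors help only modestly, one can estimate the residual block disagreement probability under an independent-bit-error assumption (an idealization we introduce here; mismatches in real extracted key bits may be correlated):

```python
from math import comb

def block_success(p, n=127, t=3):
    # Probability that a t-error-correcting (n, k) BCH block decodes
    # correctly when each of the n bits flips independently with prob. p.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

# At a raw per-bit disagreement probability of 1e-2, a BCH(127,106) block
# still fails to reconcile in roughly 4 % of the cases:
print(1 - block_success(1e-2))
```

Lowering the quantization precision instead attacks the raw bit error rate `p` directly, which, as Fig. 1 shows, drops it by orders of magnitude rather than merely absorbing a handful of flips per block.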

References
1. El Hajj Shehadeh, Y., Alfandi, O., Tout, K., Hogrefe, D.: Intelligent mechanisms for key generation from multipath wireless channels. In: IEEE WTS '11, New York, NY, April 2011.
2. Ye, C., Mathur, S., Reznik, A., Shah, Y., Trappe, W., Mandayam, N.: Information-theoretically Secret Key Generation for Fading Wireless Channels. In: IEEE Trans. on Information Forensics and Security, vol. 5, no. 2, pp. 240-254, June 2010.
3. Chen, C., Jensen, M.: Secret Key Establishment Using Temporally and Spatially Correlated Wireless Channel Coefficients. In: IEEE Transactions on Mobile Computing, pp. 1-11, July 2010.

Practical Message Manipulation Attacks in IEEE 802.15.4 Wireless Networks


Matthias Wilhelm¹, Jens B. Schmitt¹, and Vincent Lenders²

¹ Disco Labs, TU Kaiserslautern, Germany
  {wilhelm,jschmitt}@cs.uni-kl.de
² armasuisse, Thun, Switzerland
  [email protected]

Abstract. We assess the ability of adversaries to modify the content of messages on the physical layer of wireless networks. In contrast to related work, we consider signal overshadowing to achieve such manipulations during transmission. We present preliminary experimental results, which suggest that our approach enables deterministic message manipulations, even in unpredictable radio environments.

Introduction

In this research project, we consider message manipulation attacks in wireless networks. The attacker's goal is to violate the integrity of a message, tricking a victim receiver into accepting a message of the attacker's choice, while the sender considers its original message to be delivered successfully. While such attacks can also be realized on higher layers (e.g., modifications by forwarding hops or memory manipulations on sender or receiver), we focus on attacks on the physical layer of wireless communications. A recent study by Pöpper et al. [1] shows that such message manipulations are possible if an attacker emits well-chosen RF waves that combine with the original signal into a new signal, which is then received as a packet of the attacker's choice; this method is called symbol flipping. However, the results also show that this attack is challenging in practice, because correct timing and matching amplitude and phase at the receiving antenna are required, which is hard to attain in realistic radio propagation environments. We consider an alternative manipulation method using signal overshadowing, i.e., the property that in angular modulation schemes only the stronger of two colliding signals is received. The expected benefit of our approach is that it is less sensitive to the physical properties of the victim signal, making it more practical and reliable. However, the technical challenges of tight timing and phase synchronization requirements still remain. We aim to analyze our method in IEEE 802.15.4 networks, implement a system that manipulates messages over the air deterministically, and evaluate its attack performance against off-the-shelf receivers in realistic scenarios.

System Challenges and Implementation

Challenges. Correct reception requires that the attacker matches its timing and phase closely to the legitimate sender. While the sender does not suffer from symbol errors, because the receiver uses the preamble and SFD (start-of-frame delimiter) to synchronize with the signal, the attacker cannot exploit this. Especially the phase offset may play a major role, because the MSK modulation used generates symbol flips if the relative phase deviates by more than π/2. Since this relation between original and attack signal at the receiver is hard to control by the attacker, we might face the challenge that the attack is still unreliable, even with optimal timing. However, as the standard uses spread-spectrum modulation, we may hope that receivers can compensate such deviations.

(a) The attacker synchronizes with the packet and alters the received content: the sender's all-zero payload and CRC arrive at the receiver as "deadbeef" with CRC "4fbd".

(b) The attack at the physical layer: signal replacement by overshadowing.

Fig. 1: Physical layer message manipulation attack.

Fig. 1a shows that the attacker can directly send the desired symbols when using overshadowing, but must time its attack precisely to be successful (with a deviation of less than 1 µs in IEEE 802.15.4). This requires the attacker to detect and synchronize with the victim signal under tight timing constraints.

Attack system implementation. We use RFReact [2] to implement the attack. This USRP2-based software radio system implements an IEEE 802.15.4 transceiver in FPGA logic and uses programmable firmware to control its operation. The system detects the preamble of an incoming packet, achieving symbol synchronization and timing recovery, and enables the attacker to start transmitting arbitrary waveforms after a tunable delay, with a timing precision of 10 ns.

Initial Experimental Results

Experimental setup. We use three USRP2s in our experiment, taking the roles of attacker (using RFReact), legitimate sender, and signal scope for RF monitoring. The victim receiver is a COTS device, an Atmel RZ Raven USB stick. The experiment takes place in an indoor office environment with distances of 2-3 m between the antennas. No attempts to match the carrier phase at the receiver are made in this setup. The attack depicted in Fig. 1a is performed 10000 times: the attacker attempts to replace the last 12 symbols of a packet, altering 8 symbols of payload (to 0xdeadbeef) and 4 symbols of CRC (to 0x4fdb). A physical layer view of the overshadowing attack is shown in Fig. 1b.
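For the forged packet to be accepted, the overwritten CRC must match the forged payload. IEEE 802.15.4 computes the frame check sequence as the ITU-T CRC-16 (polynomial x^16 + x^12 + x^5 + 1, initial value 0, bits processed LSB first), so the attacker can precompute it offline; a sketch (the function name is ours, and note the real 0x4fdb value also covers the unmodified frame header, which is omitted here):

```python
def fcs_802154(data: bytes) -> int:
    # ITU-T CRC-16 over the frame, LSB-first, init 0x0000
    # (0x8408 is the bit-reversed polynomial 0x1021).
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

# Well-known check value of this CRC variant:
print(hex(fcs_802154(b"123456789")))  # 0x2189
```

Because the FCS is a plain linear checksum with no cryptographic keying, nothing stops the attacker from emitting a consistent payload/CRC pair; this is precisely why the conclusion calls for message integrity measures above the physical layer.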

Rel. frequency   Received symbols   #errors
66.97 %          deadbeef4fbd       0
 2.95 %          00000bc8b9cc       12
 2.68 %          4eadbeef4fbd       1
 2.19 %          0deadbeef4fb       11
 1.23 %          7eadbeef4fbd       1
 1.12 %          0fbecff858ce       12
 0.94 %          deadbe7f4fbd       1
 0.91 %          d7adbeef4fbd       1
 0.76 %          deadb7ef4fbd       1
 0.72 %          000000000000       12
 0.57 %          00000bc8b9ce       12
18.96 %          Rel. freq. < 0.5 %  var.

Table 1: Experimental results: modified payload as received by the victim.

Experimental results. The results are shown in Table 1; the attack succeeds in 6697 of 10000 attempts. We can divide the observed errors into two classes: (C1) the timing error is less than one symbol duration (16 µs), such that no leading zero symbols are present (23 % of the cases), and (C2) completely missed symbol timing (> 16 µs, 10 %) that may be attributed to problems in the attack system. These results show that such a manipulation attack is indeed feasible. We see good timing synchronization and small timing errors, and achieve a deterministic manipulation outcome in the majority of attempts. Surprisingly, phase errors seem to play a minor role. As the attacker does not synchronize with the carrier phase, the phase error should be distributed uniformly in the range 0 to 2π. Considering the MSK modulation used and a constant phase offset during the attack, this should lead to a significant number of 12-symbol-error observations in C1, even with optimal timing. However, we notice that the receiver is able to correctly detect the attacker's symbols in most cases, and that single symbol errors prevail in the others. Symbol timing seems to be the decisive factor for attack success.

Conclusion

Our experimental results suggest that the described message manipulation attack method is reliable, even in unpredictable indoor radio environments. Therefore, message integrity measures must be taken, even when sender and receiver are in transmission range and closely monitor the channel state and packet timing. We plan to analyze this attack for IEEE 802.15.4 networks, extend our experimental study to various COTS receivers and radio environments, and devise methods to detect and mitigate such attacks.

References
1. C. Pöpper, N. O. Tippenhauer, B. Danev, and S. Čapkun. Investigation of signal and message manipulations on the wireless channel. In Computer Security - ESORICS 2011, volume 6879 of LNCS, pages 40-59. Springer Berlin Heidelberg, Sept. 2011.
2. M. Wilhelm, I. Martinovic, J. B. Schmitt, and V. Lenders. WiSec 2011 demo: RFReact - a real-time capable and channel-aware jamming platform. SIGMOBILE Mobile Computing and Communications Review, 15:41-42, Nov. 2011.

MACoRN


Kronecker-based Infinite Level-dependent QBDs: Matrix Analytic Solution versus Simulation


Tugrul Dayar
Bilkent University, Turkey

In this talk, we show how systems of stochastic chemical kinetics can be modeled using infinite level-dependent quasi-birth-and-death processes (LDQBDs), expressed in the form of Kronecker products, and analyzed for their steady-state probability distribution with the help of Lyapunov theory. Experiments are performed on systems having two or more countably infinite state space subsystems. Results indicate that, albeit more memory consuming, there are many cases where a matrix analytic solution coupled with Lyapunov theory yields a faster and more accurate steady-state measure than that obtained with simulation. This is joint work with Muhsin Can Orhan.
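As a toy illustration of the Kronecker idea (with made-up rates and a finite truncation, whereas the talk treats countably infinite subsystems): for two non-interacting subsystems, the generator of the joint CTMC is the Kronecker sum of the subsystem generators.

```python
import numpy as np

def bd_generator(n, birth, death):
    # Truncated birth-death generator: constant birth rate, linear death rate.
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = birth
        Q[i + 1, i] = death * (i + 1)
    for i in range(n):
        Q[i, i] = -Q[i].sum()   # diagonal makes each row sum to zero
    return Q

Q1 = bd_generator(3, 1.0, 0.5)
Q2 = bd_generator(3, 2.0, 1.0)
I3 = np.eye(3)
Q = np.kron(Q1, I3) + np.kron(I3, Q2)  # Kronecker sum for independent parts
print(np.abs(Q.sum(axis=1)).max())     # row sums of a valid generator are 0
```

The point of the representation is that Q never has to be formed explicitly: vector-matrix products can be carried out factor by factor, which is what makes the matrix analytic approach feasible despite its memory cost.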


The Monte Carlo EM Method for the Parameter Estimation of Biological Models
Andras Horvath
University of Turin, Italy

It is often the case in modeling biological phenomena that the structure and the effect of the involved interactions are known, but the rates of the interactions are neither known nor easily determined by experiments. This talk deals with the estimation of the rate parameters of reaction networks in a general and abstract context. In particular, we consider the case in which the phenomenon under study is stochastic and a continuous-time Markov chain (CTMC) is appropriate for its modeling. Further, we assume that the evolution of the system under study cannot be observed continuously but only at discrete sampling points, between which a large number of reactions can occur. The parameter estimation of stochastic reaction networks is often performed by applying the principle of maximum likelihood. In this talk, we describe how the Expectation-Maximisation (EM) method, a technique for maximum likelihood estimation in the case of incomplete data, can be adapted to estimate the kinetic rates of reaction networks. In particular, because of the huge state space of the underlying CTMC, it is convenient to use a variant of the EM approach, namely the Monte Carlo EM (MCEM) method, which makes use of simulation for the analysis of the model. We show that, in the case of mass action kinetics, the application of the MCEM method results in an efficient and surprisingly simple estimation procedure. We provide examples to illustrate the characteristics of the approach and show that it is applicable to systems of reactions involving several species.
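The simplicity in the mass-action case stems from the complete-data likelihood: for a single reaction with rate c, the maximum likelihood estimate is the number of firings divided by the time-integrated propensity (with c factored out). A toy sketch for a decay reaction A -> 0 (our own example, not from the talk; in MCEM these statistics would come from trajectories simulated conditionally on the discrete observations):

```python
import random

def ssa_decay(x0, c, t_end, rng):
    # Gillespie simulation of A -> 0; returns the complete-data sufficient
    # statistics: number of firings and the integral of x(t) dt over [0, t_end].
    t, x, firings, integral = 0.0, x0, 0, 0.0
    while x > 0:
        dt = rng.expovariate(c * x)          # propensity of A -> 0 is c * x
        if t + dt > t_end:
            integral += x * (t_end - t)
            return firings, integral
        integral += x * dt
        t += dt
        x -= 1
        firings += 1
    return firings, integral

rng = random.Random(42)
n_tot = i_tot = 0.0
for _ in range(200):
    n, integ = ssa_decay(100, 0.5, 5.0, rng)
    n_tot += n
    i_tot += integ
print(n_tot / i_tot)   # complete-data MLE of c; close to the true rate 0.5
```

The M-step of (MC)EM reuses exactly this ratio, with the expected firing counts and integrated propensities taken over the simulated conditional trajectories.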


Robustness Analysis for Biological Systems - from Qualitative to Quantitative Models

Frank Allgöwer
University of Stuttgart, Germany

For most biological systems, only models with large structural and parametric uncertainties are available. While for some signal transduction pathways rough estimates for kinetic parameters can be determined, for most gene regulation networks not even the interaction structure is fully understood. This complicates the already difficult problem of analyzing and predicting the often complex dynamical behavior of these systems, and shows that there is a need for new analysis methods accounting for the respective degree of uncertainty. In this talk, we present two methods which allow one to study the dynamical robustness properties of an uncertain system. The first method is capable of assessing the ability of a gene regulation network to generate a desired multistable behavior in a maximally robust way. For this analysis, merely qualitative knowledge of the interaction structure is required. The second method was developed to study the existence of oscillations and bistability in systems with large parametric uncertainties. To this end, variations in feedback circuit gains are studied. Both methods can be used to gain insight into highly uncertain systems using different levels of information. This is joint work with Steffen Waldherr, Christian Breindl and Daniella Schittler.


ISBN: 978-3-00-037728-0
