DISCRETE-TIME MARKOVIAN
STOCHASTIC PETRI NETS
Gianfranco Ciardo¹
Department of Computer Science, College of William and Mary
Williamsburg, VA 23187-8795, USA, [email protected]
ABSTRACT
We revisit and extend the original definition of discrete-time stochastic Petri nets by allowing the firing times to have a defective discrete phase distribution. We show that this formalism still corresponds to an underlying discrete-time Markov chain. The structure of the state for this process describes both the marking of the Petri net and the phase of the firing time of each transition, resulting in a large state space. We then modify the well-known power method to perform a transient analysis even when the state space is infinite, subject to the condition that only a finite number of states can be reached in a finite amount of time. Since the memory requirements might still be excessive, we suggest a bounding technique based on truncation.
1 INTRODUCTION
In the past decade, stochastic Petri nets (SPNs) have received much attention from researchers in the performance and reliability arena and have been extensively applied to the performance and reliability modeling of computer, communication, manufacturing, and aerospace systems [4, 5, 7, 10, 23]. While there is agreement on the appropriateness of SPNs as a description formalism for a large class of systems, two radically different solution approaches are commonly employed: simulation and state-space-based analysis. Simulation allows general distributions to be associated with the duration of activities (SPN transitions), but it requires multiple runs to obtain meaningful statistics. This problem is
¹ This research was partially supported by the National Aeronautics and Space Administration under NASA Contract No. NAS1-19480.
particularly acute in reliability studies, where many runs might be required to obtain tight confidence intervals. With simulation, the state of the SPN is composed of the marking, describing the structural state of the SPN, and the remaining firing times, describing how long each transition in the SPN must still remain enabled before it can fire. The simulated time is advanced by firing the transition with the smallest remaining firing time.
State-space-based analysis has been mostly applied to SPNs whose underlying process is a continuous-time Markov chain (CTMC), that is, to SPNs with exponentially distributed firing times [3, 12, 25, 26]. Except for numerical truncation and roundoff, exact results are obtained, but the approach has two limitations: the number of SPN markings increases combinatorially, rendering the solution of large models unfeasible, and generally-distributed activities must be modeled using phase-type (PH) expansion [15]. PH distributions can approximate any distribution arbitrarily well, but it is difficult to exploit this fact in practice because the expansion exacerbates the state-space size problem.
Discrete distributions for the timing of SPNs have received less attention. This is unfortunate, since deterministic distributions (constants) are often needed to model low-level phenomena in both hardware and software, and the geometric distribution is the discrete equivalent of the exponential distribution and can approximate it arbitrarily well as the size of the step decreases. Furthermore, there is evidence supporting the use of deterministic instead of exponential distributions when modeling parallel programs [1].
If all the firing distributions are geometric with the same step, the underlying process is a discrete-time Markov chain (DTMC) [25]. Such SPNs can model synchronous behavior, as well as the main aspect of asynchronous systems: the uncertainty about the ordering of quasi-simultaneous events. A DTMC is described by a square one-step state transition probability matrix Π and an initial state probability vector π^[0]. The state probability vector at step n can be obtained with the iteration (power method): π^[n] = π^[n−1] Π. This result was extended in [11] to include immediate transitions, which fire in zero time, and geometric firing distributions with steps multiple of a basic unit step, possibly with parameter equal to one, that is, constants. [29] restates these results in more detail, and uses the concept of weight to break ties, following [3] and, more closely, [13]. Generalized Timed Petri Nets (GTPN) have also been proposed [19], where the steps of the geometric firing times for each transition can be arbitrary, unrelated, real numbers. A DTMC can be obtained by embedding, but the analysis is restricted to steady-state behavior and the state space of the DTMC can be infinite even when the underlying untimed PN has a finite reachability set. Analogous considerations hold for D-timed PNs [30].
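As a concrete illustration of the power-method iteration π^[n] = π^[n−1] Π, here is a minimal sketch with a made-up two-state chain (not a model from this chapter):

```python
import numpy as np

# One-step transition probability matrix of a small DTMC (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0])  # initial state probability vector pi^[0]

# Power method: pi^[n] = pi^[n-1] P, applied n times.
n = 50
for _ in range(n):
    pi = pi @ P

print(pi)  # close to the stationary vector [5/6, 1/6] for this chain
```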
We generalize and formalize the results in [13] and show how, using phase expansion, a DTMC can be obtained even if the firing time distributions are not geometric, as long as firings can occur only at some multiple of a unit step. The state can then be described by the marking plus the phase of each transition. This extends the class of SPNs that can be solved analytically, but two limitations still exist: the existence of a basic step and the size of the state space. By using a fine step, arbitrary steps can be approximated, but this increases the state space.
Approaches to solve models with a large state space have been proposed for both steady-state and transient analysis. [6] considers the reliability study of a SPN with exponentially distributed firing times, under the condition that the reachability graph is acyclic. The underlying CTMC is then acyclic as well, and a state can be discarded as soon as the transitions out of it have been explored, resulting in an algorithm offering large savings in memory and computations with respect to traditional numerical approaches. However, acyclic state spaces arise only in special cases, such as reliability models of non-repairable systems.

For transient analysis of a general CTMC, Jensen's method [21], also called uniformization [17, 27], is widely adopted. [18] outlines a dynamic implementation of the algorithm, where the state space is explored as the computation of the transient probability vector proceeds, not in advance, as normally done. This allows a transient solution to be obtained even if the state space is infinite, provided that the transition rates have an upper bound.
If the CTMC contains widely different rates, the number of matrix-vector multiplications required by uniformization can be excessive. Proposals to alleviate this problem are selective randomization [24] and adaptive uniformization [28], both based on the idea of allowing different uniformization rates, according to the set of states that can be reached at each step. The latter, in addition, can be used with infinite state spaces even if the rates have no upper bound. However, the method can incur a substantial overhead, and it appears that an adaptive step is advantageous only in special cases or for short time horizons.
In Sections 2, 3, and 4 we define the underlying untimed PN model, the class of DDP distributions used for the temporization of a PN, and the resulting DDP-SPN formalism, respectively. Section 5 discusses the numerical solution of a DDP-SPN, by building and solving its underlying stochastic process, a DTMC. Section 6 examines approaches to cope with large state spaces.
2 THE PN FORMALISM
We recall the (extended) PN formalism used in [12, 14]. A PN is a tuple (P, T, D⁻, D⁺, D°, ≻, g, μ^[0]) where:

- P is a finite set of places, which can contain tokens. A marking μ ∈ ℕ^|P| defines the number of tokens in each place p ∈ P, indicated by μ_p (when relevant, a marking should be considered a column vector). D⁻, D⁺, D°, and g are marking-dependent, that is, they are specified as functions of the marking.

- T is a finite set of transitions. P ∩ T = ∅.

- ∀p ∈ P, ∀t ∈ T, ∀μ ∈ ℕ^|P|: D⁻_{p,t}(μ) ∈ ℕ, D⁺_{p,t}(μ) ∈ ℕ, and D°_{p,t}(μ) ∈ ℕ are the multiplicities of the input arc from p to t, the output arc from t to p, and the inhibitor arc from p to t, when the marking is μ, respectively.

- ≻ ⊆ T × T is an acyclic (pre-selection) priority relation.

- ∀t ∈ T, ∀μ ∈ ℕ^|P|: g_t(μ) ∈ {0, 1} is the guard for t in marking μ.

- μ^[0] ∈ ℕ^|P| is the initial marking.
Places and transitions are drawn as circles and rectangles, respectively. The number of tokens in a place is written inside the place itself (default is zero). Input and output arcs have an arrowhead on their destination; inhibitor arcs have a small circle. The multiplicity is written on the arc (default is the constant 1); a missing arc indicates that the multiplicity is the constant 0. The default value for guards is the constant 1.
A transition t ∈ T is enabled in marking μ iff all the following conditions hold:

1. g_t(μ) = 1.
2. ∀p ∈ P, D⁻_{p,t}(μ) ≤ μ_p.
3. ∀p ∈ P, D°_{p,t}(μ) > μ_p or D°_{p,t}(μ) = 0.
4. ∀u ∈ T, ¬(u ≻ t) or u is not enabled in μ (well defined because ≻ is acyclic).
Let E(μ) be the set of transitions enabled in marking μ. A transition t ∈ E(μ) can fire, causing a change to marking N(t, μ), obtained from μ by subtracting the input bag D⁻_{·,t}(μ) and adding the output bag D⁺_{·,t}(μ) to it: N(t, μ) = μ − D⁻_{·,t}(μ) + D⁺_{·,t}(μ) = μ + D_{·,t}(μ), where D = D⁺ − D⁻ is the incidence matrix. N can be extended to its reflexive and transitive closure by considering the marking reached from μ after firing a sequence of transitions. The reachability set is given by M = {μ : ∃σ ∈ T*, μ = N(σ, μ^[0])}, where T* indicates the set of transition sequences.
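The enabling and firing rules above can be sketched in code. The encoding below (markings as dictionaries, arc multiplicities as dictionaries keyed by place-transition pairs, a set of priority pairs) is an illustrative data layout of my own choosing, not an API from this chapter, and it covers only constant multiplicities:

```python
# Hypothetical encoding of a PN: markings are dicts place -> token count;
# arcs are dicts (place, transition) -> multiplicity (missing entry = 0).
def enabled(t, mu, d_in, d_inh, guards, prio, transitions):
    """Check enabling conditions 1-4 for transition t in marking mu."""
    if not guards.get(t, lambda m: True)(mu):          # 1. guard g_t(mu) = 1
        return False
    for p, n in mu.items():
        if d_in.get((p, t), 0) > n:                    # 2. input-arc tokens
            return False
        inh = d_inh.get((p, t), 0)
        if inh != 0 and inh <= n:                      # 3. inhibitor arcs
            return False
    for u in transitions:                              # 4. pre-selection priority
        # prio is a set of pairs (u, t) meaning u > t; assumed acyclic,
        # so this recursion terminates.
        if (u, t) in prio and enabled(u, mu, d_in, d_inh, guards, prio, transitions):
            return False
    return True

def fire(t, mu, d_in, d_out):
    """New marking N(t, mu) = mu - D^-(mu) + D^+(mu)."""
    return {p: n - d_in.get((p, t), 0) + d_out.get((p, t), 0) for p, n in mu.items()}

# Tiny example: one token moves from p1 to p2 when t fires.
mu = {"p1": 1, "p2": 0}
d_in, d_out = {("p1", "t"): 1}, {("p2", "t"): 1}
assert enabled("t", mu, d_in, {}, {}, set(), ["t"])
print(fire("t", mu, d_in, d_out))  # {'p1': 0, 'p2': 1}
```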
3 DISCRETE PHASE DISTRIBUTIONS

We now define the class D of (possibly defective) discrete phase (DDP) distributions, which will be used to specify the duration of a firing time in a SPN. A random variable X is said to have a DDP distribution, X ∈ D, iff there exists an absorbing DTMC {A^[k] : k ∈ ℕ} with finite state space A = {0, 1, ..., n} and initial probability distribution given by [Pr{A^[0] = i}, i ∈ A], such that the states in A \ {0, n} are transient and states 0 and n are absorbing, and X is the time to reach state 0: X = min{k ≥ 0 : A^[k] = 0}. If Pr{A^[0] = 0} > 0, the distribution has a mass at the origin. If Pr{A^[0] = i} > 0 and state i can reach state n, the distribution is (strictly) defective.
D is the smallest class containing the distributions Const(0), Const(1), and Const(∞) and closed under:

- Finite convolution: if X₁ ∈ D and X₂ ∈ D, then X = X₁ + X₂ ∈ D.

- Finite weighted sum: if X₁ ∈ D, X₂ ∈ D, and B ∈ {0, 1} is a Bernoulli random variable, then X = B X₁ + (1 − B) X₂ ∈ D.

- Infinite geometric sum: if {X_k ∈ D : k ∈ ℕ⁺} is a family of iid random variables and N is a geometric random variable, then X = Σ_{1 ≤ k ≤ N} X_k ∈ D.
The geometric and modified geometric distributions with arbitrary positive integer step, Geom(α, δ) and ModGeom(α, δ), 0 ≤ α ≤ 1, δ ∈ ℕ⁺, the constant non-negative integer distribution, Const(c), c ∈ ℕ, and any discrete distribution with finite non-negative integer support are special cases of DDP distributions. An example of a random variable with non-negative integer support which does not have a DDP distribution is N², where N ∼ Geom(α).
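A DDP distribution is fully determined by its absorbing DTMC, so Pr{X = k} can be computed by powering the phase transition matrix. The helper below is my own small sketch (not from this chapter), checked against the unit-step geometric special case:

```python
import numpy as np

def absorption_probs(P, init, kmax):
    """Pr{X = k}, k = 0..kmax, where X is the hitting time of state 0
    for an absorbing DTMC with one-step matrix P and initial vector init."""
    v = np.array(init, dtype=float)
    mass_at_0 = v[0]                 # mass at the origin, Pr{X = 0}
    probs = [mass_at_0]
    for _ in range(kmax):
        v = v @ P
        probs.append(v[0] - mass_at_0)   # newly absorbed probability
        mass_at_0 = v[0]
    return probs

# Geom(alpha) with unit step: one transient phase (state 1) that moves to
# the absorbing firing phase (state 0) with probability alpha at each step.
alpha = 0.3
P = np.array([[1.0, 0.0],             # state 0: absorbing
              [alpha, 1.0 - alpha]])  # state 1: fire w.p. alpha, else stay
pk = absorption_probs(P, [0.0, 1.0], 5)
# pk[k] equals alpha * (1 - alpha)**(k - 1) for k >= 1
```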
[Figure 1: Examples of DDP distributions.]

Fig. 1 shows examples of DDP distributions. The initial state b, for "begin," has zero sojourn time and is introduced to represent graphically the initial probability distribution. We use this representation since it allows the complexity of a DTMC to be estimated by counting the number of nodes and arcs in its graph. For simplicity, the last state, e.g., 4 for Geom(α, 3) and 3 for Const(2), can be omitted if it is not reachable from b (if the distribution is actually not defective). Unfortunately, the DTMC corresponding to a given DDP distribution might not be unique, even if the number of states is fixed. For example, the time X to reach state 0 for the DTMCs in Fig. 2, both with five nodes and seven arcs, has distribution Unif(0, 3), that is, Pr{X = i} = 1/4, for i ∈ {0, 1, 2, 3}.

[Figure 2: Equivalent DTMC representations.]
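The claim that two structurally different phase DTMCs can encode the same Unif(0, 3) distribution can be checked numerically. The two matrices below are plausible reconstructions of the chains in Fig. 2 (the exact arc layout in the figure may differ): one chooses the countdown value up front, the other postpones the probabilistic decision at each step.

```python
import numpy as np

def hitting_dist(P, init, kmax=3):
    """Pr{X = k} for the time X to reach the absorbing state 0."""
    v = np.array(init, dtype=float)
    prev = v[0]
    out = [prev]
    for _ in range(kmax):
        v = v @ P
        out.append(v[0] - prev)
        prev = v[0]
    return np.array(out)

# Representation 1: pick the countdown value 0..3 initially, then count down.
P1 = np.array([[1, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0]], dtype=float)
init1 = [0.25, 0.25, 0.25, 0.25]

# Representation 2: start in phase 3 w.p. 3/4 (else fire at once) and decide
# at each step whether to fire early.
P2 = np.array([[1,    0,    0,    0],
               [1,    0,    0,    0],
               [1/2,  1/2,  0,    0],
               [1/3,  0,    2/3,  0]], dtype=float)
init2 = [0.25, 0.0, 0.0, 0.75]

print(hitting_dist(P1, init1))  # [0.25 0.25 0.25 0.25]
print(hitting_dist(P2, init2))  # the same distribution
```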
4 THE DDP-SPN FORMALISM

SPNs are obtained when the time that must elapse between the instant a transition becomes enabled and the instant it can fire, or firing time, is a random variable. By restricting the firing time distributions to D, we obtain the DDP-SPNs, corresponding to a stochastic process where the state has the form s = (μ, φ) ∈ ℕ^|P| × ℕ^|T|. The structural component μ is simply the current marking. The timing component φ describes the current phases, the state for the DTMC chosen to encode the DDP distribution associated with the firing time of each transition. The firing time of a transition t elapses when its phase φ_t reaches 0. Formally, a DDP-SPN is a tuple (P, T, D⁻, D⁺, D°, ≻, g, μ^[0], Φ, G, F, φ^[0], ≻≻, w) where:

- (P, T, D⁻, D⁺, D°, ≻, g, μ^[0]) defines a PN.

- ∀t ∈ T, ∀μ: Φ_t(μ) ⊂ ℕ is the finite set of possible phases in which transition t can be when the marking is μ.

- ∀μ, ∀t ∈ E(μ), ∀i, j ∈ Φ_t(μ): G_t(μ, i, j) is the probability that the phase of t changes from i to j at the end of one step, when t is enabled in marking μ. Hence, Σ_{j ∈ Φ_t(μ)} G_t(μ, i, j) = 1.

- ∀μ, ∀u ∈ E(μ), ∀t ∈ T, ∀i ∈ Φ_t(μ), ∀j ∈ Φ_t(N(u, μ)): F_{u,t}(μ, i, j) is the probability that the phase of t changes from i to j when u fires in marking μ. Hence, Σ_{j ∈ Φ_t(N(u,μ))} F_{u,t}(μ, i, j) = 1.

- ∀t ∈ T: φ^[0]_t ∈ Φ_t(μ^[0]) is the phase of t at time 0.

- ≻≻ ⊆ T × T is an acyclic (post-selection) priority relation.

- ∀μ, ∀S ⊆ E(μ), ∀t ∈ S: w_{t|S}(μ) ∈ ℝ⁺ is the firing weight for t when S is the set of candidates to fire in marking μ.

A transition t ∈ T is said to be a candidate (to fire) in state s = (μ, φ) iff all the following conditions hold:

1. t ∈ E(μ).
2. φ_t = 0.
3. ∀u ∈ T, ¬(u ≻≻ t) or u is not a candidate in s (remember that ≻≻ is acyclic).
Let C(s) be the set of candidates in state s. G_t(μ, ·, ·) is the one-step transition probability matrix of the DTMC {φ^[k]_t : k ∈ ℕ}, with state space Φ_t(μ), corresponding to the DDP-distributed firing time for transition t in marking μ in isolation, that is, assuming that no other transition firing affects the firing time of t. However, if another transition u fires before t, leading to marking μ′, the phase φ_t of t will change according to the distribution F_{u,t}(μ, φ_t, ·). Furthermore, after the firing of u, the phase of t will evolve according to G_t(μ′, ·, ·), which might differ from G_t(μ, ·, ·); it can even have a different state space, Φ_t(μ′) instead of Φ_t(μ).
We stress that pre-selection and post-selection have a different semantics. Only in the case of immediate transitions do the two become equivalent. Assume that only t and u satisfy the input, inhibitor, and guard conditions in μ. We have three options, resulting in three different behaviors:

- Specify a pre-selection priority between them, for example t ≻ u, so that u will not be enabled when t is. This means that the phase φ_t of t evolves according to G_t(μ, ·, ·), while φ_u does not. The same effect would be achieved using a guard g_u(μ) = 0.

- Specify no pre-selection priority, but a post-selection priority between them, for example t ≻≻ u. This means that the phases of both t and u evolve in μ. The first one to reach phase 0 will fire but, in case of a tie, t will be chosen. However, if φ_u = 0 when t fires and if F_{t,u}(μ, 0, 0) = 1, u might be a candidate in the new marking, and fire immediately after t.

- Specify neither a pre-selection nor a post-selection priority between them. Then, as in the previous case, t and u are in a race to reach phase 0, but a tie is now resolved by a probabilistic choice according to the weights w̄_{t|{t,u}}(μ) and w̄_{u|{t,u}}(μ), respectively, where w̄ is a normalization of w that ensures the weights of the candidates in a marking sum to one.
Let (μ^[n], φ^[n]) be the state of the DDP-SPN at step n. Then, the process {(μ^[n], φ^[n]) : n ∈ ℕ} is a DTMC with state space S ⊆ ℕ^|P| × ℕ^|T|. Its one-step transition probability matrix Π is determined by considering the possibility of simultaneous firings. Consider a state s = (μ, φ). If C(s) ≠ ∅, one of the candidates will fire immediately, and the sojourn time in s is zero. Otherwise, the sojourn time in s is one. Following GSPN [3] terminology, we call s a vanishing or tangible state, respectively. Hence, s is tangible iff φ > 0.

Let S_{s,s′} be the set of possible event sequences leading from a tangible state s = (μ, φ) to a tangible state s′ = (μ′, φ′) in one time step:

S_{s,s′} = { σ = (μ^(0), φ^(0), t^(0), μ^(1), φ^(1), t^(1), ..., μ^(n−1), φ^(n−1), t^(n−1), μ^(n), φ^(n)) :
    n ≥ 0, μ^(0) = μ, μ^(n) = μ′, φ^(n) = φ′,
    ∀t ∈ E(μ), G_t(μ, φ_t, φ^(0)_t) > 0,    (20.1)
    ∀i, 0 ≤ i < n, t^(i) ∈ C(μ^(i), φ^(i)), μ^(i+1) = N(t^(i), μ^(i)),    (20.2)
    ∀t ∈ T, F_{t^(i),t}(μ^(i), φ^(i)_t, φ^(i+1)_t) > 0 }.    (20.3)

(20.1) considers the one-step evolution of the phases for the enabled transitions in isolation, while (20.2) and (20.3) consider the sequentialized firing in zero time of zero or more transitions at the end of the one-step period. Hence, (μ^(i), φ^(i)) is a vanishing state, for 0 ≤ i < n.
The value of the nonzero entries of Π is obtained by summing the probability of all possible sequences leading from s to s′:

Π_{s,s′} = Σ_{σ ∈ S_{s,s′}} [ ∏_{t ∈ E(μ)} G_t(μ, φ_t, φ^(0)_t) ] · ∏_{i=0}^{n−1} [ w̄_{t^(i) | C(μ^(i), φ^(i))}(μ^(i)) ∏_{t ∈ T} F_{t^(i),t}(μ^(i), φ^(i)_t, φ^(i+1)_t) ]
In a practical implementation, Π is computed one row at a time. The complexity of computing row s of Π can be substantial, depending on the length and number of sequences in ∪_{s′} S_{s,s′}. If ∪_{s′} S_{s,s′} is infinite, special actions must be taken. This can happen for two reasons:

- S is itself infinite, and state s can reach an infinite number of states in a single step. Consider, for example, a single queue with batch arrivals of size N > 0, where N ∼ Geom(α), as in Fig. 3. Following the firing of t, a geometrically distributed number of tokens will be placed in p₂: when the token is finally removed from p₁ (by the firing of v), p₂ contains N tokens with probability α (1 − α)^{N−1}. This represents a batch arrival of size N at the server modeled by place p₂ and transition y. Unfortunately, finiteness of S is an undecidable question for the class of Petri nets we defined, since transition priorities alone make them Turing equivalent [2].

- S_{s,s′} can be infinite for a particular s′. If S is finite, this requires the presence of arbitrarily long paths over a finite set of vanishing states, just as for a vanishing loop in a GSPN [11]. In a practical implementation, these cycles can be detected and managed appropriately.
[Figure 3: (0, 0) can reach an infinite number of markings in one time step. Firing time distributions: F_t ∼ Geom(p), F_u ∼ Const(0), F_v ∼ Const(0), F_y ∼ Geom(q); weights w_u = 1 − α, w_v = α.]

The size of the DTMC underlying a DDP-SPN is affected by the choice of the representation for the DDP distributions involved. Consider, for example, the
DDP-SPN in Fig. 4(a), and assume that transitions t₁, t₂, and t₃ have firing time distributions Const(1), Unif(0, 3), and Const(2), respectively. The corresponding DTMCs obtained using the two representations of Fig. 2 for Unif(0, 3) are shown in Fig. 4(b) and 4(c), respectively. The number of states is ten in the first case, seven in the second (the value of φ_t is left unspecified whenever t is not enabled and either it cannot become enabled again or its phase is going to be reset upon becoming enabled). The difference between the size of the two DTMCs is due to a lumping [22] of the states, and it would be even greater if t₃ had a more complex distribution. By postponing the probabilistic decision as much as possible, the second DTMC lumps states (011, 12), (011, 22), and (011, 32) of the first DTMC into a single one, (011, 32), and states (011, 11) and (011, 21) into (011, 21).
5 ANALYSIS OF DDP-SPNS

When using a SPN to model a system, a reward rate ρ_μ is associated to each marking μ. Starting from {(μ^[n], φ^[n]) : n ∈ ℕ}, it is then possible to define two continuous-parameter processes: {y(θ), θ ≥ 0}, describing the instantaneous reward rate at time θ: y(θ) = ρ_{μ(θ)}, where μ(θ) = μ^[⌊θ⌋]; and {Y(θ), θ ≥ 0}, describing the reward accumulated up to time θ: Y(θ) = ∫₀^θ ρ_{μ(τ)} dτ.
[Figure 4: The effect of equivalent Unif(0, 3) representations: (a) the DDP-SPN; (b) the ten-state DTMC obtained with the first representation of Fig. 2; (c) the seven-state DTMC obtained with the second.]
We consider the computation of the expected value of y(θ_F) and Y(θ_F) for finite values of θ_F. Let π^[n] = [π^[n]_s] = [Pr{s^[n] = s}] be the state probability vector at time n. Once the state space S corresponding to the initial state (μ^[0], φ^[0]) has been generated, any initial probability vector over S can be used for the initial probability vector π^[0]; there is no requirement to use a vector having a one in position (μ^[0], φ^[0]) and a zero elsewhere. From π^[0], we can obtain π^[n] iteratively, performing n matrix-vector multiplications:

    π^[n] = π^[n−1] Π    (20.4)

Since the DTMC can change state only at integer times, μ(θ) = μ^[n] for θ ∈ [n, n + 1). Practical implementations assume that the state space is finite and that the transition probability matrix Π is computed before starting the iterations. The following shows the pseudo-code to compute E[y(θ_F)] and E[Y(θ_F)] with the power method:

1. compute S, Π, and π^[0];
2. Y ← 0; π ← π^[0];
3. for n = 1 to ⌊θ_F⌋ do
4.     Y ← Y + Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ;
5.     π ← π Π;
6. E[Y(θ_F)] ← Y + (θ_F − ⌊θ_F⌋) Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ;
7. E[y(θ_F)] ← Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ;
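Under the assumption of a finite state space, the pseudo-code above can be sketched directly. The two-state chain and reward rates below are made up, just to show the bookkeeping:

```python
import math
import numpy as np

def expected_rewards(P, pi0, rho, theta_f):
    """E[y(theta_f)] and E[Y(theta_f)] by the power method (Eq. 20.4)."""
    pi, Y = np.array(pi0, dtype=float), 0.0
    for _ in range(math.floor(theta_f)):
        Y += pi @ rho          # reward accumulated over one unit step
        pi = pi @ P            # pi^[n] = pi^[n-1] P
    y = pi @ rho               # instantaneous reward rate at theta_f
    Y += (theta_f - math.floor(theta_f)) * y   # fractional last step
    return y, Y

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
rho = np.array([1.0, 0.0])     # reward rate 1 in state 0, 0 in state 1
y, Y = expected_rewards(P, [1.0, 0.0], rho, 2.5)
```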
If the state space S is finite, it is possible to approximate the steady-state probability vector π^[∞] = lim_{n→∞} π^[n] by iterating the power method long enough.
[Figure 5: A DDP-SPN with an ergodic underlying DTMC. Firing time distributions: F_{t₁} ∼ Geom(p, 1), F_{t₂} ∼ Geom(q, 3), F_{t₃} ∼ Const(2), F_{t₄} ∼ Const(1).]
If the DTMC is ergodic, though, other numerical approaches are preferable, based on the relation π^[∞] = π^[∞] Π, which can be rewritten as the homogeneous linear system π^[∞] (Π − I) = 0, subject to Σ_{s ∈ S} π^[∞]_s = 1. Fast iterative methods such as successive over-relaxation (SOR) [12] or multilevel methods [20] can then be employed, although their convergence is not guaranteed. Fig. 5 offers an example of an ergodic DTMC obtained from a DDP-SPN.
6 COPING WITH LARGE STATE SPACES

The power method algorithm described requires generating the state space S and Π, and then iterating using Eq. (20.4); hence it assumes a finite S. However, a dynamic state space exploration has been proposed to remove this restriction [16, 18]. The general idea is to start from the initial state, or set of initial states, and iteratively compute both the set of reachable states and the probability of being in them after n steps, for increasing values of n. The approach has been proposed for the transient analysis of CTMCs using uniformization [17, 21, 27], where, in practice, the iterations must be stopped at a large but finite n, thus resulting in a truncation error which can be bounded. However, the same approach is even more appropriate for the transient analysis of DTMCs, since, in this case, no truncation is required: the exact number of steps to be considered is determined by the time θ_F at which the results are desired.
Let S^[n] be the set of states explored at step n. States in S \ S^[n] have zero probability at step n, given the initial state(s). Then, S^[0] is completely determined by π^[0], which is given, and S^[n] is obtained from S^[n−1] by considering the nonzero entries in Π_{s,·} for each s ∈ S^[n−1]. The pseudo-code for this modified power method algorithm is:
1. Y ← 0; π ← 0; Π ← 0; S ← {(μ^[0], φ^[0])}; N ← S; π_{(μ^[0], φ^[0])} ← 1.0;
2. for n = 1 to ⌊θ_F⌋ do
3.     Y ← Y + Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ;
4.     N′ ← ∅;
5.     while ∃s ∈ N do
6.         for each s′ such that S_{s,s′} ≠ ∅ do
7.             compute Π_{s,s′};
8.             if s′ ∉ S then
9.                 N′ ← N′ ∪ {s′}; S ← S ∪ {s′};
10.        N ← N \ {s};
11.    N ← N′;
12.    π ← π Π;
13. E[Y(θ_F)] ← Y + Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ (θ_F − ⌊θ_F⌋);
14. E[y(θ_F)] ← Σ_{(μ,φ) ∈ S} π_{(μ,φ)} ρ_μ;
At the beginning of the n-th iteration, S and S \ N contain the states reachable in less than n and n − 1 steps, respectively. The rows Π_{s,·} for the states s ∈ S \ N have been built in previous iterations, while those corresponding to states s ∈ N still need to be computed. During the n-th iteration, N′ accumulates the states reachable in exactly n, but not fewer, steps. These states will be explored at the next iteration. This algorithm allows a DDP-SPN to be studied regardless of whether S is finite or not, provided that:
- θ_F is finite (transient analysis).

- A finite set of states has nonzero initial probability: |{s : π^[0]_s > 0}| < ∞.

- Each row of Π contains a finite number of nonzero entries or, in other words, if the marking is μ at time θ, the set of possible markings at time θ + 1 is finite.
The first two requirements can be easily verified. The third requirement is certainly satisfied if S_{s,s′} does not contain arbitrarily long sequences. This requirement does not allow us to analyze exactly, for example, the DDP-SPN in Fig. 3. However, this behavior can be approximated arbitrarily well using a truncated geometric distribution for the size of the batch arrivals. Incidentally, we observe that the continuous version of this SPN, where t and y are exponentially distributed, shows that Proposition 1 in [9] does not hold for unbounded systems: there is no SPN with only exponentially distributed firing times equivalent to this GSPN (equivalently, there is no SPN with only geometrically distributed firing times equivalent to the DDP-SPN in Fig. 3).
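A minimal sketch of the dynamic exploration idea, on a deliberately infinite chain (a random walk on the non-negative integers, not a DDP-SPN): rows of the transition matrix are generated lazily, so only the states reachable within n steps are ever touched.

```python
def row(s):
    """Lazily generated row Pi_{s,.}: from state s, stay w.p. 0.5
    or move to s + 1 w.p. 0.5 (an infinite-state DTMC)."""
    return {s: 0.5, s + 1: 0.5}

def transient(pi0, n_steps):
    """pi^[n] over a dynamically explored state space."""
    pi = dict(pi0)                       # sparse probability vector
    for _ in range(n_steps):
        new_pi = {}
        for s, p in pi.items():          # only states with nonzero probability
            for s2, q in row(s).items(): # explore successors on demand
                new_pi[s2] = new_pi.get(s2, 0.0) + p * q
        pi = new_pi
    return pi

pi = transient({0: 1.0}, 3)
# After 3 steps only states 0..3 are reachable: Binomial(3, 0.5) probabilities
print(pi)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```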
6.1 Truncating the state space

The modified power method algorithm can, in principle, perform the transient analysis of any DDP-SPN that reaches only a finite number of markings (hence states) in a finite amount of time. In practice, though, the number of markings reachable in a finite time might still be too large, hence we need to find ways to reduce the memory requirements.

A first observation allows us to reduce the number of states that must be stored without introducing any approximation. If all the firing times have geometric distributions with parameters less than one, there is a nonzero probability of remaining in a state s for an arbitrary number of steps, once s is entered. Indeed, the assumption of our modified power method algorithm, and of [16, 18], is that the set of explored states never decreases: S^[0] ⊆ S^[1] ⊆ S^[2] ⊆ ⋯. However, some firing times might have distributions with finite support, so it is possible that π^[n]_s > 0 while π^[n+1]_s = 0 and, in this case, state s can be discarded before computing S^[n+2]. Then, we can redefine S^[n] to be the set of time-reachable states at step n, that is, the states having a nonzero probability at step n: S^[n] = {s : π^[n]_s > 0}.
[Figure 6: A case where S^[n] ⊂ S^[n+1] and another where S^[n] ∩ S^[n+1] = ∅. In one net the firing time of t₁ is Geom(0.9), in the other it is Const(1).]
S^[0] is completely determined by π^[0], which is given, and π^[n], hence S^[n], is obtained from π^[n−1] by computing Π_{s,·} for each s ∈ S^[n−1], and then restricting the usual matrix-vector multiplication π^[n] = π^[n−1] Π to the entries corresponding to S^[n−1], since the other entries are zero anyway. Extreme cases are illustrated in Fig. 6, where, in (a), S^[n] = {(1j, 1) : 0 ≤ j ≤ n}, while, in (b), S^[n] = {(1n, 1)}.
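The two extreme cases can be reproduced with the sparse-vector view of the iteration: keeping only states with nonzero probability automatically discards time-unreachable states. The encoding below is a toy of my own, with the marking abbreviated to the token count j of (1j, 1):

```python
def step(pi, advance_prob):
    """One sparse power-method step for the nets of Fig. 6: state j moves
    to j + 1 with probability advance_prob and stays with the remainder."""
    new = {}
    for j, p in pi.items():
        new[j + 1] = new.get(j + 1, 0.0) + p * advance_prob
        if advance_prob < 1.0:
            new[j] = new.get(j, 0.0) + p * (1.0 - advance_prob)
    return {j: p for j, p in new.items() if p > 0.0}  # time-reachable S^[n]

pi_a = {0: 1.0}       # (a) Geom(0.9): S^[n] grows at each step
pi_b = {0: 1.0}       # (b) Const(1): S^[n] is always a single state
for _ in range(3):
    pi_a = step(pi_a, 0.9)
    pi_b = step(pi_b, 1.0)
print(sorted(pi_a))   # [0, 1, 2, 3]
print(sorted(pi_b))   # [3]
```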
Hence, if a state s is time-reachable at step n, but time-unreachable at step n + 1, we can destroy it and its corresponding row in Π at the end of step n + 1. At worst, the same state s might become time-reachable again at a later step, and the algorithm will have to compute the corresponding row Π_{s,·} of the transition probability matrix again.

The observation that the S^[n] are not required to be a sequence of nondecreasing subsets is, we believe, new. Unfortunately, geometric distributions with parameter less than one are often used in practice, resulting in an increasingly larger set of states to be stored at each step of the modified power method.
Further observing that some markings might have negligible probability, however, allows us to avoid keeping S^[n] in its entirety, at the cost of an approximate solution, but with computable bounds. For example, in Fig. 6(a), the probability of marking (1k, 1) at step n, k ≤ n, is (n choose k) 0.9^k 0.1^{n−k}, which is extremely small when the difference between n and k is large. An approximate solution approach based on truncation of the state space might then be appropriate. At step n, only the states in Ŝ^[n] ⊆ S^[n] are considered. For each state s ∈ Ŝ^[n], its computed probability π̂^[n]_s is an approximation of the exact probability π^[n]_s at step n:
1. Initially, π^[0] is known, so set π̂^[0] ← π^[0] and Ŝ^[0] ← {s : π̂^[0]_s > 0}. The total known probability mass and the total known sojourn time at the beginning are γ^[0] ← ||π̂^[0]||₁ = Σ_{s ∈ Ŝ^[0]} π̂^[0]_s = 1 and K^[0] ← 0.

2. As the iteration progresses, the size of Ŝ^[n] might grow too large, and states with probability below a threshold ε must be truncated, destroying them and their corresponding row in Π, de facto setting their probability to zero:

       for each s ∈ Ŝ^[n] do if π̂^[n]_s < ε then π̂^[n]_s ← 0.

   Compute the new set of kept states Ŝ^[n] ← {s : π̂^[n]_s > 0}. Regardless of whether truncation is performed, the total known probability mass at step n and the total known sojourn time up to step n are γ^[n] ← ||π̂^[n]||₁ = Σ_{s ∈ Ŝ^[n]} π̂^[n]_s ≤ 1 and K^[n] ← K^[n−1] + γ^[n]. Without other information, we can only say that the probability of being in state s ∈ Ŝ^[n] at step n is at least π̂^[n]_s, while we do not know how the unaccounted probability mass γ^[0] − γ^[n] should be redistributed (we know that it should be redistributed over the states in S^[n], hence some of it could be over states in Ŝ^[n] ⊆ S^[n], but we have no way to tell). An analogous interpretation holds for K^[n].

3. Truncation can be performed as many times as needed, although every application reduces the value of γ^[n], and thus increases our uncertainty about the state of the system.
4. Upon reaching time θ_F, we know that, with probability at least γ^[⌊θ_F⌋], the system is in one of the non-truncated states Ŝ^[⌊θ_F⌋]. Conversely, a total of

       K̄(θ_F) ← θ_F − K^[⌊θ_F⌋] − γ^[⌊θ_F⌋] (θ_F − ⌊θ_F⌋)

   sojourn time units are unaccounted for. Hence, assuming that the reward rates associated to the states have a lower and an upper bound, ρ_L and ρ_U, E[Y(θ_F)] and E[y(θ_F)] can be bounded as well. If E[Ŷ(θ_F)] and E[ŷ(θ_F)] are the approximations obtained using our truncation approach,

       E[ŷ(θ_F)] + ρ_L (1 − γ^[⌊θ_F⌋]) ≤ E[y(θ_F)] ≤ E[ŷ(θ_F)] + ρ_U (1 − γ^[⌊θ_F⌋]),
       E[Ŷ(θ_F)] + ρ_L K̄(θ_F) ≤ E[Y(θ_F)] ≤ E[Ŷ(θ_F)] + ρ_U K̄(θ_F).
Highly-reliable systems are particularly good candidates for this state-space
truncation, since they have a large number of low-probability states.
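A sketch of the truncation bookkeeping on the random-walk toy model used earlier (the threshold ε, the reward function, and the reward-rate bounds are made up; the point is the computation of γ, K, and the resulting bounds, not the model):

```python
def truncated_transient(n_steps, eps, advance_prob=0.5):
    """Sparse power method with truncation (random walk: stay or advance).
    Returns (pi_hat, gamma, K): truncated vector, known mass, known sojourn."""
    pi = {0: 1.0}
    K = 0.0
    for _ in range(n_steps):
        new = {}
        for s, p in pi.items():
            new[s + 1] = new.get(s + 1, 0.0) + p * advance_prob
            new[s] = new.get(s, 0.0) + p * (1.0 - advance_prob)
        pi = {s: p for s, p in new.items() if p >= eps}  # truncate small states
        K += sum(pi.values())        # K^[n] = K^[n-1] + gamma^[n]
    gamma = sum(pi.values())         # known probability mass gamma^[n]
    return pi, gamma, K

pi, gamma, K = truncated_transient(10, eps=0.01)
rho_L, rho_U = 0.0, 1.0              # assumed reward-rate bounds
y_hat = sum(p for s, p in pi.items() if s < 5)   # made-up reward estimate
lower = y_hat + rho_L * (1.0 - gamma)
upper = y_hat + rho_U * (1.0 - gamma)
assert lower <= upper                # the exact E[y] lies in [lower, upper]
```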
6.2 Embedding the DTMC
When performing steady-state analysis, it is possible to perform an embedding
of the DTMC, observing it only when particular state-to-state transitions occur.
For a simple example, consider the DTMC in Fig. 5, which has a transition
from state (000111, 1) to state (111000, 132) with probability one. If the
firing time of transition t_4 were changed to Const(7), instead of Const(1), the
DTMC would have to contain six additional states, (000111, 7) through
(000111, 2). This is obviously undesirable, and it can be easily avoided by
an embedding. The DTMC of the embedded process is exactly that of Fig. 5;
we must simply set the expected holding time h_s of each state s to one, except
that of (000111, 1), which is set to seven. Then, we can solve the embedded
DTMC for steady state and obtain a steady-state probability vector for the
embedded process. The steady-state probability vector of the actual process
is then obtained by weighting according to the holding times, a well-known
result applicable to the steady-state solution of any semi-Markov process [8]:

π_s = π̃_s h_s ( Σ_{u ∈ S} π̃_u h_u )^{−1},

where π̃ is the steady-state probability vector of the embedded process.
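As a sketch of this weighting step, the following solves a small embedded DTMC for its stationary vector by power iteration and then re-weights it by the expected holding times h_s. The 2-state chain and the holding-time values are illustrative, not the example of Fig. 5.

```python
def stationary(P, iters=20000):
    """Stationary vector of an aperiodic DTMC by power iteration
    (P is a row-stochastic matrix given as a list of lists)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def weight_by_holding(pi_emb, h):
    """Semi-Markov weighting: pi_s = pi~_s h_s / sum_u pi~_u h_u."""
    norm = sum(p * hs for p, hs in zip(pi_emb, h))
    return [p * hs / norm for p, hs in zip(pi_emb, h)]

P = [[0.50, 0.50],   # illustrative embedded chain, with
     [0.25, 0.75]]   # stationary vector (1/3, 2/3)
h = [7.0, 1.0]       # state 0 is held 7 time units, state 1 just one
pi = weight_by_holding(stationary(P), h)
```

Even though the embedded chain spends only a third of its transitions in state 0, the long holding time there dominates, so the actual process spends most of its time in state 0.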
For transient analysis, the same idea can be applied, but in a much more re-
stricted way. If, at step n, every state in S^[n] is such that the minimum time
before a change of marking is k > 1, we can effectively perform an embedding.
In the modified power method algorithm, this requires advancing n by k in-
stead of just one step in the outermost loop and adjusting the increment of Y
in statement 3 accordingly. It should be noted, however, that this situation
is unlikely to occur, since the set S^[n] may contain many states s = (μ, φ),
and, for each of them, the DTMC describing the firing time of each enabled
transition t in μ must satisfy

min { l ∈ IN : Pr[ γ_t^[l] = 0 | γ_t^[0] = φ_t ] > 0 } ≥ k.

This is analogous to the requirement for an efficient application of adaptive
uniformization [28] and, as stated in the introduction, it is unlikely to happen
in general, especially for large values of θ_F.
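A minimal sketch of this k-step skip, under the assumption that a user-supplied oracle min_steps (a hypothetical helper, not defined in the paper) reports how many steps can safely be skipped before any marking can change:

```python
def transient(pi0, P, n_F, min_steps):
    """Advance the probability vector pi0 to time n_F, skipping k steps
    at a time when min_steps(pi) reports that no marking change can
    occur in fewer than k steps (k = 1 recovers the plain method)."""
    n, pi = 0, pi0[:]
    Y = [0.0] * len(pi0)          # accumulated expected sojourn time per state
    while n < n_F:
        k = min(min_steps(pi), n_F - n)
        for s, p in enumerate(pi):
            Y[s] += k * p         # statement 3: increment by k instead of 1
        for _ in range(k):        # in practice, one application of the
            m = len(P)            # embedded k-step matrix replaces this loop
            pi = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
        n += k
    return pi, Y
```

The saving comes from replacing the k inner vector-matrix products with a single product against the embedded matrix; the loop is written out here only to keep the sketch self-contained.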
7 CONCLUSION AND FUTURE WORK
We defined a class of discrete-time distributions which, when used to specify
the firing time of the transitions in a stochastic Petri net, ensures that the
underlying stochastic process is a DTMC. We then gave conditions under which
the transient analysis of this DTMC can be performed even if the state space is
infinite. In practice, though, the memory requirements might still be excessive,
hence we explored some state-space reduction techniques.
The implementation of a computer tool based on the DDP-SPN formalism is
under way. In particular, algorithms for the efficient computation of the rows
of the transition probability matrix, Π_{s,·}, are being explored.
REFERENCES
[1] V. S. Adve and M. K. Vernon. The influence of random delays on parallel
execution times. In Proc. 1993 ACM SIGMETRICS Conf. on Measurement
and Modeling of Computer Systems, Santa Clara, CA, May 1993.
[2] T. Agerwala. A complete model for representing the coordination of
asynchronous processes. Hopkins Computer Research Report 32, Johns
Hopkins University, Baltimore, Maryland, July 1974.
[3] M. Ajmone Marsan, G. Balbo, and G. Conte. A class of Generalized
Stochastic Petri Nets for the performance evaluation of multiprocessor
systems. ACM Trans. Comp. Syst., 2(2):93–122, May 1984.
[4] M. Ajmone Marsan and V. Signore. Timed Petri nets performance models
for fiber optic LAN architectures. Proc. 2nd Int. Workshop on Petri Nets
and Performance Models (PNPM87), pages 66–74, 1987.
[5] R. Y. Al-Jaar and A. A. Desrochers. Modeling and analysis of transfer
lines and production networks using Generalized Stochastic Petri Nets. In
Proc. 1988 UPCAEDM Conf., pages 12–21, Atlanta, GA, June 1988.
[6] K. Barkaoui, G. Florin, C. Fraize, B. Lemaire, and S. Natkin. Reliability
analysis of non-repairable systems using stochastic Petri nets. In Proc.
18th Int. Symp. on Fault-Tolerant Computing, pages 90–95, Tokyo, Japan,
June 1988.
[7] J. Bechta Dugan and G. Ciardo. Stochastic Petri net analysis of a
replicated file system. IEEE Trans. Softw. Eng., 15(4):394–401, Apr. 1989.
[8] E. Çinlar. Introduction to Stochastic Processes. Prentice-Hall, 1975.
[9] G. Chiola, S. Donatelli, and G. Franceschinis. GSPNs versus SPNs: what
is the actual role of immediate transitions? In Proc. 4th Int. Workshop on
Petri Nets and Performance Models (PNPM91), Melbourne, Australia,
Dec. 1991. IEEE Computer Society Press.
[10] H. Choi and K. S. Trivedi. Approximate performance models of polling
systems using stochastic Petri nets. In Proc. IEEE INFOCOM 92, pages
2306–2314, Florence, Italy, May 1992.
[11] G. Ciardo. Analysis of large stochastic Petri net models. PhD thesis, Duke
University, Durham, NC, 1989.
[12] G. Ciardo, A. Blakemore, P. F. J. Chimento, J. K. Muppala, and K. S.
Trivedi. Automated generation and analysis of Markov reward models
using Stochastic Reward Nets. In C. Meyer and R. J. Plemmons, editors,
Linear Algebra, Markov Chains, and Queueing Models, volume 48 of IMA
Volumes in Mathematics and its Applications, pages 145–191. Springer-
Verlag, 1993.
[13] G. Ciardo, R. German, and C. Lindemann. A characterization of the
stochastic process underlying a stochastic Petri net. IEEE Trans. Softw.
Eng., 20(7):506–515, July 1994.
[14] G. Ciardo and C. Lindemann. Analysis of deterministic and stochastic
Petri nets. In Proc. 5th Int. Workshop on Petri Nets and Performance
Models (PNPM93), pages 160–169, Toulouse, France, Oct. 1993. IEEE
Computer Society Press.
[15] A. Cumani. ESP - A package for the evaluation of stochastic Petri nets
with phase-type distributed transition times. In Proc. Int. Workshop on
Timed Petri Nets, Torino, Italy, July 1985.
[16] E. de Souza e Silva and P. Mejía Ochoa. State space exploration in Markov
models. In Proc. 1992 ACM SIGMETRICS Conf. on Measurement and
Modeling of Computer Systems, pages 152–166, Newport, RI, USA, June
1992.
[17] W. K. Grassmann. Means and variances of time averages in Markovian
environments. Eur. J. Oper. Res., 31(1):132–139, 1987.
[18] W. K. Grassmann. Finding transient solutions in Markovian event systems
through randomization. In W. J. Stewart, editor, Numerical Solution of
Markov Chains, pages 357–371. Marcel Dekker, Inc., New York, NY, 1991.
[19] M. Holliday and M. Vernon. A Generalized Timed Petri Net model for
performance analysis. In Proc. Int. Workshop on Timed Petri Nets, Torino,
Italy, July 1985.
[20] G. Horton and S. T. Leutenegger. A multi-level solution algorithm for
steady state Markov chains. In Proc. 1994 ACM SIGMETRICS Conf.
on Measurement and Modeling of Computer Systems, pages 191–200,
Nashville, TN, May 1994.
[21] A. Jensen. Markoff chains as an aid in the study of Markoff processes.
Skand. Aktuarietidskr., 36:87–91, 1953.
[22] J. G. Kemeny and J. L. Snell. Finite Markov Chains. D. Van Nostrand-
Reinhold, New York, NY, 1960.
[23] C. Lindemann, G. Ciardo, R. German, and G. Hommel. Performability
modeling of an automated manufacturing system with deterministic and
stochastic Petri nets. In Proc. 1993 IEEE Int. Conf. on Robotics and
Automation, pages 576–581, Atlanta, GA, May 1993. IEEE Press.
[24] B. Melamed and M. Yadin. Randomization procedures in the computation
of cumulative-time distributions over discrete state Markov processes.
Operations Research, 32(4):926–944, July-Aug. 1984.
[25] M. K. Molloy. On the integration of delay and throughput measures in
distributed processing models. PhD thesis, UCLA, Los Angeles, CA, 1981.
[26] S. Natkin. Réseaux de Petri stochastiques. Thèse de docteur-ingénieur,
CNAM-Paris, Paris, France, June 1980.
[27] A. L. Reibman and K. S. Trivedi. Numerical transient analysis of Markov
models. Computers and Operations Research, 15(1):19–36, 1988.
[28] A. P. A. van Moorsel and W. H. Sanders. Adaptive uniformization.
Stochastic Models, 10(3), 1994.
[29] R. Zijal and R. German. A new approach to discrete time stochastic Petri
nets. In Proc. 11th Int. Conf. on Analysis and Optimization of Systems,
Discrete Event Systems, pages 198–204, Sophia-Antipolis, France, June
1994.
[30] W. M. Zuberek. D-timed Petri nets and the modeling of timeouts and
protocols. Trans. of the Society for Computer Simulation, 4(4):331–357,
1987.