Markov Processes
Reference: Oliver C. Ibe, Fundamentals of Stochastic Networks, John Wiley & Sons, 2011.
This means that, given the present state of the process, the future state is independent of the past. This property is usually referred to as the Markov property. In a second-order Markov process, the future state depends on both the current state and the immediately preceding state, and so on for higher-order Markov processes. Here we consider only first-order Markov processes.
Markov processes are classified according to the nature of the time parameter and the nature of the state space. With respect to state space, a Markov process can be either a discrete-state Markov process or a continuous-state Markov process. A discrete-state Markov process is called a Markov chain.
The quantity \(p_{ij}(k)\) is called the state transition probability, which is the conditional probability that the process will be in state \(j\) at time \(k\), immediately after the next transition, given that it is in state \(i\) at time \(k-1\); that is, \(p_{ij}(k) = P[X_k = j \mid X_{k-1} = i]\). A Markov chain whose transition probabilities depend on the time \(k\) in this way is called a nonhomogeneous Markov chain. Here we consider only homogeneous Markov chains, which are Markov chains such that
\[ p_{ij}(k) = p_{ij}. \]
This means that for homogeneous Markov chains the transition probabilities do not depend on the time index, which implies that
\[ P[X_k = j \mid X_{k-1} = i, X_{k-2} = n, \ldots, X_0 = m] = P[X_k = j \mid X_{k-1} = i] = p_{ij}, \]
which is the so-called Markov property.
The homogeneous state transition probability \(p_{ij}\) satisfies the following conditions:
1. \(0 \le p_{ij} \le 1\);
2. \(\sum_j p_{ij} = 1\) for \(i = 1, 2, 3, \ldots, n\), which follows from the fact that the states are mutually exclusive and collectively exhaustive.
From the above definition we obtain the following Markov chain rule:
\[
\begin{aligned}
P[X_k = i_k, X_{k-1} = i_{k-1}, X_{k-2} = i_{k-2}, \ldots, X_0 = i_0]
&= P[X_k = i_k \mid X_{k-1} = i_{k-1}, X_{k-2} = i_{k-2}, \ldots, X_0 = i_0] \\
&\quad \times P[X_{k-1} = i_{k-1}, X_{k-2} = i_{k-2}, \ldots, X_0 = i_0] \\
&= P[X_k = i_k \mid X_{k-1} = i_{k-1}] \times P[X_{k-1} = i_{k-1}, X_{k-2} = i_{k-2}, \ldots, X_0 = i_0] \\
&= P[X_k = i_k \mid X_{k-1} = i_{k-1}] \times P[X_{k-1} = i_{k-1} \mid X_{k-2} = i_{k-2}, \ldots, X_0 = i_0] \\
&\quad \times P[X_{k-2} = i_{k-2}, \ldots, X_0 = i_0] \\
&= P[X_k = i_k \mid X_{k-1} = i_{k-1}] \times P[X_{k-1} = i_{k-1} \mid X_{k-2} = i_{k-2}] \times \cdots \\
&\quad \times P[X_1 = i_1 \mid X_0 = i_0] \times P[X_0 = i_0] \\
&= P[X_0 = i_0]\, p_{i_0 i_1}\, p_{i_1 i_2}\, p_{i_2 i_3} \cdots p_{i_{k-1} i_k}.
\end{aligned}
\]
Thus, once we know the probability of the initial state \(X_0\), we can evaluate the joint probability
\[ P[X_k = i_k, X_{k-1} = i_{k-1}, X_{k-2} = i_{k-2}, \ldots, X_0 = i_0]. \]
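This computation is easy to mechanize. Below is a minimal Python sketch (NumPy, states indexed from 0; the matrix and initial distribution are hypothetical illustrations, not taken from a specific example in the text):

```python
import numpy as np

def path_probability(p0, P, path):
    """Markov chain rule: P[X_0=i_0, ..., X_k=i_k]
    = P[X_0=i_0] * p_{i_0 i_1} * ... * p_{i_{k-1} i_k}."""
    prob = p0[path[0]]                 # P[X_0 = i_0]
    for i, j in zip(path, path[1:]):   # one factor p_ij per transition
        prob *= P[i, j]
    return prob

# Hypothetical two-state chain and initial distribution:
P = np.array([[0.6, 0.4],
              [0.35, 0.65]])
p0 = np.array([0.5, 0.5])
print(path_probability(p0, P, [0, 1, 1, 0]))   # P[X0=0, X1=1, X2=1, X3=0]
```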
State Transition Probability Matrix
It is customary to display the state transition probabilities as the entries of an \(n \times n\) matrix \(P\), where \(p_{ij}\) is the entry in the \(i\)th row and \(j\)th column (rows index the current "from" state, columns the next "to" state):
\[
P = \begin{pmatrix}
p_{11} & p_{12} & \cdots & p_{1n} \\
p_{21} & p_{22} & \cdots & p_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
p_{n1} & p_{n2} & \cdots & p_{nn}
\end{pmatrix}
\]
\(P\) is called the transition probability matrix. It is a stochastic matrix because for any row \(i\), \(\sum_j p_{ij} = 1\).
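As a quick numerical check (a sketch; the matrix is the three-state example used later in these notes), the two stochastic-matrix conditions can be verified directly:

```python
import numpy as np

P = np.array([[0.1, 0.2, 0.7],
              [0.6, 0.0, 0.4],
              [0.4, 0.0, 0.6]])

assert np.all((0 <= P) & (P <= 1))        # condition 1: entries in [0, 1]
assert np.allclose(P.sum(axis=1), 1.0)    # condition 2: each row sums to 1
```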
The n-Step State Transition Probability
Let \(p_{ij}(n)\) denote the conditional probability that the system will be in state \(j\) after exactly \(n\) transitions, given that it is currently in state \(i\). That is,
\[ p_{ij}(n) = P[X_{m+n} = j \mid X_m = i], \]
\[ p_{ij}(0) = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}, \qquad p_{ij}(1) = p_{ij}. \]
Consider the two-step transition probability \(p_{ij}(2)\), which is defined by
\[ p_{ij}(2) = P[X_{m+2} = j \mid X_m = i]. \]
Assume that \(m = 0\); then
\[
p_{ij}(2) = P[X_2 = j \mid X_0 = i] = \sum_k P[X_2 = j, X_1 = k \mid X_0 = i]
= \sum_k P[X_2 = j \mid X_1 = k]\, P[X_1 = k \mid X_0 = i] = \sum_k p_{ik}\, p_{kj}.
\]
This result generalizes to the Chapman–Kolmogorov equation: for any \(0 < r < n\),
\[ p_{ij}(n) = \sum_k p_{ik}(r)\, p_{kj}(n-r). \]
Proof: The proof is a generalization of the proof for the case of \(n = 2\) and is as follows:
\[
p_{ij}(n) = P[X_n = j \mid X_0 = i] = \sum_k P[X_n = j, X_r = k \mid X_0 = i]
= \sum_k P[X_n = j \mid X_r = k, X_0 = i]\, P[X_r = k \mid X_0 = i]
= \sum_k p_{ik}(r)\, p_{kj}(n-r).
\]
From the preceding discussion it can be shown that \(p_{ij}(n)\) is the \(ij\)th entry (\(i\)th row and \(j\)th column) of the matrix \(P^n\). That is, for an \(N\)-state Markov chain, \(P^n\) is the matrix
\[
P^n = \begin{pmatrix}
p_{11}(n) & p_{12}(n) & p_{13}(n) & \cdots & p_{1N}(n) \\
p_{21}(n) & p_{22}(n) & p_{23}(n) & \cdots & p_{2N}(n) \\
p_{31}(n) & p_{32}(n) & p_{33}(n) & \cdots & p_{3N}(n) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p_{N1}(n) & p_{N2}(n) & p_{N3}(n) & \cdots & p_{NN}(n)
\end{pmatrix}
\]
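In code, the \(n\)-step probabilities are a matrix power. A sketch (the two-state matrix is the weather-example matrix used later in these notes), which also spot-checks the Chapman–Kolmogorov identity:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

n, r = 5, 2
Pn = np.linalg.matrix_power(P, n)      # entry (i, j) is p_ij(n)
Pr = np.linalg.matrix_power(P, r)
Pnr = np.linalg.matrix_power(P, n - r)
assert np.allclose(Pn, Pr @ Pnr)       # Chapman-Kolmogorov: P^n = P^r P^(n-r)
print(Pn)
```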
If we define state 1 to represent heads and state 2 to represent tails, then the transition probability matrix for this problem is
\[ P = \begin{pmatrix} 0.6 & 0.4 \\ 0.35 & 0.65 \end{pmatrix} \]
[Fig. 1: Example of a state transition diagram — two states with self-loops 0.6 (state 1) and 0.65 (state 2), and transitions 0.4 (1 → 2) and 0.35 (2 → 1).]
All the properties of the Markov process can be determined from this matrix. However, the analysis of the problem can be simplified by the use of the state transition diagram, in which the states are represented by circles and directed arcs (edges) represent transitions between states. The state transition probabilities label the appropriate arcs. Thus, with respect to the above problem, we obtain the state transition diagram shown in Fig. 1.
# Transition diagram
\[ P = \begin{pmatrix} 0.1 & 0.2 & 0.7 \\ 0.6 & 0 & 0.4 \\ 0.4 & 0 & 0.6 \end{pmatrix} \]
[Fig. 2: State transition diagram for the three-state chain defined by \(P\): self-loops 0.1 (state 1) and 0.6 (state 3); transitions 0.2 (1 → 2), 0.7 (1 → 3), 0.6 (2 → 1), 0.4 (2 → 3), and 0.4 (3 → 1).]
For an initial distribution \((a_1, a_2, \ldots)\), the Markov chain rule gives, for example,
\[ = \sum_i a_i\, p_{i3}\, p_{31}\, p_{14} = (a_1 p_{13} + a_2 p_{23} + a_3 p_{33} + a_4 p_{43})\, p_{31}\, p_{14} = 0.06. \]
Reference: Michael Baron, Probability and Statistics for Computer Scientists, Chapman & Hall/CRC.
# (WEATHER FORECASTS) In some town, each day is either sunny or rainy. A sunny day is followed by another sunny day with probability 0.7, whereas a rainy day is followed by a sunny day with probability 0.4. It rains on Monday. Make forecasts for Tuesday, Wednesday, and Thursday.
Sol.: The weather here is a homogeneous Markov chain with 2 states. Let state 1 = "sunny day" and state 2 = "rainy day". The transition probability matrix is
\[ P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix} = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} \]
Now if it rains on Monday, then Tuesday is sunny with probability \(p_{21} = 0.4\) and rainy with probability \(p_{22} = 0.6\).
The Wednesday forecast requires the 2-step transition probability matrix:
\[ P^{(2)} = P \cdot P = \begin{pmatrix} p_{11}p_{11} + p_{12}p_{21} & p_{11}p_{12} + p_{12}p_{22} \\ p_{21}p_{11} + p_{22}p_{21} & p_{21}p_{12} + p_{22}p_{22} \end{pmatrix} = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix} \]
Since \(p_{21}^{(2)} = 0.52\) is the probability that Wednesday is sunny given a rainy Monday, the chance of rain on Wednesday is \(p_{22}^{(2)} = 0.48\), i.e., 48%.
For the forecast on Thursday, we need \(P^{(3)} = P^{(2)}P\). It is readily seen that
\[ P^{(3)} = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = \begin{pmatrix} 0.583 & 0.417 \\ 0.556 & 0.444 \end{pmatrix} \]
Therefore Thursday is sunny with probability \(p_{21}^{(3)} = 0.556\), and the chance of rain on Thursday is \(p_{22}^{(3)} = 0.444\), i.e., 44.4%.
[Fig. 3: State transition diagram for the weather problem — self-loops 0.7 (state 1, sunny) and 0.6 (state 2, rainy); transitions 0.3 (1 → 2) and 0.4 (2 → 1).]
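The forecasts above take only a few lines to reproduce (a sketch of the same computation in NumPy):

```python
import numpy as np

P = np.array([[0.7, 0.3],    # state 1 (index 0): sunny
              [0.4, 0.6]])   # state 2 (index 1): rainy

P2 = P @ P      # two-step matrix: Monday -> Wednesday
P3 = P2 @ P     # three-step matrix: Monday -> Thursday
# Starting rainy (index 1): probability of rain two and three days later
print(P2[1, 1], P3[1, 1])   # 0.48, 0.444
```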
# (SHARED DEVICE) A computer is shared by 2 users who send tasks to the computer remotely and work independently. At any minute, any connected user may disconnect with probability 0.5, and any disconnected user may connect with a new task with probability 0.2. (a) Draw the transition diagram; (b) compute the transition probabilities.
Sol.: Let \(X(t)\) be the number of concurrent users at time \(t\) (in minutes), and let \(X(0) = 0\) correspond to no users at time \(t = 0\). Then \(X(1)\), the number of new connections within the next minute, has the binomial distribution \(b(2, 0.2)\). Thus, with \(p = 0.2\) and \(q = 0.8\),
\[ p_{00} = \binom{2}{0} p^0 q^2 = (0.8)^2 = 0.64, \quad p_{01} = \binom{2}{1} p^1 q^1 = 2(0.2)(0.8) = 0.32, \quad p_{02} = \binom{2}{2} p^2 q^0 = 0.04. \]
Next suppose \(X(0) = 1\), i.e., one user is connected and the other is not. The number of new connections is binomial \(b(1, 0.2)\) and the number of disconnections is binomial \(b(1, 0.5)\). Therefore
\[ p_{10} = P[\text{no new connection}] \cdot P[\text{a disconnection}] = 0.8 \times 0.5 = 0.40, \]
\[ p_{11} = 0.2 \times 0.5 + 0.8 \times 0.5 = 0.50, \qquad p_{12} = 0.2 \times 0.5 = 0.10. \]
Finally, for \(X(0) = 2\) there is no new user to connect, and the number of disconnections follows the binomial distribution \(b(2, 0.5)\), so that
\[ p_{20} = \binom{2}{2}(0.5)^2 = 0.25, \quad p_{21} = \binom{2}{1}(0.5)(0.5) = 0.50, \quad p_{22} = \binom{2}{0}(0.5)^2 = 0.25. \]
Therefore the transition probability matrix is
\[ P = \begin{pmatrix} 0.64 & 0.32 & 0.04 \\ 0.40 & 0.50 & 0.10 \\ 0.25 & 0.50 & 0.25 \end{pmatrix} \]
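The same matrix can be assembled programmatically from the binomial probabilities; a sketch using scipy.stats.binom (the helper transition_row is illustrative, not from the text):

```python
import numpy as np
from scipy.stats import binom

P_CONNECT, P_DISCONNECT, N_USERS = 0.2, 0.5, 2

def transition_row(i):
    """Row i of P: distribution of the next-minute user count given i users now."""
    row = np.zeros(N_USERS + 1)
    for d in range(i + 1):                 # d of the i connected users disconnect
        for c in range(N_USERS - i + 1):   # c of the idle users connect
            row[i - d + c] += (binom.pmf(d, i, P_DISCONNECT)
                               * binom.pmf(c, N_USERS - i, P_CONNECT))
    return row

P = np.vstack([transition_row(i) for i in range(N_USERS + 1)])
print(np.round(P, 2))   # [[0.64 0.32 0.04], [0.40 0.50 0.10], [0.25 0.50 0.25]]
```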
TWO-STATE MARKOV CHAIN
Reference: Kishor S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications, John Wiley & Sons.
# We observe the state of a system (or a component) at discrete points in time. We say that the system is in state "0" if it is operating properly. If the system is undergoing repair (following a breakdown), then it is in state "1". If we assume that the system possesses the Markov property, then we have a two-state discrete-parameter Markov chain. Further assuming that the Markov chain is homogeneous, we can specify its transition probability matrix by
\[ P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}, \qquad 0 \le \alpha, \beta \le 1. \]
The actual values of the entries have to be estimated from measurements made on the system, using statistical techniques.
From the Chapman–Kolmogorov equation, \(p_{00}(n) = (1-\alpha)\,p_{00}(n-1) + \beta\,p_{01}(n-1) = (1-\alpha-\beta)\,p_{00}(n-1) + \beta\). Iterating this recurrence from \(p_{00}(1) = 1-\alpha\) gives
\[
p_{00}(n) = \beta \sum_{l=0}^{n-2} (1-\alpha-\beta)^l + (1-\alpha-\beta)^{n-1}(1-\alpha)
= \beta\,\frac{1-(1-\alpha-\beta)^{n-1}}{1-(1-\alpha-\beta)} + (1-\alpha-\beta)^{n-1}(1-\alpha)
= \frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n.
\]
Thus
\[ p_{00}(n) = \frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n. \]
Since
\[ p_{00}(n) + p_{01}(n) = 1, \]
we have
\[ p_{01}(n) = 1 - p_{00}(n) = \frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n. \]
For \(i = 1, j = 0\), we get
\[ p_{10}(n) = \sum_k p_{1k}(n-1)\,p_{k0}(1) = p_{10}(n-1)\,p_{00}(1) + p_{11}(n-1)\,p_{10}(1) = (1-\alpha)\,p_{10}(n-1) + \beta\,p_{11}(n-1). \tag{2} \]
Since
\[ p_{11}(n-1) = 1 - p_{10}(n-1), \]
(2) reduces to
\[ p_{10}(n) = (1-\alpha)\,p_{10}(n-1) + \beta - \beta\,p_{10}(n-1) = (1-\alpha-\beta)\,p_{10}(n-1) + \beta. \]
Solving this recurrence as before gives
\[ p_{10}(n) = \frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n, \qquad p_{11}(n) = \frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n. \]
Thus, given a two-state Markov chain with the transition probability matrix
\[ P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}, \]
the \(n\)-step transition probability matrix is given by
\[
P^n = \begin{pmatrix} p_{00}(n) & p_{01}(n) \\ p_{10}(n) & p_{11}(n) \end{pmatrix}
= \begin{pmatrix}
\dfrac{\beta}{\alpha+\beta} + \dfrac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n & \dfrac{\alpha}{\alpha+\beta} - \dfrac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n \\[2ex]
\dfrac{\beta}{\alpha+\beta} - \dfrac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n & \dfrac{\alpha}{\alpha+\beta} + \dfrac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n
\end{pmatrix}.
\]
Since \(|1-\alpha-\beta| < 1\) (provided \(0 < \alpha + \beta < 2\)), we observe that
\[ \lim_{n\to\infty} P^n = \begin{pmatrix} \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \\[1ex] \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \end{pmatrix}, \]
and thus the two rows of \(P^n\) match in their corresponding elements in the limit.
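The closed form can be verified against direct matrix powers; a minimal sketch with arbitrary test values of \(\alpha\) and \(\beta\):

```python
import numpy as np

alpha, beta = 0.3, 0.45    # arbitrary test values
s = 1 - alpha - beta
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

def Pn_closed(n):
    """Closed-form n-step transition matrix derived above."""
    return (np.array([[beta + alpha * s**n, alpha - alpha * s**n],
                      [beta - beta * s**n, alpha + beta * s**n]])
            / (alpha + beta))

for n in (1, 2, 10, 50):
    assert np.allclose(Pn_closed(n), np.linalg.matrix_power(P, n))
```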
Finding P^n using the eigenvalues and eigenvectors of P
Consider the two-state Markov chain with transition matrix
\[ P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}, \qquad 0 \le \alpha, \beta \le 1. \]
To find \(P^n\), we first find the eigenvalues of \(P\). They are given by the solution of the determinant equation \(\det(P - \lambda I_2) = 0\), that is,
\[ \begin{vmatrix} 1-\alpha-\lambda & \alpha \\ \beta & 1-\beta-\lambda \end{vmatrix} = 0 \;\Rightarrow\; (1-\alpha-\lambda)(1-\beta-\lambda) - \alpha\beta = 0. \]
Hence \(\lambda\) satisfies the quadratic equation
\[ \lambda^2 - \lambda(2-\alpha-\beta) + (1-\alpha-\beta) = 0, \]
or
\[ (\lambda-1)(\lambda-1+\alpha+\beta) = 0. \]
Therefore the eigenvalues are
\[ \lambda_1 = 1, \qquad \lambda_2 = 1-\alpha-\beta = s \text{ (say)}. \]
We now find the eigenvectors associated with each of these eigenvalues. Let \(\mathbf{r}_1\) be the (column) eigenvector of \(\lambda_1 = 1\), which is defined by
\[ (P - \lambda_1 I_2)\,\mathbf{r}_1 = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} 1-\alpha-\lambda_1 & \alpha \\ \beta & 1-\beta-\lambda_1 \end{pmatrix}\mathbf{r}_1 = \mathbf{0}, \]
or
\[ \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}\mathbf{r}_1 = \mathbf{0}. \]
Choose any (nonzero) solution of this equation, say
\[ \mathbf{r}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \]
Similarly, the second eigenvector \(\mathbf{r}_2\) satisfies
\[ \begin{pmatrix} 1-\alpha-\lambda_2 & \alpha \\ \beta & 1-\beta-\lambda_2 \end{pmatrix}\mathbf{r}_2 = \mathbf{0}, \]
or
\[ \begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix}\mathbf{r}_2 = \mathbf{0}. \]
In this case we choose
\[ \mathbf{r}_2 = \begin{pmatrix} -\alpha \\ \beta \end{pmatrix}. \]
Now form the matrix \(C\) which has the eigenvectors \(\mathbf{r}_1\) and \(\mathbf{r}_2\) as columns, so that
\[ C = (\mathbf{r}_1 \;\; \mathbf{r}_2) = \begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix}. \]
The inverse of the matrix \(C\) may be written as
\[ C^{-1} = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}. \]
Now consider the matrix
\[
D = C^{-1}PC = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}\begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix}
= \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1-\alpha+\alpha & -\alpha(1-\alpha)+\alpha\beta \\ \beta+1-\beta & -\alpha\beta+\beta(1-\beta) \end{pmatrix}
\]
\[
= \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & -\alpha(1-\alpha-\beta) \\ 1 & \beta(1-\alpha-\beta) \end{pmatrix}
= \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & -\alpha s \\ 1 & \beta s \end{pmatrix}
\]
\[
= \frac{1}{\alpha+\beta}\begin{pmatrix} \beta+\alpha & -\alpha\beta s + \alpha\beta s \\ -1+1 & \alpha s + \beta s \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & s \end{pmatrix}
= \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.
\]
Now \(D\) is a diagonal matrix with the eigenvalues of \(P\) as its diagonal elements; this process is known in linear algebra as the diagonalization of a matrix. The result is significant since diagonal matrices are easy to multiply. From the above, if we premultiply \(D\) by the matrix \(C\) and postmultiply by \(C^{-1}\), then we find that
\[ P = CDC^{-1}. \]
Thus
\[ P^2 = (CDC^{-1})(CDC^{-1}) = CD^2C^{-1}, \qquad D^2 = \begin{pmatrix} \lambda_1^2 & 0 \\ 0 & \lambda_2^2 \end{pmatrix}. \]
Extending this product to higher powers, we find that
\[ P^n = CD^nC^{-1}, \]
where
\[ D^n = \begin{pmatrix} 1 & 0 \\ 0 & s^n \end{pmatrix}. \]
The product matrix is
\[
P^n = CD^nC^{-1} = \frac{1}{\alpha+\beta}\begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & s^n \end{pmatrix}\begin{pmatrix} \beta & \alpha \\ -1 & 1 \end{pmatrix}
= \frac{1}{\alpha+\beta}\begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix}\begin{pmatrix} \beta & \alpha \\ -s^n & s^n \end{pmatrix}
\]
\[
= \begin{pmatrix}
\dfrac{\beta}{\alpha+\beta} + \dfrac{\alpha}{\alpha+\beta}s^n & \dfrac{\alpha}{\alpha+\beta} - \dfrac{\alpha}{\alpha+\beta}s^n \\[2ex]
\dfrac{\beta}{\alpha+\beta} - \dfrac{\beta}{\alpha+\beta}s^n & \dfrac{\alpha}{\alpha+\beta} + \dfrac{\beta}{\alpha+\beta}s^n
\end{pmatrix},
\]
which is the same result as was obtained using the Chapman–Kolmogorov equation.
Since \(s^n \to 0\) as \(n \to \infty\), we find that
\[ P^n \to \begin{pmatrix} \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \\[1ex] \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \end{pmatrix} \quad \text{as } n \to \infty. \]
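The diagonalization route can also be carried out numerically; a sketch with numpy.linalg.eig (NumPy scales the eigenvectors differently from the hand calculation, but \(P^n = CD^nC^{-1}\) is unaffected by the scaling):

```python
import numpy as np

alpha, beta = 0.3, 0.45
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

eigvals, C = np.linalg.eig(P)     # columns of C are (right) eigenvectors
n = 10
Dn = np.diag(eigvals ** n)        # powers of a diagonal matrix are elementwise
Pn = C @ Dn @ np.linalg.inv(C)
assert np.allclose(Pn, np.linalg.matrix_power(P, n))
```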
Now we consider a cascade of binary channels that are so noisy that the digit transmitted is always complemented. In other words, \(\alpha = \beta = 1\). The matrix \(P\) in this case is given by
\[ P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \]
It is readily observed that
\[ P^n = \begin{cases} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} & \text{if } n \text{ is even,} \\[2ex] \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} & \text{if } n \text{ is odd.} \end{cases} \]
This Markov chain has an interesting behaviour. Starting in state 0 (or 1), we return to state 0 (or 1) only after an even number of steps. Therefore, the time between visits to a given state exhibits a periodic behaviour. Such a chain is called a periodic Markov chain.
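This alternating behaviour is easy to confirm numerically (a minimal sketch):

```python
import numpy as np

P = np.array([[0, 1],
              [1, 0]])
print(np.linalg.matrix_power(P, 4))   # identity matrix: even number of steps
print(np.linalg.matrix_power(P, 5))   # P itself: odd number of steps
```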
# Find the eigenvalues and eigenvectors of the stochastic matrix
\[ P = \begin{pmatrix} 1/4 & 1/2 & 1/4 \\ 1/2 & 1/4 & 1/4 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}. \]
Construct a formula for \(P^n\), and find \(\lim_{n\to\infty} P^n\).
Sol.: Since every row of \(P\) sums to 1, \(\lambda = 1\) is an eigenvalue of \(P\). Expanding the characteristic determinant gives
\[ \det(P - \lambda I_3) = (1-\lambda)\left(\tfrac{1}{4}-\lambda\right)\left(-\tfrac{1}{4}-\lambda\right) = 0. \]
Therefore the eigenvalues are
\[ \lambda_1 = 1, \qquad \lambda_2 = \tfrac{1}{4}, \qquad \lambda_3 = -\tfrac{1}{4}. \]
The eigenvectors can be obtained from
\[ (P - \lambda_i I_3)\,\mathbf{r}_i = \mathbf{0}, \qquad i = 1, 2, 3, \]
as
\[ \mathbf{r}_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad \mathbf{r}_2 = \begin{pmatrix} -1 \\ -1 \\ 2 \end{pmatrix}, \qquad \mathbf{r}_3 = \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}. \]
Therefore the matrix \(C\) may be written as
\[ C = \begin{pmatrix} 1 & -1 & 1 \\ 1 & -1 & -1 \\ 1 & 2 & 0 \end{pmatrix}. \]
The \(n\)-step transition probability matrix is therefore obtained as
\[ P^n = CD^nC^{-1} = \frac{1}{6}\begin{pmatrix} 1 & -1 & 1 \\ 1 & -1 & -1 \\ 1 & 2 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & (1/4)^n & 0 \\ 0 & 0 & (-1/4)^n \end{pmatrix}\begin{pmatrix} 2 & 2 & 2 \\ -1 & -1 & 2 \\ 3 & -3 & 0 \end{pmatrix}. \]
As \(n \to \infty\), we find that
\[ P^n \to \frac{1}{6}\begin{pmatrix} 1 & -1 & 1 \\ 1 & -1 & -1 \\ 1 & 2 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 2 & 2 & 2 \\ -1 & -1 & 2 \\ 3 & -3 & 0 \end{pmatrix} = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \end{pmatrix}. \]
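The eigenvalues and the limiting matrix can be checked numerically (a sketch):

```python
import numpy as np

P = np.array([[0.25, 0.50, 0.25],
              [0.50, 0.25, 0.25],
              [0.25, 0.25, 0.50]])

print(np.round(np.linalg.eigvals(P), 4))          # 1.0, 0.25, -0.25 (in some order)
print(np.round(np.linalg.matrix_power(P, 30), 4)) # approaches all entries 1/3
```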
Suppose now that \(P\) is the transition matrix of a 3-state Markov chain, and that the initial probability distribution is \(\mathbf{p}^{(0)}\). Then the probability distribution after \(n\) steps is
\[ \mathbf{p}^{(n)} = \mathbf{p}^{(0)} P^n. \]
The invariant probability distribution \(\mathbf{p}\) is
\[ \mathbf{p} = \lim_{n\to\infty} \mathbf{p}^{(n)} = \mathbf{p}^{(0)} \lim_{n\to\infty} P^n = \begin{pmatrix} 1/3 & 1/3 & 1/3 \end{pmatrix}. \]
The vector \(\mathbf{p}\) gives the long-term probability distribution across the three states. In other words, if a snapshot of the system is taken at a sufficiently large time \(n\), then the system is (in this example) equally likely to lie in each of the states, independently of the initial distribution \(\mathbf{p}^{(0)}\).
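Equivalently, the invariant distribution is the left eigenvector of \(P\) for eigenvalue 1, normalized to sum to 1; a sketch:

```python
import numpy as np

P = np.array([[0.25, 0.50, 0.25],
              [0.50, 0.25, 0.25],
              [0.25, 0.25, 0.50]])

eigvals, vecs = np.linalg.eig(P.T)      # left eigenvectors of P
k = np.argmin(np.abs(eigvals - 1.0))    # pick the eigenvalue closest to 1
p = np.real(vecs[:, k])
p /= p.sum()                            # normalize into a distribution
print(p)                                # [1/3, 1/3, 1/3]
```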