Ramos: Intro HMM
INTRODUCTION TO MARKOV MODELS
OUTLINE
Markov model
Hidden Markov model (HMM)
MARKOV CHAIN: WEATHER EXAMPLE
Design a Markov Chain to predict tomorrow's weather using information from the previous days.
$$P(q_1, \dots, q_n) = \prod_{i=1}^{n} P(q_i \mid q_{i-1})$$
Exercise 1: Given that today is Sunny, what’s the probability that
tomorrow is Sunny and the next day Rainy?
$$P(q_2, q_3 \mid q_1) = P(q_2 \mid q_1)\, P(q_3 \mid q_1, q_2)$$
$$= P(q_2 \mid q_1)\, P(q_3 \mid q_2)$$
$$= P(\text{Sunny} \mid \text{Sunny})\, P(\text{Rainy} \mid \text{Sunny})$$
$$= 0.8 \times 0.05 = 0.04$$
Exercise 2: Assume that yesterday's weather was Rainy and today's is Cloudy. What is the probability that tomorrow will be Sunny?
$$P(q_3 \mid q_1, q_2) = P(q_3 \mid q_2)$$
$$= P(\text{Sunny} \mid \text{Cloudy}) = 0.2$$
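Both exercises can be checked numerically. A minimal sketch of the weather chain follows; only the three transition values used above (0.8, 0.05 and 0.2) come from the slides, and the remaining matrix entries are assumed for illustration.

```python
# Weather Markov chain: T[today][tomorrow] = P(tomorrow | today).
# Only Sunny->Sunny (0.8), Sunny->Rainy (0.05) and Cloudy->Sunny (0.2)
# appear in the slides; all other entries are illustrative assumptions.
T = {
    "Sunny":  {"Sunny": 0.8,  "Cloudy": 0.15, "Rainy": 0.05},
    "Cloudy": {"Sunny": 0.2,  "Cloudy": 0.5,  "Rainy": 0.3},
    "Rainy":  {"Sunny": 0.1,  "Cloudy": 0.3,  "Rainy": 0.6},
}

def sequence_prob(seq):
    """P(q_2, ..., q_n | q_1) = product of P(q_i | q_{i-1})."""
    p = 1.0
    for prev, nxt in zip(seq, seq[1:]):
        p *= T[prev][nxt]
    return p

# Exercise 1: today Sunny, tomorrow Sunny, next day Rainy
print(sequence_prob(["Sunny", "Sunny", "Rainy"]))  # ~0.04
# Exercise 2: by the Markov property only today (Cloudy) matters
print(T["Cloudy"]["Sunny"])  # 0.2
```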
WHAT IS A MARKOV MODEL?
A Markov Model is a stochastic model for temporal or sequential data, i.e., data that are ordered.
U = Umbrella
NU = Not Umbrella
Let's assume that $t$ days have passed. Therefore, we will have an observation sequence $O = \{o_1, \dots, o_t\}$, where $o_i \in \{\text{Umbrella}, \text{Not Umbrella}\}$. By Bayes' rule:
$$P(q_i \mid o_i) = \frac{P(o_i \mid q_i)\, P(q_i)}{P(o_i)}$$
For a sequence of length $t$:
$$P(q_1, \dots, q_t \mid o_1, \dots, o_t) = \frac{P(o_1, \dots, o_t \mid q_1, \dots, q_t)\, P(q_1, \dots, q_t)}{P(o_1, \dots, o_t)}$$
From the Markov property:
$$P(q_1, \dots, q_t) = \prod_{i=1}^{t} P(q_i \mid q_{i-1})$$
$$P(o_1, \dots, o_t \mid q_1, \dots, q_t) = \prod_{i=1}^{t} P(o_i \mid q_i)$$
Thus:
$$P(q_1, \dots, q_t \mid o_1, \dots, o_t) \propto \prod_{i=1}^{t} P(o_i \mid q_i)\, P(q_i \mid q_{i-1})$$
HMM Parameters:
• Transition probabilities $P(q_i \mid q_{i-1})$
• Emission probabilities $P(o_i \mid q_i)$
• Initial state probabilities $P(q_1)$
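The three parameter groups above combine to score a particular state path against an observation sequence. Here is a minimal sketch for the umbrella scenario; every probability value in it is an illustrative assumption, not a number from the slides.

```python
# Joint probability of a state path Q and observations O in an HMM:
# P(Q, O) = P(q_1) * b(o_1|q_1) * prod_i P(q_i|q_{i-1}) * P(o_i|q_i).
# All numbers below are assumed for illustration only.
init = {"Rainy": 0.5, "Sunny": 0.5}              # initial P(q_1)
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},  # P(q_i | q_{i-1})
         "Sunny": {"Rainy": 0.3, "Sunny": 0.7}}
emit = {"Rainy": {"U": 0.9, "NU": 0.1},          # P(o_i | q_i), U = Umbrella
        "Sunny": {"U": 0.2, "NU": 0.8}}

def joint_prob(states, obs):
    """Multiply initial, transition and emission terms along the path."""
    p = init[states[0]] * emit[states[0]][obs[0]]
    for i in range(1, len(states)):
        p *= trans[states[i - 1]][states[i]] * emit[states[i]][obs[i]]
    return p

print(joint_prob(["Rainy", "Rainy", "Sunny"], ["U", "U", "NU"]))  # ~0.068
```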
HMM PARAMETERS
An HMM is governed by the following parameters:
λ = {𝐴, 𝐵, 𝜋}
State-transition probability matrix 𝐴
Emission/Observation/State Conditional Output
probabilities 𝐵
Initial (prior) state probabilities 𝜋
The model has $N$ states, $S = \{s_1, \dots, s_N\}$.
State-transition probability matrix: $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i)$, the probability of moving from state $s_i$ to state $s_j$.
Emission probabilities: a state will generate an observation (output), but a decision must be made on how to model that output, i.e., as discrete or continuous.
$$b_i(v_k) = P(o_t = v_k \mid q_t = s_i), \qquad 1 \le k \le W$$
$$B = \begin{bmatrix} b_1(v_1) & b_1(v_2) & \cdots & b_1(v_W) \\ b_2(v_1) & b_2(v_2) & \cdots & b_2(v_W) \\ \vdots & \vdots & \ddots & \vdots \\ b_N(v_1) & b_N(v_2) & \cdots & b_N(v_W) \end{bmatrix}$$
Initial (prior) probabilities: these are the probabilities of starting the observation sequence in state $s_i$.
$$\pi_i = P(q_1 = s_i), \qquad 1 \le i \le N, \qquad \sum_{i=1}^{N} \pi_i = 1$$
$$\pi = \begin{bmatrix} \pi_1 \\ \pi_2 \\ \vdots \\ \pi_N \end{bmatrix}$$
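The stochastic constraints on $\lambda = \{A, B, \pi\}$ can be checked mechanically. A small sketch with an illustrative two-state, three-symbol model (all values assumed):

```python
# lambda = {A, B, pi} stored as row-stochastic tables. Constraints:
# each row of A sums to 1, each row of B sums to 1, and pi sums to 1.
# The values below are an assumed toy model, not from the slides.
A  = [[0.6, 0.4],
      [0.3, 0.7]]          # a_ij = P(q_{t+1} = s_j | q_t = s_i)
B  = [[0.5, 0.4, 0.1],
      [0.1, 0.3, 0.6]]     # b_i(v_k) = P(o_t = v_k | q_t = s_i)
pi = [0.8, 0.2]            # pi_i = P(q_1 = s_i)

def row_stochastic(rows):
    """True if every row sums to 1 (within floating-point tolerance)."""
    return all(abs(sum(r) - 1.0) < 1e-9 for r in rows)

assert row_stochastic(A) and row_stochastic(B)
assert abs(sum(pi) - 1.0) < 1e-9
```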
HMM EXAMPLE: COINS & DICE
http://www.mathworks.com/help/stats/hidden-markov-models-hmm.html
State 1: Red Die (6 sides, outcomes 1 to 6). State 2: Green Die (12 sides, outcomes 1 to 6).
$$P(H \mid \text{Red Coin}) = 0.9 \qquad P(T \mid \text{Red Coin}) = 0.1$$
$$P(H \mid \text{Green Coin}) = 0.95 \qquad P(T \mid \text{Green Coin}) = 0.05$$
$$A = \begin{bmatrix} 0.9 & 0.1 \\ 0.05 & 0.95 \end{bmatrix} \qquad \pi = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
http://www.mathworks.com/help/stats/hidden-markov-models-hmm.html
HMM EXAMPLE: COINS & DICE
$$b_1(o_t) = \left\{ \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6} \right\}$$
[Plot: probability of each Red Die outcome, uniform over 1 to 6]
$$b_2(o_t) = \left\{ \tfrac{7}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12} \right\}$$
[Plot: probability of each Green Die outcome, with outcome 1 at 7/12]
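Sampling from this model makes the two emission distributions $b_1$ and $b_2$ concrete. A sketch under one common convention (roll the current die, then flip that state's coin; Heads means stay, Tails means switch, matching $a_{11} = 0.9$ and $a_{22} = 0.95$):

```python
import random

# Simulate the coins-and-dice HMM: roll the current state's die, then
# flip its coin to decide whether to stay (Heads) or switch (Tails).
stay = [0.9, 0.95]                 # P(Heads) for the red and green coins
dice = [[1/6] * 6,                 # red die: uniform over outcomes 1..6
        [7/12] + [1/12] * 5]       # green die: "1" on seven of 12 faces

def simulate(steps, seed=0):
    rng = random.Random(seed)
    state, rolls = 0, []           # pi = [1, 0]: always start with red die
    for _ in range(steps):
        rolls.append(rng.choices(range(1, 7), weights=dice[state])[0])
        if rng.random() > stay[state]:   # Tails: switch to the other die
            state = 1 - state
    return rolls

print(simulate(10))
```

With many samples, the empirical outcome frequencies approach a mixture of $b_1$ and $b_2$ weighted by how much time the chain spends in each state.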
HMM EXAMPLE: COINS & DICE
State 1: Red Die (6 sides). State 2: Green Die (12 sides).
$$P(H \mid \text{Red Coin}) = 0.9 \qquad P(T \mid \text{Red Coin}) = 0.1 \qquad P(H \mid \text{Green Coin}) = 0.95 \qquad P(T \mid \text{Green Coin}) = 0.05$$
$$A = \begin{bmatrix} 0.9 & 0.1 \\ 0.05 & 0.95 \end{bmatrix} \qquad \pi = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad B = \begin{bmatrix} \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} & \tfrac{1}{6} \\ \tfrac{7}{12} & \tfrac{1}{12} & \tfrac{1}{12} & \tfrac{1}{12} & \tfrac{1}{12} & \tfrac{1}{12} \end{bmatrix}$$
http://www.mathworks.com/help/stats/hidden-markov-models-hmm.html
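With $A$, $B$ and $\pi$ fully specified, the probability of an observation sequence, $P(O \mid \lambda)$, can be computed with the forward algorithm (a standard HMM evaluation procedure, not shown in the slides). A sketch using exact fractions to avoid rounding:

```python
from fractions import Fraction as F

# Forward algorithm for the coins-and-dice HMM. A, pi and B are the
# slide's values, written as exact fractions.
A  = [[F(9, 10), F(1, 10)],
      [F(1, 20), F(19, 20)]]
pi = [F(1), F(0)]
B  = [[F(1, 6)] * 6,                  # red die: uniform over 1..6
      [F(7, 12)] + [F(1, 12)] * 5]    # green die: "1" with prob 7/12

def forward(obs):
    """alpha_t(j) = P(o_1..o_t, q_t = s_j); returns P(O) = sum_j alpha_T(j)."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][o]
                 for j in range(2)]
    return sum(alpha)

# Probability of observing the rolls 1, 1, 6 (outcomes are 0-indexed)
print(forward([0, 0, 5]))  # 277/57600, about 0.0048
```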
HMM TO CLASSIFY WRIST MOTIONS
RELATED TO EATING ACTIVITIES
273 Participants
Wrist Motion:
Rest 24 44 21 87
Utensiling 21 37 16 74
Bite 29 44 18 91
Drink 5 15 4 24
DATA SEQUENCE: Training Data
[Plot: state sequence of the training data over time; y-axis "States": rest, bite, utensiling, drink]
[Diagram: four-state HMM with GMM observation models, states including Utensiling and Drink; transition probabilities shown range from 0.00 to 0.50]
WHAT CAN WE DO NEXT?