3 - WQU - 622 CTSP - M3 - CompiledContent
Revised: 09/09/2019
Module 3 introduces the semimartingale as an extension of the Ito processes discussed in the
previous module. The module begins by using an arbitrary square integrable martingale as an
integrator of the stochastic integral to replace the Brownian motion, which results in a more
general definition of stochastic processes. The module continues by introducing the concept of
localization and local martingales and concludes by providing the solution of several problem
sets on semimartingales. Note that the Ito process presented in Module 2 is a type of
semimartingale; therefore, most of the Ito process properties and operations introduced in
Module 2 can be extended to semimartingales, as demonstrated through the solution of the
problem sets provided at the end of this module.
The first step in the extension of the stochastic integral is to replace Brownian motion
𝑊 as an integrator with an arbitrary square integrable martingale 𝑀. This step is the
easiest, and the presentation is very similar to the Brownian case.
We will continue to work on a fixed filtered probability space (Ω, ℱ, 𝔽, ℙ), where 𝔽 = {ℱₜ : 0 ≤ 𝑡 ≤ 𝑇} satisfies the usual conditions.
Let 𝑀 be a square integrable martingale null at 0. The last condition (𝑀₀ = 0) is not really a restriction, since we will always define ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ to be ∫₀ᵗ 𝜑ₛ 𝑑(𝑀ₛ − 𝑀₀) if 𝑀₀ ≠ 0.
A simple process is a stochastic process 𝜑 of the form
𝜑ₜ = ∑ₖ₌₁ⁿ 𝐻ₖ 𝐼(τₖ₋₁, τₖ](𝑡),
where 0 = τ₀ < τ₁ < ⋯ < τₙ are stopping times and each 𝐻ₖ is bounded and ℱτₖ₋₁-measurable.
For a simple process 𝜑, we define the stochastic integral of 𝜑 with respect to 𝑀 as the stochastic process (𝜑⦁𝑀) = {(𝜑⦁𝑀)ₜ : 0 ≤ 𝑡 ≤ 𝑇} defined by
(𝜑⦁𝑀)ₜ = ∑ₖ₌₁ⁿ 𝐻ₖ(𝑀τₖ∧ₜ − 𝑀τₖ₋₁∧ₜ).
As in the Brownian case, we denote by 𝐿²(𝑀) the space of progressive processes 𝜑 with ‖𝜑‖_𝑀 < ∞, where
‖𝜑‖_𝑀 ≔ (𝔼(∫₀ᵀ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ))^{1/2}.
It can be shown again that the set 𝕊 of simple processes satisfies 𝕊 ⊆ 𝐿²(𝑀), and that for each 𝜑 ∈ 𝐿²(𝑀) there exists a sequence (𝜑ₙ) in 𝕊 such that ‖𝜑ₙ − 𝜑‖_𝑀 → 0 as 𝑛 → ∞. We then define (𝜑⦁𝑀) as an appropriate limit of the sequence (𝜑ₙ⦁𝑀), and this definition is independent of the chosen sequence. For more details, see On Square Integrable Martingales by Kunita and Watanabe (1967).
We will also write (𝜑⦁𝑀)ₜ as ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ.
The stochastic integral satisfies the following (for 𝑀 a square integrable martingale and
𝜑 ∈ 𝐿2 (𝑀)):
1. 𝜑 ↦ (𝜑⦁𝑀) is linear.
2. (𝜑⦁𝑀) is a square integrable martingale with
⟨(𝜑⦁𝑀)⟩ₜ = (𝜑²⦁⟨𝑀⟩)ₜ, i.e. ⟨∫₀^· 𝜑ₛ 𝑑𝑀ₛ⟩ₜ = ∫₀ᵗ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ.
3. For any square integrable martingale 𝑁,
⟨∫₀^· 𝜑ₛ 𝑑𝑀ₛ, 𝑁⟩ₜ = ∫₀ᵗ 𝜑ₛ 𝑑⟨𝑀, 𝑁⟩ₛ.
We will see how to extend the definition to integrands beyond 𝐿2 (𝑀) in the next
section. Unfortunately, we lose some of the above properties of the integral for these
general integrands.
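At 𝑡 = 𝑇, property 2 specializes to the Ito isometry 𝔼((𝜑⦁𝑀)ₜ²) = 𝔼∫₀ᵀ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ. The following is a minimal numerical sketch of that identity, not taken from the notes: it assumes NumPy, takes 𝑀 = 𝑊 (so ⟨𝑀⟩ₜ = 𝑡), and uses an arbitrary simple integrand.

```python
import numpy as np

# Monte Carlo check of the Ito isometry E[(φ•W)_T^2] = E[∫_0^T φ_s^2 d⟨W⟩_s].
# Illustration only: M = W is Brownian motion and φ is the simple process
# equal to H_1 = 2 on (0, T/2] and 0 afterwards.
rng = np.random.default_rng(0)
T, n_paths, n_steps = 1.0, 50_000, 100
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
phi = np.zeros(n_steps)
phi[: n_steps // 2] = 2.0                                   # simple integrand

stoch_int = (phi * dW).sum(axis=1)     # (φ•W)_T ≈ Σ φ_{t_k} ΔW_k
lhs = (stoch_int ** 2).mean()          # estimates E[(φ•W)_T^2]
rhs = (phi ** 2).sum() * dt            # ∫_0^T φ_s^2 ds = 4·(T/2) = 2
print(lhs, rhs)
```

The two printed values agree up to Monte Carlo error, and the sample mean of the integral itself is near 0, matching the martingale property.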
We end this section by mentioning that the same construction can be carried out
verbatim using the quadratic variation [𝑀] in place of 〈𝑀〉.
Hi, in this video we extend the theory of stochastic integration to the case where the
integrator is a square integrable martingale.
That is, we want to make sense of integrals of the form ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ.
We are going to start with what is called a simple process, which is a stochastic process
that can be written like this:
𝜑ₜ = ∑ᵢ₌₁ⁿ 𝐻ᵢ 𝐼(τᵢ₋₁, τᵢ](𝑡),
where the τᵢ's are stopping times with 0 = τ₀ < τ₁ < ⋯ < τₙ = 𝑇.
What we see above is the definition of the stochastic integral, which is similar to the
case of a Brownian motion. This means that it also satisfies similar properties to the
case of a Brownian motion.
1. 𝔼(∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ) = 0.
2. The stochastic process {∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ : 0 ≤ 𝑡 ≤ 𝑇} is a square integrable martingale with
⟨∫₀^· 𝜑ₛ 𝑑𝑀ₛ⟩ₜ = ∫₀ᵗ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ.
3. 𝔼((∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ)²) is equal to the expected value of the integral of the square of 𝜑 against 𝑑⟨𝑀⟩:
𝔼((∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ)²) = 𝔼(∫₀ᵗ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ).
These properties lead to the norm
‖𝜑‖_𝑀 = (𝐸 ∫₀ᵀ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ)^{1/2},
and 𝐿²(𝑀) is the space of progressive processes for which this norm is finite.
Once we have the above, we can show that for every element of 𝐿2 (𝑀), there exists a
sequence of simple processes that we are going to denote by (𝜑𝑛 ), such that 𝜑𝑛
converges to 𝜑 in the following sense:
‖𝜑𝑛 − 𝜑‖𝑀 → 0 𝑎𝑠 𝑛 → ∞.
In that case, we define the stochastic integral of 𝜑 to be equal to the limit as 𝑛 tends to
infinity of the stochastic integrals of 𝜑𝑛 , where this limit is taken in 𝐿2 . Written in full:
∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ ≔ lim_{𝑛→∞} ∫₀ᵗ (𝜑ₙ)ₛ 𝑑𝑀ₛ.
This limit is independent of the sequence 𝜑𝑛 that we have chosen and that completely
defines the stochastic integral with respect to a square integrable martingale.
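This limiting procedure can be visualized numerically. The sketch below (illustrative, assuming NumPy, with 𝑀 = 𝑊) approximates 𝜑 = 𝑊 by piecewise-constant simple processes on finer and finer grids; the simple integrals converge in 𝐿² to the known closed form ∫₀ᵀ 𝑊ₛ 𝑑𝑊ₛ = (𝑊ₜ² − 𝑇)/2.

```python
import numpy as np

# L2 convergence of simple-process approximations (φ_n • W)_T → (W • W)_T.
rng = np.random.default_rng(1)
T, n_paths = 1.0, 50_000

def l2_error(n_steps):
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # W at left endpoints
    simple_int = (W_left * dW).sum(axis=1)   # Σ W_{t_k} ΔW_k, a simple-process integral
    exact = (W[:, -1] ** 2 - T) / 2          # known value of ∫_0^T W_s dW_s
    return np.sqrt(np.mean((simple_int - exact) ** 2))

errors = [l2_error(n) for n in (10, 40, 160)]
print(errors)   # the L2 error shrinks as the partition is refined
```

The error decays like 1/√n, consistent with the ‖𝜑ₙ − 𝜑‖_𝑀 → 0 convergence used in the construction.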
Now that we have defined a stochastic integral with respect to a square integrable martingale, in the next video we will look at localization and local martingales.
Let 𝑀 be a local martingale and (τₙ) be a localizing sequence; then the sequence (τₙ ∧ 𝑛) is also a localizing sequence. Indeed, since 𝑀ₜ^{τₙ∧𝑛} = 𝑀ₙ∧ₜ^{τₙ}, we have (for 𝑠 < 𝑡)
𝔼(𝑀ₜ^{τₙ∧𝑛} | ℱₛ) = 𝔼(𝑀ₙ∧ₜ^{τₙ} | ℱₛ) = 𝑀ₙ∧ₛ^{τₙ} = 𝑀ₛ^{τₙ∧𝑛}
by the martingale property of 𝑀ₙ∧·^{τₙ} for each fixed 𝑛 (since (τₙ) is a localizing sequence).
Recall from the definition of the stochastic integral with respect to a Brownian motion
that if 𝜑 ∈ 𝐿²(𝑊), then (𝜑⦁𝑊) is a square integrable martingale. We then extended the
definition of the integral to progressive processes 𝜑 such that
∫₀ᵗ 𝜑ₛ² 𝑑𝑠 < ∞ ∀𝑡 a.s.,
but mentioned that the stochastic integral for such processes is no longer a martingale
in general. We now show that it is always a local martingale though.
So let 𝜑 be progressive with
∫₀ᵗ 𝜑ₛ² 𝑑𝑠 < ∞ ∀𝑡 a.s.,
and define
τₙ ≔ inf{𝑡 ≥ 0: ∫₀ᵗ 𝜑ₛ² 𝑑𝑠 > 𝑛}.
Then the stopped process 𝜑τ𝑛 belongs to 𝐿2 (𝑊), hence the stochastic integral is a local
martingale.
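The stopping times τₙ can be computed path by path. The sketch below (an illustration, assuming NumPy; the integrand 𝜑 is an arbitrary choice, not one from the notes) discretizes one sample path and reads off τₙ as the first time the accumulated ∫₀ᵗ 𝜑ₛ² 𝑑𝑠 exceeds the level 𝑛.

```python
import numpy as np

# Discrete version of the localizing sequence τ_n = inf{t : ∫_0^t φ_s^2 ds > n}.
rng = np.random.default_rng(2)
T, n_steps = 10.0, 100_000
dt = T / n_steps
t = np.arange(1, n_steps + 1) * dt
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))  # one Brownian path
phi = np.exp(W)                                       # illustrative integrand
cum = np.cumsum(phi ** 2) * dt                        # ∫_0^t φ_s^2 ds along the path

def tau(n):
    """First grid time the accumulated ∫ φ^2 ds exceeds n (T if it never does)."""
    hit = cum > n
    return t[np.argmax(hit)] if hit.any() else T

taus = [tau(n) for n in (1, 5, 25)]
print(taus)   # non-decreasing in n: a larger budget is exhausted later
```

The sequence is non-decreasing in 𝑛 by construction, which is exactly what makes (τₙ) a localizing sequence.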
There are several examples that show that the stochastic integral can be a strict local
martingale – i.e. a local martingale that is not a martingale. One is given by the solution
to the following SDE:
for α > 1.
The following gives a necessary and sufficient condition for a local martingale to be a
martingale:
Now suppose 𝑀 is a locally square integrable local martingale. That is, there exists a localizing sequence (τₙ) such that 𝑀^{τₙ} is a square integrable martingale. Then for each 𝑛, the quadratic variation ⟨𝑀^{τₙ}⟩ is well-defined and unique. Now define the process ⟨𝑀⟩ by setting
⟨𝑀⟩ₜ ≔ ⟨𝑀^{τₙ}⟩ₜ for 𝑡 ≤ τₙ.
This process is unambiguously defined since ⟨𝑀^{τₙ₊₁}⟩^{τₙ} = ⟨𝑀^{τₙ}⟩ for every 𝑛. We will also
call it the (predictable) quadratic variation of the locally square integrable local
martingale 𝑀. The covariation process and orthogonality are defined similarly.
A typical example of a local martingale that is locally square integrable is a continuous
local martingale. So, for the rest of the section we focus on continuous local
martingales.
For a continuous local martingale 𝑀, define
𝐿²_loc(𝑀) ≔ {𝜑 progressive: ∫₀ᵗ 𝜑ₛ² 𝑑⟨𝑀⟩ₛ < ∞ ∀𝑡 ≥ 0}.
Theorem 2.2. Let 𝑀 be a continuous local martingale and 𝜑 ∈ 𝐿²_loc(𝑀). Then there exists a unique continuous local martingale (𝜑⦁𝑀) such that
⟨(𝜑⦁𝑀), 𝑁⟩ = (𝜑⦁⟨𝑀, 𝑁⟩)
for any continuous local martingale 𝑁. The process (𝜑⦁𝑀) is called the stochastic integral of 𝜑 with respect to 𝑀, and we will sometimes write it as
(𝜑⦁𝑀)ₜ = ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ.
Hi, in this video we introduce the notion of localization and we use it to extend the
theory of stochastic integration.
As a famous example, if we take ℰ to be the set of all martingales, then ℰ_loc, the set of all stochastic processes that are locally in ℰ, coincides with the set of all local martingales:
ℰ = 𝑀, ℰ_loc = 𝑀_loc.
A popular example of a local martingale that we will see many times in this course arises when 𝑊 is a Brownian motion and 𝜑 is progressive, with the additional condition that ∫₀ᵗ 𝜑ₛ² 𝑑𝑠 < ∞ for all 𝑡, almost surely. If this condition is satisfied, then the stochastic integral ∫₀ᵗ 𝜑ₛ 𝑑𝑊ₛ is well-defined. However, as we mentioned in the last module, if 𝜑 ∉ 𝐿²(𝑊), then ∫₀ᵗ 𝜑ₛ 𝑑𝑊ₛ need not be a martingale. We will now show that ∫₀ᵗ 𝜑ₛ 𝑑𝑊ₛ is always a local martingale.
The localizing sequence (τₙ) can be taken to be the first time the accumulated quantity ∫₀ᵗ 𝜑ₛ² 𝑑𝑠 reaches 𝑛. Written in full:
τₙ ≔ inf{𝑡 ≥ 0: ∫₀ᵗ 𝜑ₛ² 𝑑𝑠 ≥ 𝑛}.
Now, the stopped integrand 𝜑^{τₙ} satisfies ∫₀ᵗ (𝜑^{τₙ})ₛ² 𝑑𝑠 ≤ 𝑛, and therefore belongs to 𝐿²(𝑊). This means that the stochastic integral ∫₀ᵗ 𝜑ₛ^{τₙ} 𝑑𝑊ₛ is a square integrable martingale for each 𝑛. Moreover, ∫₀ᵗ 𝜑ₛ^{τₙ} 𝑑𝑊ₛ is equal to the stopped process (∫₀^· 𝜑ₛ 𝑑𝑊ₛ)ₜ^{τₙ}; that is, stopping the integrand is equivalent to stopping the integral process itself. Therefore, the integral process is a local martingale, because when we stop it we get a martingale.
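The identity used here, that integrating the integrand killed after τ is the same as stopping the integral process at τ, holds exactly on a discrete grid. A small illustrative check (assuming NumPy; the increments, integrand, and stopping index are arbitrary):

```python
import numpy as np

# Discrete-grid check: ∫ φ 1_{[0,τ]} dW equals the integral process stopped at τ.
rng = np.random.default_rng(6)
n_steps, k_tau = 500, 200                # k_tau plays the role of τ on the grid
dW = rng.normal(0.0, 0.05, n_steps)
phi = rng.normal(0.0, 1.0, n_steps)

alive = np.arange(n_steps) < k_tau
int_of_killed = np.cumsum(np.where(alive, phi, 0.0) * dW)     # integrand killed at τ
full_int = np.cumsum(phi * dW)                                # ∫ φ dW
stopped_int = np.where(alive, full_int, full_int[k_tau - 1])  # (∫ φ dW)^τ

print(np.max(np.abs(int_of_killed - stopped_int)))   # identically 0 on the grid
```

The two paths coincide exactly: after τ the killed integrand contributes nothing, which freezes the running sum at its value at τ.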
Finally, this allows us to extend the theory of stochastic integration to the case where we have ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ with 𝜑 no longer belonging to 𝐿²(𝑀) and with the martingale 𝑀 not necessarily square integrable. In fact, we can extend it even further, to the case where 𝑀 is just a local martingale, via localization. Further details are covered in the notes.
Now that we've covered localization, in the next video, we are going to
look at semimartingales.
In this section, we finally extend the stochastic integral to semimartingales. Luckily, all
the hard work has been covered in previous sections, so this last step will be relatively
straightforward.
A semimartingale is a stochastic process 𝑋 that can be decomposed as
𝑋 = 𝑋₀ + 𝑀 + 𝐴,
where 𝑀 is a local martingale and 𝐴 is a finite variation process, both null at zero. Note that this decomposition of 𝑋 is in general not unique, since there are local martingales of finite variation. It is unique, however, when 𝐴 is also predictable, and in that case 𝑋 is called a special semimartingale.
For a predictable process 𝜑, we define the stochastic integral of 𝜑 with respect to 𝑋 as
∫₀ᵗ 𝜑ₛ 𝑑𝑋ₛ ≔ ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ + ∫₀ᵗ 𝜑ₛ 𝑑𝐴ₛ,
provided both integrals on the right-hand side exist. If both integrals exist, we will say that 𝜑 is 𝑋-integrable, and we will denote by 𝐿(𝑋) the set of all 𝑋-integrable predictable processes 𝜑. It is a known result that the space of semimartingales is closed under taking stochastic integrals.
We now move on to Ito's Lemma for continuous semimartingales. First, we extend the
definition of the predictable quadratic variation ⟨⋅, ⋅⟩ to continuous semimartingales. If
𝑋 = 𝑋0 + 𝑀 + 𝐴 and 𝑌 = 𝑌0 + 𝑁 + 𝐵 are continuous semimartingales, we define 〈𝑋, 𝑌〉: =
〈𝑀, 𝑁〉 and 〈𝑋〉: = 〈𝑋, 𝑋〉 = 〈𝑀〉. Note that for continuous semimartingales 𝑋 and 𝑌, 〈𝑋, 𝑌〉 =
[𝑋, 𝑌], where [𝑋, 𝑌] is the quadratic covariation of 𝑋 and 𝑌. We also have ⟨𝜑⦁𝑋, 𝜓⦁𝑌⟩ =
𝜑𝜓⦁⟨𝑋, 𝑌⟩ for 𝜑 ∈ 𝐿(𝑋) and 𝜓 ∈ 𝐿(𝑌).
Ito's Lemma then takes the following form: if 𝑓 is 𝐶² and 𝑌ₜ = 𝑓(𝑋ₜ¹, …, 𝑋ₜᵈ) for continuous semimartingales 𝑋¹, …, 𝑋ᵈ, then
𝑌ₜ = 𝑌₀ + ∑ᵢ₌₁ᵈ ∫₀ᵗ (∂𝑓/∂𝑥ᵢ) 𝑑𝑋ₛⁱ + ½ ∑ᵢ₌₁ᵈ ∑ⱼ₌₁ᵈ ∫₀ᵗ (∂²𝑓/∂𝑥ᵢ∂𝑥ⱼ) 𝑑⟨𝑋ⁱ, 𝑋ʲ⟩ₛ,
or, in differential form,
𝑑𝑌ₜ = ∑ᵢ₌₁ᵈ (∂𝑓/∂𝑥ᵢ) 𝑑𝑋ₜⁱ + ½ ∑ᵢ₌₁ᵈ ∑ⱼ₌₁ᵈ (∂²𝑓/∂𝑥ᵢ∂𝑥ⱼ) 𝑑⟨𝑋ⁱ, 𝑋ʲ⟩ₜ,
where the partial derivatives are evaluated at 𝑋ₛ and 𝑋ₜ respectively.
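Ito's Lemma can be sanity-checked numerically in the simplest case. The sketch below (an illustration, assuming NumPy) takes 𝑋 = 𝑊 (so ⟨𝑋⟩ₜ = 𝑡) and 𝑓(𝑥) = 𝑥², for which the formula reads 𝑊ₜ² = ∫₀ᵀ 2𝑊ₛ 𝑑𝑊ₛ + 𝑇.

```python
import numpy as np

# Discretized check of Ito's Lemma for f(x) = x^2 and X = W:
# W_T^2 = ∫_0^T 2 W_s dW_s + ½ ∫_0^T 2 d⟨W⟩_s = ∫_0^T 2 W_s dW_s + T.
rng = np.random.default_rng(3)
T, n_paths, n_steps = 1.0, 2_000, 2_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # W at left endpoints

lhs = W[:, -1] ** 2                                       # f(W_T) − f(W_0)
rhs = (2 * W_left * dW).sum(axis=1) + T                   # right-hand side of Ito's Lemma
rmse = np.sqrt(np.mean((lhs - rhs) ** 2))
print(rmse)   # small discretization error, shrinking as n_steps grows
```

The residual comes entirely from Σ(ΔW)² fluctuating around 𝑇, which is precisely the quadratic variation term that Ito's Lemma accounts for.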
Let 𝑋 be a continuous semimartingale. We want to define the stochastic exponential
of 𝑋, which is a continuous semimartingale 𝑌 satisfying
𝑑𝑌𝑡 = 𝑌𝑡 𝑑𝑋𝑡 , 𝑌0 = 1.
That is,
𝑡
𝑌𝑡 = 1 + ∫ 𝑌𝑠 𝑑𝑋𝑠 .
0
Assuming 𝑌 stays positive, applying Ito's Lemma to ln 𝑌ₜ gives
𝑑 ln 𝑌ₜ = (1/𝑌ₜ) 𝑑𝑌ₜ − (1/(2𝑌ₜ²)) 𝑑⟨𝑌⟩ₜ = 𝑑𝑋ₜ − ½ 𝑑⟨𝑋⟩ₜ.
Hence
ln 𝑌ₜ = 𝑋ₜ − ½⟨𝑋⟩ₜ,
that is,
𝑌ₜ = 𝑒^{𝑋ₜ − ½⟨𝑋⟩ₜ}.
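The closed form can be verified against a direct discretization of 𝑑𝑌ₜ = 𝑌ₜ 𝑑𝑋ₜ. A minimal sketch (assuming NumPy, and taking 𝑋 = 𝑊 with 𝑋₀ = 0, so the closed form is 𝑒^{𝑊ₜ − 𝑡/2}):

```python
import numpy as np

# Euler scheme for dY = Y dX, Y_0 = 1, with X = W, compared to the
# stochastic exponential Y_t = exp(X_t − ½⟨X⟩_t) = exp(W_t − t/2).
rng = np.random.default_rng(4)
T, n_steps = 1.0, 100_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), n_steps)

Y = 1.0
for dx in dW:          # Euler step: Y_{k+1} = Y_k + Y_k ΔX_k
    Y += Y * dx

closed_form = np.exp(dW.sum() - T / 2)   # e^{W_T − T/2}
print(Y, closed_form)                    # agree up to discretization error
```

The ½⟨𝑋⟩ correction is visible here: naively exponentiating 𝑊ₜ alone would overshoot the Euler solution by a factor of roughly 𝑒^{𝑡/2}.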
Now let 𝑋 = (𝑋¹, …, 𝑋ᵈ) be a 𝑑-dimensional semimartingale and 𝜑 = (𝜑¹, …, 𝜑ᵈ) a 𝑑-dimensional predictable process. Using the tools of this chapter, we can define the so-called component-wise stochastic integral of 𝜑 with respect to 𝑋 as
(comp)𝜑⦁𝑋 ≔ ∑ᵢ₌₁ᵈ 𝜑ⁱ⦁𝑋ⁱ,
which exists if and only if all the component integrals exist. It turns out that this
definition is not general enough for applications to mathematical finance. Indeed, let 𝑋̃
be a one-dimensional semimartingale and 𝜑̃ be a one-dimensional predictable process
such that 𝜑̃ ∉ 𝐿(𝑋̃). Define the 2-dimensional processes 𝑋 and 𝜑 as 𝑋 ≔ (𝑋̃, 𝑋̃) and 𝜑 ≔
(𝜑̃, −𝜑̃). Then the component-wise stochastic integral of 𝜑 with respect to 𝑋 does not exist, since neither component integral exists. On the other hand, however, we expect the integral 𝜑⦁𝑋 to be equal to 0 due to the structure of the processes. This is one way in which the component-wise stochastic integral is not sufficient.
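The cancellation behind this example can be imitated on a discrete grid. In the sketch below (illustrative, assuming NumPy), each component sum is made very large while the combined sum is exactly zero, which is why one expects 𝜑⦁𝑋 = 0 from a good vector integral.

```python
import numpy as np

# Discrete illustration of X = (X̃, X̃), φ = (φ̃, −φ̃): the component sums are
# huge and opposite, but the combined integrand is identically zero.
rng = np.random.default_rng(5)
n_steps = 1_000
dX = rng.normal(0.0, 0.03, n_steps)          # increments of X̃
phi = rng.normal(0.0, 1.0, n_steps) * 1e6    # a very large integrand φ̃

comp1 = (phi * dX).sum()                     #  φ̃ • X̃ : large
comp2 = (-phi * dX).sum()                    # −φ̃ • X̃ : large, opposite sign
combined = ((phi - phi) * dX).sum()          # (φ¹ + φ²) • X̃ = 0 exactly

print(comp1, comp2, combined)
```

In the continuous-time setting the component integrals fail to exist at all, so no cancellation can be performed component by component; the vector stochastic integral is designed to exploit exactly this structure.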
The right generalization for vector-valued processes is achieved by defining the so-
called vector stochastic integral of 𝜑 with respect to 𝑋. This is a generalization of the
component-wise integral since when the latter exists the former will also exist and the
two integrals are the same. The details of its construction, together with some applications to mathematical finance, can be found in Vector Stochastic Integrals and The Fundamental Theorems of Asset Pricing by A.N. Shiryaev and A.S. Cherny (2002).
Hi, in this video we look at semimartingales.
𝑋 = 𝑋0 + 𝑀 + 𝐴,
where 𝑋₀ is of course the starting point of the stochastic process, 𝑀 is a continuous local martingale, and 𝐴 is a continuous finite variation process, meaning that the sample paths of 𝐴 have finite variation.
We then define the stochastic integral of 𝜑 with respect to 𝑋 as
∫₀ᵗ 𝜑ₛ 𝑑𝑋ₛ ≔ ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ + ∫₀ᵗ 𝜑ₛ 𝑑𝐴ₛ.
This integral will not exist for an arbitrary 𝜑, so we need conditions ensuring that both ∫₀ᵗ 𝜑ₛ 𝑑𝑀ₛ and ∫₀ᵗ 𝜑ₛ 𝑑𝐴ₛ exist. Both conditions are satisfied if we assume that 𝜑 is locally bounded and predictable. However, there are cases where the stochastic integral exists even though 𝜑 is not locally bounded.
We are going to denote by 𝐿(𝑋) the set of all processes 𝜑, such that 𝜑 is 𝑋-integrable in
the sense that the stochastic integral exists.
Now that we have introduced the stochastic integral, we can move on to Ito's Lemma
for continuous semimartingales.
Ito's Lemma says that 𝑌ₜ is also a semimartingale, and it gives us the stochastic differential of 𝑌ₜ in terms of the partial derivatives of the function 𝑓 as follows:
𝑑𝑌ₜ = ∑ᵢ₌₁ᵈ (∂𝑓/∂𝑥ᵢ) 𝑑𝑋ₜⁱ + ½ ∑ᵢ ∑ⱼ (∂²𝑓/∂𝑥ᵢ∂𝑥ⱼ) 𝑑[𝑋ⁱ, 𝑋ʲ]ₜ.
That brings us to the end of this module. In the next module, we are going to look at
Continuous Trading.
Consider the processes 𝑋 and 𝑌 satisfying the following SDEs:
Solution:
In the lecture notes, we have extended the definition of the predictable quadratic
variation ⟨⋅, ⋅⟩ to continuous semimartingales. If 𝑋 = 𝑋0 + 𝑀 + 𝐴 and 𝑌 = 𝑌0 + 𝑁 + 𝐵 are
continuous semimartingales, we define 〈𝑋, 𝑌〉: = 〈𝑀, 𝑁〉 and 〈𝑋〉: = 〈𝑋, 𝑋〉 = 〈𝑀〉. Therefore,
we can apply it to our problem as follows:
𝑑⟨𝑋, 𝑌⟩ₜ = 𝑑⟨𝑀, 𝑁⟩ₜ = (𝑌ₜ𝑋ₜ 𝑑𝑊ₜ¹ + 𝑋ₜ 𝑑𝑊ₜ²)(5 𝑑𝑊ₜ¹ + 𝑋ₜ 𝑑𝑊ₜ²) = (5𝑌ₜ𝑋ₜ + 𝑋ₜ²) 𝑑𝑡.
Hence
⟨𝑋, 𝑌⟩ₜ = ∫₀ᵗ (5𝑌ₛ𝑋ₛ + 𝑋ₛ²) 𝑑𝑠.
Consider the process 𝑋 satisfying the following SDE:
Find ⟨𝑋⟩𝑡 .
Solution:
In the lecture notes, we have extended the definition of the predictable quadratic
variation ⟨⋅, ⋅⟩ to continuous semimartingales. If 𝑋 = 𝑋0 + 𝑀 + 𝐴 and 𝑌 = 𝑌0 + 𝑁 + 𝐵 are
continuous semimartingales, we define 〈𝑋, 𝑌〉: = 〈𝑀, 𝑁〉 and 〈𝑋〉: = 〈𝑋, 𝑋〉 = 〈𝑀〉. Therefore,
we can apply it to our problem as follows:
⟨𝑋⟩ₜ = ∫₀ᵗ 𝑋ₛ² 𝑠⁴ 𝑑𝑠.
Find ln ℰ (𝑋)𝑡 .
Solution:
ln ℰ(𝑋)ₜ = 𝑋ₜ − ½⟨𝑋⟩ₜ.
Here
⟨𝑋⟩ₜ = ∫₀ᵗ 4² 𝑑𝑠 = 16𝑡,
so
ln ℰ(𝑋)ₜ = 𝑋ₜ − ½⟨𝑋⟩ₜ = 𝑋ₜ − ½ · 16𝑡 = 𝑋ₜ − 8𝑡.
Solution: 𝑋𝑡 − 8𝑡.
Find ln ℰ (𝑋)𝑡 .
Solution:
ln ℰ(𝑋)ₜ = 𝑋ₜ − ½⟨𝑋⟩ₜ.
Here
⟨𝑋⟩ₜ = ∫₀ᵗ 10𝑋ₛ² 𝑑𝑠.
Finally,
ln ℰ(𝑋)ₜ = 𝑋ₜ − ½⟨𝑋⟩ₜ = 𝑋ₜ − ½ ∫₀ᵗ 10𝑋ₛ² 𝑑𝑠 = 𝑋ₜ − 5 ∫₀ᵗ 𝑋ₛ² 𝑑𝑠.
Solution:
In the lecture notes, we have extended the definition of the predictable quadratic
variation ⟨⋅, ⋅⟩ to continuous semimartingales. If 𝑋 = 𝑋0 + 𝑀 + 𝐴 and 𝑌 = 𝑌0 + 𝑁 + 𝐵 are
continuous semimartingales, we define ⟨𝑋, 𝑌⟩ ≔ ⟨𝑀, 𝑁⟩ and ⟨𝑋⟩ ≔ ⟨𝑋, 𝑋⟩ = ⟨𝑀⟩. Therefore,
we can apply it to our problem as follows:
𝑑⟨𝑋, 𝑌⟩ₜ = 𝑑⟨𝑀, 𝑁⟩ₜ = (4𝑋ₜ 𝑑𝑊ₜ¹ + 3𝑋ₜ 𝑑𝑊ₜ²)(2 𝑑𝑊ₜ¹) = 8𝑋ₜ 𝑑𝑡.
Hence
⟨𝑋, 𝑌⟩ₜ = ∫₀ᵗ 8𝑋ₛ 𝑑𝑠.
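This covariation calculation can be checked by simulation. The problem's full SDEs are not reproduced above, so the sketch below assumes driftless dynamics with the martingale parts shown, 𝑑𝑋 = 4𝑋 𝑑𝑊¹ + 3𝑋 𝑑𝑊² and 𝑑𝑌 = 2 𝑑𝑊¹ with 𝑋₀ = 1 (an assumption for illustration, with NumPy). The discrete covariation Σ Δ𝑋 Δ𝑌 should then match ∫₀ᵀ 8𝑋ₛ 𝑑𝑠, whose expectation is 8𝑇 since 𝑋 is a martingale.

```python
import numpy as np

# Monte Carlo check that Σ ΔX ΔY ≈ ∫_0^T 8 X_s ds for the dynamics above.
rng = np.random.default_rng(7)
T, n_steps, n_paths = 0.01, 200, 20_000
dt = T / n_steps

X = np.ones(n_paths)
discrete_cov = np.zeros(n_paths)
integral = np.zeros(n_paths)
for _ in range(n_steps):                 # Euler scheme, vectorized over paths
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dX = 4 * X * dW1 + 3 * X * dW2
    dY = 2 * dW1
    discrete_cov += dX * dY              # running Σ ΔX ΔY
    integral += 8 * X * dt               # running ∫ 8 X_s ds
    X += dX

print(discrete_cov.mean(), integral.mean(), 8 * T)
```

The 𝑑𝑊² term of 𝑋 drops out of the covariation because 𝑌 has no 𝑑𝑊² component, exactly as in the hand calculation.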
Kunita, H. and Watanabe, S. (1967). 'On Square Integrable Martingales'. Nagoya Mathematical Journal, 30.
Shiryaev, A.N. and Cherny, A.S. (2002). 'Vector Stochastic Integrals and The Fundamental Theorems of Asset Pricing'. Proceedings of the Steklov Institute of Mathematics, 237.