Chapter 4

Migration by finite differences


In the last chapter we learned how to extrapolate wavefields down into the earth. The process proceeded simply, since it is just a multiplication in the frequency domain by exp[i k_z(ω, k_x) Δz]. Finite-difference techniques will be seen to be complicated. They will involve new approximations and new pitfalls. Why should we trouble ourselves to learn them? To begin with, many people find finite-difference methods more comprehensible. In (t, x, z)-space, there are no complex numbers, no complex exponentials, and no "magic" box called FFT.
The situation is analogous to the one encountered in ordinary frequency filtering. Frequency filtering can be done as a product in the frequency domain or a convolution in the time domain. With wave extrapolation there are products in both the temporal frequency ω-domain and the spatial frequency k_x-domain. The new ingredient is the two-dimensional (ω, k_x)-space, which replaces the old one-dimensional ω-space. Our question, why bother with finite differences?, is a two-dimensional form of an old question: after the discovery of the fast Fourier transform, why should anyone bother with time-domain filtering operations?
Our question will be asked many times and under many circumstances. Later we will have the axis of offset between the shot and geophone and the axis of midpoints between them. There again we will need to choose whether to work on these axes with finite differences or to use Fourier transformation. It is not an all-or-nothing proposition: for each axis separately either Fourier transform or convolution (finite difference) must be chosen.
The answer to our question is many-sided, just as geophysical objectives are many-sided. Most of the criteria for answering the question are already familiar from ordinary filter theory, where a filter can be made time-variable. Time-variable filters are useful in reflection seismology because the frequency content of echoes changes with time. An annoying aspect of time-variable filters is that they cannot be described by a simple product in the frequency domain. So when an application of time-variable filters comes along, the frequency domain is abandoned, or all kinds of contortions are made (stretching the time axis, for example) to try to make things appear time-invariant.
All the same considerations apply to the horizontal space axis x. On space axes, a new concern is the seismic velocity v. If it is space-variable, say v(x), then the operation of extrapolating wavefields upward and downward can no longer be expressed as a product in the k_x-domain. Wave-extrapolation procedures must abandon the spatial frequency domain and go to finite differences. The alternative again is all kinds of contortions (such as stretching the x-axis) to try to make things appear to be space-invariant.
Fourier methods are global. That is, the entire dataset must be in hand before processing can begin. Remote errors and truncations can have serious local effects. On the other hand, finite-difference methods are local. Data points are directly related only to their neighbors. Remote errors propagate slowly.
Some problems of the Fourier domain have just been summarized. The problems of the space domain will be shown in this chapter and chapter 9. Seismic data processing is a multidimensional task, and the different dimensions are often handled in different ways. But if you are sure you are content with the Fourier domain, then you can skip much of this chapter and jump directly to chapter 6, where you can learn about shot-to-geophone offset, stacking, and migration before stack.

4.1 WAVE-EXTRAPOLATION EQUATIONS


A wave-extrapolation equation is an expression for the derivative of a wavefield (usually in the depth z direction). When the wavefield and its derivative are known, extrapolation can proceed by various numerical representations of

    P(z + Δz)  =  P(z) + Δz dP/dz        (4.1)

Extrapolation is moving information from z to z + Δz, and what we need to do it is a way to find dP/dz. Two theoretical methods for finding dP/dz are the original transformation method and the newer dispersion-relation method.
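As a small numerical sketch (the values are illustrative, not from the text), equation (4.1) can be marched for a single Fourier component, for which dP/dz = i k_z P and the exact answer is the phase-shift extrapolator exp(i k_z Δz). The simple Euler step converges to the exact phase shift as Δz shrinks:

```python
import cmath

# Single Fourier component: dP/dz = i*kz*P, exact solution P(z) = P(0)*exp(i*kz*z).
# Illustrative, hypothetical values only.
kz = 0.5          # vertical wavenumber
z_total = 100.0   # total extrapolation depth

def extrapolate(nsteps):
    """March P downward with the step P(z + dz) = P(z) + dz*(i*kz)*P(z) of (4.1)."""
    dz = z_total / nsteps
    P = 1.0 + 0.0j
    for _ in range(nsteps):
        P = P + dz * (1j * kz) * P
    return P

exact = cmath.exp(1j * kz * z_total)
err_coarse = abs(extrapolate(1000) - exact)
err_fine = abs(extrapolate(10000) - exact)
print(err_coarse, err_fine)   # the finer march is much closer to the phase shift
```

The point is only that (4.1) is a recipe whose accuracy is controlled by the step size; the rest of the chapter is about better representations of dP/dz.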

4.1.1 Meet the parabolic wave equation


At the time the parabolic equation was introduced to petroleum prospecting (1969), it was well known that "wave theory doesn't work." At that time, petroleum prospectors analyzed seismic data with rays. The wave equation was not relevant to practical work. Wave equations were for university theoreticians. (Actually, wave theory did work for the surface waves of massive earthquakes, at scales 1000 times greater than in exploration.) Even for university workers, finite-difference solutions to the wave equation didn't work out very well. Computers being what they were, solutions looked more like "vibrations of a drumhead" than like "seismic waves in the earth."

The parabolic wave equation was originally introduced to speed finite-difference wave modeling. The following introduction to the parabolic wave equation is via my original transformation method.

The difficulty prior to 1969 came from an inappropriate assumption central to all then-existing seismic wave theory, namely, the horizontal-layering assumption. Ray tracing was the only way to escape this assumption, but ray tracing seemed to ignore waveform modeling. In petroleum exploration almost all wave theory further limited itself to vertical incidence. The road to success lay in expanding ambitions from vertical incidence to include a small angular bandwidth around vertical incidence. This was achieved by abandoning much known, but cumbersome, seismic theory.
A vertically downgoing plane wave is represented mathematically by the equation

    P(t, x, z)  =  P0 exp[-iω(t - z/v)]        (4.2)

In this expression, P0 is absolutely constant. A small departure from vertical incidence can be modeled by replacing the constant P0 with something, say, Q(x, z), which is not strictly constant but varies slowly.

    P(t, x, z)  =  Q(x, z) exp[-iω(t - z/v)]        (4.3)
Inserting (4.3) into the scalar wave equation P_xx + P_zz = P_tt/v² yields

    ∂²Q/∂x²  +  (iω/v + ∂/∂z)² Q  =  -(ω²/v²) Q

    ∂²Q/∂x²  +  (2iω/v) ∂Q/∂z  +  ∂²Q/∂z²  =  0        (4.4)
The wave equation has been reexpressed in terms of Q(x, z). So far no approximations have been made. To require the wavefield to be near to a plane wave, Q(x, z) must be near to a constant. The appropriate means (which caused some controversy when it was first introduced) is to drop the highest depth derivative of Q, namely, Q_zz. This leaves us with the parabolic wave equation

    ∂Q/∂z  =  - ( v/(2iω) ) ∂²Q/∂x²        (4.5)

When I first introduced equation (4.5) for use in seismology, I thought its most important property was this: for a wavefield close to a vertically propagating plane wave, the second x-derivative is small, hence the z-derivative is small. Thus, the finite-difference method should allow a very large Δz and thus be able to treat models more like the earth, and less like a drumhead. I soon realized that the parabolic wave equation is also just what is needed for seismic imaging, because you can insert it in an equation like (4.1). (Curiously, equation (4.5) also happens to be the Schroedinger equation of quantum mechanics.)

I called equation (4.5) the 15° equation. After using it for about a year I discovered a way to improve on it by estimating the dropped ∂zz term. Differentiate equation (4.5) with respect to z and substitute the result back into equation (4.4), getting

    ∂²Q/∂x²  +  (2iω/v) ∂Q/∂z  -  ( v/(2iω) ) ∂³Q/∂z∂x²  =  0        (4.6)
I named equation (4.6) the 45° migration equation. It is first order in ∂z, so it requires only a single surface boundary condition; however, downward continuation will require something more complicated than equation (4.1).

The above approach, the transformation approach, was and is very useful. But people were confused by the dropping and estimating of the ∂zz derivative, and a philosophically more pleasing approach was invented by Francis Muir: a way of getting equations to extrapolate waves at wider angles by fitting the dispersion relation of a semicircle by polynomial ratios.

4.1.2 Muir square-root expansion


Muir's method of finding wave extrapolators seeks polynomial-ratio approximations to a square-root dispersion relation. Then fractions are cleared and the approximate dispersion relation is inverse transformed into a differential equation.

Substitution of the plane wave exp(-iωt + i k_x x + i k_z z) into the two-dimensional scalar wave equation (1.18) yields the dispersion relation

    k_z² + k_x²  =  ω²/v²        (4.7)

Solve for k_z, selecting the positive square root (thus for the moment selecting downgoing waves).

    k_z  =  (ω/v) √( 1 - v² k_x² / ω² )        (4.8)
To inverse transform the z-axis we only need to recognize that i k_z corresponds to ∂/∂z. The resulting expression is a wavefield extrapolator, namely,

    ∂/∂z P(ω, k_x, z)  =  i (ω/v) √( 1 - v² k_x² / ω² ) P(ω, k_x, z)        (4.9)

Bringing equation (4.9) into the space domain is not simply a matter of substituting a second x-derivative for k_x². The problem is the meaning of the square root of a differential operator. The square root of a differential operator is not defined in undergraduate calculus courses, and there is no straightforward finite-difference representation. The square root becomes meaningful only when the square root is regarded as some kind of truncated series expansion. It will be shown in chapter 8 that the Taylor series is a poor choice. Francis Muir showed that my original 15° and 45° methods were just truncations of a continued-fraction expansion. To see this, define

    X  =  v k_x / ω        and        R  =  v k_z / ω        (4.10)

So equation (4.8) is more simply and abstractly written as

    R  =  √( 1 - X² )        (4.11)
which you recognize as meaning that cosine is the square root of one minus sine squared. The desired polynomial ratio of order n will be denoted R_n, and it will be determined by the recurrence

    R_{n+1}  =  1 - X² / (1 + R_n)        (4.12)

The recurrence is a guess that we verify by seeing what it converges to (if it converges). Set n = ∞ in (4.12) and solve:

    R_∞  =  1 - X² / (1 + R_∞)

    R_∞ (1 + R_∞)  =  1 + R_∞ - X²

    R_∞²  =  1 - X²        (4.13)
The square root of (4.13) gives the required expression (4.11). Geometrically, (4.13) says that the cosine squared of the incidence angle equals one minus the sine squared, and truncating the expansion leads to angle errors. Muir said, and you can verify, that his recurrence relationship formalizes what I was doing by re-estimating the ∂zz term. Although it is pleasing to think of large values of n, in real life only the low-order terms in the expansion are used. Table 4.1 shows the result of three Muir iterations beginning from R_0 = 1.

For various historical reasons, the equations in Table 4.1 are often referred to as the 5°, 15°, and 45° equations, respectively, the names giving a reasonable qualitative (but poor quantitative) guide to the range of angles that are adequately handled. A trade-off between complexity and accuracy frequently dictates choice of the 45° equation. It then turns out that a slightly wider range of angles can be accommodated if the recurrence is begun with something like R_0 = cos 45°. Figure 4.1 shows some plots.
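The recurrence (4.12) is easy to try numerically. A minimal sketch (illustrative values, not from the text) compares the successive truncations R_0 through R_3 with the exact R = √(1 - X²) at a 30° propagation angle, where X = sin θ and the exact R is cos θ:

```python
import math

def muir(n, X, R0=1.0):
    """Apply the Muir recurrence R_{k+1} = 1 - X**2/(1 + R_k), n times, from R0."""
    R = R0
    for _ in range(n):
        R = 1.0 - X * X / (1.0 + R)
    return R

theta = math.radians(30.0)
X = math.sin(theta)
exact = math.cos(theta)       # exact R = sqrt(1 - X**2)

# n = 0, 1, 2, 3 correspond to the 5, 15, 45, and 60 degree equations.
errors = [abs(muir(n, X) - exact) for n in range(4)]
print(errors)                 # each truncation is closer than the last
```

Each extra level of the continued fraction shrinks the error markedly, which is the quantitative content of the qualitative angle names above.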

4.1.3 Dispersion relations


Performing the substitutions of Table 4.1 into equation (4.10) gives dispersion relationships for comparison to the exact expression (4.8). These are shown in Table 4.2.

    5°     R_0  =  1

    15°    R_1  =  1 - X²/2

    45°    R_2  =  1 - X² / (2 - X²/2)

    60°    R_3  =  1 - X² / (2 - X² / (2 - X²/2))

Table 4.1: First four truncations of Muir's continued fraction expansion.

Figure 4.1: Dispersion relations of Table 4.2. The curve labeled 45°+ was constructed with R_0 = cos 45°. It fits exactly at 0° and 45°. omx-disper [NR]

    5°     k_z  =  ω/v

    15°    k_z  =  ω/v - v k_x²/(2ω)

    45°    k_z  =  ω/v - k_x² / ( 2ω/v - v k_x²/(2ω) )

Table 4.2: As displayed in Figure 4.1, the dispersion relations tend toward a semicircle.

4.1.4 Depth-variable velocity

Identification of i k_z with ∂/∂z converts the dispersion relations of Table 4.2 into the differential equations of Table 4.3.

    5°     ∂P/∂z  =  i (ω/v) P

    15°    ∂P/∂z  =  i ( ω/v - v k_x²/(2ω) ) P

    45°    ∂P/∂z  =  i ( ω/v - k_x² / ( 2ω/v - v k_x²/(2ω) ) ) P

Table 4.3: Extrapolation equations when velocity depends only on depth.

The differential equations in Table 4.3 were based on a dispersion relation that in turn was based on an assumption of constant velocity. So you might not expect that the equations have substantial validity or even great utility when the velocity is depth-variable, v = v(z). The actual limitations are better characterized by their inability, by themselves, to describe reflection.

Migration methods based on equation (4.9) or on Table 4.3 are called phase-shift methods.

4.1.5 Retardation (frequency domain)

It is often convenient to arrange the calculation of a wave to remove the effect of overall translation, thereby making the wave appear to "stand still." This subject, wave retardation, will be examined more thoroughly in chapter 5. Meanwhile, it is easy enough to introduce the time shift t0 of a vertically propagating wave in a hypothetical medium of velocity v̄(z), namely,

    t0  =  ∫0^z dz / v̄(z)        (4.14)

A time delay t0 in the time domain corresponds to multiplication by exp(iωt0) in the ω-domain. Thus, the wave pressure P is related to the time-shifted mathematical variable Q by

    P(z, ω)  =  Q(z, ω) exp( iω ∫0^z dz/v̄(z) )        (4.15)

which is a generalization of equation (4.3) to depth-variable velocity. (Equations (4.15) and (4.17) apply in both x- and k_x-space.) Differentiating with respect to z gives

    ∂P/∂z  =  (∂Q/∂z) exp( iω ∫0^z dz/v̄(z) )  +  Q(z, ω) (iω/v̄(z)) exp( iω ∫0^z dz/v̄(z) )        (4.16)

    ∂P/∂z  =  exp( iω ∫0^z dz/v̄(z) ) ( ∂/∂z + iω/v̄(z) ) Q        (4.17)

Next, substitute (4.15) and (4.17) into Table 4.3 to obtain the retarded equations in Table 4.4.

4.1.6 Lateral velocity variation


Having approximated the square root by a polynomial ratio, Table 4.3 or Table 4.4 can be inverse transformed from the horizontal wavenumber domain k_x to the horizontal space domain x by substituting ∂²/∂x² for (i k_x)². As before, the result has a wide range of validity for v = v(x, z), even though the derivation would not seem to permit this.

    5°        ∂Q/∂z  =  zero  +  iω ( 1/v - 1/v̄(z) ) Q

    15°       ∂Q/∂z  =  -i ( v k_x²/(2ω) ) Q  +  iω ( 1/v - 1/v̄(z) ) Q

    45°       ∂Q/∂z  =  -i ( k_x² / ( 2ω/v - v k_x²/(2ω) ) ) Q  +  iω ( 1/v - 1/v̄(z) ) Q

    general   ∂Q/∂z  =  diffraction  +  thin lens

Table 4.4: Retarded form of phase-shift equations.

4.1.7 Two velocities


You may wonder where the two velocities v(x, z) and v̄(z) came from. The first arises in the wave equation, and it must be x-variable if the model is x-variable. The second arises in a mathematical transformation, namely, equation (4.15), so it is purely a matter of definition. Experience shows that complications will abound if we try to make the defined velocity v̄ depend on x. But it is nice if the two velocities can be equal, so the term containing their difference drops out of the analysis. Thus ordinarily, v̄(z) is chosen to be some kind of horizontal average of v(x, z).

4.1.8 Splitting diffraction and lens terms


The customary numerical solution to the x-domain forms of the equations in Tables 4.3 and 4.4 is arrived at by splitting. That is, you march forward a small Δz-step alternately with the two extrapolators

    ∂Q/∂z  =  lens term        (4.18)

    ∂Q/∂z  =  diffraction term        (4.19)

    5°     ∂Q/∂z  =  zero

    15°    ∂Q/∂z  =  - ( v(x,z)/(2iω) ) ∂²Q/∂x²

    45°    [ 1 - ( v(x,z)/(2iω) )² ∂²/∂x² ] ∂Q/∂z  =  - ( v(x,z)/(2iω) ) ∂²Q/∂x²

Table 4.5: Diffraction equations for laterally variable media.

Formal justification of the splitting process is found in chapter 5. The first equation, called the lens equation, is solved analytically:

    Q(z2)  =  Q(z1) exp{ iω ∫_{z1}^{z2} ( 1/v(x,z) - 1/v̄(z) ) dz }        (4.20)

Migrations that include the lens equation are called depth migrations. The term is often omitted, giving a time migration.

Observe that the diffraction parts of Tables 4.3 and 4.4 are the same. Let us use them and equation (4.19) to define a table of diffraction equations. Substitute ∂/∂x for i k_x and clear ∂/∂x from the denominators to get Table 4.5.
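The alternating march of (4.18) and (4.19) can be caricatured with two small non-commuting matrices standing in for the lens and diffraction operators (purely illustrative; these are not the operators of Table 4.5). The error of taking the two sub-steps one after the other, instead of the combined step, vanishes as the Δz-step shrinks:

```python
import numpy as np

def expm(M, terms=30):
    """Small-matrix exponential by truncated Taylor series (fine for small norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two non-commuting stand-ins for the "lens" and "diffraction" generators.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
Z = 1.0
q0 = np.array([1.0, 0.0])

exact = expm((A + B) * Z) @ q0   # the unsplit extrapolation over the whole depth

def split_march(nsteps):
    """Alternate full steps of the two sub-equations, as in (4.18)-(4.19)."""
    dz = Z / nsteps
    q = q0.copy()
    eA, eB = expm(A * dz), expm(B * dz)
    for _ in range(nsteps):
        q = eB @ (eA @ q)
    return q

err10 = np.linalg.norm(split_march(10) - exact)
err100 = np.linalg.norm(split_march(100) - exact)
print(err10, err100)   # the error shrinks roughly in proportion to the step size
```

Because the two stand-in operators do not commute, splitting is only approximate for a finite step, which is why the formal justification deferred to chapter 5 is needed.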

4.1.9 Time domain


To put the above equations in the time domain, it is necessary only to get ω into the numerator and then replace -iω by ∂/∂t. For example, the 15°, retarded, v = v̄ equation from Table 4.5 becomes

    ∂²Q/∂z∂t  =  (v/2) ∂²Q/∂x²        (4.21)

Interpretation of time t for a retarded-time variable Q awaits further clarification in chapter 5.

4.1.10 Upcoming waves


All the above equations are for downgoing waves. To get equations for upcoming waves you need only change the signs of z and ∂/∂z. Letting D denote a downgoing wavefield and U an upcoming wavefield, equation (4.21), for example, is found in Table 4.6.

    ∂²D/∂z∂t  =  + (v/2) ∂²D/∂x²

    ∂²U/∂z∂t  =  - (v/2) ∂²U/∂x²

Table 4.6: Time-domain equations for downgoing and upcoming wave diffraction with retardation and the 15° approximation.

Using the exploding-reflector concept, it is the upcoming wave equation that is found in both migration and diffraction programs. The downgoing wave equation is useful for modeling and migration procedures that are more elaborate than those based on the exploding-reflector concept (chapter 6).

EXERCISES:

1. Consider a tilted straight line tangent to a circle, as shown in Figure 4.2. Use this line to initialize the Muir square-root expansion. State equations and plot them (-2 ≤ X ≤ +2) for the next two Muir semicircle approximations.

Figure 4.2: omx-tanexer [NR]



4.2 FINITE DIFFERENCING


The basic method for solving differential equations in a computer is finite differencing. The nicest feature of the method is that it allows analysis of objects of almost any shape, such as earth topography or geological structure. Ordinarily, finite differencing is a straightforward task. The main pitfall is instability. It often happens that a seemingly reasonable approach to a reasonable physical problem leads to wildly oscillatory, divergent calculations. Luckily, there is a fairly small body of important and easily learned tricks that solves many stability problems, and we will be covering them here.
Of secondary concern are the matters of cost and accuracy. These must be considered together, since improved accuracy can be achieved simply by paying the higher price of a more refined computational mesh. Although the methods of the next several pages have not been chosen for their accuracy or efficiency, it turns out that in these areas they are excellent. Indeed, to my knowledge, some cannot be improved on at all, while others can be improved on only in small ways. By "small" I mean an improvement in efficiency of a factor of five or less. Such an improvement is rarely of consequence in research or experimental work; however, its importance in matters of production will justify pursuit of the literature far beyond the succeeding pages.

4.2.1 The lens equation


The various wave-extrapolation operators can be split into two parts: a complicated part called the diffraction or migration part, and an easy part called the lens part. The lens equation applies a time shift that is a function of x. It acquires its name because it acts just like a thin optical lens when a light beam enters on-axis (vertically). Corrections for nonvertical incidence are buried somehow in the diffraction part. The lens equation has an analytical solution, namely, exp[iω t0(x)]. It is better to use this analytical solution than a finite-difference solution because there are no approximations in it to go bad. The only reason the lens equation is mentioned at all in a chapter on finite differencing is that the companion diffraction equation must be marched forward along with the lens equation, so the analytic solutions are marched along in small steps.

4.2.2 First derivatives, explicit method


The inflation of money q at a 10% rate can be described by the difference equation

    q_{t+1} - q_t  =  0.10 q_t        (4.22)

    (1.0) q_{t+1} + (-1.1) q_t  =  0        (4.23)

This one-dimensional calculation can be reexpressed as a differencing star and a data table. As such it provides a prototype for the organization of calculations with two-dimensional partial-differential equations. Consider

    Differencing Star        Data Table

         -1.1                  2.000
    time                       2.200
      ↓  +1.0                  2.420
                               2.662

Since the data in the data table satisfy the difference equations (4.22) and (4.23), the differencing star may be laid anywhere on top of the data table, the numbers in the star may be multiplied by those in the underlying table, and the resulting cross products will sum to zero. On the other hand, if all but one number (the initial condition) in the data table were missing, then the rest of the numbers could be filled in, one at a time, by sliding the star along, taking the difference equations to be true, and solving for the unknown data value at each stage.

Less trivial examples utilizing the same differencing star arise when the numerical constant .10 is replaced by a complex number. Such examples exhibit oscillation as well as growth and decay.
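Sliding the star down the data table can be sketched in a few lines (hypothetical code, not from the text): at each position the known cross products are summed and the one unknown value is solved for.

```python
# Slide the star (-1.1, +1.0) down the column: at each step the star's
# cross products must sum to zero, so 1.0*q_new + (-1.1)*q_old = 0.
q = [2.000]                      # the single given initial condition
for _ in range(3):
    q.append(1.1 * q[-1])        # solve for the unknown under the "+1.0"
print([round(v, 3) for v in q])  # reproduces the data table above
```

Running it regenerates the column 2.000, 2.200, 2.420, 2.662 of the data table.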

4.2.3 First derivatives, implicit method


Let us solve the equation

    dq/dt  =  2 r q        (4.24)

by numerical methods. The most obvious (but not the only) approach is the basic definition of elementary calculus. For the time derivative, this is

    dq/dt  ≈  [ q(t + Δt) - q(t) ] / Δt        (4.25)

Using this in equation (4.24) yields the inflation-of-money equations (4.22) and (4.23), where 2rΔt = .1. Thus in the inflation-of-money equation the expression of dq/dt is centered at t + Δt/2, whereas the expression of q by itself is at time t. There is no reason the q on the right side of equation (4.24) cannot be averaged between time t and time t + Δt, thus centering the whole equation at t + Δt/2. When writing difference equations, it is customary to write q(t + Δt) more simply as q_{t+1}. (Formally one should say t = nΔt and write q_{n+1} instead of q_{t+1}, but helpful mnemonic information is carried by using t as the subscript instead of some integer like n.) Thus, a centered approximation of (4.24) is

    q_{t+1} - q_t  =  2 r Δt ( q_{t+1} + q_t ) / 2        (4.26)

Letting α = r Δt, this becomes

    (1 - α) q_{t+1} - (1 + α) q_t  =  0        (4.27)

which is representable as the difference star

        -(1 + α)
    t
    ↓    1 - α

For a fixed Δt this star gives a more accurate solution to the differential equation (4.24) than does the star for the inflation of money. The reasons for the names "explicit method" and "implicit method" above will become clear only after we study a more complicated equation such as the heat-flow equation.
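The accuracy gained by centering is easy to see numerically. A minimal sketch (illustrative values, not from the text) marches both stars for dq/dt = 2rq and compares them with the exact exponential:

```python
import math

r, dt, nsteps = 0.05, 0.1, 100   # illustrative, hypothetical values
t_end = nsteps * dt
exact = math.exp(2 * r * t_end)

# Uncentered (inflation-of-money) star: q_{t+1} = (1 + 2*r*dt) * q_t
q_uncentered = (1 + 2 * r * dt) ** nsteps

# Centered star (4.27): (1 - alpha) q_{t+1} = (1 + alpha) q_t, with alpha = r*dt
alpha = r * dt
q_centered = ((1 + alpha) / (1 - alpha)) ** nsteps

print(abs(q_uncentered - exact), abs(q_centered - exact))
```

For the same Δt, the centered star is accurate to a far smaller error, which is the practical reward for centering the whole equation at t + Δt/2.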

4.2.4 The explicit heat-flow equation


The heat-flow equation controls the diffusion of heat. This equation is a prototype for migration. The 15° migration equation is the same equation, but the heat-conductivity constant is imaginary. (The migration equation is really the Schroedinger equation, which controls the diffusion of probability of atomic particles.)

    ∂q/∂t  =  (σ/C) ∂²q/∂x²        (4.28)

Implementing (4.28) in a computer requires some difference approximations for the partial differentials. As before we use a subscript notation that allows (4.25) to be compacted into

    ∂q/∂t  ≈  ( q_{t+1} - q_t ) / Δt        (4.29)

where t + Δt is denoted by t + 1. The second-derivative formula may be obtained by doing the first derivative twice. This leads to q_{t+2} - 2 q_{t+1} + q_t. The formula is usually treated more symmetrically by shifting it to q_{t+1} - 2 q_t + q_{t-1}. These two versions are equivalent as Δt tends to zero, but the more symmetrical arrangement will be more accurate when Δt is not zero. Using superscripts to describe x-dependence gives a finite-difference approximation to the second space derivative:

    ∂²q/∂x²  ≈  ( q^{x+1} - 2 q^x + q^{x-1} ) / Δx²        (4.30)

Inserting the last two equations into the heat-flow equation (and using = to denote ≈) gives

    ( q_{t+1}^x - q_t^x ) / Δt  =  (σ/C) ( q_t^{x+1} - 2 q_t^x + q_t^{x-1} ) / (Δx)²        (4.31)

(Of course it is not justified to use = to denote ≈, but the study of errors must be deferred until the concepts have been laid out. Errors are studied in chapter 8.) Letting α = σ Δt/(C Δx²), equation (4.31) becomes

    q_{t+1}^x - q_t^x - α ( q_t^{x+1} - 2 q_t^x + q_t^{x-1} )  =  0        (4.32)

                 Data Table   x →
               i n i t i a l
        s                             s
        i          star               i
        d     -α   2α-1   -α          d
        e           1                 e

Table 4.7: Differencing star and table for the one-dimensional heat-flow equation.
Equation (4.32) can be explicitly solved for q at any x at the particular time t + 1, given q at all x for the particular time t, and hence the name explicit method.

Equation (4.32) can be interpreted geometrically as a computational star in the (x, t)-plane, as depicted in Table 4.7. By moving the star around in the data table you will note that it can be positioned so that only one number at a time (the 1) lies over an unknown element in the data table. This enables the computation of subsequent rows beginning from the top. By doing this you are solving the partial-differential equation by the finite-difference method. There are many possible arrangements of initial and side conditions, such as zero-value side conditions. Next is a computer program and its result.
# Explicit heat-flow equation
real q(12), qp(12)
nx = 12
do ia= 1, 2 { # stable and unstable cases
alpha = ia*.3333; write(6,'(/"alpha =",f5.2)') alpha
do ix= 1,6 { q(ix) = 0.} # Initial temperature step
do ix= 7,12 { q(ix) = 1.}
do it= 1, 6 {
write(6,'(20f6.2)') (q(ix),ix=1,nx)
do ix= 2, nx-1
qp(ix) = q(ix) + alpha*(q(ix-1)-2.*q(ix)+q(ix+1))
qp(1) = qp(2); qp(nx) = qp(nx-1)
do ix= 1, nx
q(ix) = qp(ix)
}
}
call exit(0); end

alpha = 0.33
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.00 0.33 0.67 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.11 0.33 0.67 0.89 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.04 0.15 0.37 0.63 0.85 0.96 1.00 1.00 1.00
0.00 0.00 0.01 0.06 0.19 0.38 0.62 0.81 0.94 0.99 1.00 1.00
0.00 0.00 0.02 0.09 0.21 0.40 0.60 0.79 0.91 0.98 1.00 1.00

alpha = 0.67
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.00 0.67 0.33 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.44 0.00 1.00 0.56 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.30 -0.15 0.96 0.04 1.15 0.70 1.00 1.00 1.00
0.00 0.00 0.20 -0.20 0.89 -0.39 1.39 0.11 1.20 0.80 1.00 1.00
0.13 0.13 -0.20 0.79 -0.69 1.65 -0.65 1.69 0.21 1.20 0.87 0.87

4.2.5 The leapfrog method


The difficulty with the given program is that it doesn't work for all possible numerical values of α. You can see that when α is too large (when Δx is too small) the solution in the interior region of the data table contains growing oscillations. What is happening is that the low-frequency part of the solution is OK (for a while), but the high-frequency part is diverging. The precise reason the divergence occurs is the subject of some mathematical analysis that will be postponed till page ??. At wavelengths long compared to Δx or Δt, we expect the difference approximation to agree with the true heat-flow equation, smoothing out irregularities in temperature. At short wavelengths the wild oscillation shows that the difference equation can behave in a way almost opposite to the way the differential equation behaves. The short-wavelength discrepancy arises because difference operators become equal to differential operators only at long wavelengths. The divergence of the solution is a fatal problem because the subsequent round-off error will eventually destroy the low frequencies too.
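Although the analysis proper is postponed, a standard von Neumann substitution (an assumption here, not part of the text) previews the two runs above: inserting a sinusoid q_t^x = G^t exp(ikxΔx) into (4.32) gives a growth factor G = 1 - 4α sin²(kΔx/2), and |G| ≤ 1 for every wavelength only when α ≤ 1/2. That is why α = 1/3 was stable while α = 2/3 oscillated and grew:

```python
import math

def growth_factor(alpha, k_dx):
    """G obtained by substituting q = G**t * exp(i*k*x) into the explicit star (4.32)."""
    return 1.0 - 4.0 * alpha * math.sin(k_dx / 2.0) ** 2

# The worst case is the shortest wavelength the mesh can hold, k*dx = pi.
worst_stable = growth_factor(1.0 / 3.0, math.pi)    # magnitude below 1: decays
worst_unstable = growth_factor(2.0 / 3.0, math.pi)  # magnitude above 1: grows, oscillates
print(worst_stable, worst_unstable)
```

The negative sign of G at short wavelengths also explains why the divergence in the program output alternates in sign from row to row.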
By supposing that the instability arises because the time derivative is centered at a slightly different time t + 1/2 than the second x-derivative at time t, we are led to the so-called leapfrog method, in which the time derivative is taken as a difference between t - 1 and t + 1:

    ∂q/∂t  ≈  ( q_{t+1} - q_{t-1} ) / (2 Δt)        (4.33)
The resulting leapfrog differencing star is

              -1
    t    -2α  4α  -2α
    ↓         +1

Here the result is even worse. A later analysis shows that the solution is now divergent for all real numerical values of α. Although it was a good idea to center both derivatives in the same place, it turns out that it was a bad idea to express a first derivative over a span of more mesh points. The enlarged operator has two solutions in time instead of just the familiar one. The numerical solution is the sum of the two theoretical solutions, one of which, unfortunately (in this case), grows and oscillates for all real values of α.

To avoid all these problems (and get more accurate answers as well), we now turn to some slightly more complicated solution methods known as implicit methods.

4.2.6 The Crank-Nicolson method


The Crank-Nicolson method solves both the accuracy and the stability problem. Recall the difference representation of the heat-flow equation (4.32).

    q_{t+1}^x - q_t^x  =  a ( q_t^{x+1} - 2 q_t^x + q_t^{x-1} )        (4.34)

Now, instead of expressing the right-hand side entirely at time t, it will be averaged at t and t + 1, giving

    q_{t+1}^x - q_t^x  =  (a/2) [ ( q_t^{x+1} - 2 q_t^x + q_t^{x-1} ) + ( q_{t+1}^{x+1} - 2 q_{t+1}^x + q_{t+1}^{x-1} ) ]        (4.35)

This is called the Crank-Nicolson method. Defining a new parameter α = a/2, the difference star is

        x →
    -α   2α - 1   -α
    -α   2α + 1   -α        (4.36)
    t
    ↓

When placing this star over the data table, note that, typically, three elements at a time cover unknowns. To say the same thing with equations, move all the t + 1 terms in (4.35) to the left and the t terms to the right, obtaining

    -α q_{t+1}^{x+1} + (1 + 2α) q_{t+1}^x - α q_{t+1}^{x-1}  =  α q_t^{x+1} + (1 - 2α) q_t^x + α q_t^{x-1}        (4.37)

Now think of the left side of equation (4.37) as containing all the unknown quantities and the right side as containing all known quantities. Everything on the right can be combined into a single known quantity, say, d_t^x. Now we can rewrite equation (4.37) as a set of simultaneous equations. For definiteness, take the x-axis to be limited to five points. Then these equations are:

    [ e_left    -α       0       0       0     ] [ q^1_{t+1} ]     [ d^1_t ]
    [   -α    1+2α     -α       0       0     ] [ q^2_{t+1} ]     [ d^2_t ]
    [    0     -α     1+2α     -α       0     ] [ q^3_{t+1} ]  =  [ d^3_t ]        (4.38)
    [    0      0      -α     1+2α     -α     ] [ q^4_{t+1} ]     [ d^4_t ]
    [    0      0       0      -α    e_right  ] [ q^5_{t+1} ]     [ d^5_t ]

Equation (4.37) does not give us each q_{t+1}^x explicitly, but equation (4.38) gives them implicitly by the solution of simultaneous equations.
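The same sinusoid substitution used for the explicit star (again a standard von Neumann argument, an assumption here rather than part of the text) previews why this scheme is stable: inserting q_t^x = G^t exp(ikxΔx) into (4.37) gives G = (1 - s)/(1 + s) with s = 2α(1 - cos kΔx) ≥ 0, so |G| ≤ 1 for every wavelength and every positive α:

```python
import math

def cn_growth(alpha, k_dx):
    """G from substituting q = G**t * exp(i*k*x) into the Crank-Nicolson star (4.37)."""
    s = 2.0 * alpha * (1.0 - math.cos(k_dx))
    return (1.0 - s) / (1.0 + s)

# Sample a range of alphas and wavelengths; every |G| stays at or below 1.
samples = [abs(cn_growth(a, k)) for a in (0.1, 1.0, 4.0, 100.0)
                                for k in (0.01, 0.5, 1.0, math.pi)]
print(max(samples))
```

This unconditional stability is what permits the very large time steps exploited below.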
The values e_left and e_right are adjustable and have to do with the side boundary conditions. The important thing to notice is that the matrix is tridiagonal; that is, except for three central diagonals, all the elements of the matrix in (4.38) are zero. The solution to such a set of simultaneous equations may be economically obtained. It turns out that the cost is only about twice that of the explicit method given by (4.32). In fact, this implicit method turns out to be cheaper, since the increased accuracy of (4.37) over (4.32) allows the use of a much larger numerical choice of Δt. A program that demonstrates the stability of the method, even for large Δt, is given next.
The program calls a tridiagonal simultaneous-equation solver, rtris(), which is explained in the next section. The results are stable, as you can see.

a = 8.00
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.17 0.17 0.21 0.30 0.47 0.76 0.24 0.53 0.70 0.79 0.83 0.83
0.40 0.40 0.42 0.43 0.40 0.24 0.76 0.60 0.57 0.58 0.60 0.60
0.44 0.44 0.44 0.44 0.48 0.68 0.32 0.52 0.56 0.56 0.56 0.56

# Implicit heat-flow equation


real q(12),d(12)
nx=12; a = 8.; write(6,'(/"a =",f5.2)') a; alpha = .5*a
do ix= 1,6 { q(ix) = 0.} # Initial temperature step
do ix= 7,12 { q(ix) = 1.}
do it= 1,4 {
write(6,'(20f6.2)') (q(ix),ix=1,nx)
d(1) = 0.; d(nx) = 0.
do ix= 2, nx-1
d(ix) = q(ix) + alpha*(q(ix-1)-2.*q(ix)+q(ix+1))
call rtris( nx, alpha, -alpha, (1.+2.*alpha), -alpha, alpha, d, q)
}
call exit(0); end

4.2.7 Solving tridiagonal simultaneous equations


Much of the world's scientific computing power gets used up solving tridiagonal simultaneous equations. For reference and completeness the algorithm is included here. Let the simultaneous equations be written as a difference equation

    a_j q_{j+1} + b_j q_j + c_j q_{j-1}  =  d_j        (4.39)

Introduce new unknowns e_j and f_j, along with an equation

    q_j  =  e_j q_{j+1} + f_j        (4.40)

Write (4.40) with shifted index:

    q_{j-1}  =  e_{j-1} q_j + f_{j-1}        (4.41)
Insert (4.41) into (4.39):

    a_j q_{j+1} + b_j q_j + c_j ( e_{j-1} q_j + f_{j-1} )  =  d_j                (4.42)

Now rearrange (4.42) to resemble (4.40):

    q_j  =  [ -a_j / (b_j + c_j e_{j-1}) ] q_{j+1}  +  (d_j - c_j f_{j-1}) / (b_j + c_j e_{j-1})      (4.43)

Compare (4.43) to (4.40) to see recursions for the new unknowns e_j and f_j:

    e_j  =  -a_j / (b_j + c_j e_{j-1})                                           (4.44)

    f_j  =  (d_j - c_j f_{j-1}) / (b_j + c_j e_{j-1})                            (4.45)
First a boundary condition for the left-hand side must be given. This may involve
one or two points. The most general possible end condition is a linear relation like
equation (4.40) at j = 0, namely, q0 = e0 q1 + f0. Thus, the boundary condition must
give us both e0 and f0. With e0 and all the a_j, b_j, c_j, we can use (4.44) to compute
all the e_j.

On the right-hand boundary we need a boundary condition. The general two-point
boundary condition is

    c_n q_{n-1} + e_right q_n  =  d_n                                            (4.46)

Equation (4.46) includes as special cases the zero-value and zero-slope boundary con-
ditions. Equation (4.46) can be compared to equation (4.41) at its end:

    q_{n-1}  =  e_{n-1} q_n + f_{n-1}                                            (4.47)

Both q_n and q_{n-1} are unknown, but in equations (4.46) and (4.47) we have two
equations, so the solution is easy. The final step is to take the value of q_n and
use it in (4.41) to compute q_{n-1}, q_{n-2}, q_{n-3}, etc. The subroutine rtris() solves
equation (4.38) for q where n=5, endl = e_left, endr = e_right, a = c = -α, and b = 1+2α.
# real tridiagonal equation solver
subroutine rtris( n, endl, a, b, c, endr, d, q)
integer i, n
real q(n), d(n), a, b, c, den, endl, endr
temporary real f(n), e(n)
e(1) = -a/endl; f(1) = d(1)/endl
do i= 2, n-1 {
den = b+c*e(i-1); e(i) = -a/den; f(i) = (d(i)-c*f(i-1))/den
}
q(n) = (d(n)-c*f(n-1)) / (endr+c*e(n-1))
do i= n-1, 1, -1
q(i) = e(i) * q(i+1) + f(i)
return; end
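A line-by-line Python transcription of rtris() (a sketch of my own; the temporary arrays e and f play the same role as in the Ratfor) can be checked against a general-purpose dense solver:

```python
import numpy as np

def rtris(n, endl, a, b, c, endr, d):
    """Tridiagonal solver following recursions (4.44)-(4.45).
    Interior row j: c*q[j-1] + b*q[j] + a*q[j+1] = d[j];
    end rows: endl*q[0] + a*q[1] = d[0], c*q[n-2] + endr*q[n-1] = d[n-1]."""
    e, f, q = np.zeros(n), np.zeros(n), np.zeros(n)
    e[0], f[0] = -a / endl, d[0] / endl
    for i in range(1, n - 1):
        den = b + c * e[i - 1]            # could be inverted once and reused
        e[i] = -a / den
        f[i] = (d[i] - c * f[i - 1]) / den
    q[n - 1] = (d[n - 1] - c * f[n - 2]) / (endr + c * e[n - 2])
    for i in range(n - 2, -1, -1):        # back substitution, equation (4.40)
        q[i] = e[i] * q[i + 1] + f[i]
    return q

# Check against a dense solve of the same tridiagonal matrix
n, endl, a, b, c, endr = 5, 1.4, -0.5, 2.0, -0.5, 1.4
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
M = np.diag([endl] + [b] * (n - 2) + [endr])
M += np.diag([a] * (n - 1), 1) + np.diag([c] * (n - 1), -1)
assert np.allclose(rtris(n, endl, a, b, c, endr, d), np.linalg.solve(M, d))
```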
If you wish to squeeze every last ounce of power from your computer, note some
facts about this algorithm. (1) The calculation of e_j depends on the medium through
a_j, b_j, c_j, but it does not depend on the solution q_j (even through d_j). This means
that it may be possible to save and reuse e_j. (2) In many computers, division is much
slower than multiplication. Thus, the divisor in (4.44) and (4.45) can be inverted once
(and perhaps stored for reuse).
4.2.8 The ∂xxz derivative
The 45° diffraction equation differs from the 15° equation by the inclusion of a
∂³/∂x²∂z derivative. Luckily this derivative fits on the six-point differencing star

       1        [  1  -2   1 ]
    -------  ×  [            ]
    Δx² Δz      [ -1   2  -1 ]

So other than modifying the six coefficients on the star, it adds nothing to the com-
putational cost.
4.2.9 Difficulty in higher dimensions
So far we have had no trouble obtaining cheap, safe, and accurate difference meth-
ods for solving partial-differential equations (PDEs). The implicit method has met
all needs. But in space dimensions higher than one, the implicit method becomes
prohibitively costly. For the common example of problems in which ∂²/∂x² becomes
generalized to ∂²/∂x² + ∂²/∂y², we will learn the reason why. The simplest case is the
heat-flow equation, for which the Crank-Nicolson method gave us (4.37). Introducing
the abbreviation δxx q = q_{x+1} - 2q_x + q_{x-1}, equation (4.37) becomes

    (1 - α δxx) Q^{t+1}  =  (1 + α δxx) Q^t                                      (4.48)

The nested expression on the left represents a tridiagonal matrix. The critical stage
is in solving the tridiagonal simultaneous equations for the vector of unknowns Q^{t+1}.
Fortunately there is a special algorithm for this solution, and the cost increases only
linearly with the size of the matrix. Now turn from the one-dimensional physical
space of x to two-dimensional (x, y)-space. Letting α denote the numerical constant
in (4.48), the equation for stepping forward in time is

    [ 1 - α (δxx + δyy) ] Q^{t+1}  =  [ 1 + α (δxx + δyy) ] Q^t                  (4.49)
The unknowns Q^{t+1} are a two-dimensional function of x and y that can be denoted
by a matrix. Next we will interpret the bracketed expression on the left side. It turns
out to be a four-dimensional matrix!

To clarify the meaning of this matrix, a mapping from two dimensions to one will
be illustrated. Take the temperature Q to be defined on a 4×4 mesh. A natural way
of numbering the points on the mesh is
11 21 31 41
12 22 32 42 (4.50)
13 23 33 43
14 24 34 44
For algebraic purposes these sixteen numbers can be mapped into a vector. There
are many ways to do this. A simple way would be to associate the locations in (4.50)
with vector components by the arrangement
1 2 3 4
5 6 7 8 (4.51)
9 10 11 12
13 14 15 16
The second difference operator has the following star in the (x, y)-plane:

         1
     1  -4   1                                                                   (4.52)
         1
Lay this star down in the (x, y)-plane (4.51) and move it around. Unfortunately,
with just sixteen points, much of what you see is dominated by edges and corners.
Try every position of the star that allows the center -4 to overlay one of the sixteen
points. Never mind the 1's going off the sides. Start with the -4 in (4.52) over the
1 in the upper left corner of (4.51). Observe 1's on the 2 and the 5. Copy the 1's
into the top row of Table 4.8, into the second and fifth columns. Then put the -4
in (4.52) over the 2 in (4.51). Observe 1's on the 1, 3, and 6. Copy the 1's into the
next row of Table 4.8. Then put the -4 over the 3. Observe 1's on the 2, 4, and 7.
Continue likewise. The 16×16 square matrix that results is shown in Table 4.8.
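The star-laying procedure can be automated. This little sketch (my own construction, not from the text) builds the same 16×16 matrix as Table 4.8:

```python
import numpy as np

# Lay the five-point star (4.52) on a 4x4 mesh numbered as in (4.51):
# point (x, y) maps to vector index y*4 + x.
n = 4
L = np.zeros((n * n, n * n))
for y in range(n):
    for x in range(n):
        i = y * n + x
        L[i, i] = -4.0                               # center of the star
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < n and 0 <= y + dy < n:  # drop 1's off the sides
                L[i, (y + dy) * n + (x + dx)] = 1.0
# First row of Table 4.8: -4 on the diagonal, 1's in columns two and five
print(L[0, :6])
```

The printed row shows the -4 with its two surviving 1's, matching the top row of the table.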
Now that Table 4.8 has been constructed we can return to the interpretation of
equation (4.49). The matrix of unknowns Q^{t+1} has been mapped into a sixteen-
point column vector, and the bracketed expression multiplying Q^{t+1} can be mapped
into a 16×16 matrix. Clearly, the matrix contains zeroes everywhere that Table 4.8
contains dots. It seems fortunate that the table contains many zeroes, and we are
led to hope for a rapid solution method for the simultaneous equations. The bad
news is that no good method has ever been found. The best methods seem to require
effort proportional to N³, where in this case N = 4. Based on our experience in one
dimension, those of us who worked on this problem hoped for a method proportional
to N², which is the cost of an explicit method, essentially the cost of computing
the right side of (4.49). Even all the features of implicit methods do not justify an
additional cost of a factor of N. The next best thing is the splitting method.
EXERCISES:

1 Interpret the inflation-of-money equation when the interest rate is the imaginary
number i/10.

2 Write the 45° diffraction equation in (x, z)-space for fixed ω in the form of (4.36).
4.3 WAVEMOVIE PROGRAM
An old professor of education had a monochromatic theme. It was his only theme
and the topic of his every lecture. It was this:

    People learn by solving problems. Solving problems is the only way people learn,
    etc., etc., etc.

All he ever did was lecture; he never assigned any problems.
Your problems all relate to the computer subroutine wavemovie() (Lynn, Gonza-
lez, JFC, Hale, Li, Karrenbach). It makes a movie of a sum of monochromatic waves.
As it stands it will produce a movie (three-dimensional matrix) of waves propagat-
ing through a focus. The whole process from compilation through computation to
finally viewing the film loop takes about a minute. A sample frame of the movie is in
Figure 4.3. It shows a snapshot of the (x, z)-plane. Collapsing spherical waves enter
from the top, go through a focus, and then expand again. Notice that the wavefield
is small but not zero in the region of geometrical shadow. In the shadow region you
see waves that appear to be circles emanating from point sources at the top corners.
Notice that the amplitudes of expanding spherical waves drop off with distance, and
collapsing spherical waves grow towards the focus. We will study the program that
made this figure and see many features of waves and mathematics.
    -4  1  .  .    1  .  .  .    .  .  .  .    .  .  .  .
     1 -4  1  .    .  1  .  .    .  .  .  .    .  .  .  .
     .  1 -4  1    .  .  1  .    .  .  .  .    .  .  .  .
     .  .  1 -4    .  .  .  1    .  .  .  .    .  .  .  .

     1  .  .  .   -4  1  .  .    1  .  .  .    .  .  .  .
     .  1  .  .    1 -4  1  .    .  1  .  .    .  .  .  .
     .  .  1  .    .  1 -4  1    .  .  1  .    .  .  .  .
     .  .  .  1    .  .  1 -4    .  .  .  1    .  .  .  .

     .  .  .  .    1  .  .  .   -4  1  .  .    1  .  .  .
     .  .  .  .    .  1  .  .    1 -4  1  .    .  1  .  .
     .  .  .  .    .  .  1  .    .  1 -4  1    .  .  1  .
     .  .  .  .    .  .  .  1    .  .  1 -4    .  .  .  1

     .  .  .  .    .  .  .  .    1  .  .  .   -4  1  .  .
     .  .  .  .    .  .  .  .    .  1  .  .    1 -4  1  .
     .  .  .  .    .  .  .  .    .  .  1  .    .  1 -4  1
     .  .  .  .    .  .  .  .    .  .  .  1    .  .  1 -4

Table 4.8: The two-dimensional matrix of coefficients for the Laplacian operator.
Figure 4.3: First frame of movie generated by wavemovie(). omx-Mfocus15 [ER]
4.3.1 Earth's surface boundary condition
The program that created Figure 4.3 begins with an initial condition along the top
boundary, and then this initial wavefield is extrapolated downward. So, the first
question is: what is the mathematical function of x that describes a collapsing
spherical (actually cylindrical) wave? An expanding spherical wave has an equa-
tion exp[-iω(t - r/v)], where the radial distance is r = sqrt[(x - x0)² + (z - z0)²] from
the source. For a collapsing spherical wave we need exp[-iω(t + r/v)]. Parentheti-
cally, I'll add that the theoretical solutions are not really these, but something more
like these divided by √r; actually they should be Hankel functions, but the picture
is little different when the exact initial condition is used. If you have been following
this analysis, you should have little difficulty changing the initial conditions in the
program to create the downgoing plane wave shown in Figure 4.4.

Figure 4.4: Specify program


changes that give an initial plane
wave propagating downward at an
angle of 15 to the right of verti-
cal. omx-Mdipplane [ER]

Notice the weakened waves in the zone of theoretical shadow that appear to arise
from a point source on the top corner of the plot. You have probably learned in
physics classes of "standing waves". This is what you will see near the reflecting side
boundary if you recompute the plot with a single frequency nw=1. Then the plot will
acquire a "checkerboard" appearance near the reflecting boundary. Even this figure
with nw=4 shows the tendency.
4.3.2 Time-domain analysis
For a film loop to make sense to a viewer, the subject of the movie must be periodic,
and organized so that the last frame leads naturally into the first. In the movie created
by wavemovie() there is a parameter lambda that controls the basic repetition rate
of wave pulses fired onto the screen from the top. When a wavelet travels one-quarter
of the way down the frame, another is sent in. This is defined by the line

    lambda = nz * dz / 4

that is, λ = Nz Δz / 4.
Take any point in (x, z)-space. The signal there will be a superposition of sinu-
soids of various frequencies, ω_j. We can choose what frequencies we will use in the
calculation and what amplitudes and phases we will attach to the initial conditions
at those frequencies. Here we will simply take uniformly spaced sinusoids of unit
amplitude and no phase. The nw frequencies are ω_j = Δω, 2Δω, ..., nw Δω. The
lowest frequency dw = Δω must be inversely proportional to the wavelength lambda
= λ:

    dw = v * pi2 / lambda

that is, Δω = 2πv/λ. Finally, the time duration of the film loop must equal the period
of the lowest-frequency sinusoid:

    Nt Δt  =  2π / Δω

This latter equation defines the time interval on the line

    dt = pi2 / ( nt * dw )

If you use more frequencies, you might like the result better because the wave pulses
will be shorter, and the number of wavelengths between the pulses will increase. Thus
the quiet zones between the pulses will get quieter. The frequency components can
be weighted differently, but this becomes a digression into simple Fourier analysis.
# from par: integer n3:nt=12, n2:nx=48, n1:nz=96, nw=2, nlam=4
# from par: real dx=2, dz=1, v=1
#
subroutine wavemovie( nt, nx, nz, nw, nlam, dx,dz,v, p, cd, q)
integer it,nt,ix,nx,iz,nz,iw,nw, nlam
real dx,dz,v, phase,pi2,z0,x0,dt,dw,lambda,w,wov,x, p(nz,nx,nt)
complex aa,a,b,c,cshift, cd(nx),q(nx)
lambda=nz*dz/nlam; pi2=2.*3.141592; dw=v*pi2/lambda; dt=pi2/(nt*dw)
x0 = nx*dx/3; z0 = nz*dz/3
call null( p, nz*nx*nt)
do iw = 1,nw { # superimpose nw frequencies
w = iw*dw; wov = w/v # frequency / velocity
do ix = 1,nx { # initial conditions for a
x = ix*dx-x0; # collapsing spherical wave
phase = -wov*sqrt( z0**2+x**2)
q(ix) = cexp( cmplx( 0.,phase))
}
aa = (0.,1.)*dz/(4.*dx**2*wov) # tridiagonal matrix coefficients
a = -aa; b = 1.+2.*aa; c = -aa
do iz = 1,nz { # extrapolation in depth
do ix = 2,nx-1 # diffraction term
cd(ix) = aa*q(ix+1) + (1.-2.*aa)*q(ix) + aa*q(ix-1)
cd(1) = 0.; cd(nx) = 0.
call ctris( nx,-a,a,b,c,-c,cd,q)
# Solves complex tridiagonal equations
cshift = cexp( cmplx( 0.,wov*dz))
do ix = 1,nx # shifting term
q(ix) = q(ix) * cshift
do it=1,nt { # evolution in time
cshift = cexp( cmplx( 0.,-w*it*dt))
do ix = 1,nx
p(iz,ix,it) = p(iz,ix,it) + q(ix)*cshift
}
}
}
return; end
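The heart of the depth loop, one diffraction solve followed by one shift, can be sketched in Python. This is a paraphrase, with ctris() replaced by a dense complex solve; the grid and frequency values are illustrative, not taken from the program.

```python
import numpy as np

# One depth step of wavemovie(): Crank-Nicolson diffraction term, then the
# analytic shifting term (4.60).  Illustrative values, not from the program.
nx, dx, dz = 48, 2.0, 1.0
w, v = 0.5, 1.0
wov = w / v
x = np.arange(nx) * dx
q = np.exp(-1j * wov * np.sqrt(32.0**2 + (x - nx * dx / 3)**2))  # collapsing wave

aa = 1j * dz / (4.0 * dx**2 * wov)       # as in the program
a, b, c = -aa, 1.0 + 2.0 * aa, -aa

d = np.zeros(nx, complex)                # diffraction right side, zero ends
d[1:-1] = aa * q[2:] + (1.0 - 2.0 * aa) * q[1:-1] + aa * q[:-2]

M = np.zeros((nx, nx), complex)          # left side; endl = -a, endr = -c
M[0, 0], M[0, 1] = -a, a                 # zero-slope ends, as in the ctris call
M[-1, -2], M[-1, -1] = c, -c
for i in range(1, nx - 1):
    M[i, i - 1], M[i, i], M[i, i + 1] = c, b, a
q = np.linalg.solve(M, d)

q *= np.exp(1j * wov * dz)               # shifting term, equation (4.60)
```

One such pair of stages is applied at every depth level and every frequency.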
4.3.3 Internals of the film-loop program
The differential equation solved by the program is

    ∂P/∂z  =  [ iω / v(x,z) ] P  +  [ v / (-2iω) ] ∂²P/∂x²                       (4.53)

For each Δz-step the calculation is done in two stages. The first stage is to solve

    ∂P/∂z  =  [ v / (-2iω) ] ∂²P/∂x²                                             (4.54)

Using the Crank-Nicolson differencing method this becomes

    (p_x^{z+1} - p_x^z) / Δz  =  [ v / (-2iω) ] [ (p_{x+1}^z - 2 p_x^z + p_{x-1}^z) + (p_{x+1}^{z+1} - 2 p_x^{z+1} + p_{x-1}^{z+1}) ] / (2 Δx²)      (4.55)

Absorb all the constants into one and define

    α  =  v Δz / (-4 i ω Δx²)                                                    (4.56)
getting

    p_x^{z+1} - p_x^z  =  α ( p_{x+1}^z - 2 p_x^z + p_{x-1}^z )  +  α ( p_{x+1}^{z+1} - 2 p_x^{z+1} + p_{x-1}^{z+1} )      (4.57)

Bring the unknowns to the left:

    -α p_{x+1}^{z+1} + (1 + 2α) p_x^{z+1} - α p_{x-1}^{z+1}  =  α p_{x+1}^z + (1 - 2α) p_x^z + α p_{x-1}^z               (4.58)

We will solve this as we solved equations (4.37) and (4.38). The second stage is to
solve the equation

    ∂P/∂z  =  (iω / v) P                                                         (4.59)

analytically by

    P(z + Δz)  =  P(z) e^{i Δz ω / v}                                            (4.60)

By alternating between (4.58) and (4.60), which are derived from (4.54) and (4.59),
the program solves (4.53) by a splitting method. Formal justification of the splitting
method follows in chapter 5. The program uses the tridiagonal solver discussed earlier,
except the version here, ctris(), has all the real variables changed to complex.
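The splitting idea can be felt numerically: for two noncommuting operators A and B, stepping with exp(AΔz)exp(BΔz) agrees with exp((A+B)Δz) up to an error of order Δz². A toy check of my own (the matrices stand in for the thin-lens and diffraction terms of (4.53), and expm is a simple Taylor-series exponential):

```python
import numpy as np

def expm(M, terms=30):
    # Matrix exponential by Taylor series (adequate for these tiny matrices)
    out = np.eye(len(M), dtype=complex)
    term = np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

# Two noncommuting operators standing in for the two stages of (4.53)
A = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
B = np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)

errs = []
for dz in (0.1, 0.05):
    exact = expm((A + B) * dz)
    split = expm(A * dz) @ expm(B * dz)
    errs.append(np.abs(exact - split).max())
# Halving dz cuts the splitting error by roughly four (error ~ [A,B] dz^2 / 2)
print(errs)
```

The error shrinks quadratically with the step size, which is why many small Δz-steps make the splitting harmless.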
Figure 4.5 shows a change of initial conditions where the incoming wave on the
top frame is defined to be an impulse, namely, p(x, z = 0) = (..., 0, 0, 1, 0, 0, ...). The
result is alarmingly noisy. What is happening is that for any frequencies anywhere
near the Nyquist frequency, the difference equation departs from the differential equa-
tion that it should mimic. This problem is addressed, analyzed, and ameliorated in
chapter 8. For now, the best thing to do is to avoid sharp corners in the initial wave-
field.
Figure 4.5: Observe and describe various computational artifacts by testing the
program using a point source at (x, z) = (xmax/2, 0). Such a source is rich in the
high spatial frequencies for which difference equations do not mimic their differential
counterparts. omx-Mcompart [ER]
4.3.4 Side-boundary analysis
In geophysics, we usually wish the side-boundary question did not arise. The only
real reason for side boundaries is that either our survey or our processing activity is
necessarily limited in extent. Given that side boundaries are inevitable, we must think
about them. The subroutine wavemovie() included zero-slope boundary conditions.
This type of boundary treatment resulted from taking

    d(1) = 0. ; d(nx) = 0.
and in the call to ctris taking

    endl = - a ; endr = - c

A quick way to get zero-value side-boundary conditions is to take
Figure 4.6: Given that the domain of computation is 0 ≤ x ≤ xmax and 0 ≤ z ≤
zmax, how would you modify the initial conditions at z = 0 to simulate a point source
at (x, z) = (xmax/3, -zmax/2)? omx-Mexpandsphere [ER]
    endl = endr = 10^30 ≈ ∞
Compare the side-boundary behavior of Figures 4.6 and 4.7.
Figure 4.7: Modify the program so that zero-slope side boundaries are replaced by
zero-value side boundaries. omx-Mzeroslope [ER]
The zero-slope boundary condition is explicitly visible as identical signal on the
two end columns. Likewise, the zero-value side boundary condition has a column of
zero-valued signal on each side. In each case, the end trace need not be explicitly
present, and it is not present in Dave Hale's coding of the boundary conditions, which
I give, but do not derive, below.
    p0 = bl * p(1); pnxp1 = br * p(nx)
    cd(1)  = aa * p(2)    + ( 1. - 2. * aa ) * p( 1) + aa * p0
    cd(nx) = aa * p(nx-1) + ( 1. - 2. * aa ) * p(nx) + aa * pnxp1
    endl = c * bl + b
    endr = a * br + b
    call ctris( nx, endl, a, b, c, endr, cd, p, ce, cf)
Note that bl=br=0 for zero-value boundaries, and bl=br=1 for zero-slope bound-
aries. Absorbing side boundaries, derived in chapter 9, are obtained by letting bl and
br be complex.
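The endl and endr formulas can be sanity-checked in a few lines (a check of my own, not from the text): eliminating a ghost point q0 = bl*q1 from a first row written as c*q0 + b*q1 + a*q2 leaves exactly (c*bl + b)*q1 + a*q2, which is Hale's endl.

```python
import numpy as np

# Eliminating the ghost point q0 = bl*q1 from the row c*q0 + b*q1 + a*q2
# reproduces the end coefficient endl = c*bl + b used above.
aa = 0.3 + 0.1j                          # arbitrary complex coefficient
a, b, c = -aa, 1 + 2 * aa, -aa
q1, q2 = 0.7 - 0.2j, -0.4 + 0.5j         # arbitrary field values
for bl in (0.0, 1.0):                    # zero-value and zero-slope cases
    q0 = bl * q1                         # ghost point outside the grid
    endl = c * bl + b
    assert np.isclose(c * q0 + b * q1 + a * q2, endl * q1 + a * q2)
```

The right boundary works the same way with the ghost point pnxp1 = br*p(nx) and endr = a*br + b.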
4.3.5 Including lateral velocity variation
Lateral velocity variation v = v(x) has not been included in the program, but it
is not difficult to install. It enters in two places. It enters first in equation (4.60).
If the wavefield is such that kx is small enough, then equation (4.60) is the only
place it is needed. Second, it enters in the tridiagonal coefficients through the v in
equation (4.56). The so-called thin-lens approximation of optics seems to amount to
including the equation (4.60) part only. An example of lateral velocity variation is in
Figure 4.8.

Figure 4.8: Make changes to the


program to include a thin-lens
term with a lateral velocity change
of 40% across the frame produced
by a constant slowness gradient.
Identify other parts of the pro-
gram which are a ected by lateral
velocity variation. You need not
make these other changes. Why
are they expected to be small?
omx-Mlateralvel [ER]
Figure 4.9: Incorporate the 45° term, ∂xxz, for the collapsing spherical wave. Use
zero-slope sides. Compare your result with the 15° result obtained via the program
wavemovie. Mark an X at the theoretical focus location. omx-Mfortyfive [ER]
4.3.6 Phase shift
Theory predicts that in two dimensions, waves going through a focus suffer a 90° phase
shift. You should be able to notice that a symmetrical waveform is incident on the
focus, but an antisymmetrical waveform emerges. This is easily seen in Figure 4.10.
In migrations, waves go just to a focus, not through it. So the migration impulse

Figure 4.10: The accuracy of the x-derivative may be improved by a technique that is
analyzed later in chapter 8. Briefly, instead of representing kx²Δx² by the tridiagonal
matrix T with (-1, 2, -1) on the main diagonal, you use T/(I - T/6). Modify
the extrapolation analysis by multiplying through by the denominator. Make the
necessary changes to the 45° collapsing-wave program. Left, without 1/6 trick; right,
with 1/6 trick. omx-trick [ER]
response in two dimensions carries a 45° phase shift. Even though real life is three-
dimensional, the two-dimensional response is appropriate for migrating seismic lines
where focusing is presumed to arise from cylindrical, not spherical, reflectors.
Figure 4.11: Chapter 9 explains how to absorb energy at the side boundaries. Make
the necessary changes to the program to absorb waves incident on the left-side
boundary. omx-Mabsorbside [ER]
4.4 MIGRATION IN THE (omega,x)-DOMAIN (KJ)
The migration program is similar to the film-loop program, but there are some
differences. The film-loop program has "do loops" nested four deep. It produces
results for many values of t. Migration requires a value only at t = 0. So one loop is
saved, which means that for the same amount of computer time, the space volume can
be increased. Unfortunately, loss of a loop seems also to mean loss of a movie. With
ω-domain migration, it seems that the only interesting thing to view is the input and
the output.

The input for this process will probably be field data, unlike for the film-loop
movie, so there will not be an analytic representation in the ω-domain. The input
will be in the time domain and will have to be Fourier transformed. The beginning
of the program defines some pulses to simulate field data. The pulses are broadened
impulses and should migrate to approximate semicircles. Exact impulses were not
used because the departure of difference operators from differential operators would
make a noisy mess.
Next the program Fourier transforms the pseudodata from the time domain into
the ω-frequency domain.

Then comes the downward continuation of each frequency. This is a loop on depth
z and on frequency ω. Either of these loops may be on the inside. The choice can be
made for machine-dependent efficiency.

For migration an equation for upcoming waves is required, unlike the downgoing
wave equation required for the film-loop program. Change the sign of the z-axis in
equation (4.53). This affects the sign of aa and the sign of the phase of cshift.
Another difference from the film-loop program is that the input now has a time axis
whereas the output is still a depth axis. It is customary and convenient to reorganize
the calculation to plot travel-time depth, instead of depth, making the vertical axes
on both input and output the same. Using τ = z/v, equivalently dτ/dz = 1/v,
the chain rule gives

    ∂/∂z  =  (dτ/dz) (∂/∂τ)  =  (1/v) (∂/∂τ)                                     (4.61)

Substitution into (4.53) gives

    ∂P/∂τ  =  iω P  +  [ v² / (-2iω) ] ∂²P/∂x²                                   (4.62)
In the program, the time sample size dt = Δt and the travel-time depth sample
dtau = Δτ are taken to be unity, so the maximum frequency is the Nyquist. Notice
that the frequency loop covers only one half of the frequency axis. The negative fre-
quencies serve only to keep the time function real, a task that is more easily done by
simply taking the real part. A program accompanies.
#% Migration in the (omega,x,z)-domain
subroutine kjartjac()
real p(48,64), pi, alpha, dt, dtau, dw, w0, omega
complex cp(48,64), cd(48), ce(48), cf(48), aa, a, b, c, cshift
integer ix, nx, iz, nz, iw, nw, it, nt, esize
nt= 64; nz= nt; nx= 48; pi= 3.141592
dt= 1.; dtau= 1.; w0=-pi/dt; dw= 2*pi/(dt*nt); nw= nt/2;
alpha = .25 # alpha = v*v*dtau/(4*dx*dx)
do iz= 1, nz { do ix=1,nx { p(ix,iz) = 0.; cp(ix,iz)=0. }}
do it= nt/3, nt, nt/4{
do ix= 1, 4 # Broadened impulse source
{ cp(ix,it) = (5.-ix); cp(ix,it+1) = (5.-ix) }}
call ft2axis( 0, 1., nx,nt, cp)
do iz= 1, nz {
do iw= 2, nw { omega = w0 + dw*(iw-1)
aa = - alpha /( (0.,-1.)*omega )
a = -aa; b = 1.+2.*aa; c = -aa
do ix= 2, nx-1
cd(ix) = aa*cp(ix+1,iw) + (1.-2.*aa)*cp(ix,iw) + aa*cp(ix-1,iw)
cd(1) = 0.; cd(nx) = 0.
call ctris( nx, -a, a, b, c, -c, cd, cp(1,iw))
cshift = cexp( cmplx( 0.,-omega*dtau))
do ix= 1, nx
cp(ix,iw) = cp(ix,iw) * cshift
do ix= 1, nx
p(ix,iz) = p(ix,iz)+cp(ix,iw) # p(t=0) = Sum P(omega)
}}
esize=4
to history: integer n1:nx, n2:nz, esize
call hclose()
call srite( 'out', p, nx*nz*4 )
return; end
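The remark about negative frequencies can be verified in a few lines of Python (my own check, not part of the migration program): for a real signal, the negative-frequency half of the spectrum is the conjugate of the positive half, so summing only the non-negative frequency terms and taking the real part recovers the signal.

```python
import numpy as np

# Rebuild a real signal from its non-negative frequencies only: DC and
# Nyquist terms counted once, interior positive frequencies doubled,
# then the real part taken at the end.
nt = 64
sig = np.random.default_rng(0).standard_normal(nt)
S = np.fft.fft(sig)

t = np.arange(nt)
rebuilt = S[0].real * np.ones(nt)                 # DC term
rebuilt += S[nt // 2].real * np.cos(np.pi * t)    # Nyquist term
for k in range(1, nt // 2):                       # positive frequencies, doubled
    rebuilt += 2.0 * (S[k] * np.exp(2j * np.pi * k * t / nt)).real
rebuilt /= nt
assert np.allclose(rebuilt, sig)
```

This is why the program can sum over half the frequency axis and finish with a real part.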
The output of the program is shown in Figure 4.12. Mainly, you see semicircle
approximations. There are also some artifacts at late time that may be ω-domain
wraparounds. The input pulses were apparently sufficiently broad-banded in dip that
the figure provides a preview of the fact, to be proved later, that the actual semicircle
approximation is an ellipse going through the origin.

Figure 4.12: Output of the pro-


gram kjartjac: semicircle ap-
proximations. omx-kjartjac
[ER]

Notice that the waveform of the original pulses was a symmetric function of time,
whereas the semicircles exhibit a waveform that is neither symmetric nor antisymmet-
ric, but is a 45°-phase-shifted pulse. Waves from a point in a three-dimensional world
would have a phase shift of 90°. Waves from a two-dimensional exploding reflector in
a three-dimensional world have the 45° phase shift.