invariant.
All the same considerations apply to the horizontal space axis x. On space axes, a new concern is the seismic velocity v. If it is space-variable, say v(x), then the operation of extrapolating wavefields upward and downward can no longer be expressed as a product in the $k_x$-domain. Wave-extrapolation procedures must abandon the spatial frequency domain and go to finite differences. The alternative again is all kinds of contortions (such as stretching the x-axis) to try to make things appear to be space-invariant.
Fourier methods are global. That is, the entire dataset must be in hand before processing can begin. Remote errors and truncations can have serious local effects. On the other hand, finite-difference methods are local. Data points are directly related only to their neighbors. Remote errors propagate slowly.
Some problems of the Fourier domain have just been summarized. The problems of the space domain will be shown in this chapter and chapter 9. Seismic data processing is a multidimensional task, and the different dimensions are often handled in different ways. But if you are sure you are content with the Fourier domain, then you can skip much of this chapter and jump directly to chapter 6, where you can learn about shot-to-geophone offset, stacking, and migration before stack.
The parabolic wave equation was originally introduced to speed finite-difference wave modeling. The following introduction to the parabolic wave equation is via my original transformation method.
The difficulty prior to 1969 came from an inappropriate assumption central to all then-existing seismic wave theory, namely, the horizontal-layering assumption. Ray tracing was the only way to escape this assumption, but ray tracing seemed to ignore waveform modeling. In petroleum exploration almost all wave theory further limited itself to vertical incidence. The road to success lay in expanding ambitions from vertical incidence to include a small angular bandwidth around vertical incidence. This was achieved by abandoning much known, but cumbersome, seismic theory.
A vertically downgoing plane wave is represented mathematically by the equation
$$P(t, x, z) \;=\; P_0\; e^{\,-\,i\omega\,(t - z/v)} \qquad (4.2)$$
In this expression, $P_0$ is absolutely constant. A small departure from vertical incidence can be modeled by replacing the constant $P_0$ with something, say, Q(x, z), which is not strictly constant but varies slowly.
$$P(t, x, z) \;=\; Q(x, z)\; e^{\,-\,i\omega\,(t - z/v)} \qquad (4.3)$$
Inserting (4.3) into the scalar wave equation $P_{xx} + P_{zz} = P_{tt}/v^2$ yields
$$\left[\, \frac{\partial^2}{\partial x^2} \;+\; \left( \frac{i\omega}{v} + \frac{\partial}{\partial z} \right)^{\!2} \,\right] Q \;=\; -\,\frac{\omega^2}{v^2}\, Q$$
$$\frac{\partial^2 Q}{\partial x^2} \;+\; \frac{2\,i\omega}{v}\,\frac{\partial Q}{\partial z} \;+\; \frac{\partial^2 Q}{\partial z^2} \;=\; 0 \qquad (4.4)$$
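The algebra above can be checked mechanically. The following sketch (Python with sympy; an illustration added here, not part of the original derivation) substitutes the trial solution (4.3) into the scalar wave equation and recovers (4.4):

import sympy as sp

x, z, t, v, w = sp.symbols('x z t v omega', positive=True)
Q = sp.Function('Q')(x, z)
P = Q * sp.exp(-sp.I*w*(t - z/v))                  # trial solution (4.3)

# residual of the scalar wave equation  Pxx + Pzz - Ptt/v^2
residual = sp.diff(P, x, 2) + sp.diff(P, z, 2) - sp.diff(P, t, 2)/v**2

# divide out the carrier wave; the omega^2 terms cancel and (4.4) remains
print(sp.simplify(residual * sp.exp(sp.I*w*(t - z/v))))
# expect:  Q_xx + (2*I*omega/v) Q_z + Q_zz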
The wave equation has been reexpressed in terms of Q(x, z). So far no approximations have been made. To require the wavefield to be near to a plane wave, Q(x, z) must be near to a constant. The appropriate means (which caused some controversy when it was first introduced) is to drop the highest depth derivative of Q, namely, $Q_{zz}$. This leaves us with the parabolic wave equation
$$\frac{\partial Q}{\partial z} \;=\; \frac{v}{-\,2\,i\omega}\;\frac{\partial^2 Q}{\partial x^2} \qquad (4.5)$$
When I first introduced equation (4.5) for use in seismology, I thought its most important property was this: For a wavefield close to a vertically propagating plane wave, the second x-derivative is small, hence the z-derivative is small. Thus, the finite-difference method should allow a very large $\Delta z$ and thus be able to treat models more like the earth, and less like a drumhead. I soon realized that the parabolic wave equation is also just what is needed for seismic imaging because you can insert it in an equation like (4.1). (Curiously, equation (4.5) also happens to be the Schroedinger equation of quantum mechanics.)
I called equation (4.5) the 15° equation. After using it for about a year I discovered a way to improve on it by estimating the dropped $\partial_{zz}$ term. Differentiate
equation (4.5) with respect to z and substitute the result back into equation (4.4), getting
$$\frac{\partial^2 Q}{\partial x^2} \;+\; \frac{2\,i\omega}{v}\,\frac{\partial Q}{\partial z} \;+\; \frac{v}{-\,2\,i\omega}\,\frac{\partial^3 Q}{\partial z\,\partial x^2} \;=\; 0 \qquad (4.6)$$
I named equation (4.6) the 45° migration equation. It is first order in $\partial_z$, so it requires only a single surface boundary condition; however, downward continuation will require something more complicated than equation (4.1).
The above approach, the transformation approach, was and is very useful. But people were confused by the dropping and estimating of the $\partial_{zz}$ derivative, and a philosophically more pleasing approach was invented by Francis Muir: a way of getting equations to extrapolate waves at wider angles by fitting the dispersion relation of a semicircle by polynomial ratios.
Bringing equation (4.9) into the space domain is not simply a matter of substituting a second x-derivative for $k_x^2$. The problem is the meaning of the square root of a differential operator. The square root of a differential operator is not defined in undergraduate calculus courses, and there is no straightforward finite-difference representation. The square root becomes meaningful only when the square root is regarded as some kind of truncated series expansion. It will be shown in chapter 8 that the Taylor series is a poor choice. Francis Muir showed that my original 15° and
45° methods were just truncations of a continued fraction expansion. To see this, define
$$X \;=\; \frac{v\,k_x}{\omega} \qquad\text{and}\qquad R \;=\; \frac{v\,k_z}{\omega} \qquad (4.10)$$
So equation (4.8) is more simply and abstractly written as
$$R \;=\; \sqrt{1 - X^2} \qquad (4.11)$$
which you recognize as meaning that cosine is the square root of one minus sine squared. The desired polynomial ratio of order n will be denoted $R_n$, and it will be determined by the recurrence
$$R_{n+1} \;=\; 1 \;-\; \frac{X^2}{1 + R_n} \qquad (4.12)$$
The recurrence is a guess that we verify by seeing what it converges to (if it converges). Set $n = \infty$ in (4.12) and solve:
$$R_\infty \;=\; 1 - \frac{X^2}{1 + R_\infty}$$
$$R_\infty\,(1 + R_\infty) \;=\; 1 + R_\infty - X^2$$
$$R_\infty^2 \;=\; 1 - X^2 \qquad (4.13)$$
The square root of (4.13) gives the required expression (4.11). Geometrically, (4.13) says that the cosine squared of the incident angle equals one minus the sine squared, and truncating the expansion leads to angle errors. Muir said, and you can verify, that his recurrence relationship formalizes what I was doing by re-estimating the $\partial_{zz}$ term. Although it is pleasing to think of large values of n, in real life only the low-order terms in the expansion are used. Table 4.1 shows the result of three Muir iterations beginning from $R_0 = 1$.
For various historical reasons, the equations in Table 4.1 are often referred to as the 5°, 15°, and 45° equations, respectively, the names giving a reasonable qualitative (but poor quantitative) guide to the range of angles that are adequately handled. A trade-off between complexity and accuracy frequently dictates choice of the 45° equation. It then turns out that a slightly wider range of angles can be accommodated if the recurrence is begun with something like $R_0 = \cos 45°$. Figure 4.1 shows some plots.
5°:  $R_0 \;=\; 1$

15°:  $R_1 \;=\; 1 - \dfrac{X^2}{2}$

45°:  $R_2 \;=\; 1 - \dfrac{X^2}{2 - \dfrac{X^2}{2}}$

60°:  $R_3 \;=\; 1 - \dfrac{X^2}{2 - \dfrac{X^2}{2 - \dfrac{X^2}{2}}}$

Table 4.1: The result of three Muir iterations beginning from $R_0 = 1$.
Figure 4.1: Dispersion relations of Table 4.2. The curve labeled 45°+ was constructed with $R_0 = \cos 45°$. It fits exactly at 0° and 45°. omx-disper [NR]
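A few lines of code make the recurrence concrete. The sketch below (Python; added for illustration, not from the original text) iterates (4.12) starting from $R_0 = 1$ and compares each truncation with the exact semicircle (4.11); the last line starts instead from $R_0 = \cos 45°$, which is assumed here, with two iterations, to correspond to the curve labeled 45°+ in Figure 4.1.

import numpy as np

def muir(X, R0, niter):
    # iterate the Muir recurrence R_{n+1} = 1 - X^2/(1 + R_n)
    R = np.full_like(X, R0)
    for _ in range(niter):
        R = 1.0 - X**2 / (1.0 + R)
    return R

X = np.linspace(0.0, 0.7, 8)                 # X = v*kx/omega = sine of the dip angle
exact = np.sqrt(1.0 - X**2)                  # semicircle, equation (4.11)
for n in range(4):                           # the 5, 15, 45, 60 degree truncations
    print(n, np.max(np.abs(muir(X, 1.0, n) - exact)))
print('45+', np.max(np.abs(muir(X, np.cos(np.pi/4), 2) - exact)))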
5°:  $k_z \;=\; \dfrac{\omega}{v}$

15°:  $k_z \;=\; \dfrac{\omega}{v} - \dfrac{v\,k_x^2}{2\,\omega}$

45°:  $k_z \;=\; \dfrac{\omega}{v} - \dfrac{k_x^2}{\dfrac{2\,\omega}{v} - \dfrac{v\,k_x^2}{2\,\omega}}$

Table 4.2: As displayed in Figure 4.1, the dispersion relations tend toward a semicircle.
5°:  $\dfrac{\partial P}{\partial z} \;=\; i\,\dfrac{\omega}{v}\,P$

15°:  $\dfrac{\partial P}{\partial z} \;=\; i\left(\dfrac{\omega}{v} - \dfrac{v\,k_x^2}{2\,\omega}\right)P$

45°:  $\dfrac{\partial P}{\partial z} \;=\; i\left(\dfrac{\omega}{v} - \dfrac{k_x^2}{\dfrac{2\,\omega}{v} - \dfrac{v\,k_x^2}{2\,\omega}}\right)P$

Table 4.3: Wave-extrapolation equations.
The retarded variable Q is defined by
$$P(z, \omega) \;=\; Q(z, \omega)\; \exp\!\left( i\omega \int_0^z \frac{dz}{v(z)} \right) \qquad (4.15)$$
Next, substitute (4.15) and (4.17) into Table 4.3 to obtain the retarded equations in Table 4.4.
5°:  $\dfrac{\partial Q}{\partial z} \;=\; \text{zero}\cdot Q \;+\; i\omega\left(\dfrac{1}{v} - \dfrac{1}{v(z)}\right)Q$

15°:  $\dfrac{\partial Q}{\partial z} \;=\; -\,i\,\dfrac{v\,k_x^2}{2\,\omega}\,Q \;+\; i\omega\left(\dfrac{1}{v} - \dfrac{1}{v(z)}\right)Q$

45°:  $\dfrac{\partial Q}{\partial z} \;=\; -\,i\,\dfrac{k_x^2}{\dfrac{2\,\omega}{v} - \dfrac{v\,k_x^2}{2\,\omega}}\,Q \;+\; i\omega\left(\dfrac{1}{v} - \dfrac{1}{v(z)}\right)Q$

Table 4.4: Retarded wave-extrapolation equations.
general:  $\dfrac{\partial Q}{\partial z} \;=\; \text{diffraction} \;+\; \text{thin lens}$

5°:  $\dfrac{\partial Q}{\partial z} \;=\; \text{zero}$

15°:  $\dfrac{\partial Q}{\partial z} \;=\; \dfrac{v(x,z)}{-\,2\,i\omega}\,\dfrac{\partial^2 Q}{\partial x^2}$

45°:  $\left[\,1 - \left(\dfrac{v(x,z)}{2\,i\omega}\right)^{\!2}\dfrac{\partial^2}{\partial x^2}\right]\dfrac{\partial Q}{\partial z} \;=\; \dfrac{v(x,z)}{-\,2\,i\omega}\,\dfrac{\partial^2 Q}{\partial x^2}$

Table 4.5: Diffraction equations in the space domain.
Formal justification of the splitting process is found in chapter 5. The first equation, called the lens equation, is solved analytically:
$$Q(z_2) \;=\; Q(z_1)\;\exp\left\{\, i\omega \int_{z_1}^{z_2} \left[\frac{1}{v(x,z)} - \frac{1}{v(z)}\right] dz \right\} \qquad (4.20)$$
Migrations that include the lens equation are called depth migrations. The lens term is often omitted, giving a time migration.
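A single lens step is nothing more than a trace-by-trace phase factor. The sketch below (Python, illustrative only; the function name and arguments are mine, not the book's) applies equation (4.20) over one depth step, with v_ref standing in for the reference velocity v(z) used in the retardation:

import numpy as np

def lens_step(Q, w, v_xz, v_ref, dz):
    # thin-lens phase of equation (4.20) over one depth step dz
    # Q: complex wavefield Q(x); v_xz: local velocity v(x,z) along x; v_ref: scalar v(z)
    return Q * np.exp(1j * w * (1.0/v_xz - 1.0/v_ref) * dz)

Wherever v(x, z) equals the reference velocity the factor is unity, which is why omitting it (time migration) matters only where the velocity varies laterally.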
Observe that the diffraction parts of Tables 4.3 and 4.4 are the same. Let us use them and equation (4.19) to define a table of diffraction equations. Substitute $\partial/\partial x$ for $i k_x$ and clear $\partial/\partial x$ from the denominators to get Table 4.5.
$$\frac{\partial^2 D}{\partial z\,\partial t} \;=\; +\,\frac{v}{2}\,\frac{\partial^2 D}{\partial x^2}$$
$$\frac{\partial^2 U}{\partial z\,\partial t} \;=\; -\,\frac{v}{2}\,\frac{\partial^2 U}{\partial x^2}$$

Table 4.6: Time-domain equations for downgoing and upcoming wave diffraction with retardation and the 15° approximation.
With D denoting a downgoing wavefield and U an upcoming wavefield, equation (4.21), for example, is found in Table 4.6.
Using the exploding-reflector concept, it is the upcoming wave equation that is found in both migration and diffraction programs. The downgoing wave equation is useful for modeling and migration procedures that are more elaborate than those based on the exploding-reflector concept (chapter 6).
EXERCISES:
1 Consider a tilted straight line tangent to a circle, as shown in figure 4.2. Use this line to initialize the Muir square-root expansion. State equations and plot them ($-2 \le X \le +2$) for the next two Muir semicircle approximations.
[Differencing star for the inflation-of-money equation (−1.1 above, +1.0 below, time running downward) and the data table it generates: 2.000, 2.200, 2.420, 2.662.]
Since the data in the data table satisfy the difference equations (4.22) and (4.23), the differencing star may be laid anywhere on top of the data table, the numbers in the star may be multiplied by those in the underlying table, and the resulting cross products will sum to zero. On the other hand, if all but one number (the initial condition) in the data table were missing, then the rest of the numbers could be filled in, one at a time, by sliding the star along, taking the difference equations to be true, and solving for the unknown data value at each stage.

Less trivial examples utilizing the same differencing star arise when the numerical constant .10 is replaced by a complex number. Such examples exhibit oscillation as well as growth and decay.
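Sliding the star along the table is the same thing as running the recursion directly; a minimal sketch (Python, illustrative):

q = 2.000
for _ in range(3):              # fill in the table one value at a time
    q = 1.1 * q                 # difference equation  q_{t+1} - 1.1 q_t = 0
    print(round(q, 3))          # 2.2, 2.42, 2.662
# replacing the rate .10 by a complex number, e.g. q = (1 + 0.1j)*q,
# gives values that rotate in phase (oscillate) while they grow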
There is no reason the q on the right side of equation (4.24) cannot be averaged at time t with time $t + \Delta t$, thus centering the whole equation at $t + \Delta t/2$. When writing difference equations, it is customary to write $q(t + \Delta t)$ more simply as $q_{t+1}$. (Formally one should say $t = n\Delta t$ and write $q_{n+1}$ instead of $q_{t+1}$, but helpful mnemonic information is carried by using t as the subscript instead of some integer like n.) Thus, a centered approximation of (4.24) is
$$q_{t+1} - q_t \;=\; r\,\Delta t\;\frac{q_{t+1} + q_t}{2} \qquad (4.26)$$
Letting $\alpha = r\,\Delta t/2$, this becomes
$$(1 - \alpha)\,q_{t+1} \;-\; (1 + \alpha)\,q_t \;=\; 0 \qquad (4.27)$$
which is representable as the difference star
$$\begin{array}{c} -(1+\alpha) \\ +(1-\alpha) \end{array} \qquad t\downarrow$$
For a fixed $\Delta t$ this star gives a more accurate solution to the differential equation (4.24) than does the star for the inflation of money. The reasons for the names "explicit method" and "implicit method" above will become clear only after we study a more complicated equation such as the heat-flow equation.
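The accuracy claim is easy to test. In the sketch below (Python; the rate r = 0.1 and the ten unit steps are my own choices for illustration), the centered star of (4.27) tracks the exact exponential much more closely than the inflation-of-money star does:

import math

r, dt, nt = 0.1, 1.0, 10
alpha = r*dt/2.0
q_exp = q_cen = 1.0
for _ in range(nt):
    q_exp = (1.0 + r*dt) * q_exp                    # explicit (inflation-of-money) star
    q_cen = (1.0 + alpha)/(1.0 - alpha) * q_cen     # centered star, equation (4.27)
print(math.exp(r*dt*nt), q_exp, q_cen)              # roughly 2.718, 2.594, 2.72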
[Data table for the heat-flow equation, with initial conditions along the top row and side conditions down both edges, and the explicit differencing star laid on top of it: three entries on the time-t row and a single entry beneath for the unknown.]
Table 4.7: Differencing star and table for one-dimensional heat-flow equation.
Equation (4.32) can be explicitly solved for q for any x at the particular time t + 1 given q at all x for the particular time t, and hence the name explicit method.

Equation (4.32) can be interpreted geometrically as a computational star in the (x, t)-plane, as depicted in Table 4.7. By moving the star around in the data table you will note that it can be positioned so that only one number at a time (the 1) lies over an unknown element in the data table. This enables the computation of subsequent rows beginning from the top. By doing this you are solving the partial-differential equation by the finite-difference method. There are many possible arrangements of initial and side conditions, such as zero-value side conditions. Next is a computer program and its result.
# Explicit heat-flow equation
real q(12), qp(12)
nx = 12
do ia= 1, 2 { # stable and unstable cases
alpha = ia*.3333; write(6,'(/"alpha =",f5.2)') alpha
do ix= 1,6 { q(ix) = 0.} # Initial temperature step
do ix= 7,12 { q(ix) = 1.}
do it= 1, 6 {
write(6,'(20f6.2)') (q(ix),ix=1,nx)
do ix= 2, nx-1
qp(ix) = q(ix) + alpha*(q(ix-1)-2.*q(ix)+q(ix+1))
qp(1) = qp(2); qp(nx) = qp(nx-1)
do ix= 1, nx
q(ix) = qp(ix)
}
}
call exit(0); end
alpha = 0.33
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.00 0.33 0.67 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.11 0.33 0.67 0.89 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.04 0.15 0.37 0.63 0.85 0.96 1.00 1.00 1.00
0.00 0.00 0.01 0.06 0.19 0.38 0.62 0.81 0.94 0.99 1.00 1.00
0.00 0.00 0.02 0.09 0.21 0.40 0.60 0.79 0.91 0.98 1.00 1.00
alpha = 0.67
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.00 0.67 0.33 1.00 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.00 0.44 0.00 1.00 0.56 1.00 1.00 1.00 1.00
0.00 0.00 0.00 0.30 -0.15 0.96 0.04 1.15 0.70 1.00 1.00 1.00
0.00 0.00 0.20 -0.20 0.89 -0.39 1.39 0.11 1.20 0.80 1.00 1.00
0.13 0.13 -0.20 0.79 -0.69 1.65 -0.65 1.69 0.21 1.20 0.87 0.87
For the larger value of alpha, the solution in the interior region of the data table contains growing oscillations. What is happening is that the low-frequency part of the solution is OK (for a while), but the high-frequency part is diverging. The precise reason the divergence occurs is the subject of some mathematical analysis that will be postponed till page ??. At wavelengths long compared to $\Delta x$ or $\Delta t$, we expect the difference approximation to agree with the true heat-flow equation, smoothing out irregularities in temperature. At short wavelengths the wild oscillation shows that the difference equation can behave in a way almost opposite to the way the differential equation behaves. The short-wavelength discrepancy arises because difference operators become equal to differential operators only at long wavelengths. The divergence of the solution is a fatal problem because the subsequent round-off error will eventually destroy the low frequencies too.
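A quick check anticipates the analysis postponed above. Substituting a trial solution q proportional to $G^t e^{ikx}$ into the explicit scheme gives a growth factor $G = 1 - 4\alpha \sin^2(k\,\Delta x/2)$ per time step; the sketch below (Python, illustrative) evaluates it for the two values of alpha used in the program:

import numpy as np

for alpha in (1.0/3.0, 2.0/3.0):
    k = np.linspace(0.0, np.pi, 201)               # k*dx from zero to Nyquist
    G = 1.0 - 4.0*alpha*np.sin(k/2.0)**2           # growth factor per time step
    print(alpha, np.abs(G).max())                  # 1.0 (stable) and 5/3 (divergent)

For alpha = 2/3 the shortest wavelengths are multiplied by −5/3 on every step, which is exactly the growing, sign-alternating behavior seen in the printout.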
By supposing that the instability arises because the time derivative is centered at a slightly different time $t + 1/2$ than the second x-derivative at time t, we are led to the so-called leapfrog method, in which the time derivative is taken as a difference between $t-1$ and $t+1$:
$$\frac{\partial q}{\partial t} \;\approx\; \frac{q_{t+1} - q_{t-1}}{2\,\Delta t} \qquad (4.33)$$
The resulting leapfrog differencing star is
$$\begin{array}{ccc} & -1 & \\ -2\alpha & +4\alpha & -2\alpha \\ & +1 & \end{array} \qquad t\downarrow$$
Here the result is even worse. A later analysis shows that the solution is now divergent for all real numerical values of $\alpha$. Although it was a good idea to center both derivatives in the same place, it turns out that it was a bad idea to express a first derivative over a span of more mesh points. The enlarged operator has two solutions in time instead of just the familiar one. The numerical solution is the sum of the two theoretical solutions, one of which, unfortunately (in this case), grows and oscillates for all real values of $\alpha$.
To avoid all these problems (and get more accurate answers as well), we now turn
to some slightly more complicated solution methods known as implicit methods.
$$\begin{array}{ccc} -\alpha & 2\alpha - 1 & -\alpha \\ -\alpha & 2\alpha + 1 & -\alpha \end{array} \qquad t\downarrow \qquad (4.36)$$
When placing this star over the data table, note that, typically, three elements at a time cover unknowns. To say the same thing with equations, move all the t + 1 terms in (4.35) to the left and the t terms to the right, obtaining
$$-\,\alpha\,q^{x+1}_{t+1} + (1 + 2\alpha)\,q^x_{t+1} - \alpha\,q^{x-1}_{t+1} \;=\; \alpha\,q^{x+1}_t + (1 - 2\alpha)\,q^x_t + \alpha\,q^{x-1}_t \qquad (4.37)$$
Now think of the left side of equation (4.37) as containing all the unknown quantities and the right side as containing all known quantities. Everything on the right can be combined into a single known quantity, say, $d^x_t$. Now we can rewrite equation (4.37) as a set of simultaneous equations. For definiteness, take the x-axis to be limited to five points. Then these equations are:
$$\begin{bmatrix}
e_{\rm left} & -\alpha & 0 & 0 & 0 \\
-\alpha & 1+2\alpha & -\alpha & 0 & 0 \\
0 & -\alpha & 1+2\alpha & -\alpha & 0 \\
0 & 0 & -\alpha & 1+2\alpha & -\alpha \\
0 & 0 & 0 & -\alpha & e_{\rm right}
\end{bmatrix}
\begin{bmatrix} q^1_{t+1} \\ q^2_{t+1} \\ q^3_{t+1} \\ q^4_{t+1} \\ q^5_{t+1} \end{bmatrix}
\;=\;
\begin{bmatrix} d^1_t \\ d^2_t \\ d^3_t \\ d^4_t \\ d^5_t \end{bmatrix} \qquad (4.38)$$
Equation (4.37) does not give us each $q^x_{t+1}$ explicitly, but equation (4.38) gives them implicitly by the solution of simultaneous equations.

The values $e_{\rm left}$ and $e_{\rm right}$ are adjustable and have to do with the side boundary conditions. The important thing to notice is that the matrix is tridiagonal, that is,
except for three central diagonals all the elements of the matrix in (4.38) are zero. The solution to such a set of simultaneous equations may be economically obtained. It turns out that the cost is only about twice that of the explicit method given by (4.32). In fact, this implicit method turns out to be cheaper, since the increased accuracy of (4.37) over (4.32) allows the use of a much larger numerical choice of $\Delta t$. A program that demonstrates the stability of the method, even for large $\Delta t$, is given next.

A tridiagonal simultaneous-equation-solving subroutine rtris() is explained in the next section. The results are stable, as you can see.
a = 8.00
0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1.00
0.17 0.17 0.21 0.30 0.47 0.76 0.24 0.53 0.70 0.79 0.83 0.83
0.40 0.40 0.42 0.43 0.40 0.24 0.76 0.60 0.57 0.58 0.60 0.60
0.44 0.44 0.44 0.44 0.48 0.68 0.32 0.52 0.56 0.56 0.56 0.56
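The Ratfor program that produced this printout is not reproduced above; the following Python sketch of the update (4.37) (my own illustration, with zero-slope ends folded into the end equations, so its boundary handling is not identical to the program's) shows the same qualitative behavior: even with alpha = 8 the solution stays bounded.

import numpy as np

nx, alpha, nt = 12, 8.0, 4
q = np.array([0.0]*6 + [1.0]*6)                  # temperature step, as in the text

def tridiag(diag, off):
    return (np.diag(np.full(nx, diag))
            + np.diag(np.full(nx-1, off), 1)
            + np.diag(np.full(nx-1, off), -1))

L = tridiag(1 + 2*alpha, -alpha)                 # left side of equation (4.37)
R = tridiag(1 - 2*alpha,  alpha)                 # right side of equation (4.37)
L[0, 0] -= alpha; L[-1, -1] -= alpha             # zero-slope ends: ghost point equals end point
R[0, 0] += alpha; R[-1, -1] += alpha
for _ in range(nt):
    q = np.linalg.solve(L, R @ q)                # one implicit step
    print(np.round(q, 2))

A dense solve is used here purely for brevity; the whole point of the tridiagonal structure, of course, is that the subroutine rtris() below does the same job in order-n operations.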
Compare (4.43) to (4.40) to see recursions for the new unknowns $e_j$ and $f_j$:
$$e_j \;=\; \frac{-\,a_j}{b_j + c_j\,e_{j-1}} \qquad (4.44)$$
$$f_j \;=\; \frac{d_j - c_j\,f_{j-1}}{b_j + c_j\,e_{j-1}} \qquad (4.45)$$
First a boundary condition for the left-hand side must be given. This may involve one or two points. The most general possible end condition is a linear relation like equation (4.40) at j = 0, namely, $q_0 = e_0 q_1 + f_0$. Thus, the boundary condition must give us both $e_0$ and $f_0$. With $e_0$ and all the $a_j, b_j, c_j$, we can use (4.44) to compute all the $e_j$.
On the right-hand boundary we need a boundary condition. The general two-point boundary condition is
$$c_n\,q_{n-1} \;+\; e_{\rm right}\,q_n \;=\; d_n \qquad (4.46)$$
Equation (4.46) includes as special cases the zero-value and zero-slope boundary conditions. Equation (4.46) can be compared to equation (4.41) at its end.
$$q_{n-1} \;=\; e_{n-1}\,q_n \;+\; f_{n-1} \qquad (4.47)$$
Both $q_n$ and $q_{n-1}$ are unknown, but in equations (4.46) and (4.47) we have two equations, so the solution is easy. The final step is to take the value of $q_n$ and use it in (4.41) to compute $q_{n-1}, q_{n-2}, q_{n-3}$, etc. The subroutine rtris() solves equation (4.38) for q where n=5, endl= $e_{\rm left}$, endr= $e_{\rm right}$, a = c = $-\alpha$, and b = $1 + 2\alpha$.
# real tridiagonal equation solver
subroutine rtris( n, endl, a, b, c, endr, d, q)
integer i, n
real q(n), d(n), a, b, c, den, endl, endr
temporary real f(n), e(n)
e(1) = -a/endl; f(1) = d(1)/endl
do i= 2, n-1 {
den = b+c*e(i-1); e(i) = -a/den; f(i) = (d(i)-c*f(i-1))/den
}
q(n) = (d(n)-c*f(n-1)) / (endr+c*e(n-1))
do i= n-1, 1, -1
q(i) = e(i) * q(i+1) + f(i)
return; end
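To check that the recursions (4.44)-(4.47) really solve the system (4.38), here is a direct Python transcription of rtris() compared against a dense solve (an illustration; the constant-coefficient case a = c = $-\alpha$, b = $1 + 2\alpha$ is the one used in the text, and endl = endr = b is just one arbitrary choice for the check):

import numpy as np

def rtris(n, endl, a, b, c, endr, d):
    # forward recursions (4.44)-(4.45), then back substitution (4.47)
    e, f, q = np.zeros(n), np.zeros(n), np.zeros(n)
    e[0], f[0] = -a/endl, d[0]/endl
    for i in range(1, n-1):
        den = b + c*e[i-1]
        e[i], f[i] = -a/den, (d[i] - c*f[i-1])/den
    q[n-1] = (d[n-1] - c*f[n-2]) / (endr + c*e[n-2])
    for i in range(n-2, -1, -1):
        q[i] = e[i]*q[i+1] + f[i]
    return q

n, alpha = 5, 8.0
a = c = -alpha; b = 1 + 2*alpha; endl = endr = b
d = np.arange(1.0, n+1)
M = (np.diag([endl] + [b]*(n-2) + [endr])
     + np.diag([a]*(n-1), 1) + np.diag([c]*(n-1), -1))
print(np.allclose(rtris(n, endl, a, b, c, endr, d), np.linalg.solve(M, d)))   # True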
If you wish to squeeze every last ounce of power from your computer, note some facts about this algorithm. (1) The calculation of $e_j$ depends on the medium through $a_j$, $b_j$, $c_j$, but it does not depend on the solution $q_j$ (even through $d_j$). This means that it may be possible to save and reuse $e_j$. (2) In many computers, division is much slower than multiplication. Thus, the divisor in (4.44) and (4.45) can be inverted once (and perhaps stored for reuse).
The mixed derivative $\partial^3/(\partial x^2\,\partial z)$ has the six-coefficient difference star
$$\frac{1}{\Delta x^2\,\Delta z}\;\begin{array}{ccc} -1 & 2 & -1 \\ 1 & -2 & 1 \end{array}$$
So other than modifying the six coefficients on the star, it adds nothing to the computational cost.
The unknowns $Q_{t+1}$ are a two-dimensional function of x and y that can be denoted by a matrix. Next we will interpret the bracketed expression on the left side. It turns out to be a four-dimensional matrix!
To clarify the meaning of this matrix, a mapping from two dimensions to one will be illustrated. Take the temperature Q to be defined on a 4 × 4 mesh. A natural way of numbering the points on the mesh is
11 21 31 41
12 22 32 42 (4.50)
13 23 33 43
14 24 34 44
For algebraic purposes these sixteen numbers can be mapped into a vector. There are many ways to do this. A simple way would be to associate the locations in (4.50) with vector components by the column arrangement
1 2 3 4
5 6 7 8 (4.51)
9 10 11 12
13 14 15 16
The second difference operator has the following star in the (x, y)-plane:
$$\begin{array}{ccc} & 1 & \\ 1 & -4 & 1 \\ & 1 & \end{array} \qquad (4.52)$$
Lay this star down in the (x, y)-plane (4.51) and move it around. Unfortunately, with just sixteen points, much of what you see is dominated by edges and corners. Try every position of the star that allows the center −4 to overlay one of the sixteen points. Never mind the 1's going off the sides. Start with the −4 in (4.52) over the 1 in the upper left corner of (4.51). Observe 1's on the 2 and the 5. Copy the 1's into the top row of Table 4.8 into the second and fifth columns. Then put the −4 in (4.52) over the 2 in (4.51). Observe 1's on the 1, 3, and 6. Copy the 1's into the next row of Table 4.8. Then put the −4 over the 3. Observe 1's on the 2, 4, and 7. Continue likewise. The 16 × 16 square matrix that results is shown in Table 4.8.
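The star-moving procedure is easy to mechanize. This Python sketch (added for illustration; it is not from the text) visits each of the sixteen mesh points, drops the star (4.52) onto it, and records the neighbors, reproducing Table 4.8:

import numpy as np

n = 4                                        # 4 x 4 mesh, numbered as in (4.51)
A = np.zeros((n*n, n*n), dtype=int)
num = lambda ix, iy: iy*n + ix               # point (ix, iy) -> vector component (0-based)
for iy in range(n):
    for ix in range(n):
        r = num(ix, iy)
        A[r, r] = -4                         # center of the star (4.52)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            jx, jy = ix + dx, iy + dy
            if 0 <= jx < n and 0 <= jy < n:  # 1's that go off the sides are dropped
                A[r, num(jx, jy)] = 1
print(A)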
Now that Table 4.8 has been constructed we can return to the interpretation of equation (4.49). The matrix of unknowns $Q_{t+1}$ has been mapped into a sixteen-point column vector, and the bracketed expression multiplying $Q_{t+1}$ can be mapped into a 16 × 16 matrix. Clearly, the matrix contains zeroes everywhere that Table 4.8 contains dots. It seems fortunate that the table contains many zeroes, and we are led to hope for a rapid solution method for the simultaneous equations. The bad news is that no good method has ever been found. The best methods seem to require effort proportional to $N^3$, where in this case N = 4. Based on our experience in one dimension, those of us who worked on this problem hoped for a method proportional to $N^2$, which is the cost of an explicit method, essentially the cost of computing the right side of (4.41). Even all the features of implicit methods do not justify an additional cost of a factor of N. The next best thing is the splitting method.
EXERCISES:
1 Interpret the inflation-of-money equation when the interest rate is the imaginary number i/10.
2 Write the 45° diffraction equation in (x, z)-space for fixed ω in the form of (4.36).
-4  1  .  .  1  .  .  .  .  .  .  .  .  .  .  .
 1 -4  1  .  .  1  .  .  .  .  .  .  .  .  .  .
 .  1 -4  1  .  .  1  .  .  .  .  .  .  .  .  .
 .  .  1 -4  .  .  .  1  .  .  .  .  .  .  .  .
 1  .  .  . -4  1  .  .  1  .  .  .  .  .  .  .
 .  1  .  .  1 -4  1  .  .  1  .  .  .  .  .  .
 .  .  1  .  .  1 -4  1  .  .  1  .  .  .  .  .
 .  .  .  1  .  .  1 -4  .  .  .  1  .  .  .  .
 .  .  .  .  1  .  .  . -4  1  .  .  1  .  .  .
 .  .  .  .  .  1  .  .  1 -4  1  .  .  1  .  .
 .  .  .  .  .  .  1  .  .  1 -4  1  .  .  1  .
 .  .  .  .  .  .  .  1  .  .  1 -4  .  .  .  1
 .  .  .  .  .  .  .  .  1  .  .  . -4  1  .  .
 .  .  .  .  .  .  .  .  .  1  .  .  1 -4  1  .
 .  .  .  .  .  .  .  .  .  .  1  .  .  1 -4  1
 .  .  .  .  .  .  .  .  .  .  .  1  .  .  1 -4
Table 4.8: The two-dimensional matrix of coefficients for the Laplacian operator.
Notice the weakened waves in the zone of theoretical shadow that appear to arise from a point source on the top corner of the plot. You have probably learned in physics classes of "standing waves." This is what you will see near the reflecting side boundary if you recompute the plot with a single frequency nw=1. Then the plot will acquire a "checkerboard" appearance near the reflecting boundary. Even this figure with nw=4 shows the tendency.
Take any point in (x, z)-space. The signal there will be a superposition of sinusoids of various frequencies, $\omega_j$. We can choose what frequencies we will use in the calculation and what amplitudes and phases we will attach to the initial conditions at those frequencies. Here we will simply take uniformly spaced sinusoids of unit amplitude and no phase. The nw frequencies are $\omega_j = \Delta\omega,\ 2\Delta\omega,\ \ldots,\ n_w\,\Delta\omega$. The lowest frequency $\Delta\omega$ must be inversely proportional to the wavelength $\lambda$:
$$\Delta\omega \;=\; \frac{2\pi\,v}{\lambda}$$
which appears in the program as dw = v * pi2 / lambda. Finally, the time duration of the film loop must equal the period of the lowest-frequency sinusoid
$$N_t\,\Delta t \;=\; \frac{2\pi}{\Delta\omega}$$
This latter equation defines the time interval on the line dt = pi2 / ( nt * dw ).
If you use more frequencies, you might like the result better because the wave pulses will be shorter, and the number of wavelengths between the pulses will increase. Thus the quiet zones between the pulses will get quieter. The frequency components can be weighted differently, but this becomes a digression into simple Fourier analysis.
# from par: integer n3:nt=12, n2:nx=48, n1:nz=96, nw=2, nlam=4
# from par: real dx=2, dz=1, v=1
#
subroutine wavemovie( nt, nx, nz, nw, nlam, dx,dz,v, p, cd, q)
integer it,nt,ix,nx,iz,nz,iw,nw, nlam
real dx,dz,v, phase,pi2,z0,x0,dt,dw,lambda,w,wov,x, p(nz,nx,nt)
complex aa,a,b,c,cshift, cd(nx),q(nx)
lambda=nz*dz/nlam; pi2=2.*3.141592; dw=v*pi2/lambda; dt=pi2/(nt*dw)
x0 = nx*dx/3; z0 = nz*dz/3
call null( p, nz*nx*nt)
do iw = 1,nw { # superimpose nw frequencies
w = iw*dw; wov = w/v # frequency / velocity
do ix = 1,nx { # initial conditions for a
x = ix*dx-x0; # collapsing spherical wave
phase = -wov*sqrt( z0**2+x**2)
q(ix) = cexp( cmplx( 0.,phase))
}
aa = (0.,1.)*dz/(4.*dx**2*wov) # tridiagonal matrix coefficients
a = -aa; b = 1.+2.*aa; c = -aa
do iz = 1,nz { # extrapolation in depth
do ix = 2,nx-1 # diffraction term
cd(ix) = aa*q(ix+1) + (1.-2.*aa)*q(ix) + aa*q(ix-1)
cd(1) = 0.; cd(nx) = 0.
call ctris( nx,-a,a,b,c,-c,cd,q)
# Solves complex tridiagonal equations
cshift = cexp( cmplx( 0.,wov*dz))
do ix = 1,nx # shifting term
q(ix) = q(ix) * cshift
do it=1,nt { # evolution in time
cshift = cexp( cmplx( 0.,-w*it*dt))
do ix = 1,nx
p(iz,ix,it) = p(iz,ix,it) + q(ix)*cshift
}
}
}
return; end
For each z-step the calculation is done in two stages. The first stage is to solve
$$\frac{\partial P}{\partial z} \;=\; \frac{v}{-\,2\,i\omega}\;\frac{\partial^2 P}{\partial x^2} \qquad (4.54)$$
Using the Crank-Nicolson differencing method this becomes
$$\frac{p^x_{z+1} - p^x_z}{\Delta z} \;=\; \frac{v}{-\,2\,i\omega}\left(\frac{p^{x+1}_z - 2\,p^x_z + p^{x-1}_z}{2\,\Delta x^2} \;+\; \frac{p^{x+1}_{z+1} - 2\,p^x_{z+1} + p^{x-1}_{z+1}}{2\,\Delta x^2}\right) \qquad (4.55)$$
Absorb all the constants into one and define
$$\alpha \;=\; \frac{v\,\Delta z}{-\,4\,i\omega\,\Delta x^2} \qquad (4.56)$$
getting
$$p^x_{z+1} - p^x_z \;=\; \alpha\left(p^{x+1}_z - 2\,p^x_z + p^{x-1}_z\right) \;+\; \alpha\left(p^{x+1}_{z+1} - 2\,p^x_{z+1} + p^{x-1}_{z+1}\right) \qquad (4.57)$$
Bring the unknowns to the left:
$$-\,\alpha\,p^{x+1}_{z+1} + (1 + 2\alpha)\,p^x_{z+1} - \alpha\,p^{x-1}_{z+1} \;=\; \alpha\,p^{x+1}_z + (1 - 2\alpha)\,p^x_z + \alpha\,p^{x-1}_z \qquad (4.58)$$
We will solve this as we solved equations (4.37) and (4.38). The second stage is to solve the equation
$$\frac{\partial P}{\partial z} \;=\; \frac{i\omega}{v}\,P \qquad (4.59)$$
analytically by
$$P(z + \Delta z) \;=\; P(z)\;e^{\,i\,\Delta z\,\omega/v} \qquad (4.60)$$
By alternating between (4.58) and (4.60), which are derived from (4.54) and (4.59), the program solves (4.53) by a splitting method. Formal justification of the splitting method follows in chapter 5. The program uses the tridiagonal solver discussed earlier, except the version here, ctris(), has all the real variables changed to complex.
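For readers who find Python easier to scan than Ratfor, here is a compact sketch of one depth step of the splitting just described (illustrative only; it uses a dense solve where wavemovie() calls the complex tridiagonal solver ctris(), and its end conditions are not identical to the program's, although the right-hand-side end values are kept at zero in the same way):

import numpy as np

def depth_step(q, w, v, dx, dz):
    # Crank-Nicolson diffraction solve, equation (4.58), then the shift (4.60)
    nx = len(q)
    alpha = v*dz / (-4j*w*dx**2)                       # equation (4.56)
    d = np.zeros(nx, dtype=complex)                    # right side of (4.58); ends kept zero
    d[1:-1] = alpha*q[2:] + (1 - 2*alpha)*q[1:-1] + alpha*q[:-2]
    A = (np.diag(np.full(nx, 1 + 2*alpha))             # left side of (4.58)
         + np.diag(np.full(nx-1, -alpha), 1)
         + np.diag(np.full(nx-1, -alpha), -1))
    q = np.linalg.solve(A, d)                          # diffraction stage
    return q * np.exp(1j*w*dz/v)                       # shifting stage, equation (4.60)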
Figure 4.5 shows a change of initial conditions where the incoming wave on the top frame is defined to be an impulse, namely, $p(x, z = 0) = (\cdots, 0, 0, 1, 0, 0, \cdots)$. The result is alarmingly noisy. What is happening is that for any frequencies anywhere near the Nyquist frequency, the difference equation departs from the differential equation that it should mimic. This problem is addressed, analyzed, and ameliorated in chapter 8. For now, the best thing to do is to avoid sharp corners in the initial wavefield.
necessarily limited in extent. Given that side boundaries are inevitable, we must think
about them. The subroutine wavemovie() included zero-slope boundary conditions.
This type of boundary treatment resulted from taking
d(1) = 0. ; d(nx) = 0.
The zero slope boundary condition is explicitly visible as identical signal on the
two end columns. Likewise, the zero-value side boundary condition has a column of
zero-valued signal on each side. In each case, the end trace need not be explicitly
present, and it is not present in Dave Hale's coding of the boundary conditions which
I give, but do not derive, below.
cd(1) = aa * p(2) + ( 1. - 2. * aa ) * p( 1) + aa * p0
cd(nx) = aa * p(nx-1) + ( 1. - 2. * aa ) * p(nx) + aa * pnxp1
endl = c * bl + b;
endr = a * br + b
call ctris(nx,endl,a,b,c,endr,cd,p,ce,cf)
Note that bl=br=0 for zero-value boundaries, and bl=br=1 for zero-slope boundaries. Absorbing side boundaries, derived in chapter 9, are obtained by letting bl and br be complex.
Figure 4.10: The accuracy of the x-derivative may be improved by a technique that is analyzed later in chapter 8. Briefly, instead of representing $k_x^2\,\Delta x^2$ by the tridiagonal matrix T with (−1, 2, −1) on the main diagonal, you use $T/(I - T/6)$. Modify the extrapolation analysis by multiplying through by the denominator. Make the necessary changes to the 45° collapsing wave program. Left, without the 1/6 trick; right, with the 1/6 trick. omx-trick [ER]
response in two dimensions carries a 45° phase shift. Even though real life is three-dimensional, the two-dimensional response is appropriate for migrating seismic lines.
Another difference with the film loop program is that the input now has a time axis whereas the output is still a depth axis. It is customary and convenient to reorganize the calculation to plot travel-time depth $\tau$, instead of depth, making the vertical axes on both input and output the same. Using $\tau = z/v$, equivalently $d\tau/dz = 1/v$, the chain rule gives
$$\frac{\partial}{\partial z} \;=\; \frac{\partial \tau}{\partial z}\,\frac{\partial}{\partial \tau} \;=\; \frac{1}{v}\,\frac{\partial}{\partial \tau} \qquad (4.61)$$
Substitution into (4.53) gives
$$\frac{\partial P}{\partial \tau} \;=\; i\omega\,P \;+\; \frac{v^2}{-\,2\,i\omega}\,\frac{\partial^2 P}{\partial x^2} \qquad (4.62)$$
In the program, the time sample size dt = $\Delta t$ and the travel-time depth sample dtau = $\Delta\tau$ are taken to be unity, so the maximum frequency is the Nyquist. Notice that the frequency loop covers only the positive frequency axis. The negative frequencies serve only to keep the time function real, a task that is more easily done by simply taking the real part. A program accompanies.
#% Migration in the (omega,x,z)-domain
subroutine kjartjac()
real p(48,64), pi, alpha, dt, dtau, dw, w0, omega
complex cp(48,64), cd(48), ce(48), cf(48), aa, a, b, c, cshift
integer ix, nx, iz, nz, iw, nw, it, nt, esize
nt= 64; nz= nt; nx= 48; pi= 3.141592
dt= 1.; dtau= 1.; w0=-pi/dt; dw= 2*pi/(dt*nt); nw= nt/2;
alpha = .25 # alpha = v*v*dtau/(4*dx*dx)
do iz= 1, nz { do ix=1,nx { p(ix,iz) = 0.; cp(ix,iz)=0. }}
do it= nt/3, nt, nt/4{
do ix= 1, 4 # Broadened impulse source
{ cp(ix,it) = (5.-ix); cp(ix,it+1) = (5.-ix) }}
call ft2axis( 0, 1., nx,nt, cp)
do iz= 1, nz {
do iw= 2, nw { omega = w0 + dw*(iw-1)
aa = - alpha /( (0.,-1.)*omega )
a = -aa; b = 1.+2.*aa; c = -aa
do ix= 2, nx-1
cd(ix) = aa*cp(ix+1,iw) + (1.-2.*aa)*cp(ix,iw) + aa*cp(ix-1,iw)
cd(1) = 0.; cd(nx) = 0.
call ctris( nx, -a, a, b, c, -c, cd, cp(1,iw))
cshift = cexp( cmplx( 0.,-omega*dtau))
do ix= 1, nx
cp(ix,iw) = cp(ix,iw) * cshift
do ix= 1, nx
p(ix,iz) = p(ix,iz)+cp(ix,iw) # p(t=0) = Sum P(omega)
}}
esize=4
to history: integer n1:nx, n2:nz, esize
call hclose()
call srite( 'out', p, nx*nz*4 )
return; end
The output of the program is shown in Figure 4.12. Mainly, you see semicircle approximations. There are also some artifacts at late time that may be ω-domain wraparounds. The input pulses were apparently sufficiently broad-banded in dip that the figure provides a preview of the fact, to be proved later, that the actual semicircle approximation is an ellipse going through the origin.
Notice that the waveform of the original pulses was a symmetric function of time, whereas the semicircles exhibit a waveform that is neither symmetric nor antisymmetric, but is a 45°-phase-shifted pulse. Waves from a point in a three-dimensional world would have a phase shift of 90°. Waves from a two-dimensional exploding reflector in a three-dimensional world have the 45° phase shift.