A Historical Reflection on Reflections
BILL DRAGOSET, WesternGeco, Houston, USA
In the summer of 1921, a small team of physicists and geologists (William P. Haseman, J. Clarence Karcher, Irving Perrine, and Daniel W. Ohern) performed a historic experiment near the Vines Branch area in south-central Oklahoma. Using a dynamite charge as a seismic source and a special instrument called a seismograph (Figure 1), the team recorded seismic waves that had traveled through the subsurface of the earth. Analysis of the recorded data (Figure 2) showed that seismic reflections from a boundary between two underground rock layers had been detected. Further analysis of the data produced an image of the subsurface, called a seismic reflection profile (Figure 3a), that agreed with a known geologic feature. That result is widely regarded as the first proof that an accurate image of the earth's subsurface could be made using reflected seismic waves.

The Vines Branch experiment was motivated by the possibility of using seismic reflections as a tool in exploration for oil and gas. With the experiment's success, one would have expected immediate financial support for follow-up work. Alas, interest in the new method soon waned due to a coincidental, precipitous, but short-lived drop in the price of oil. Reports vary, but apparently the price fell as low as 5-15 cents per barrel! It wasn't until about 1929, after oil prices had recovered and further experiments were done, that seismic reflections became an accepted method of prospecting for oil.

Business boomed once reflection seismology was a proven technique for finding hydrocarbons. By 1934, Geophysical Service Inc. (GSI), one of the pioneer seismic reflection oil service companies, had over 30 seismic crews exploring for oil and gas. Today, reflection seismology is a thriving business. Seismic reflection data are acquired worldwide in both land and marine environments. For 2005, oil service companies expect total revenue of about US$5 billion from the acquisition and processing of seismic reflection data.

Compared to the Vines Branch result, our present ability to image the earth with seismic reflections is stunning. Figure 4 shows a recent reflection profile as an example. Such astounding advances in technology do not, of course, happen overnight. The story of how seismic exploration technology got from Figure 3a to Figure 4 is a long, fascinating history of interesting discoveries, clever inventions, and the people and companies responsible for them. This article helps commemorate the 75th anniversary of the Society of Exploration Geophysicists by retelling some significant parts of that history.

Fundamentals of reflection seismology. To accommodate readers who may not be familiar with reflection seismology, I begin by briefly explaining its basic concepts. The simplicity of the Vines Branch seismic experiment makes it a good example for this purpose.
(Editor's note: Seismic reflection experiments performed for resource exploration consist of three steps: data acquisition, data processing, and data interpretation. The history of data acquisition and processing is the subject of this article. Rocky Roden and Don Herron discuss interpretation history in the next article.)
Figure 1. Schematic diagram of the seismograph used to detect and record seismic reflections in the 1921 Vines Branch experiment. TRANS refers to amplifier circuits, and the seismic sensors are labeled microphones (Schriever, GEOPHYSICS 1952).
Reflection seismology is based on a simple, familiar phenomenon: echoes. When a compressional seismic wave travels through a material (whether solid, liquid, or gas), part of the wave reflects wherever a change in acoustic impedance occurs (see Box 1 for details). When they occur in the atmosphere, we call these reflections echoes. Thus, we can think of the Vines Branch experiment as creating and measuring echoes from below ground.

The experiment had five main components: a seismic source, devices called receivers to detect the reflections, a recording system to make a permanent record of the reflections, a plan specifying the placement of sources and receivers, and an analysis of the data to produce the final result. The seismic source, or shot, was a small dynamite charge that was ignited in a shallow hole. Figure 3b shows a plan view of the shot and receiver positions for the Vines Branch experiment. The shots were set off one by one. For each shot, two receivers were placed in the ground 300 m away along a line perpendicular to the shot line. A third receiver was placed near the shot hole.

Figure 1 shows how the three detected reflection signals were recorded. The electric impulse from each receiver vibrated a thin mirror, which deflected a light beam across a moving photographic film. The receiver near the shot produced a timing signal. Its recording not only showed when the shot was fired, but also included a signal created by a tuning fork that vibrated at a known frequency. The timing signal was crucial because it allowed the experimenters to determine how long it took the impulsive seismic waves created by the dynamite charges to travel from the surface, to the reflecting horizon, and back to the surface. The experimenters analyzed the data by identifying the arrivals of the reflected impulses in the seismic traces (Figure 2) and using the timing signal to determine the elapsed time between these arrivals and the shot firing times.
Figure 2. Data from three of the shots recorded during the Vines Branch experiment. The tuning fork signal appears at the top of each shot record. Below that are signals detected by the two sensors shown in Figure 1. Each signal recording is called a seismic trace.
Box 1 diagram. P- and S-waves are represented by the dark blue and dark pink shading, respectively. The horizontal black line represents the site of an acoustic impedance change (O. Barkved, et al., Schlumberger Oilfield Review 2004).
Figure 3. Sketch of the 1921 Vines Branch experiment. (a) The very first seismic reflection profile identified a dipping reflecting boundary between two rock layers, the Sylvan shale and the Viola limestone. (b) A plan view of the Vines Branch experiment shows the positions of the shots and receivers and an overhead view of the reflection raypath for the fifth shot in the survey.
These time measurements were then transformed into distances using the propagation rate of the seismic waves in the upper rock layer, which had been determined in an earlier experiment. Using the known source and receiver positions and the measured distances traversed by the reflections, the experimenters constructed the seismic profile of the reflecting horizon (Figure 3a). In contrast, the modern seismic reflection profile shown in Figure 4 was created using thousands of shots and tens of thousands of receiver positions.
Box 1. There are several modes of seismic wave propagation in solid materials. Pressure waves (also called compressional waves, acoustic waves, and P-waves) are characterized by particle motion in the material that is in the same direction as the wave travels. P-waves in fluids such as air and water are also called sound waves. Shear waves (S-waves), on the other hand, have particle motion that is transverse to the propagation direction. S-waves travel at a velocity (VS) that is about half the velocity (VP) of P-waves. The acoustic impedance of a material is defined as the product of its density and P-wave velocity. Because S-waves have different velocities than P-waves, shear-wave impedance is different from acoustic impedance.

Several phenomena occur when a P-wave encounters an impedance contrast. As shown in the diagram, some of the seismic energy is reflected as a P-wave. This is the wave detected in most reflection seismology experiments. Another portion of the energy is transmitted as a P-wave through the impedance discontinuity, but travels onward in a different direction. We say that this wave has been refracted. Finally, if an incident P-wave travels in a direction that is not normal to the impedance boundary, then a final portion of the initial energy is converted into reflected and refracted S-waves. The angle between the direction of travel of a reflected P-wave and the normal to the reflecting boundary is the same as the angle between the incident P-wave and the normal. That is not the case for a converted S-wave reflection.
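To put numbers on the impedance-contrast idea in Box 1, the short Python sketch below evaluates the standard normal-incidence reflection coefficient, R = (Z2 - Z1)/(Z2 + Z1), a formula the box does not spell out. The rock properties used are rounded, textbook-style values chosen only for illustration; they are not taken from the article.

    # Normal-incidence P-wave reflection at an acoustic impedance contrast.
    # Rock properties below are rounded, illustrative values only.

    def acoustic_impedance(density_kg_m3, vp_m_s):
        """Acoustic impedance Z = density x P-wave velocity."""
        return density_kg_m3 * vp_m_s

    def reflection_coefficient(z_upper, z_lower):
        """Fraction of incident P-wave amplitude reflected at normal incidence."""
        return (z_lower - z_upper) / (z_lower + z_upper)

    z_shale = acoustic_impedance(2400.0, 3000.0)   # e.g., a shale: kg/m^3, m/s
    z_lime = acoustic_impedance(2700.0, 5500.0)    # e.g., a limestone beneath it

    r = reflection_coefficient(z_shale, z_lime)
    print(f"reflection coefficient      : {r:+.3f}")
    print(f"reflected energy fraction   : {r**2:.3f}")
    print(f"transmitted energy fraction : {1.0 - r**2:.3f}")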
The digitally recorded data set required an almost unimaginable number of arithmetical operations on high-speed digital computers to create the image. Nevertheless, this massive seismic survey had the same five basic components as the Vines Branch experiment: seismic sources and receivers, a data recorder, a survey design, and data analysis to produce the final image.
Box 2. Dynamic range. The dynamic range of a seismic recording system is the ratio of the largest to smallest signals that it can faithfully detect and record. Seismic reflections have a wide range of amplitudes. The large-amplitude part is not hard to understand (anything that begins with a dynamite blast is obviously pretty loud), but what about the small-amplitude part? There are three phenomena that attenuate seismic reflections: spherical spreading, transmission loss, and intrinsic absorption. Together these effects mean that a seismic recording system needs a dynamic range of 100 decibels (dB) or more.

Spherical spreading. The seismic wave from a localized impulsive source spreads out equally in all directions. After it has traveled some distance, r, the energy in the original impulse is spread out over a spherical surface of radius r. Since energy is conserved and the area of a sphere is proportional to r², the energy per unit area must be proportional to 1/r². The amplitude of a seismic wave is proportional to the square root of its energy. Thus, the amplitude of a seismic reflection is inversely proportional to the distance it has traveled.
Effects of absorption on a seismic impulse. Absorption changes the shape of an ideal seismic shot pulse as distance from the shot increases. The sketches represent particle displacement and particle velocity for a P-wave (After Ricker, GEOPHYSICS 1953).
Transmission loss. As explained in Box 1, a pressure wave that encounters an impedance contrast has its incident energy partitioned into several events. Only one of these is a P-wave that continues onward. Since the signal from a deep reflector may traverse many impedance contrasts on its way down to the reflector and back up to the surface, even a small energy loss per contrast can quickly add up to a significant effect.
Intrinsic absorption. As a seismic wave travels through a material, the particles that make up the material vibrate, generating heat. The energy that goes into that heatintrinsic absorptionis supplied by the seismic wave. Intrinsic absorption attenuates higher frequencies more rapidly than lower frequencies, thereby changing the shape of the seismic pulse. The rate of intrinsic absorption varies greatly from one type of material to another.
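For readers who want to see how these losses stack up, here is a rough back-of-the-envelope sketch in Python. The path lengths, number of interfaces, reflection coefficient, frequency, travel time, and Q value are all illustrative assumptions, not values from the article; the point is simply that the three effects described above quickly add up to many tens of decibels, which (together with strong early arrivals and the ambient noise floor) is why a recording system needs on the order of 100 dB of dynamic range.

    import numpy as np

    # Rough dynamic-range budget comparing a shallow and a deep reflection.
    # All numbers are illustrative assumptions, not values from the article.

    def spreading_loss_db(r_near_m, r_far_m):
        # Amplitude falls off as 1/r, so the loss in dB is 20*log10(r_far/r_near).
        return 20.0 * np.log10(r_far_m / r_near_m)

    def transmission_loss_db(n_interfaces, refl_coeff=0.1):
        # Each interface passes amplitude sqrt(1 - R^2); accumulate over many interfaces.
        per_interface = np.sqrt(1.0 - refl_coeff**2)
        return -20.0 * n_interfaces * np.log10(per_interface)

    def absorption_loss_db(freq_hz, traveltime_s, q=80.0):
        # Constant-Q model: amplitude decays as exp(-pi * f * t / Q).
        return 20.0 * np.log10(np.e) * np.pi * freq_hz * traveltime_s / q

    total_db = (spreading_loss_db(400.0, 8000.0)         # deep path 20x longer
                + transmission_loss_db(100)              # many small impedance contrasts
                + absorption_loss_db(40.0, 4.0))         # 40 Hz, 4 s of two-way time
    print(f"Deep reflection weaker than the shallow one by about {total_db:.0f} dB")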
(Note: For reasons not entirely clear to this author, geophysicists refer to a seismic wave's propagation speed in a material as its velocity even when they mean a scalar quantity. For example, water velocity means the speed of sound in water. From here forward, I will conform to this convention.)

Challenges of reflection seismology. Early practitioners of reflection seismology soon realized that they faced three tough problems: obtaining reflection signals of sufficient quality, constructing accurate subsurface images, and determining the optimal geometric pattern for the shots and receivers. Most major technological innovations made in the seismic reflection method since the Vines Branch experiment were driven by these three challenges. Let's examine these challenges in detail.

As explained in Box 2, the amplitudes of seismic reflections span a large range. This means that the seismic field equipment must have a large dynamic range. That is, it must be capable of detecting and recording with high fidelity both very large and very small seismic reflections. This task is further complicated by the presence of noise. As discussed in Box 3, seismic reflection signals of the weaker variety are often completely masked by noise. Separating a weak signal from the noise can be a daunting problem.

Because construction of a seismic reflection profile is based on measurement of the times at which reflection events occur, it's important that those time measurements are accurate.
Two phenomena come into play here: statics and resolution. A static is a constant time shift that occurs in a seismic trace because of a near-surface condition. The Vines Branch experiment encountered elevation statics (Figure 3a). Note that the reflection profile was constructed relative to the topography of the surface. If the experimenters had instead assumed that the elevation was flat, then the dip of the constructed reflecting horizon would have been wrong.

The earth's weathering layer causes another type of static. The weathering layer is the low-velocity, near-surface zone that is created by the erosional effects of weather. Generally it consists of unconsolidated materials (such as sand, soil, or gravel) that are sometimes mixed with rocks. The thickness of the weathering layer can change dramatically over both short and long distances. Because its velocity usually differs significantly from that of the sediments below, the weathering zone's variation in thickness introduces position-dependent statics into seismic traces. Such statics must be corrected before the traces are suitable for image construction.

Resolution is a measure of the ability to detect the occurrence of two closely spaced reflecting horizons. It depends on the characteristics of the acquisition wavelet (Box 4) and the amount of intrinsic absorption (Box 2). Low resolution degrades the accuracy of time measurements and, thus, the fidelity of the seismic image.

Overcoming the difficulties associated with dynamic range, noise, statics, and resolution still leaves the challenging problem of constructing an image from the data.
Box 3. Noise in seismic data. Noise in seismic data is ubiquitous. It comes from many different sources and in several categories. The figure identifies some noises that occur commonly in land shot records. Ambient noise is always present, even when a shot has not been fired. There are two subcategories of ambient noise. Environmental noise, such as wind and traffic noise, is present independently of the seismic experiment. Intrinsic noise, such as the electronic noise in an amplifier or the swell noise in a seismic streamer, is inherently part of a data acquisition system. Shot-generated noise refers to the parts of a seismic signal, such as multiple reverberations and ground roll, that are considered undesirable because they can obscure reflections. Unlike ambient noise, this type of noise is present only when a shot has been fired.

Noise can be categorized in another way. Noise that is completely unpredictable from one time or position to another is called random or incoherent noise. Noise that has a recognizable spatial or temporal pattern is called coherent noise. The quality of a seismic signal is often expressed by its signal-to-noise ratio (SNR), which is the ratio of the amplitude of the desirable reflection signal to that of the noise.
A single-sensor seismic land record without any filtering applied. As done here, seismic records are usually displayed so that the vertical axis is time and the horizontal axis is offset (source-to-receiver distance). Facility and vehicle noise are ambient noises. The other noises are caused by the shot. Air blast noise is random. The other noises are coherent. The noise identified as surface waves is known as ground roll. Compare to Figure 18 (WesternGeco).
The amplitude of ambient noise typically does not vary with time in a seismic shot record, while the amplitude of seismic reflections decreases rapidly with time (Box 2). Thus, a seismic shot record has a much higher SNR at early times than it does at later times. Often seismic reflections late in a record are invisible among the noise.
The dashed lines in Figure 3b show the raypath along which the detected seismic wave traveled for the fifth shot in the Vines Branch experiment. (Note: The seismic impulse from the shot actually spreads out in all directions. It's the detected seismic wave whose path is illustrated by the dashed line.) How do we know what that raypath was? The experiment measured only the distance traveled by the reflected wave, not the location of the reflection point itself. Given just the distance, the reflection point is constrained to lie somewhere on the surface of an ellipsoid whose two focal points coincide with the shot and receiver locations. So, given all the possible raypaths for the fifth shot, how could the experimenters pick the correct one? First, they knew a priori the dip direction of the reflecting interface and that the line between the shot and receiver was perpendicular to that direction. Second, by assuming that the structure of the reflector varied only in the dip direction, they constrained the reflection point to lie in a vertical plane that was perpendicular to the line connecting the shot and receiver and that included the midpoint between the shot and receiver. The intersection between the vertical plane and the ellipsoid of possible reflecting points is a circle. Now look closely at Figure 3a. For each measurement, the experimenters drew a circular arc, knowing that the measurement's reflection point must lie along that arc. The reflecting horizon was then constructed by finding the smooth surface tangent to all of the circular arcs.
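As a small illustration of that construction, the Python sketch below computes one such arc: given the shot-receiver offset, a measured reflection time, and a propagation velocity, it returns points on the circle of candidate reflection points in the vertical plane through the midpoint. The travel time and velocity used here are made-up values, not the 1921 measurements.

    import numpy as np

    # One measurement constrains the reflection point to an ellipsoid with the
    # shot and receiver at its foci. Its intersection with the vertical plane
    # through the midpoint, perpendicular to the shot-receiver line, is a circle;
    # a few points on that arc are computed below. Velocity and travel time are
    # illustrative, not the Vines Branch field values.

    def candidate_arc(offset_m, traveltime_s, velocity_m_s, n_points=7):
        path_length = velocity_m_s * traveltime_s   # shot-to-reflector-to-receiver distance
        a = path_length / 2.0                       # semi-major axis of the ellipsoid
        c = offset_m / 2.0                          # half the shot-receiver offset
        radius = np.sqrt(a**2 - c**2)               # radius of the circle in the midpoint plane
        angles = np.linspace(np.radians(20.0), np.radians(160.0), n_points)
        horizontal = radius * np.cos(angles)        # distance from the midpoint, along dip
        depth = radius * np.sin(angles)             # depth below the midpoint
        return horizontal, depth

    for x, z in zip(*candidate_arc(offset_m=300.0, traveltime_s=0.5, velocity_m_s=2500.0)):
        print(f"dip-direction distance {x:7.1f} m   depth {z:6.1f} m")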
The image construction technique just described is the essence of a process we now call migration. The migration of the Vines Branch data was particularly simple because only a single reflecting horizon was involved and the experimenters made a number of simplifying assumptions based on a priori knowledge. In general, though, migration can be incredibly challenging. As a seismic wave travels through the earth, each reflecting interface bends the raypath of the transmitted wave (Box 1). Furthermore, the velocity varies from one rock layer to the next, and even within a rock layer as a function of travel direction (a rock property called anisotropy). Consider what this means for a subsurface structure like that shown in Figure 4. The mountain-like shape of the buried salt deflects the transmitted raypaths in all sorts of directions, and the raypaths that travel through the salt experience a different propagation speed than the raypaths that travel through only the nearby sedimentary layers. To construct an accurate subsalt image requires that all raypath bending and velocity variations be accounted for properly. Remember, too, that the shape of the salt varies rapidly in three dimensions, unlike the assumed two-dimensional nature of the reflector in the Vines Branch experiment.
Figure 4. A Gulf of Mexico marine seismic profile from the late 1990s. This vertical profile was extracted from a 3D volume migrated with a 3D prestack wave-equation-based depth migration algorithm. Compare the imaging here to that in Figure 30 (WesternGeco).
A newcomer to reflection seismology might find migration puzzling: it seems as if the subsurface structure must be known before an image of that structure can be constructed. Recall, though, that the Vines Branch experimenters managed to construct an image of the dipping reflector without making any assumptions about its depth. This suggests that knowledge of subsurface velocities combined with an imaging algorithm is sufficient to form an image from seismic traces. For the last 35 years, research in seismic data processing has been dominated by efforts to develop better algorithms to determine velocities from the seismic data and better imaging algorithms to use those velocities. The result is a capability to image increasingly complicated subsurface structures. In spite of that effort, imaging in some areas remains a challenge today.

Seismic survey design is the discipline concerned with selecting the geometric pattern for the positions of the shots and receivers in a seismic survey. The goal of survey design is to find a geometric pattern that minimizes the costs and environmental impact of a survey while simultaneously providing seismic traces with good signal-to-noise ratio and the spatial variety needed to construct a subsurface image.
Because of its small scale, costs and environmental impact were not major concerns for the Vines Branch experiment. However, the experimenters did have to be concerned with the other two issues. First, they selected a distance between the shots and receivers, called the offset, that prevented shot-generated noise (Box 3) from occurring at the same time as the reflection events of interest. Second, they oriented the shots and receivers so that experimental conditions were constant except for the depth of the reflecting horizon they were trying to find. As described above, this orientation allowed a simple construction of the subsurface image.

Seismic survey design is usually much more complicated than it was for the Vines Branch experiment. Survey designers routinely have to contend with ambient and shot-generated noise, fragile environments, man-made and natural obstructions, limited access time, unknown and complicated subsurface structures, bad weather and other natural impediments, limited human and physical resources, and so on. Considering the many difficulties faced by a seismic survey, it's not surprising that many key innovations in reflection seismology involved schemes to improve upon survey design.
Figure 5. Vibroseis correlation. The reference signal represents the shape of a pressure wave sent into the ground by a vibrator. The recorded composite signal contains three seismic events. They cannot be identified readily because each one is stretched out within the composite signal over a time span equal to the duration of the reference signal. After the correlation process (far right), the three events become visible, as if the vibrator had been an impulsive source (Crawford, et al., GEOPHYSICS 1960).
Box 4. Signatures for two types of marine seismic sources. The wavelets in this figure are about 350 ms long. (a) A signature produced by an underwater dynamite blast. (b) A signature of a tuned air-gun array. The large positive peak is a ghost reflection caused by sound that reflected from the ocean surface (Western Geophysical).
Marine seismic surveys face the same fundamental challenges as those acquired on land. Because the two environments are so different, the details of the challenges differ, as do the solutions to them. In the following five sections of this article I describe the major advances that have occurred since the time of the Vines Branch experiment in each of the five components of a seismic survey. The advances include those in both land and marine reflection seismology.

Seismic sources. In the early years of seismic exploration, explosives such as dynamite were the only sources capable of yielding signals strong enough to produce useful seismic shot records. (Note: The collection of traces recorded for each shot location is called a shot record or shot gather.) For land surveys, the explosive charges were usually deployed in shot holes, which were drilled to depths ranging from tens to hundreds of feet. For marine surveys, the charges were detonated in the water at depths ranging from a few to tens of feet. In both cases the size of the charges and the depths at which they were exploded were determined by the local geology, the nature of noise in an area, and the desired depth of seismic wave penetration into the earth.

Aside from their strength, explosives have other advantages when used as seismic sources. They are compact, mobile, and (theoretically, at least) simple to handle and store. When used for land exploration, explosives create a nearly ideal impulsive seismic wavelet, which enhances the seismic resolution. Use of explosives can also decrease source-side statics problems if the charges are placed in drill holes that extend below the weathering layer.

On the other hand, explosives are not an ideal seismic source when used underwater. The bubble of gases created by an underwater explosion oscillates in the water, producing a source wavelet that consists of a long sequence of impulses (Box 4). Seismic records produced with such a source wavelet are unsuitable for interpretation until after digital filtering to collapse the train of impulses.
A seismic wavelet is the signal that would appear in a seismic trace if there were only one reflecting interface. It has several components: The source wavelet (also called the source signature) is the shape of the pressure pulse created by the source. That wavelet is modified by near-surface effects and the response of the receiver and recording equipment to produce the acquisition wavelet. Intrinsic absorption (Box 2) filters the acquisition wavelet in a time-varying manner to produce the seismic wavelet.

For most applications the ideal seismic wavelet is characterized as having a broad, flat spectrum in the frequency domain. In the time domain this means that the wavelet is a single impulse, or spike, which produces seismic profiles with the best possible resolution. In practice, intrinsic absorption limits the usefulness of high frequencies because they are attenuated rapidly in the subsurface. The usefulness of very low frequencies is also limited because noise tends to dominate at that end of the spectrum and near-surface effects, called ghost reflections, attenuate those frequencies. Thus, a typical bandwidth for a seismic wavelet is about 5-80 Hz for shallow reflections and about 5-20 Hz for deep reflections.

Source signatures have a major influence on the seismic wavelet. In marine acquisition a tuned air-gun array is considered a superior source to a dynamite charge because its signature is much closer to an ideal impulse.
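To illustrate the point about a broad, flat spectrum, the following Python sketch builds a zero-phase wavelet from a flat amplitude spectrum over an assumed 5-80 Hz band. The band edges, taper width, and sampling are illustrative choices, and the result is only a stand-in for a real acquisition wavelet.

    import numpy as np

    # Build a zero-phase wavelet whose amplitude spectrum is flat between 5 and
    # 80 Hz (with short cosine tapers at the edges). A broad, flat spectrum in
    # the frequency domain gives a compact, spike-like pulse in the time domain.

    dt, n = 0.002, 512                       # 2-ms samples, ~1-s wavelet window
    freqs = np.fft.rfftfreq(n, dt)
    f_low, f_high, taper = 5.0, 80.0, 5.0

    spectrum = np.zeros_like(freqs)
    spectrum[(freqs >= f_low) & (freqs <= f_high)] = 1.0
    lo = (freqs >= f_low - taper) & (freqs < f_low)
    hi = (freqs > f_high) & (freqs <= f_high + taper)
    spectrum[lo] = 0.5 * (1.0 - np.cos(np.pi * (freqs[lo] - (f_low - taper)) / taper))
    spectrum[hi] = 0.5 * (1.0 + np.cos(np.pi * (freqs[hi] - f_high) / taper))

    wavelet = np.fft.fftshift(np.fft.irfft(spectrum, n))   # zero phase -> symmetric pulse
    peak = int(np.argmax(np.abs(wavelet)))
    width = int(np.sum(np.abs(wavelet) > 0.5 * np.abs(wavelet).max()))
    print(f"peak at sample {peak} of {n}; {width} samples above half of the maximum")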
The so-called debubbling technology to accomplish that did not become readily available until the mid-1960s. One alternative was to detonate the charges at shallow depths so that the gas bubble would reach the surface before any oscillations could occur. Unfortunately, placing charges at such shallow depths also significantly reduces the useful amount of energy created by the explosion. Nevertheless, prior to the mid-1960s, most marine seismic surveys were shot with shallow explosive sources.

During the 1950s (land) and the 1960s (marine), viable alternatives to explosive charges were developed.
Figure 6. A comparison between records from an early vibrator and dynamite shots. Both data sets were acquired in the same area of southeast Montana. (a) The correlated vibrator records. (b) The dynamite shot records. Note the improved SNR in the vibrator records indicated by the clearly visible reflections annotated A, B, and C (Conoco).
Today, two of these, vibroseis and the tuned air-gun array, are the most widely used seismic sources for land and marine surveys, respectively.

Explosives lost their popularity for a variety of reasons. First and foremost, explosives are a hazardous material, resulting in strict safety regulations and procedures for using, handling, storing, and transporting them. This greatly increases the cost of using explosives. For land surveys explosive sources require drill holes, which are time-consuming to prepare and which require extra equipment and personnel.
In addition to the problems just mentioned, explosives have been banned in some offshore areas because of concern over their impact on marine life.

The vibroseis method was invented and developed by John Crawford, Bill Doty, and Milford Lee at Conoco in the early 1950s. The first vibrator generated a continuous source signal using two rotating off-center weights that were geared together so that the vibrations produced by the weights added and cancelled in the vertical and horizontal directions, respectively.
The device was driven by a 25-horsepower gasoline engine. It was designed so that the wheels of a truck could be driven onto it to hold it firmly against the ground while vibrations were produced. The vibration frequency was controlled by the throttle of the gasoline engine. To produce a seismic wave, the device was operated so that over a period of a few seconds the vibration frequency swept through a range of about 20-80 Hz, producing a so-called sweep signal.

In a vibrator record the reflected energy from each reflector is spread over a period of time equal to the duration of the sweep. In this form, the data are not directly useful because the time in the record at which an individual reflection occurs cannot be determined. To fix this problem, a mathematical process called correlation is applied to compress the spread-out energy from each reflector into a compact seismic wavelet. Correlation requires a reference sweep, which is simply a recording of the vibrator signal that was sent into the ground. Figure 5, from a 1960 Conoco paper, shows the correlation concept and how it was implemented at that time. The correlation process required racks of specialized electronic equipment. Today the process is performed routinely by a digital computer.

A few years after the vibroseis method was proven feasible, Conoco engineers built a new version in which the vibrating element was driven hydraulically. Figure 6 shows a comparison between correlated shot records produced by this device and records from the same area shot with dynamite. Today hydraulic vibrators are the seismic source used most often in land surveys. Figure 7 shows a side-by-side view of one of Conoco's early vibrators and its modern counterpart.

Vibrators have several key advantages over explosive charges. First, the vibrator system allows precise control over the seismic signal sent into the ground. The frequencies at the low and high ends of a sweep can be adjusted easily; the length of the sweep controls how much energy is sent into the ground; and the shape of the seismic wavelet can be changed by adjusting the amount of time spent sweeping at each frequency. Second, two or more vibrators can be linked together electronically so that their sweep signals are synchronized. That makes it possible to use more than one vibrator per record to increase the output energy level, and to combine them into source arrays to improve the signal-to-noise ratio in the correlated records (see the section below on seismic survey design). Thus, the vibrator is a versatile seismic source that provides a survey designer with more options than are possible with an explosive source. A third advantage of vibroseis is the suppression of some ambient noise by the correlation process.
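Here is a minimal numerical sketch of that correlation step, in the spirit of Figure 5 but not Conoco's implementation: a synthetic linear 10-80 Hz sweep stands in for the reference signal, three delayed and scaled copies stand in for reflections, and cross-correlation compresses each one back to a short wavelet. All of the parameters are made up for illustration.

    import numpy as np

    # Vibroseis correlation sketch with a synthetic linear sweep and three
    # synthetic reflectors; all parameters are illustrative assumptions.
    dt, sweep_len, record_len = 0.002, 8.0, 12.0
    t_sweep = np.arange(0.0, sweep_len, dt)
    f0, f1 = 10.0, 80.0
    # Linear upsweep: instantaneous frequency runs from f0 to f1 over sweep_len.
    sweep = np.sin(2.0 * np.pi * (f0 + 0.5 * (f1 - f0) / sweep_len * t_sweep) * t_sweep)

    # Earth response: three reflectors (two-way time in seconds, relative amplitude).
    reflectors = [(1.0, 1.0), (2.2, -0.6), (3.1, 0.4)]
    n_rec = int(record_len / dt)
    reflectivity = np.zeros(n_rec)
    for t_r, amp in reflectors:
        reflectivity[int(t_r / dt)] = amp

    composite = np.convolve(reflectivity, sweep)[:n_rec]   # what the geophone records
    n_sweep = len(sweep)
    correlated = np.correlate(composite, sweep, mode="full")[n_sweep - 1:]

    for t_r, _ in reflectors:
        k = int(t_r / dt)
        win = np.abs(correlated[k - 25:k + 26])
        print(f"event put in at {t_r:.1f} s -> correlation peak at "
              f"{(k - 25 + int(np.argmax(win))) * dt:.3f} s")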
Since the beginning of marine seismic surveying, many types of marine sources have been invented, built, and tested. A few of these are in use today, but one of them, the air gun, has far surpassed the others in popularity. Figure 8 shows an air gun manufactured in the 1970s by Bolt Technologies, which, along with Texas Instruments, is a recognized pioneer in the development of air guns. An air gun creates an explosive-like sound by a sudden mechanical release of highly compressed air into the water. The resulting underwater air bubble oscillates, producing a source wavelet similar to that of a dynamite charge. As with underwater explosives, such source wavelets are undesirable. Furthermore, an air gun is a weak source compared to even a small explosive charge.
Figure 7. The small white vehicle is a swinging weight vibrator of the late 1950s and early 1960s. The large vehicle on the right is a buggy-mounted, hydraulically driven vibrator used in the late 1970s and early 1980s. When sweeping, the metal vibrating plate seen between the buggy's tires was lowered to the ground and held in place by the weight of the vehicle (Conoco).
Figure 8. PAR model 6600 air gun. The left-hand section of the gun is a compressed air reservoir. The small, light-colored rectangles in the center portion of the gun are ports through which the air escapes when the gun is fired. A compressed air hose and a cable that provides the firing signal are connected at the right (Courtesy Bolt Technologies).
Figure 9. Tuned air-gun array concept. (a) Signatures from individual airguns have bubble periods that depend on the gun volumes (listed in cubic inches on the left). (b) If the six guns are placed in an array and fired simultaneously, they produce a signature in which the bubble pulses are suppressed. (Dragoset, TLE 2000).
Both of these problems were overcome by the tuned air-gun array concept. As shown in Figure 9a, the periodicity of an air gun's signature depends on the volume of air initially stored in the gun. If an array of different-sized air guns is fired simultaneously, the initial pulses sum together coherently, thereby increasing the strength of the source beyond that of a single gun. The bubble pulses, which occur at different times, do not sum coherently. By carefully selecting gun sizes, an array can be tuned to minimize the size of the bubble pulses relative to that of the primary pulses (Figure 9b). A 1967 paper by Ben Giles of GSI was the first to describe routine marine seismic surveys shot with tuned air-gun arrays.
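A toy calculation shows why tuning works. Each synthetic gun signature below has the same primary spike but a bubble-pulse period that grows with gun volume; summing guns of different volumes reinforces the primary while leaving the bubble pulses scattered in time. The signature model, decay factors, and volume-to-period scaling are simplified assumptions, not a real gun model.

    import numpy as np

    # Toy tuned-array illustration; gun volumes and the signature model are
    # simplified assumptions, not data from any real air-gun array.
    dt, n = 0.002, 500                       # 1-s record at 2-ms sampling
    volumes = [20, 40, 80, 120, 160, 305]    # gun volumes in cubic inches (illustrative)

    def toy_signature(volume_in3):
        sig = np.zeros(n)
        sig[0] = 1.0                                   # primary pulse (same for every gun)
        period = 0.05 * volume_in3 ** (1.0 / 3.0)      # bubble period grows with volume^(1/3)
        for k in range(1, 6):                          # decaying train of bubble pulses
            idx = int(round(k * period / dt))
            if idx < n:
                sig[idx] += 0.6 ** k * (-1) ** (k + 1)
        return sig

    array_signature = sum(toy_signature(v) for v in volumes)
    primary = abs(array_signature[0])
    worst_bubble = np.max(np.abs(array_signature[1:]))
    print(f"primary-to-bubble ratio, single gun : {1.0 / 0.6:.1f}")
    print(f"primary-to-bubble ratio, tuned array: {primary / worst_bubble:.1f}")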
By the mid-1970s, more than 50% of marine seismic surveys used tuned air-gun arrays as the source, and that percentage climbed rapidly during ensuing years. Typical air-gun arrays in use today have 20-30 individual guns arranged in an approximately square array about 15-20 m on a side (Figure 10). By choice of gun sizes and the depth at which an array is deployed, a survey designer has some control over the frequency characteristics of the seismic impulse produced by an array. However, in this regard the air-gun array is not as versatile a seismic source as the land vibrator.

Seismic receivers. All seismic receivers (also called sensors, detectors, or seismometers) are transducers that convert mechanical energy in a seismic wave into an electrical signal. The transducers used on land and in water are called geophones and hydrophones, respectively. The Vines Branch experimenters tested several types of transducers. Interestingly, two of them (a movable coil of wire suspended in a magnetic field and a stack of piezoelectric quartz crystals) are, in much improved forms, the most widely used seismometers today.

The electromagnetic geophone is based on the principle that movement of a coil of wire in a magnetic field induces a voltage in the coil. Within a typical geophone are a spring-mounted coil and a permanent magnet that is rigidly fixed to the geophone's outer case. When the geophone case and magnet experience vertical motion (that is, motion in the direction of the coil's axis), the coil tends to remain at rest because of its inertia. The result is an induced electrical signal in the coil that has a voltage proportional to the velocity of the relative motion.

Constructing a good geophone is an art. The challenges include linearity (the output signal should be linearly proportional to the amplitude of the motion over the desired frequency range); wide dynamic range (the geophone should detect large ground motions without distortion); sensitivity (the geophone should detect small motions down to the level of the intrinsic noise in the ground); and directionality (the geophone should have an output signal only when a vertical motion occurs). To accomplish all of this, a geophone designer can adjust the sizes and weights of the various components, the stiffness of the spring, the number of windings in the coil, and the strength of the magnet. Typically, the linearity specification requires a geophone to be nearly critically damped, with a resonant frequency below the seismic bandwidth.

Vintage 1930s electromagnetic geophones were large, heavy devices. The only permanently magnetic materials available then were weak compared to those of today. Thus, large magnets and coils were needed to obtain the necessary sensitivity. Damping was accomplished mechanically by suspending the moving parts in viscous oil. By 1965, improved materials and the use of electrical damping had reduced the size and weight of geophones dramatically (Figure 11).

Within the past few years a new kind of geophone called a MEMS sensor has been introduced. MEMS (micro-electromechanical system) sensors are tiny devices that are micromachined into microchips (Figure 11). Coupled with an application-specific integrated circuit (ASIC), they are essentially miniature digital accelerometers. MEMS sensors are believed to have many advantages over coil-based geophones, including better sensitivity, instantaneous dynamic range and linearity, less electronic noise, more stable calibration, and a broader bandwidth. Their main disadvantage is manufacturing cost.

Figure 10. Plan view of a tuned air-gun array. The numbers below the gun stations (green circles) are gun volumes (in³). The 155×3 notation indicates three 155-in³ guns that are so close together that their air bubbles coalesce after the guns fire. Such so-called cluster guns produce sound more efficiently than does a single large gun (Dragoset, TLE 2000).

Figure 11. Three vintages of geophones. W.C. Edwards is holding a geophone used in 1965. Next to him is a geophone from the 1930s. A clean-room technician inspects a microchip that contains the sensing element of a MEMS geophone from the 2000s. A cutaway view of an electromagnetic geophone shows the spring-mounted coil (GSI; Input/Output).
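For readers who want to see how those design parameters play out, the sketch below evaluates the amplitude response of an idealized moving-coil geophone treated as a damped mass-spring system whose output is proportional to the coil-to-case velocity. The 10-Hz natural frequency and 0.7 damping ratio are typical-looking assumptions, not the specifications of any instrument in Figure 11; the response is nearly flat above the natural frequency and rolls off steeply below it.

    import numpy as np

    # Idealized moving-coil geophone modeled as a damped harmonic oscillator.
    # Natural frequency and damping ratio are illustrative assumptions.

    def geophone_response(freq_hz, natural_freq_hz=10.0, damping=0.7):
        """Output amplitude per unit ground velocity (normalized to 1 at high frequency)."""
        u = freq_hz / natural_freq_hz
        return u**2 / np.sqrt((1.0 - u**2) ** 2 + (2.0 * damping * u) ** 2)

    for f in [2, 5, 10, 20, 40, 80]:
        print(f"{f:3d} Hz  relative response {geophone_response(f):.2f}")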
MEMS sensor packages typically contain three orthogonal sensors, which allow them to be used for multicomponent surveys (discussed later in this article). One important advantage of a MEMS sensor is that it can record the direct current caused by the constant acceleration due to gravity. This capability provides a tilt measurement that can be analyzed to determine the orientation of the sensor package with respect to the ground.

Some marine seismic reflection experiments were performed in the late 1930s, but it wasn't until after World War II that such work was done routinely. By 1953 GEOPHYSICS was reporting more than 100 offshore crew months per year. Offshore seismic surveys in the early 1950s typically used land exploration equipment that had been modified for the marine environment. The sensors, for example, were suspended from a cable that was towed behind a boat containing the recording equipment. At each shotpoint the boat would stop, set off a dynamite charge, and record the data.

The early 1960s saw the beginning of continuous seismic profiling using the streamer, an antisubmarine device developed during WWII. A streamer is a hollow cable a few inches in diameter that contains both the sensors and the wires connecting them to a data recording system onboard the boat. Until recently, all streamers were filled with a light oil to provide neutral buoyancy and acoustic coupling between the sensors and the water outside. A main advantage of streamers is that the boat no longer has to stop at each shotpoint. A disadvantage of streamers (one that is still troublesome today) is that, in spite of their streamlined design, their constant motion through the water makes them a noisy environment.

A simple hydrophone consists of a piezoelectric crystal that senses pressure by producing a voltage when it is compressed or decompressed. However, it also produces a voltage when acceleration bends the crystal. The latter voltage is considered noise because its source is streamer motion rather than a reflected pressure wave. A partial solution to this problem was provided by using acceleration-canceling hydrophones as the sensing elements. Figure 12 shows one possible design for an acceleration-canceling phone. Another innovation that made streamers less noisy was to replace the oil with a solid material. Solid streamers became available in the 1990s. Today, both oil-filled and solid-filled streamers are widely used.

Since their introduction, marine streamers have become much longer, with many more active receivers. Originally, the receiver stations in a streamer were each individually wired to the recording system. The number of receiver stations was thus limited by how many wires could fit inside the streamer. Around 1980, this restriction was eliminated by the introduction of electronic digitizing modules within the streamer (Figure 13). In digital form, the signals from many sensors could be multiplexed and transmitted to the recording system using just a few wires. Virtually all streamers in use today are digital. Their lengths may reach 8000 m or beyond, with 2500 or more receiver stations.

Seismic survey design. For about the first 30 years of seismic exploration, reflection surveys were single-fold. That is, as in the Vines Branch experiment, each subsurface point was probed by just one geometric arrangement of a shot and receiver. Many subsurface points are required to map a subsurface structure.
That was accomplished by recording each shot simultaneously at a number of geophone positions, and then relocating the source and geophones to new positions for the next shot.
Figure 12. Conceptual drawing of an acceleration-canceling hydrophone. The yellow shapes represent piezoelectric crystals. The orange shape is a support structure. The blue lines are electrical connections. (a) Acceleration bends both crystals in the same direction. No signal is generated because the voltages cancel. (b) Pressure bends the two crystals in opposite directions. A signal is produced because the voltages no longer cancel.
Figure 13. An early digital streamer from the 1980s. (a) A closeup view of an LRS-16 KILOSEIS streamer wound on a cable reel. A digital electronics module is visible in the center of the picture. (b) A disassembled LRS-16 electronics module showing its three circuit boards. Each module digitized the signals from 12 geophone arrays and transmitted them to the recording system on the seismic ship. When assembled, the electronics module was about 3 inches in diameter (Western Geophysical).
Initially, the number of traces per shot record was less than ten, but by the late 1940s that had grown to 20 traces or more. The surveys, however, were still single-fold.

The largest problem with single-fold surveys is the presence of noise. As shown in Box 3, land seismic data recorded by single geophones can be extremely noisy. With single-fold data recorded with just one shot and geophone, the only option for removing the noise is to band-pass filter the trace. Unfortunately, doing that also removes reflected signal. By using geometric patterns of shots or geophones called arrays, however, it is possible to preferentially attenuate some noise with minimal disturbance of the signal. Since duplicating geophones is less costly than duplicating shots, geophone arrays were the most common. By the late 1930s geophone arrays with 3-6 elements were in use. By the late 1940s geophone arrays with 100 or more elements were used in some noisy areas. Shot arrays were used along with geophone arrays for especially intractable noise problems.

In a geophone array the individual elements are spread over a large area, but wired together so that their signals sum and are recorded as a single trace. Arrays reduce noise in two ways.
Figure 14. Common reflection point acquisition. The vertical dashed lines represent the locations of two reflection points during a sequence of shots. After each shot, the shot location (triangle) moves one unit to the left and the receiver spread (circles) moves one unit to the right. After 12 shots, the two reflection points as well as the area between them have each been recorded 12 times. At each reflection point, each of the 12 recordings is at a different source-to-receiver offset. (For clarity, the shots are shown offset from one another in the vertical direction. In practice, all shots and receivers are positioned along a single line on the surface.) (Mayne, GEOPHYSICS 1962)
If N instances of random noise are summed, the average amplitude of the noise is decreased by a factor of 1/√N. For example, if the signals from four geophones are summed, then the average amplitude of any random noise is decreased by one half. Stated differently, the SNR increases by about 6 dB.

The worst noise in seismic shot records is coherent, not random. Arrays reduce that noise by discriminating against particular arrival patterns. Over short distances, the wavefronts of reflections from below typically are close to horizontal. Thus, a reflection will arrive at a group of connected geophones in phase and will sum coherently. Linear noise, such as the ground roll shown in Box 3, travels horizontally and will not arrive in phase across a group of geophones. Thus, summing the phones will attenuate that noise. The 1950s saw much work done on designing optimal arrays for seismic data acquisition. Techniques included optimal spacing, optimal weighting in the summation, and a combination of both. Geophone and hydrophone arrays are still commonly used today. The impact of an array on reflection signals depends on its size. Even small arrays have some effect. Thus, there is a tradeoff in array design between the amount of noise attenuation and harm to the signal.

In 1950, Harry Mayne of Petty Geophysical Engineering filed a U.S. patent application for an idea that would eventually revolutionize the way seismic surveys are acquired and processed. Mayne's idea was that each subsurface reflecting point would be probed by a sequence of shot/receiver pairs using a continuous range of source-to-receiver offsets. Mayne called the method common reflection point (CRP) acquisition, but it is also known as common midpoint (CMP) and common depth point (CDP) acquisition. Figure 14 from Mayne's 1962 GEOPHYSICS paper shows one possible CMP acquisition scheme. The number of times each common point is recorded is called the fold. The example in Figure 14 is 12-fold.

CMP acquisition provides noise attenuation without having to use large arrays in the field. Here is how it works. First, the data are sorted into CMP gathers. Next, the traces in each gather are time-shifted relative to one another so that the reflection events are aligned in time across the gather. This process is called normal moveout (NMO) correction. Finally, the corrected traces are summed. This procedure is called CMP stacking. Time shifts that align the reflection events are unlikely to align coherent noise. Hence, the stack suppresses not only random noise but also coherent noise. Because the time shifts align the reflections, the reflection signal is not harmed by the stack.
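The following Python sketch walks through those two steps for a single 12-fold gather with one flat reflector buried in random noise. The geometry, velocity, and noise level are illustrative assumptions; the point is that the stacked trace shows the reflection clearly while the noise drops by roughly the square root of the fold.

    import numpy as np

    # NMO correction and CMP stacking for one synthetic 12-fold gather.
    # Reflector time, velocity, offsets, and noise level are illustrative.
    rng = np.random.default_rng(0)
    dt, nt = 0.004, 500
    t0, v, amp = 1.0, 2000.0, 2.0
    offsets = np.arange(1, 13) * 100.0            # 12 offsets, 100-1200 m

    gather = rng.normal(0.0, 1.0, (12, nt))       # random noise on every trace
    for i, x in enumerate(offsets):
        t_x = np.sqrt(t0**2 + (x / v) ** 2)       # hyperbolic reflection time
        gather[i, int(round(t_x / dt))] += amp    # weak reflection spike

    # NMO correction: shift each trace so the reflection lines up at t0, then stack.
    corrected = np.zeros_like(gather)
    for i, x in enumerate(offsets):
        shift = int(round((np.sqrt(t0**2 + (x / v) ** 2) - t0) / dt))
        corrected[i, : nt - shift] = gather[i, shift:]
    stack = corrected.mean(axis=0)

    k0 = int(round(t0 / dt))
    print(f"noise level on a single trace : {gather[0, :200].std():.2f}")
    print(f"reflection amplitude per trace: {amp:.2f}")
    print(f"stacked amplitude at t0       : {stack[k0]:.2f}")
    print(f"noise level on the stack      : {stack[:200].std():.2f}  (down by about 1/sqrt(12))")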
A main motivation for the development of CMP stacking was suppression of multiples, a type of coherent noise that cannot be attenuated by arrays. More details are given below in the section on data processing.

When CMP acquisition was invented, there was no practical way of using it. Too many data were involved, from both an acquisition and a processing point of view. However, in one of those serendipitous confluences of events that occasionally happens in science, a number of developments occurred over a short time span that made the CMP method practical. Roll-along methods were developed in the field that allowed efficient acquisition of large amounts of data using explosives. Vibrators made it possible to acquire shots more rapidly than with dynamite. Magnetic, and later digital, tape supplied the necessary recording densities. And streamer acquisition was a natural fit to the CMP method. At first, clever analog methods were devised to gather and sum the CMP traces. In the 1960s, these were replaced by digital computers and their superior processing horsepower. Today, almost all exploration seismic reflection data are acquired with some form of the CMP method.

The next major innovation in seismic survey design was the three-dimensional, or 3D, survey. Until the mid-1980s most seismic surveys were 2D. This meant that the sources and receivers were lined up spatially so that all of the shot and receiver points fell on a single straight or gently curving line. The data from each such line were processed independently of the data from any nearby lines. The 3D structure of the subsurface was mapped out approximately by collecting and processing a series of coarsely spaced parallel 2D lines. Many geophysicists recognized from the beginning that this approach was inadequate. Basically, it assumes that all reflections that occur in a 2D line have their origin within a vertical plane containing that line. Clearly, that is not correct.

Interestingly, one can make a pretty good argument that the Vines Branch experiment was not only the first successful seismic reflection survey, but was also the first 3D seismic reflection survey. Imagine that each of the shot/receiver pairs in Figure 3b is a 2D seismic line, even though each such line contains only one trace. Then the final image was obtained by collecting and processing in a 3D sense data from a sequence of closely spaced 2D lines. Except for the number of traces involved, many 3D seismic surveys are performed that way today.

The first 3D seismic survey was recorded by Exxon in 1967 near Houston. A few years later, GSI and a consortium of six oil companies shot an experimental 3D survey at Bell Lake Field in southeastern New Mexico. Although the data took several years to process and interpret, the survey was a resounding success. 3D interpretation of the data correlated nicely with information from wells previously drilled in the field.
In spite of this success, it was about another ten years before 3D acquisition and processing became routine.
Figure 15. A seismic ship shooting a 3D marine survey in 1991. The four streamers under tow create the wake patterns seen at the edges of the photo. Immediately to either side of the ships wake is an air-gun array. Each array contains four strings of air guns. With the two sources firing in an alternating pattern, eight lines of seismic data were acquired at once (Western Geophysical).
The delay was caused by economics: 3D surveys were costly, took longer to shoot, and much longer to process. When I joined Western Geophysical in 1979, it seemed to me that among our oil company clients there were more skeptics than believers about the value of 3D surveys. Regardless of the benefits, many were not willing to pay significantly more for a survey and then have to wait a year or longer to see the final results.

Other than the sheer volume of data involved, the practical aspects of extending land seismic acquisition from 2D to 3D were straightforward. Typical 3D land survey geometry consists of several parallel receiver lines with orthogonal shot lines, moved and repeated as necessary to cover the targeted area. Many interesting variations of 3D land survey design have been tried. Each has various advantages and disadvantages in terms of coverage fold, data quality, subsurface illumination, cost, and so on. More information can be found in the October 2004 issue of THE LEADING EDGE.

The transition from 2D to 3D was more difficult for marine surveys. In 1980 (about when oil service companies first seriously contemplated offering 3D marine surveys as a standard product), seismic ships towed only one streamer. Consequently, shooting the number of lines necessary for 3D coverage was extremely expensive. Furthermore, underwater currents exert forces on a streamer that push it to the side, a phenomenon called feathering. Since 3D data processing requires precise knowledge of shot and receiver locations, a method of measuring the streamer shape was required.

The positional problem was solved first. Digital compasses were connected to the streamers about every 300-500 m. These devices measured the slope of the tangent to the streamer relative to magnetic north as a function of position along the streamer. The slope data were integrated to provide inline and crossline coordinates for each receiver station relative to the ship position. Later on, compasses were augmented or replaced by acoustic networks of transmitters and receivers. In the 1980s, ship positions were determined using triangulation data from shore-based radio networks. Ship positions were combined with the streamer position data to produce absolute coordinates for the receiver locations. Beginning in the 1990s, ship positions were measured using the considerably more accurate GPS satellite network.
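The integration step can be sketched in a few lines: each compass gives the local tangent direction of the cable, and summing the tangent vectors segment by segment yields coordinates relative to the tow point. The compass spacing and feathering angles below are invented for illustration, and each segment is treated as straight, which is cruder than the fitting actually used in practice.

    import numpy as np

    # Turn streamer-compass readings into receiver coordinates by integrating
    # the cable's tangent direction along its length. All values are made up.
    compass_spacing = 300.0                            # metres between compasses
    angles_deg = np.array([0.0, 1.5, 3.0, 4.5, 5.5, 6.0, 6.2, 6.3])  # feathering angles
    angles = np.radians(angles_deg)

    # Each segment is assumed straight at the angle of its leading compass.
    inline = np.cumsum(compass_spacing * np.cos(angles))
    crossline = np.cumsum(compass_spacing * np.sin(angles))

    for i, (x, y) in enumerate(zip(inline, crossline), start=1):
        print(f"compass {i}: inline {x:7.1f} m, crossline drift {y:6.1f} m")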
Figure 16. The final product from a 3D seismic survey is a 3D image of the subsurface. Using interpretive workstation software, image planes in any direction through the volume can be displayed (shown at a stockholders' meeting of Texas Instruments in 1984).
The acquisition cost of marine 3D surveys was reduced by rigging seismic ships to tow more than one streamer at a time (Figure 15). Although hindsight makes this seem like an obvious idea, at the time it was considered rather revolutionary. During the 1980s and early 1990s, the number of streamers towed progressed from one to two, then to three or four, and finally to six to eight. This progression has continued until today, when towing 10, 12, or even 16 cables is common. Another innovation was the use of independent dual air-gun sources firing in a left/right/left/right pattern. With the increased number of cables and dual sources, a seismic ship could record many subsurface lines in a single traverse across a survey area. Such acquisition efficiency made 3D marine surveys a practical exploration tool.

After data processing, the result of a 3D survey is a volume of data that represents an image of the earth's subsurface (Figure 16). Selected 2D vertical seismic profiles for detailed interpretation can be extracted from the volume in any direction. The seismic profile in Figure 4 is an example. Horizontal image planes, called time or depth slices (depending on the type of migration), can also be extracted. These, too, are valuable interpretation products. The most important characteristic of a 3D seismic image, however, is its geometric accuracy. If the data are processed carefully, reflection events in the image are in their true locations relative to the surface. Having an interpreted 3D seismic image provides a much better understanding of a reservoir's shape, size, and location. This removes much of the uncertainty in making drilling decisions. Today, 3D seismic surveys are considered an indispensable tool for guiding the development of a hydrocarbon-producing area.
Figure 17. How time and spatial sampling intervals affect seismic data. These profiles from offshore Louisiana were recorded in the early 1980s with a 480-channel digital streamer. (a) Processing single-channel data at a 1-ms sampling interval produced a high-resolution result with traces spaced every 3 m. (b) The same data that produced the result in (a) were reprocessed at a 4-ms sampling interval with the single-channel data summed to simulate the 60-m long hydrophone arrays found in conventional streamers of that time (Western Geophysical).
Digital technology developed in the 1960s through the 1980s allowed a huge increase in the number of individual seismic traces that could be recorded and processed during a seismic survey. Also, the wider dynamic range possible with digital instruments meant that the SNR of the data was less important. In combination, these two changes meant that using large receiver arrays was not as crucial as it had been previously. Arrays were no longer required to improve SNR because the increased dynamic range allowed recording of small signals regardless of the noise present. The increase in recording capacity meant, for example, that instead of spanning a 3000-m receiver line with 60 channels of 50-m long receiver arrays, one could span the line with 500 channels of 6-m long arrays, with the same overall sensor density in both cases.

During the last several decades, the trend has been towards seismic surveys with increased trace count and smaller arrays. In fact, today we've come full circle: the Vines Branch experiment was a single-sensor survey, and, once again, single-sensor surveys are being acquired. Figure 17 shows a data example from a 500-channel digital streamer that was developed at Western Geophysical in the early 1980s. The receiver stations were separated by 6.7 m with short, three-phone arrays at each station. When the channels were processed individually, CMP spacing of only 3 m was possible. Despite the remarkable resolution improvement demonstrated in Figure 17, the Western Geophysical system was not a runaway commercial success. That was probably because the prohibitive cost (at that time) of processing so many channels outweighed the advantages.

Even today, single-sensor processing remains costly. Thus, single-sensor data are usually decimated spatially at an early stage in the data processing. One decimation procedure is called digital group forming (DGF). DGF accomplishes noise reduction, just as does simple summing or weighted summing of sensors to form an array. However, because the DGF algorithms are much more elaborate than those used in simple array forming, the results are better. Figure 18 shows the same land shot record as that in Box 3 after DGF has been applied. Note the dramatic reduction in noise and the appearance of reflection events in regions where they were previously obscured by the noise. A further advantage of single-sensor recording and processing is that the reflection signal suffers no ill effects caused by array forming. As shown in Figure 19, final results of single-sensor acquisition and subsequent data processing can have higher resolution than that achieved by conventional technology.

Figure 18. The same record as shown in Box 3 after noise was removed by the digital group forming process. Reflections appear after the noise removal (Shabrawi, et al., First Break 2005).
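As a toy illustration of the group-forming idea (far simpler than the DGF algorithms mentioned above), the sketch below combines 12 single-sensor traces into one group trace with tapered weights: a flat reflection is preserved at full strength while a steeply dipping, ground-roll-like arrival is strongly attenuated. The geometry, moveouts, and Hann-taper weighting are illustrative assumptions.

    import numpy as np

    # Toy weighted group forming on 12 synthetic single-sensor traces.
    # Event times, moveouts, and weights are illustrative assumptions.
    dt, nt, nsens = 0.002, 400, 12
    traces = np.zeros((nsens, nt))
    for i in range(nsens):
        traces[i, 200] += 1.0                 # reflection: arrives at the same time everywhere
        traces[i, 100 + 15 * i] += 1.0        # ground-roll-like arrival: shifts 30 ms per sensor

    weights = np.hanning(nsens)
    weights /= weights.sum()
    group = weights @ traces                  # weighted sum -> one group-formed trace

    roll_idx = [100 + 15 * i for i in range(nsens)]
    print(f"reflection amplitude after grouping : {group[200]:.2f}")
    print(f"strongest ground-roll sample left   : {group[roll_idx].max():.2f}")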
Figure 19. Comparison between images from single-sensor and conventional seismic data acquisition. (a) The final image from single-sensor acquisition has higher fidelity than that of the conventional data. (b) Image from conventional acquisition in the same area as shown in (a) (WesternGeco).
Seismic data recorders. The function of a seismic data recording system is to make a permanent record of reflection signals detected by the sensors. Two challenges are faced. First, the equipment must have a large dynamic range (Box 2). Second, the recording system should eliminate any noise that would permanently mar the quality of the recording (Box 3).

For about the first 30 years of seismic exploration, data recording systems were purely analog systems. That is, an electrical output from a sensor or an array of sensors was treated as a continuous signal. The signals passed through analog amplifiers and filters, and were recorded as continuous traces on media such as film and paper. To use a modern term, these systems were WYSIWYG (what you see is what you get). That is, once a trace was recorded on film, there was no mechanism for changing the amplification or removing noise.

Figure 20. Control panel from a subunit of a 1950s seismic data recording system. The subunit contained an analog filter and an AGC amplifier that were applied to the signal from a geophone array prior to recording it. The entire recording system contained 24 identical control panels of this type (Conoco).

When recording seismic traces, film and paper have a dynamic range of about 20 dB, which is insufficient for the signals of interest. This problem was solved by using automatic gain control (AGC) amplifiers to apply a time-dependent amplification (gain) to the sensor signals. Thus, when signals are large at the early part of a seismic record, the gain was small, while when signals are small at later parts of the record, the gain was large.

Noise problems were addressed by applying band-pass analog filters to the signals before they were recorded. Optimum filter parameters were determined in the field by analyzing a sequence of test shots recorded with different filter settings. Figure 20 shows the control panel for an AGC amplifier used by Conoco in the 1950s. The knobs marked low cut and high cut determined the noise filter settings, while the other knobs controlled the gain level and amount of AGC.
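The same idea survives in digital processing as a sliding-window gain. The Python sketch below (window length, names, and the simple mean-amplitude measure are illustrative choices, not a particular vendor's algorithm) divides each sample by the average absolute amplitude in a window around it, so weak late arrivals are boosted and strong early arrivals are held down.

import numpy as np

def agc(trace, dt, window_s=0.5, eps=1e-10):
    # Sliding-window automatic gain control: divide each sample by the mean
    # absolute amplitude in a window centered on it.
    nwin = max(1, int(round(window_s / dt)))
    kernel = np.ones(nwin) / nwin
    envelope = np.convolve(np.abs(trace), kernel, mode="same")
    return trace / (envelope + eps)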
Figure 21. In the mid-1950s Carroll Murff uses a mechanical drafting tool called the Gaebe Plotter to produce a migrated seismic profile (Conoco).
In the early 1950s, analog magnetic tape recording became available. A system equipped with a magnetic tape recorder was no longer WYSIWYG. The tapes could be played back through the recording system with filter settings changed from those used initially. As a result, filter settings during recording could be broadband, because geophysicists knew that more stringent filters could be applied to the data later if necessary. Magnetic tape has a dynamic range of about 40-45 dB, which reduced, but did not eliminate, the need for AGC.

In the early 1960s, the introduction of digital technology drastically changed the nature of seismic instrumentation. In digital recording, signals are no longer continuous. Instead, they are discretely sampled at predetermined and fixed time intervals. Digital data samples are represented by bits, the familiar 0s and 1s of the binary number system. In the binary system, each bit represents a factor of two, which corresponds to a dynamic range of about 6 dB. Because there is no limit on how many bits can be used to represent a sample, the dynamic range of a recording system is no longer limited by the dynamic range of the recording media. Instead, the dynamic range is determined by the number of bits in the analog-to-digital converter (ADC) and the amount of AGC available. For example, the DFS V, a widely used digital field system manufactured by Texas Instruments in the early 1980s, had a total theoretical dynamic range of 174 dB. This was accomplished by having an 84-dB range of AGC and 15 bits in the recorded samples. Even though the DFS V recorded digital data, its front-end unit still contained analog amplifiers and filters.

Figure 22. The TIAC 827 digital computer installed in GSI's Calgary seismic data processing center in 1964 (Texas Instruments).

In the early 1990s, Western Geophysical introduced the WG-24, a marine seismic recording system that eliminated almost all of the analog components. The key component of the WG-24 was a 24-bit over-sampled ADC that was based on a new technology called an enhanced delta modulator encoder. The ADC digitized at a rate of 64 or 128 kHz, which produced sample intervals in the microsecond range. Digital filters were then applied to these finely sampled data before they were subsampled to the 1-2 ms intervals typically used for seismic data. The 24-bit ADC had an effective instantaneous dynamic range of greater than 114 dB. This allowed data to be recorded faithfully without using any automatic gain control.
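The arithmetic behind those dynamic-range figures is simple: each bit roughly doubles the largest representable amplitude, or about 20 log10(2), roughly 6 dB. A small Python check (a sketch of the arithmetic quoted above, not an instrument specification):

import math

def dynamic_range_db(bits, agc_range_db=0.0):
    # Each bit doubles the largest representable amplitude: ~6 dB per bit.
    return 20.0 * math.log10(2.0 ** bits) + agc_range_db

print(round(dynamic_range_db(1)))        # about 6 dB per bit
print(round(dynamic_range_db(15, 84)))   # about 174 dB, the DFS V figure quoted above
print(round(dynamic_range_db(24)))       # about 144 dB for an ideal 24-bit converter;
                                         # electronic noise makes the effective range lower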
Figure 23. A small portion of a 2005 seismic data processing computer system. The entire massively-parallel system contains tens of thousands of linked CPU nodes and has a compute capacity of 60 teraflops (WesternGeco).
Seismic data processing. The major goal of seismic data processing is to produce an accurate image of the earth's subsurface from the data recorded in the field. In the earliest days of reflection seismology, data processing consisted mostly of three key steps: static corrections, velocity determination, and image construction. These were accomplished by a "computer," which was the job title of the individual on a seismic crew who performed these functions using pencil and paper. The measured traveltimes for reflection events were adjusted for weathering and elevation. Next, velocity profiles were constructed based either on direct well measurements or on the observed changes in the arrival times of reflections as the distance between sources and receivers was varied. Time-versus-depth tables were built from the velocity information and used to convert reflection times to depth. Finally, images were constructed by correlating depth horizons over the area being surveyed. Ambiguous correlation of depth horizons caused by spatial sparseness of the data was resolved by recording and analyzing additional shots.

Anyone perusing papers published in GEOPHYSICS during the first 30 years of reflection seismology will quickly realize that the geophysicists of that era clearly recognized the shortcomings of their data processing methods. Their problem was that they just didn't have the tools to do much better. During the 1950s, clever enhancements were made to some seismic data processing functions. Those included inverse filtering using analog computers and migration using special drafting tools (Figure 21).
However, seismic data processing as we know it today began in the 1960s when transistors and solid-state integrated circuits made digital computers affordable. Figure 22 shows the TIAC 827, a digital computer introduced in the early 1960s by Texas Instruments. This computer was purpose-built for seismic data processing. Its core memory was 1 kilobyte, with a 21-track, one-inch tape transport for temporary storage. It used integer-based arithmetic, with floating-point calculations emulated by the software. Division was avoided whenever possible! By 1968 GSI had 11 seismic data processing centers equipped with this computer.

Reflection seismologists have had, and probably always will have, an insatiable hunger for computer processing power. They were quick to understand and use the advantages of array processors, faster CPUs, denser recording media, larger memories, parallel architectures, more flexible and reliable operating systems, and Linux-based clusters of microprocessor nodes. Figure 23 shows a 2005 state-of-the-art computing facility used exclusively for processing seismic reflection data. This system has a mind-boggling capacity of 60 teraflops.

Aside from the digital revolution, an event of perhaps equal importance to geophysicists was the 1965 introduction of the Fast Fourier Transform (FFT) by James W. Cooley, an IBM researcher, and John W. Tukey, a Princeton University statistician. The FFT is an efficient way of converting digital time signals into the frequency domain. Many seismic data processing algorithms are simpler and faster in the frequency domain. The FFT is a fundamental component of those algorithms.
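As a small illustration of working in the frequency domain, the band-pass filtering that once required the analog circuits of Figure 20 takes only a few lines once an FFT is available. The Python sketch below uses a brick-wall pass band for clarity; practical filters taper the band edges, and the names are illustrative.

import numpy as np

def bandpass(trace, dt, low_cut_hz, high_cut_hz):
    # Transform to the frequency domain, zero everything outside the pass band,
    # and transform back.
    n = len(trace)
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spectrum[(freqs < low_cut_hz) | (freqs > high_cut_hz)] = 0.0
    return np.fft.irfft(spectrum, n)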
Figure 24. CMP stacked profiles without (a) and with (b) deconvolution applied to the prestack data. Note how deconvolution sharpens the reflection at about 1.3 s and the dipping reflections below 2.0 s on the left (Yilmaz, Seismic Data Analysis).
Deconvolution was one of the first applications of digital processing technology to seismic data. Its theory was worked out during the early 1950s by Enders Robinson and his colleagues in MIT's Geophysical Analysis Group. The fundamental assumption of deconvolution is that a seismic trace consists of a series of reflection events convolved with a wavelet, plus unrelated noise. The wavelet depends on the shape of the pressure pulse created by the seismic source, reverberations and ghost reflections in the near surface, the response of any filters applied to the data during acquisition, and the effects of intrinsic attenuation. The deconvolution process designs an inverse filter that compresses the wavelet, thereby enhancing the resolution of seismic data. Figure 24 shows the difference that deconvolution can make to a seismic profile.

There are two kinds of deconvolution. Deterministic deconvolution is applied when the wavelet to be compressed is known a priori. For example, in signature deconvolution, a common data processing step, a measured source signature is used to design a filter that converts the signature to some desired shape. The shaping filter is then convolved with the seismic traces. Statistical deconvolution is applied when the wavelet is unknown. This requires two assumptions: the reflectivity series is a random sequence, and the wavelet has a property known as minimum phase. Given those assumptions, a seismic trace can be deconvolved without use of any additional information. From the 1970s through the 1990s, both kinds of deconvolution were routine data processing steps. More recently, the statistical approach has lost favor with many geophysicists who mistrust its reliance on assumptions that may not always be true.
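As a concrete illustration of the deterministic case, a signature-deconvolution shaping filter can be designed and applied in the frequency domain. The Python sketch below is a simplified, hedged version (the water-level stabilization, the padding choice, and the names are illustrative, not a production algorithm). Statistical (spiking or predictive) deconvolution replaces the measured signature with estimates derived from the trace's own autocorrelation, under the randomness and minimum-phase assumptions just described.

import numpy as np

def signature_deconvolution(trace, signature, desired, water_level=0.01):
    # Design a filter that shapes the measured source signature into the desired
    # wavelet, then apply the same filter to the seismic trace.
    n = len(trace) + len(signature)                # pad to avoid wrap-around
    S = np.fft.rfft(signature, n)
    D = np.fft.rfft(desired, n)
    T = np.fft.rfft(trace, n)
    stab = water_level * np.max(np.abs(S)) ** 2    # stabilizes division at weak frequencies
    shaping_filter = D * np.conj(S) / (np.abs(S) ** 2 + stab)
    shaped = np.fft.irfft(shaping_filter * T, n)
    return shaped[:len(trace)]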
As discussed near the beginning of this article, static anomalies are a challenge that must be overcome in order to produce accurate subsurface images. Figure 25a shows a CMP stack of a land data set for which statics were ignored. This result has two problems. First, statics that occurred over long distances (long-wavelength statics) created an artificial high in the reflections below point A. Second, statics that occurred over short distances (short-wavelength statics) caused the reflection events to lack continuity. Two methods have evolved for removing the effects of statics.

The refraction statics correction method uses the arrival times from refracted seismic waves (called head waves) that travel along the boundary between the weathering layer and the bedrock to determine the thickness of the weathering layer and the bedrock velocity. This allows computation of a static correction that effectively replaces the weathering layer velocity with the bedrock velocity. For simple weathering layers a refraction technique called the generalized reciprocal method is used. This technique, which was published in 1981 by D. Palmer, is a generalization of a method formulated in 1954 by J. G. Hagedoorn. If the near-surface weathering zone consists of several layers, an iterative, generalized linear inversion approach can be used to compute the static corrections. Figure 25b shows how the application of refraction statics corrections removed the artificial high seen in Figure 25a.

The residual statics correction method was developed in the 1970s. It analyzes traveltimes picked along a reflecting horizon to compute static corrections associated with each shot and receiver location. The basic assumption is that the picked traveltimes are surface consistent. That is, a picked time in a trace can be decomposed into the sum of a time shift associated with the trace's shot location, a time shift associated with the trace's receiver location, and a time shift associated with structural changes along the reflecting horizon being picked. For CMP data acquisition there is a large amount of redundancy in the picked times.
This makes it possible to use a least-squares generalized linear inversion to decompose the picked times into their constituents, which are then used to make the associated static corrections. Figure 25c shows how the application of residual statics corrections improves the continuity of the reflecting horizons seen in Figure 25b.

A velocity model is required for constructing any type of seismic profile, whether it's a simple CMP stack or a 3D prestack depth migration. Let's consider stacking velocities first. The basic assumption involved in determining a stacking velocity function is that within a CMP gather the shape of reflection events (called moveout) is hyperbolic. That is, a hyperbolic equation describes when a particular reflection event occurs as a function of the horizontal distance (the offset) between the shot and receiver. The curvature of the hyperbola depends on the stacking velocity.

Determining stacking velocities is a matter of trial and error. Suppose a trial velocity is selected for a reflection event and the event is flattened using that velocity to determine the hyperbolic shape of the event. If the velocity was correct, then the event truly becomes flat; if the velocity was incorrect, then the event is not flat and another trial velocity should be tried. This might sound laborious, but the whole process was automated on digital computers when they became available for seismic data processing in the 1960s. In practice, events do not have to be actually flattened. Instead, trial hyperbolas are chosen and the coherence of the data along the hyperbolic trajectories is computed as a function of velocity. A peak in coherence coincides with the best stacking velocity. An example of this type of analysis, called a velocity spectrum, is shown in Figure 26.

The hyperbolic moveout assumption is only exact for a single horizontal layer. For a general anisotropic subsurface, the assumption is approximately correct for small offsets. Thus, in recent years, as the maximum offsets in shot gathers became longer, the errors in the hyperbolic assumption became more important. Starting in the 1990s, velocity analysis was often performed with higher-order terms in order to improve its quality.

In general, the velocity model required for time migration (see below) is different than that required for stacking. This is because stacking velocities depend on reflector dip, but migration velocities are true medium velocities that are independent of dip. The trial-and-error approach is one method of determining migration velocities. However, because migration moves reflected energy between CMPs, the analysis must be performed on groups of CMPs rather than on CMPs one by one as is done for stacking velocity analysis. The basic idea is to perform constant-velocity prestack migration (see below) for a range of trial velocities and then analyze the results to choose the velocities that produce the best results at each CMP location. There are many variations of this idea that were discovered during the 1970s and 1980s. The reader interested in details should refer to Yilmaz (2001).
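The coherence scan itself is straightforward to write down. The Python sketch below computes a semblance-style velocity spectrum by NMO-correcting a CMP gather along trial hyperbolas; the names, the smoothing window, and the particular coherence measure are illustrative choices, not a specific commercial implementation.

import numpy as np

def velocity_spectrum(gather, dt, offsets, velocities, window=11):
    # gather: CMP gather of shape (nt, nx); offsets in m; velocities in m/s.
    # Returns coherence as a function of zero-offset time and trial velocity;
    # peaks mark the best stacking velocity at each time.
    nt, nx = gather.shape
    times = np.arange(nt) * dt
    spectrum = np.zeros((nt, len(velocities)))
    kernel = np.ones(window) / window
    for iv, v in enumerate(velocities):
        corrected = np.zeros_like(gather)
        for ix, x in enumerate(offsets):
            # hyperbolic traveltime: t = sqrt(t0^2 + (x/v)^2)
            t = np.sqrt(times ** 2 + (x / v) ** 2)
            corrected[:, ix] = np.interp(t, times, gather[:, ix], left=0.0, right=0.0)
        num = np.sum(corrected, axis=1) ** 2
        den = nx * np.sum(corrected ** 2, axis=1) + 1e-12
        spectrum[:, iv] = np.convolve(num / den, kernel, mode="same")
    return spectrum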
Figure 25. (a) A CMP stacked profile without application of refraction or residual statics corrections; (b) the same profile as in (a), but with refraction statics corrections applied prior to the stack. Note removal of the spurious structural high seen in (a); (c) the same profile as in (a), but with both refraction and residual statics corrections applied prior to the stack. Note the improved continuity of the reflection events compared to those in (b) (Yilmaz, Seismic Data Analysis).
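The surface-consistent decomposition that produces the residual statics corrections of Figure 25c reduces to a damped least-squares problem. The Python sketch below shows the basic idea with hypothetical array names; production codes add constraints, weights, and iteration with repicking that are not shown here.

import numpy as np

def residual_statics(picked_times, shot_idx, rcvr_idx, cmp_idx):
    # Decompose each picked time into shot static + receiver static + structure term.
    # The redundancy of CMP acquisition makes the system strongly overdetermined;
    # lstsq returns a minimum-norm solution for the unresolved constant shifts.
    ns = shot_idx.max() + 1
    nr = rcvr_idx.max() + 1
    nc = cmp_idx.max() + 1
    npick = len(picked_times)
    A = np.zeros((npick, ns + nr + nc))
    rows = np.arange(npick)
    A[rows, shot_idx] = 1.0
    A[rows, ns + rcvr_idx] = 1.0
    A[rows, ns + nr + cmp_idx] = 1.0
    solution, *_ = np.linalg.lstsq(A, picked_times, rcond=None)
    return solution[:ns], solution[ns:ns + nr], solution[ns + nr:]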
Figure 26. A CMP gather (left) and its velocity spectrum obtained by coherence analysis. The small dark areas outlined in yellow correspond to velocity values (horizontal axis) that produce moveout curves matching those seen in the gather (Yilmaz, Seismic Data Analysis).

Primary reflections in seismic data are characterized by seismic wave raypaths that begin at a shot, extend downwards into the earth, reflect upwards, and terminate at a receiver. Multiple reflections are events whose raypaths include more than one upward reflection. Figure 27 shows a primary reflection and a variety of multiple reflections.

Figure 27. Primary and multiple reflections. The blue-green area represents the water layer. The red and yellow dots indicate the positions of seismic sources and receivers, respectively. The white lines are raypaths of the reflections. For the sake of simplicity in the figure, the raypaths do not refract as they cross reflecting horizons. (a) A primary reflection; (b) a first-order free-surface multiple; (c) a second-order free-surface multiple; (d) a first-order internal multiple; (e) a second-order internal multiple; (f) hybrid event 1-3 is a free-surface multiple that includes an internal multiple, subevent 1-2.

Multiples are usually considered an undesirable shot-generated noise. Interestingly enough, there was a time when multiples were not a problem. During the 1930s and 1940s, just detecting primary reflections in seismic data was quite a challenge. Worrying about weaker multiple reflections wasn't deemed worthwhile. An issue of GEOPHYSICS published in 1948 contains papers debating whether multiple reflections could be seen in seismic data. As data acquisition technology improved, however, multiples unquestionably became a concern, particularly in marine data. Today, removing multiples from seismic data is a problem that still doesn't have a completely satisfactory solution. Nevertheless, much progress has been made. There are three basic approaches to multiple attenuation.

Periodicity-based methods attempt to compress multiples that have a periodic structure. These methods were first applied in the late 1950s for removing water-layer reverberations from marine seismic data. The algorithms are based on inverse filtering or deconvolution.

Moveout-based methods of multiple attenuation rely on differences in the moveout of multiple and primary events. As discussed earlier, one way of attenuating multiples or other coherent noise is simple CMP stacking. When primary reflections in a CMP gather are aligned using stacking velocity functions, multiples are often misaligned (Figure 28). Thus, summing the traces attenuates the multiples. Other moveout-based methods involve 2D transformations of CMP gathers from space-time coordinates into some other coordinate frame. The hope is that after the transformation, the primary and multiple events are easier to separate than they were in the original gathers. Today's most popular method of multiple attenuation is based on the discrete Radon transform. This method has received much attention since a paper published by D. Hampson in 1986 revealed an efficient way to compute the transform. Figure 29 shows an example of multiple attenuation using the Radon transform.

Data-driven methods of multiple attenuation do not rely on any assumptions about the properties of multiples or the subsurface. Instead, a recorded data set is algorithmically manipulated to directly predict the multiples that it contains. After the predictions are subtracted from the original data, only primary reflections remain. The fundamental concept of data-driven multiple attenuation was described in papers written in the 1960s and 1970s. Practical methods of implementing these ideas did not appear until the 1980s, and it was yet another ten years before the data-driven method became fully accepted by the industry.

Presently, a 2D version of surface-related multiple attenuation (2D SRME) is the most frequently used and most successful data-driven method. Internal multiple prediction algorithms are available, but are not yet widely used. Data-driven methods are currently a hot research topic, particularly efforts to develop practical 3D versions of SRME and to improve internal multiple prediction. These efforts are centered around the research programs of the DELPHI and M-OSRP consortia at Delft University and the University of Houston, respectively.
If seismic data processing has a search for a holy grail, then surely it must be the search for the perfect imaging method. The search started at the very beginning of seismic exploration, and continues unabated today. Over 90 abstracts on the topic of migration were submitted for presentation at the 2005 SEG Annual Meeting. That was over 10% of the total, and far exceeded the number in any other category.

Imaging, or migration, is the final, and arguably the most important and complex, step in standard seismic data processing. Migration can be applied to either prestack or poststack data, with the final image produced in either time or depth. In addition, migration can be applied in either a 2D or 3D fashion (assuming 3D data are available). Thus, a total of eight basic categories of migration are possible. There are a variety of algorithms for accomplishing migration and a variety of different domains in which they are applied. All the categories and algorithms have unique assumptions, limitations, advantages, disadvantages, and costs. One complexity of migration, then, is in simply choosing the best approach for a particular data set!

Bee Bednar recently published a paper in GEOPHYSICS that describes the history of seismic migration. Below I present a brief synopsis, beginning in the early 1970s shortly after digital computers became available for seismic data processing. The first implementations of digital migration algorithms were in the 2D, poststack time category. Notable contributions were the finite-difference-, Kirchhoff summation-, and FFT-based methods introduced by Jon Claerbout at Stanford University, Bill Schneider at GSI, and Bob Stolt at Conoco, respectively. These methods had limitations of one sort or another but, nevertheless, they laid the groundwork for a subsequent explosion in migration research.

Poststack migration results are incorrect when the seismic data contain conflicting dips. This happens because stacking velocities are dip-dependent. When two or more dipping events overlap, only one of them can be stacked correctly. Thus, the stack itself contains errors, as must any product derived from it. Prestack migration can solve this problem, but in the 1970s and most of the 1980s it was too computationally intensive to use routinely. An alternative solution was provided by a process called dip-moveout (DMO), which is applied to NMO-corrected prestack data to account for the effect of dip on moveout. NMO and DMO used together provide an improved stack that is more appropriate for migration. The theory of DMO was developed between 1975 and 1985, with notable contributions from Jon Claerbout's students at Stanford, among others. DMO was a standard processing step for about 15 years.
Figure 28. Three CMP gathers with strong multiple reflections before and after NMO correction. (a) Before NMO correction, both primary and multiple reflections have hyperbolic moveout trajectories. (b) NMO correction with the velocity of the primary reflections flattens the primary events. When the traces are stacked, the primary reflections will add together coherently. Because the multiple reflections still have moveout, they will not stack coherently. The success of CMP stacking in attenuating multiples depends on the degree of difference between the velocities of primaries and multiples (after Yilmaz, Seismic Data Analysis).
Figure 29. Attenuating multiples using the Radon transform. (a) A CMP gather from a deepwater area. In the Radon domain (not shown) the primary and multiple reflections are localized within separate regions. Thus, they can be separated by muting the undesirable region. (b) The primary reflections recovered by an inverse Radon transform after the multiples were muted. (c) The multiple reflections recovered by an inverse Radon transform after the primaries were muted. (d) The multiples from (c) are subtracted from the original data (a). The result is multiple-free primary reflections (Yilmaz, Seismic Data Analysis).
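The Radon-domain separation shown in Figure 29 can be sketched in a few dozen lines. The Python fragment below implements a frequency-domain, least-squares parabolic Radon pair applied to an NMO-corrected CMP gather; the damping, the curvature parameterization, and the names are illustrative, and published algorithms include refinements (sparseness constraints, anti-aliasing, careful mutes) not shown here. All array arguments are assumed to be NumPy arrays.

import numpy as np

def radon_forward(gather, dt, offsets, curvatures, eps=0.01):
    # Least-squares parabolic Radon transform, solved one frequency at a time.
    # gather: (nt, nx) NMO-corrected CMP gather; curvatures: residual moveout (s)
    # at the farthest offset for each trial parabola.
    nt, nx = gather.shape
    xn = (offsets / np.max(np.abs(offsets))) ** 2        # normalized squared offsets
    D = np.fft.rfft(gather, axis=0)
    freqs = np.fft.rfftfreq(nt, dt)
    M = np.zeros((len(freqs), len(curvatures)), dtype=complex)
    for k, f in enumerate(freqs):
        L = np.exp(-2j * np.pi * f * np.outer(xn, curvatures))   # (nx, nq) parabola basis
        A = L.conj().T @ L
        A += eps * np.trace(A).real / len(curvatures) * np.eye(len(curvatures))
        M[k] = np.linalg.solve(A, L.conj().T @ D[k])
    return M, freqs

def radon_inverse(M, freqs, nt, offsets, curvatures):
    # Model a gather from its Radon panel (the forward modeling operator).
    xn = (offsets / np.max(np.abs(offsets))) ** 2
    D = np.zeros((len(freqs), len(offsets)), dtype=complex)
    for k, f in enumerate(freqs):
        L = np.exp(-2j * np.pi * f * np.outer(xn, curvatures))
        D[k] = L @ M[k]
    return np.fft.irfft(D, n=nt, axis=0)

def remove_multiples(gather, dt, offsets, curvatures, q_cut):
    # After NMO with primary velocities, primaries map to small curvature and
    # multiples to larger curvature. Mute the primary region, model the
    # multiples, and subtract them from the input gather.
    M, freqs = radon_forward(gather, dt, offsets, curvatures)
    M_multiples = M.copy()
    M_multiples[:, curvatures < q_cut] = 0.0
    predicted = radon_inverse(M_multiples, freqs, gather.shape[0], offsets, curvatures)
    return gather - predicted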
Figure 30. A Gulf of Mexico marine seismic profile from the late 1990s. This vertical profile was extracted from a 3D volume migrated with a 3D poststack Kirchhoff time migration algorithm. Because of the limitations of poststack and time migration, the base of the salt body in the center of the profile and the sediments below the salt are poorly imaged. Compare to Figure 4 (WesternGeco).
It is used less frequently today because most seismic data sets are migrated prestack.

The advent of 3D data acquisition in the 1980s raised an obvious question: How were the data to be migrated? Since the raison d'être for 3D seismic data is to obtain a correct 3D image, 2D migration is clearly insufficient. Although 3D migration algorithms were available, they were also quite costly. One approach to 3D migration that was used for a number of years during the 1980s was two-pass migration. In that method 3D migration is accomplished in two steps. First, a 2D migration is applied to the data in one direction. Second, another 2D migration is applied to the results of the first step in the orthogonal direction. The two-pass approach is exactly equivalent to a one-pass approach only for constant velocities; otherwise, it is inaccurate. Because of its inaccuracies, the two-pass approach to 3D migration gradually lost favor. Today it is seldom used. During the late 1980s and early 1990s, a number of efficient one-pass migration algorithms were developed, and these became the algorithms of choice. Figure 30 shows a vertical profile from a 3D data set that was migrated using a 3D poststack Kirchhoff time migration. The image of the top of the salt appears adequately migrated, but the image below the top of salt is uncertain.
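The Kirchhoff idea itself is compact enough to sketch. The Python fragment below performs a constant-velocity, 2D poststack (zero-offset) time migration by summing each input trace along diffraction hyperbolas; the amplitude, phase, and anti-aliasing corrections of a production algorithm, and the traveltime tables needed for variable velocity, are omitted, and the names are illustrative.

import numpy as np

def kirchhoff_time_migration(section, dt, dx, velocity):
    # section: zero-offset (stacked) data, shape (nt, nx); dt in s, dx in m,
    # velocity in m/s (constant). Each image sample is the sum of input samples
    # along the zero-offset diffraction hyperbola t = sqrt(t0^2 + (2h/v)^2).
    nt, nx = section.shape
    t0 = np.arange(nt) * dt                       # output (migrated) times
    t_in = np.arange(nt) * dt                     # input time axis
    image = np.zeros_like(section)
    for ix_out in range(nx):
        for ix_in in range(nx):
            h = abs(ix_in - ix_out) * dx          # horizontal distance to the input trace
            t = np.sqrt(t0 ** 2 + (2.0 * h / velocity) ** 2)
            image[:, ix_out] += np.interp(t, t_in, section[:, ix_in],
                                          left=0.0, right=0.0)
    return image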
The 1990s saw a major effort within the industry to improve 3D migration. This led to 3D prestack time migration and then 3D prestack depth migration (3D PSDM). Time migration assumes that lateral variations in velocity are mild. Because the velocities of salt and sediments are quite different, a data set like that in Figure 30 has strong lateral velocity variations in the region along the top of salt. Depth migration, which doesn't have a lateral velocity variation limitation, is the algorithm of choice for a data set like this one. Figure 4 shows the same profile as Figure 30, but with 3D prestack wave-equation-based depth migration (WEM) applied. The base of salt reflection and the deeper reflections are properly imaged in this result.

Depth migration imaging results are only as good as the velocity-depth model upon which the migration depends. In areas with strong lateral velocity variations, construction of the velocity model is tricky business. If a velocity model is close to being correct, then the CMP image gathers created by PSDM can be analyzed for residual moveout.
The moveout information can be used in a procedure called tomographic inversion to refine the velocity model. The effects of velocity anisotropy can be included at this stage. If the velocity model is not close to being correct, the image gathers may not be of much use because they may not contain any recognizable events that can be analyzed for residual moveout. The solution to this problem is to build the velocity-depth model layer by layer from the top down, remigrating after each step.

Consider the data in Figure 30, for example. Suppose the velocities in the layers above the salt have been determined and the migration has produced a good top-of-salt image. The next velocity-depth model can be constructed based on everything known above the salt, but with a constant salt velocity extending from the top of salt to the bottom of the model. Migrating with that model should image the bottom of the salt layer correctly. The process then continues for the layer below the salt using a velocity-depth model that has been updated to include the bottom-of-salt results, and so on. Figure 31 shows an example of a data processing flow that accomplishes the iterative procedure of velocity-depth model construction and 3D PSDM.

This completes the story of how reflection seismology journeyed from Figure 3a to Figure 4. The article concludes by describing a few recent important advances in reflection seismology and with some speculation about the future.
Figure 31. A seismic data processing flow designed to accomplish 3D imaging in a geologically complicated area containing a salt body. Note the iterative tomography to determine the sediment velocity model followed by iterative migration to build the salt model. The acronym KPSDM stands for Kirchhoff prestack depth migration (WesternGeco).
Recent advances in reflection seismology. Within the past 20 years several new innovative kinds of seismic reflection surveying have been developed and commercialized. All required advances in data acquisition hardware and methods as well as in data processing algorithms.

The dual-sensor ocean-bottom survey technique, which is based on a concept that J. E. White published in his 1965 book, was developed to solve a specific problem. In many shallow-water offshore areas, obstructions and currents make streamer-based acquisition difficult or impossible. A solution is to place the hydrophone cables on the sea floor rather than towing them behind a ship. Unfortunately, when this is done the seismic data are severely distorted by the effect of reverberations in the water layer. White suggested that recording both pressure and vertical particle motion at the sea floor would provide information that allowed removal of the reverberations.

Figure 32. Concept behind dual-sensor ocean-bottom cable acquisition. If seismic sensors are placed on the sea floor, reflections from the sea surface interfere with the primary reflections. The interference may be constructive or destructive as shown by the peaks and troughs in the amplitude spectra at the right. Because surface reflections are recorded with opposite polarity by a hydrophone and geophone, they are eliminated by balancing and summing the two types of traces, as shown by the bottom trace and its spectrum (Fred Barr, Oil & Gas Journal, 1994).

Fred Barr and Joe Sanders at Halliburton Geophysical Services (HGS) developed a practical implementation of this idea in the second half of the 1980s. The method consisted of placing a hydrophone and a vertical geophone together at each receiver location on the ocean bottom. The data from the two sensors were calibrated relative to one another using data recorded from a single air-gun shot positioned directly above them. After calibration, the data from the dual sensors were summed to eliminate the troublesome reverberations. Figure 32 shows an example of how this works for a single reflection event and one reverberation in the water layer.

Within a few years after HGS developed their dual-sensor method, others discovered variations that solved the data combination problem without requiring calibration shots. All of those methods have various advantages and disadvantages. Hence, today there is not a single preferred data processing algorithm for combining dual-sensor data.
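In its simplest form, the combination step is just a calibrated sum, as in this hedged Python sketch (the scalar stands in for the calibration derived from the shot directly above the receivers; names are illustrative):

import numpy as np

def pz_sum(hydrophone_trace, geophone_trace, calibration_scalar):
    # The sea-surface reflection is recorded with opposite polarity by the
    # hydrophone (pressure) and the geophone (vertical particle motion), so a
    # calibrated sum cancels it while the upcoming primary reflection is kept.
    return 0.5 * (np.asarray(hydrophone_trace) +
                  calibration_scalar * np.asarray(geophone_trace))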
Figure 33. Results from a multicomponent survey. A reservoir sand channel is visible on a P-S converted-wave profile but not on a P-wave profile. The well-log velocity curves on the left indicate that the difference in the two sections is due to differences in the impedance contrasts for P- and S-waves (J. Gaiser and A. Strudley, First Break 2005).
The reflection seismology technology discussed so far in this article involves the creation, detection, and processing of pressure waves (P-waves). In some situations, however, P-waves do not produce good results. For example, shallow gas deposits can disrupt the transmission of P-waves to lower reflectors, and some reservoirs have low-amplitude P-wave reflections because of their low acoustic impedance contrasts relative to surrounding layers. Fortunately, another type of seismic wave, the shear wave, also exists (Box 1). Shear waves (S-waves) respond to many geologic situations quite differently than do P-waves. This makes them a valuable alternative in cases where P-waves alone provide a poor or ambiguous subsurface image. Multicomponent seismic surveys exploit this characteristic of S-waves.

Multicomponent surveys typically measure particle motion of seismic waves in three orthogonal directions. This may be accomplished by an orthogonal arrangement of geophones.
In marine environments, the orthogonal sensors are deployed on the sea floor, and may be accompanied by a hydrophone that measures pressure. A mathematical rotation of the three-component data provides a reconstruction of the S-wave particle motion in the plane of wave propagation. If the seismic source generates only P-waves, then any detected S-waves must arise from P-wave to S-wave conversions (converted waves or PS-waves) at reflecting interfaces (Box 1).

Alternatively, land multicomponent surveys are possible using sources designed to generate S-waves. (Because S-waves do not propagate through water, an S-wave marine source would have to be placed on the sea floor. A practical way of doing this has not yet been found.) Many efforts have been made to construct practical S-wave land sources, but none has yet been a commercial success. Thus, most multicomponent surveys use P-wave sources and depend on the presence of converted waves.
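The horizontal-component part of that rotation is a small piece of trigonometry. A hedged Python sketch (the azimuth convention and the names are illustrative):

import numpy as np

def rotate_to_radial_transverse(h1, h2, source_receiver_azimuth_deg):
    # Rotate two orthogonal horizontal components into the source-receiver
    # (radial) direction, where converted-wave energy is expected, and the
    # perpendicular (transverse) direction. The azimuth is measured from the
    # h1 axis toward the h2 axis.
    az = np.deg2rad(source_receiver_azimuth_deg)
    radial = np.cos(az) * np.asarray(h1) + np.sin(az) * np.asarray(h2)
    transverse = -np.sin(az) * np.asarray(h1) + np.cos(az) * np.asarray(h2)
    return radial, transverse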
Figure 34. Time-lapse measurements. The image on the left was acquired over a producing reservoir in 2003. The middle image shows the difference between the 2003 image and an older image acquired in 1992. There is no clear indication of reservoir changes. The right-hand image shows the difference between the 2003 image and a newer image acquired in 2001. Because of improvements in the repeatability of seismic data acquisition and processing, reservoir changes are seen clearly. The red circle shows an area where water injection has increased reservoir pressure. The differences in the blue oval are caused by changes in velocity in the reservoir above. The green oval is an area where production may have reduced gas saturation (H. A. Aronsen, et al., Schlumberger Oilfield Review 2004).
As shown in Box 1, the incident and reflected angles at a P-to-S conversion are not equal. This introduces an asymmetry into the raypaths of PS-waves that complicates the data processing compared to that required for P-wave data. Good imaging results have been reported using an anisotropic S-wave velocity model in a prestack time migration algorithm that was especially tailored for multicomponent data. Figure 33 shows one application for multicomponent seismic data. In this case, low P-wave reflectivity caused a reservoir sand channel to be almost invisible on a conventional seismic reflection profile. The converted-wave profile, however, shows the reservoir clearly because of a strong S-wave impedance contrast.

Time-lapse seismic surveys are another recently developed application of reflection seismology. When the hydrocarbons in a reservoir are produced, the pressure, temperature, and composition of the pore fluids change. These changes alter the seismic velocity and density of the reservoir. Thus, if two 3D seismic surveys are acquired at different stages in the lifetime of a reservoir, then the reflections at the reservoir should be different wherever fluid changes have occurred. The goal of time-lapse surveys is to detect changes in the seismic response of a reservoir over time. Seismic interpreters and production engineers can then use the results to better understand the behavior of the reservoir fluids. Time-lapse surveys are also called 4D surveys, with the fourth dimension being time.

The key challenge in time-lapse surveying is the repeatability of the seismic data acquisition and processing. Time-lapse profiles are made by subtracting the processed results of one survey from those of a prior survey. A null result is the goal, except where reservoir changes have occurred. In practice, null results are impossible to obtain.
Differences between the surveys in source and receiver types, positions, and calibrations, ambient noise levels, and data processing algorithms all conspire to produce a measurable background noise in time-lapse profiles. To be useful, the signal from actual reservoir changes has to be observable above that noise. Because that signal generally is low amplitude, high survey repeatability to reduce the background noise is crucial to the success of the time-lapse method. Over the last ten years the exploration industry has seen major efforts to improve the repeatability of seismic surveys. Innovations include steerable streamers, high-density 3D surveys, calibrated sensors and sources, new noise reduction algorithms, and more accurate navigation technology. The comparison of two time-lapse profiles in Figure 34 shows that the efforts have been worthwhile. In the 1992-2003 profile the time-lapse signal is lost among the noise, while it clearly stands out in the 2001-2003 profile.
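Repeatability is commonly quantified with a normalized RMS difference between co-located traces from the two surveys, a standard industry measure that is not named in this article. A brief Python sketch:

import numpy as np

def nrms_percent(base_trace, monitor_trace):
    # Normalized RMS of the difference between two co-located traces (or windows),
    # in percent: 0 means identical traces, about 140 means uncorrelated ones.
    base = np.asarray(base_trace, dtype=float)
    monitor = np.asarray(monitor_trace, dtype=float)
    rms = lambda a: np.sqrt(np.mean(a ** 2))
    return 200.0 * rms(monitor - base) / (rms(base) + rms(monitor) + 1e-12)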
The future of reflection seismology. No article of this length can possibly describe the many important contributions that scientists and engineers have made to reflection seismology over a 75-year history. I've mentioned a few here that, in my own personal opinion, are major milestones, but much remains unsaid. A fascinating aspect of seismic reflection technology is that key contributions have come from people with such diverse backgrounds. These include geophysics, mathematics, statistics, geology, electrical engineering, mechanical engineering, geodesy, navigation, surveying, and, of course, physics. Within the field of physics, contributions have come from those trained in acoustics, optics, nuclear physics, astrophysics, and solid-state physics; even a few theoretical quantum physicists have contributed!

What attracted all of these innovative people to seismology? If they were asked to list their reasons, my guess is that something like "the joy of tackling and solving challenging problems" would appear at or near the top of many of their lists. Thus, a suitable way to end this article is to ask: Are there remaining challenges in reflection seismology, or are we near the end of the line?

My answer is that we are nowhere near the end. The day of petaflop-class computing facilities is approaching, and that should make practical data processing algorithms that are beyond our present capabilities. For example, one can imagine 3D full-waveform inversion in which P-waves and S-waves, primaries and multiples, reflections and refractions all contribute to construction of a subsurface image. Beyond that lies 3D data-driven processing, not just for multiple prediction, but for imaging and direct determination of rock properties. Also, full-elastic, finely sampled 3D and 4D forward modeling should become feasible. Such synthetic data will be invaluable for developing and testing the data processing algorithms of tomorrow.

Providing data suitable for these grandiose data processing schemes will require many improvements in data acquisition technology. Despite recent advances, our current data sets remain undersampled in some spatial domains, limiting current data processing applications such as 3D multiple prediction as well as the applications of the future. Those limitations need to be overcome. Further refinements in acquisition methods are also needed to improve the accuracy of time-lapse surveys. In the future I predict a growing use of MEMS technology, permanent seafloor acquisition systems including sources as well as receivers, wide-azimuth marine surveys becoming the norm in some areas, and multicomponent surveys with 100 000+ recording channels.

If the seismic profile in Figure 4 could have been shown in 1921 to the four scientists who performed the Vines Branch experiment, I believe they would have been totally flabbergasted. Certainly none of them could have foreseen seismic survey vessels towing and individually recording 20 000+ hydrophones, or the incredible advances in data processing enabled by the digital revolution. Would today's geophysicists be equally flabbergasted if we could see now a seismic profile from 2089? I suspect that we would be.
Suggested reading. Seismic Data Analysis by Yilmaz (SEG, 2001). Introduction to Geophysical Prospecting (fourth edition) by Dobrin and Savit (McGraw-Hill, 1988). "MEMS-based 3C accelerometers for land seismic acquisition: Is it time?" by Mougenot and Thorburn (TLE, 2004). "The reflection seismograph: Its invention and use in the discovery of oil and gas fields" by Karcher (TLE, 1987). "Acquisition and application of multicomponent vector wavefields: Are they practical?" by Gaiser and Strudley (First Break, 2005). Encyclopedic Dictionary of Applied Geophysics by Sheriff (SEG, 2002). Geophysics in the Affairs of Mankind: A Personalized History of Exploration Geophysics by Lawyer et al. (SEG, 2001). Clark's monthly articles giving a decade-by-decade history of SEG (TLE, 2005).
"A brief history of seismic migration" by Bednar (GEOPHYSICS, 2005). "Common reflection point horizontal data stacking techniques" by Mayne (GEOPHYSICS, 1962). "Continuous signal seismograph" by Crawford et al. (GEOPHYSICS, 1960).
Afterword and acknowledgments: Shortly after I volunteered to write this article I realized that I faced an almost impossible task. In 2001, SEG published Oz Yilmaz's book with over 2000 pages on the analysis of seismic data. More recently, Bee Bednar wrote a 17-page article on the history of seismic migration for GEOPHYSICS. How could I possibly summarize all of that, plus all of the data acquisition material, in an article of reasonable length? I feared that I would still be writing long after the SEG's 75th Anniversary was history. Out of necessity, I had to limit the scope of the article. I chose to focus on the technology of surface reflection seismology used for hydrocarbon exploration and exclude discussion of the business and society aspects of our industry as well as personal anecdotes. Because I have a day job at WesternGeco, I didn't have much time for historical research. Thus, at first I decided to write about only what I have personally experienced during my career. Then one day Dean Clark (TLE editor) arrived in Houston with cardboard boxes full of arcana that Susan Henley (administrator of SEG's Geoscience Center) had rooted out of some dark nook in the bowels of the SEG headquarters building in Tulsa. I thank Susan for finding the material, and Dean for carting it to Houston and helping me browse through it for items of interest. Dean also helped me with a bit of historical research and pointed me towards some references about the early days of reflection seismology. Shortly after Dean's visit I came across several articles describing the Vines Branch experiment. That's when I had a double flash of inspiration. First, I could construct the article around the contrasts between that first seismic reflection profile and a present-day profile and the technology that got us from one to the other. Second, because of its simplicity, I could use the Vines Branch experiment to describe the basic concepts of reflection seismology in a way that would be interesting to a geophysicist but also accessible to a nongeophysicist. Speaking of nongeophysicists, another decision I made was to write an article aimed at a broad audience. That ruled out the use of equations or in-depth technical discussions. In addition, the background material in the four boxes is intended especially for the nongeophysicists among my readers. A number of people provided useful tidbits of information or figures. Among them are my colleagues at WesternGeco: Laurent Meister, Chris Cunnell, Jerry Kapoor, Boff Anderson, Jim Gaiser, Diane Parker, Jennifer Leenheer, and Seth Riseman. I also thank Gary Crews and Dale Lambert at Input/Output for material about sensors, Oz Yilmaz for permission to use a few figures from his book, and Lee Lawyer for his manuscript review and suggestions. I would like to thank TLE Assistant Editor Sylvie Dale for guiding the production of this article and Graphics Designer Ian Danziger for his wizardry with the figures and layout. Finally, I would like to thank my employer, WesternGeco, for allowing me the time to write this article.

About the author: William (Bill) H. Dragoset has been a research geophysicist at WesternGeco and its predecessor Western Geophysical since 1979. He earned a degree in physics from Auburn University in 1972 and completed a PhD in physics at Rice University in 1978. Bill has served on the SEG Executive Committee and as an Associate Editor of GEOPHYSICS. He is currently a member of THE LEADING EDGE Editorial Board.