Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package[1] of a nuclear weapon to detonate. There are three existing basic design types: pure fission weapons, fusion-boosted fission weapons, and two-stage thermonuclear weapons.
Pure fission weapons historically have been the first type to be built by new nuclear powers. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost-effective option once the necessary technical base and industrial infrastructure are built.
Most known innovations in nuclear weapon design originated in the United States, though some were later developed independently by other states.[3]
In early news accounts, pure fission weapons were called atomic bombs or A-bombs and weapons involving fusion were called hydrogen bombs or H-bombs. Practitioners of nuclear policy, however, favor the terms nuclear and thermonuclear, respectively.
Nuclear fission splits heavier atoms to form lighter atoms. Nuclear fusion combines lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, as a French patent application claimed in May 1939.[4]
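The scale of that ratio can be checked with a rough calculation. The sketch below uses assumed round values (about 180 MeV per fission against roughly 10 eV per molecule of a high explosive, and 74 TJ/kg for complete fission of 235U against 4.184 MJ/kg for TNT); on these numbers the true factor is of order ten million, so the popular "million times" figure is, if anything, conservative.

```python
# Rough comparison of nuclear vs. chemical energy release.
# Assumed round values: ~180 MeV per fission vs. ~10 eV per molecule of a
# high explosive; 74 TJ/kg for complete fission of 235U vs. 4.184 MJ/kg for TNT.
per_reaction = 180e6 / 10          # energy ratio per individual reaction
per_kilogram = 74e12 / 4.184e6     # energy ratio per unit mass
print(f"per reaction: ~{per_reaction:.1e}")   # ~1.8e+07
print(f"per kilogram: ~{per_kilogram:.1e}")   # ~1.8e+07
```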
In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations.[5]
When a free neutron hits the nucleus of a fissile atom such as uranium-235 (235U), the uranium nucleus splits into two smaller nuclei called fission fragments, plus more neutrons (for 235U, three about as often as two; an average of just under 2.5 per fission). Most of these neutrons have the speed (kinetic energy) required to cause new fissions in neighboring uranium nuclei. The fission chain reaction in a supercritical mass of fuel can be self-sustaining because it produces enough surplus neutrons to offset losses from neutrons escaping the supercritical assembly.[6]
The uranium-235 nucleus can split in many ways, provided the atomic numbers add up to 92 and the mass numbers add up to 236 (uranium-235 plus the neutron that caused the split). The following equation shows one possible split, namely into strontium-95 (95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy:[7]

n + 235U → 95Sr + 139Xe + 2 n + 180 MeV
The immediate energy release per atom is about 180 million electron volts (MeV); i.e., 74 TJ/kg. Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second (i.e. 1.2 cm per nanosecond). The charged fragments' high electric charge causes many inelastic coulomb collisions with nearby nuclei, and these fragments remain trapped inside the bomb's fissile pit and tamper until their kinetic energy is converted into heat. Given the speed of the fragments and the mean free path between nuclei in the compressed fuel assembly (for the implosion design), this takes about a millionth of a second (a microsecond), by which time the core and tamper of the bomb have expanded to a ball of plasma several meters in diameter with a temperature of tens of millions of degrees Celsius.
This is hot enough to emit black-body radiation in the X-ray spectrum. These X-rays are absorbed by the surrounding air, producing the fireball and blast of a nuclear explosion.
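The figures above are mutually consistent, as a short calculation with rounded physical constants confirms:

```python
# Rough check of the fission energy figures quoted above (assumed values:
# 180 MeV released per fission, pure uranium-235).
AVOGADRO = 6.022e23          # atoms per mole
EV_TO_J = 1.602e-19          # joules per electron volt

e_fission_j = 180e6 * EV_TO_J                  # ~2.9e-11 J per fission
atoms_per_kg = AVOGADRO * 1000 / 235           # ~2.56e24 atoms of 235U per kg
print(f"energy per kg fully fissioned: {e_fission_j * atoms_per_kg / 1e12:.0f} TJ/kg")  # ~74

# Fragment speed: ~93% of 180 MeV (~167 MeV) is shared by two fragments of
# roughly equal mass, so each fragment of ~118 u carries ~84 MeV.
m_frag = 118 * 1.66e-27                        # fragment mass in kg
ke_frag = 84e6 * EV_TO_J                       # fragment kinetic energy in J
v = (2 * ke_frag / m_frag) ** 0.5
print(f"initial fragment speed: {v / 1e3:.0f} km/s")  # ~12,000
```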
Most fission products have too many neutrons to be stable so they are radioactive by beta decay, converting neutrons into protons by throwing off beta particles (electrons), neutrinos and gamma rays. Their half-lives range from milliseconds to about 200,000 years. Many decay into isotopes that are themselves radioactive, so from 1 to 6 (average 3) decays may be required to reach stability.[8] In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global.[9]
Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons do not give up their kinetic energy as quickly in collisions with charged nuclei or electrons. The dominant contribution of fission neutrons to the bomb's power is the initiation of subsequent fissions. Over half of the neutrons escape the bomb core, but the rest strike 235U nuclei, causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one atom, the number of fissions can theoretically double a hundred times in a microsecond; by the hundredth link in the chain, that doubling could in principle consume hundreds of tons of uranium or plutonium. Typically in a modern weapon, the weapon's pit contains 3.5 to 4.5 kilograms (7.7 to 9.9 lb) of plutonium and at detonation produces approximately 5 to 10 kilotonnes of TNT (21 to 42 TJ) yield, representing the fissioning of approximately 0.5 kilograms (1.1 lb) of plutonium.[10][11]
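A toy calculation, assuming ~180 MeV per fission and the conventional 4.184 TJ per kiloton of TNT, connects these figures: fissioning about half a kilogram of plutonium takes roughly 80 doubling generations and releases a yield inside the quoted 5 to 10 kiloton range.

```python
# Toy check: how many doubling generations does it take to fission ~0.5 kg of
# plutonium, and what yield does that correspond to? (Assumed values: ~180 MeV
# per fission, 1 kt TNT = 4.184e12 J.)
import math

AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

fissioned_kg = 0.5
atoms = fissioned_kg * 1000 / 239 * AVOGADRO      # ~1.3e24 fissions
generations = math.log2(atoms)
energy_j = atoms * 180e6 * EV_TO_J
print(f"doubling generations needed: {generations:.0f}")        # ~80
print(f"yield: {energy_j / 4.184e12:.1f} kt TNT equivalent")    # ~8.7 kt
```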
Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: 235U, also known as highly enriched uranium (HEU), "oralloy" meaning "Oak Ridge alloy",[12] or "25" (a combination of the last digit of the atomic number of uranium-235, which is 92, and the last digit of its mass number, which is 235); and 239Pu, also known as plutonium-239, or "49" (from "94" and "239").[13]
Uranium's most common isotope, 238U, is fissionable but not fissile, meaning that it cannot sustain a chain reaction because its daughter fission neutrons are not (on average) energetic enough to cause follow-on 238U fissions. However, the neutrons released by fusion of the heavy hydrogen isotopes deuterium and tritium will fission 238U. This 238U fission reaction in the outer jacket of the secondary assembly of a two-stage thermonuclear bomb produces by far the greatest fraction of the bomb's energy yield, as well as most of its radioactive debris.
For national powers engaged in a nuclear arms race, this fact of 238U's ability to fast-fission from thermonuclear neutron bombardment is of central importance. The plenitude and cheapness of both bulk dry fusion fuel (lithium deuteride) and 238U (a byproduct of uranium enrichment) permit the economical production of very large nuclear arsenals, in comparison to pure fission weapons requiring the expensive 235U or 239Pu fuels.
Fusion produces neutrons which dissipate energy from the reaction.[14] In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium (2D), fuses with hydrogen-3, or tritium (3T), to form helium-4 (4He) plus one neutron (n) and energy:[15]

2D + 3T → 4He + n + 17.6 MeV
The total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is approximately five times as great. In this fusion reaction, 14 of the 17.6 MeV (80% of the energy released in the reaction) shows up as the kinetic energy of the neutron, which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire.[citation needed]
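The per-unit-mass comparison follows directly from the quoted figures:

```python
# Comparing fission and fusion energy per unit mass (assumed values:
# 180 MeV per 236 u for fission, 17.6 MeV per 5 u for D-T fusion).
fission_mev_per_u = 180 / 236          # ~0.76 MeV per atomic mass unit
fusion_mev_per_u = 17.6 / 5            # ~3.5 MeV per atomic mass unit
print(f"fusion/fission energy density ratio: {fusion_mev_per_u / fission_mev_per_u:.1f}")  # ~4.6

# Fraction of the D-T energy carried away by the neutron:
print(f"neutron share: {14 / 17.6:.0%}")   # ~80%
```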
The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (of either isotope; 14 MeV is high enough to fission both 235U and 238U) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold.[citation needed]
For weapon use, fission is necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy carried by the fusion neutrons. In the case of a neutron bomb (see below), the last-mentioned factor does not apply, since the objective is to facilitate the escape of neutrons, rather than to use them to increase the weapon's raw power.[citation needed]
An essential nuclear reaction is the one that creates tritium, or hydrogen-3. Tritium is employed in two ways. First, pure tritium gas is produced for placement inside the cores of boosted fission devices in order to increase their energy yields. This is especially so for the fission primaries of thermonuclear weapons. The second way is indirect, and takes advantage of the fact that the neutrons emitted by a supercritical fission "spark plug" in the secondary assembly of a two-stage thermonuclear bomb will produce tritium in situ when these neutrons collide with the lithium nuclei in the bomb's lithium deuteride fuel supply.
Elemental gaseous tritium for fission primaries is also made by bombarding lithium-6 (6Li) with neutrons (n), but in a nuclear reactor rather than inside the weapon. This neutron bombardment will cause the lithium-6 nucleus to split, producing an alpha particle, or helium-4 (4He), plus a triton (3T) and energy:[15]

n + 6Li → 4He + 3T + 4.8 MeV
But as was discovered in the first test of this type of device, Castle Bravo, when lithium-7 is present it too breeds tritium: an energetic neutron can split the lithium-7 nucleus into an alpha particle, a triton, and a new neutron (net reaction 7Li + n → 4He + 3T + n), and the bred triton then fuses with deuterium to add yield.
Most lithium is 7Li, and this gave Castle Bravo a yield 2.5 times larger than expected.[16]
The neutrons are supplied by the nuclear reactor in a way similar to production of plutonium 239Pu from 238U feedstock: target rods of the 6Li feedstock are arranged around a uranium-fueled core, and are removed for processing once it has been calculated that most of the lithium nuclei have been transmuted to tritium.
Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three. The fourth, the pure fusion weapon (see below), has never been successfully developed.
The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile (weapon grade) uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by other neighboring fissile nuclei is large enough that each fission event, on average, causes more than one follow-on fission event. Neutrons released by the first fission events induce subsequent fission events at an exponentially accelerating rate. Each follow-on fissioning continues a sequence of these reactions that works its way throughout the supercritical mass of fuel nuclei. This process is conceived and described colloquially as the nuclear chain reaction.
To start the chain reaction in a supercritical assembly, at least one free neutron must be injected and collide with a fissile fuel nucleus. The neutron joins with the nucleus (technically a fusion event) and destabilizes the nucleus, which explodes into two middleweight nuclear fragments (from the severing of the strong nuclear force holding the mutually-repulsive protons together), plus two or three free neutrons. These race away and collide with neighboring fuel nuclei. This process repeats over and over until the fuel assembly goes sub-critical (from thermal expansion), after which the chain reaction shuts down because the daughter neutrons can no longer find new fuel nuclei to hit before escaping the less-dense fuel mass. Each following fission event in the chain approximately doubles the neutron population (net, after losses due to some neutrons escaping the fuel mass, and others that collide with any non-fuel impurity nuclei present).
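This generation-by-generation bookkeeping can be illustrated with a minimal model in which each generation multiplies the neutron population by an effective factor k (the standard multiplication factor of reactor physics, used here only as a toy parameter): k > 1 corresponds to a supercritical assembly, while for k < 1 the chain dies out.

```python
# Minimal sketch of chain-reaction growth: the neutron population is
# multiplied by an effective factor k each generation. k > 1 grows
# exponentially (supercritical); k < 1 dies out (subcritical).
def neutron_population(k: float, generations: int, start: int = 1) -> float:
    n = float(start)
    for _ in range(generations):
        n *= k
    return n

for k in (0.9, 1.0, 2.0):
    print(f"k={k}: population after 80 generations = {neutron_population(k, 80):.3g}")
# k=0.9 -> ~2e-4 (dies out), k=1.0 -> 1 (steady), k=2.0 -> ~1.2e24 (explosive)
```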
For the gun assembly method (see below) of supercritical mass formation, the fuel itself can be relied upon to initiate the chain reaction. This is because even the best weapon-grade uranium contains a significant number of 238U nuclei. These are susceptible to spontaneous fission events, which occur randomly (it is a quantum mechanical phenomenon). Because the fissile material in a gun-assembled critical mass is not compressed, the design need only ensure the two sub-critical masses remain close enough to each other long enough that a 238U spontaneous fission will occur while the weapon is in the vicinity of the target. This is not difficult to arrange as it takes but a second or two in a typical-size fuel mass for this to occur. (Still, many such bombs meant for delivery by air (gravity bomb, artillery shell or rocket) use injected neutrons to gain finer control over the exact detonation altitude, important for the destructive effectiveness of airbursts.)
This condition of spontaneous fission highlights the necessity to assemble the supercritical mass of fuel very rapidly. The time required to accomplish this is called the weapon's critical insertion time. If spontaneous fission were to occur when the supercritical mass was only partially assembled, the chain reaction would begin prematurely. Neutron losses through the void between the two subcritical masses (gun assembly) or the voids between not-fully-compressed fuel nuclei (implosion assembly) would sap the bomb of the number of fission events needed to attain the full design yield. Additionally, heat resulting from the fissions that do occur would work against the continued assembly of the supercritical mass, from thermal expansion of the fuel. This failure is called predetonation. The resulting explosion would be called a "fizzle" by bomb engineers and weapon users. Plutonium's high rate of spontaneous fission makes uranium fuel a necessity for gun-assembled bombs, with their much greater insertion time and much greater mass of fuel required (because of the lack of fuel compression).
There is another source of free neutrons that can spoil a fission explosion. All uranium and plutonium nuclei have a decay mode that results in energetic alpha particles. If the fuel mass contains impurity elements of low atomic number (Z), these charged alphas can penetrate the coulomb barrier of these impurity nuclei and undergo a reaction that yields a free neutron. The rate of alpha emission of fissile nuclei is one to two million times that of spontaneous fission, so weapon engineers are careful to use fuel of high purity.
Fission weapons used in the vicinity of other nuclear explosions must be protected from the intrusion of free neutrons from outside. Such shielding material will almost always be penetrated, however, if the outside neutron flux is intense enough. When a weapon misfires or fizzles because of the effects of other nuclear detonations, it is called nuclear fratricide.
For the implosion-assembled design, once the critical mass is assembled to maximum density, a burst of neutrons must be supplied to start the chain reaction. Early weapons used a modulated neutron initiator code-named "Urchin" inside the pit, containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushes the initiator, mixing the two metals and thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better timing of the first fission events in the chain reaction, which optimally should occur at the point of maximum compression/supercriticality. Timing of the neutron injection is a more important parameter than the number of neutrons injected: the first generations of the chain reaction are vastly more effective because of the exponential way in which neutron multiplication evolves.
The critical mass of an uncompressed sphere of bare metal is 50 kg (110 lb) for uranium-235 and 16 kg (35 lb) for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons.
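Mass and density fix the size of such a bare sphere. The sketch below assumes typical handbook densities (about 19.1 g/cm³ for uranium metal and 15.9 g/cm³ for delta-phase plutonium; both values are approximations, not from the text above):

```python
# Size of a bare critical sphere from mass and density (assumed densities:
# uranium metal ~19.1 g/cm^3, delta-phase plutonium ~15.9 g/cm^3).
import math

def bare_sphere_radius_cm(mass_kg: float, density_g_cm3: float) -> float:
    volume_cm3 = mass_kg * 1000 / density_g_cm3
    return (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)

print(f"U-235, 50 kg:  r = {bare_sphere_radius_cm(50, 19.1):.1f} cm")   # ~8.5 cm
print(f"Pu-239, 16 kg: r = {bare_sphere_radius_cm(16, 15.9):.1f} cm")   # ~6.2 cm
```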
To avoid a premature chain reaction during handling, the fissile material in the weapon must be kept subcritical. It may consist of one or more components containing less than one uncompressed critical mass each. A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality. Another method of reducing criticality risk is to incorporate material with a large cross-section for neutron capture, such as boron (specifically 10B comprising 20% of natural boron). Naturally this neutron absorber must be removed before the weapon is detonated. This is easy for a gun-assembled bomb: the projectile mass simply shoves the absorber out of the void between the two subcritical masses by the force of its motion.
The use of plutonium affects weapon design due to its high rate of alpha emission. This results in Pu metal spontaneously producing significant heat; a 5-kilogram mass produces 9.68 watts of thermal power. Such a piece would feel warm to the touch, which is no problem if that heat is dissipated promptly rather than allowed to build up. Inside a nuclear bomb, however, the heat is a problem. For this reason bombs using Pu fuel incorporate aluminum parts to conduct away the excess heat, and this complicates bomb design because the aluminum plays no active role in the explosion processes.
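The quoted figure is consistent with the known decay properties of plutonium-239, as this sketch (assuming pure 239Pu, a 24,110-year half-life, and ~5.2 MeV per alpha decay) shows:

```python
# Check of the quoted decay heat via the alpha decay of Pu-239 (assumed:
# 24,110-year half-life, ~5.2 MeV per decay, pure 239Pu).
import math

AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19
HALF_LIFE_S = 24110 * 365.25 * 24 * 3600

atoms_per_kg = 1000 / 239 * AVOGADRO
activity_bq_per_kg = math.log(2) / HALF_LIFE_S * atoms_per_kg
watts_per_kg = activity_bq_per_kg * 5.2e6 * EV_TO_J
print(f"{watts_per_kg:.1f} W/kg -> {5 * watts_per_kg:.1f} W for a 5 kg mass")  # ~1.9 -> ~9.6
```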
A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the thermal expansion of the fissioning fuel mass, keeping it supercritical for longer. Often[when?] the same layer serves both as tamper and as neutron reflector.
Little Boy, the Hiroshima bomb, used 64 kg (141 lb) of uranium with an average enrichment of around 80%, or 51 kg (112 lb) of uranium-235, just about the bare-metal critical mass. When assembled inside its tamper/reflector of tungsten carbide, the 64 kg (141 lb) was more than twice the critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the nuclear explosion. Analysis shows that less than 2% of the uranium mass underwent fission;[17] the remainder, representing most of the entire wartime output of the giant Y-12 factories at Oak Ridge, scattered uselessly.[18]
The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun).[citation needed] Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the 235U in the arsenal[citation needed], and were some of the first weapons dismantled to comply with treaties limiting warhead numbers.[citation needed] The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design.[citation needed]
For both the Trinity device and the Fat Man (Nagasaki) bomb, nearly identical plutonium implosion designs were used. The Fat Man device specifically used 6.2 kg (14 lb), about 350 ml or 12 US fl oz in volume, of Pu-239, which is only 41% of a bare-sphere critical mass. Surrounded by a U-238 reflector/tamper, the Fat Man's pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion: as in the Trinity test three weeks earlier, conventional explosives placed uniformly around the pit were detonated simultaneously by multiple exploding-bridgewire detonators, squeezing the pit and increasing its density. It is estimated that only about 20% of the plutonium underwent fission; the rest, about 5 kg (11 lb), was scattered.
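The efficiencies of both wartime designs can be estimated from their yields. The sketch below assumes the commonly cited yields of roughly 15 kilotons for Little Boy and 21 kilotons for Fat Man (neither figure appears above) and about 17.5 kilotons per kilogram of completely fissioned material:

```python
# Worked efficiency check for the two wartime designs (assumed: complete
# fission of 1 kg of 235U or 239Pu yields ~17.5 kt; Little Boy ~15 kt and
# Fat Man ~21 kt, as commonly cited).
KT_PER_KG = 17.5   # kt of TNT equivalent per kg fully fissioned (approximate)

for name, yield_kt, fissile_kg in [("Little Boy", 15, 64), ("Fat Man", 21, 6.2)]:
    fissioned = yield_kt / KT_PER_KG
    print(f"{name}: ~{fissioned:.2f} kg fissioned, "
          f"efficiency ~{fissioned / fissile_kg:.0%}")
# Little Boy: ~0.86 kg, ~1%; Fat Man: ~1.2 kg, ~19%
```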
An implosion shock wave might be of such short duration that only part of the pit is compressed at any instant as the wave passes through it. To prevent this, a pusher shell may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backward, thereby having the effect of lengthening its duration. It is made out of a low density metal – such as aluminium, beryllium, or an alloy of the two metals (aluminium is easier and safer to shape, and is two orders of magnitude cheaper; beryllium has high neutron-reflective capability). Fat Man used an aluminium pusher.
The series of RaLa Experiment tests of implosion-type fission weapon design concepts, carried out from July 1944 through February 1945 at the Los Alamos Laboratory and a remote site 14.3 km (8.9 mi) east of it in Bayo Canyon, proved the practicality of the implosion design for a fission device, with the February 1945 tests positively determining its usability for the final Trinity/Fat Man plutonium implosion design.[19]
The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper. (The natural uranium tamper did not undergo fission from thermal neutrons, but did contribute perhaps 20% of the total yield from fission by fast neutrons). After the chain reaction started in the plutonium, it continued until the explosion reversed the momentum of the implosion and expanded enough to stop the chain reaction. By holding everything together for a few hundred nanoseconds more, the tamper increased the efficiency.
The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium,[20] but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s.[citation needed]
Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 3.0–3.5 mol% (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range.[21] When cooling from molten it then has only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds are corrosive, so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium.[citation needed]
Because plutonium is chemically reactive, it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard.[22] The gadget used galvanic silver plating; afterward, nickel deposited from nickel tetracarbonyl vapors was used,[22] but gold has since become the preferred material.[citation needed] Recent designs improve safety by plating pits with vanadium to make the pits more fire-resistant.[citation needed]
The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. The pit, supported on a hollow cone inside the tamper cavity, was said to be "levitated". The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man.[23]
It was immediately clear[according to whom?] that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was 1.5 metres (5 ft) wide vs 61 centimetres (2 ft) for Little Boy.
The Pu-239 pit of Fat Man was only 9.1 centimetres (3.6 in) in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design.[citation needed]
In the two-point linear implosion, the nuclear fuel is cast into a solid shape and placed within the center of a cylinder of high explosive. Detonators are placed at either end of the explosive cylinder, and a plate-like insert, or shaper, is placed in the explosive just inside the detonators. When the detonators are fired, the initial detonation is trapped between the shaper and the end of the cylinder, causing it to travel out to the edges of the shaper where it is diffracted around the edges into the main mass of explosive. This causes the detonation to form into a ring that proceeds inward from the shaper.[24]
Due to the lack of a tamper or lenses to shape the progression, the detonation does not reach the pit in a spherical shape. To produce the desired spherical implosion, the fissile material itself is shaped to produce the same effect. Due to the physics of the shock wave propagation within the explosive mass, this requires the pit to be a prolate spheroid, that is, roughly egg shaped. The shock wave first reaches the pit at its tips, driving them inward and causing the mass to become spherical. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion.[citation needed]
The lack of compression makes such designs inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes; an example is the W48 artillery shell, the smallest nuclear weapon ever built or deployed. All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (15 and 25 cm).[citation needed]
A more efficient implosion system uses a hollow pit.[citation needed]
A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraints, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives.[citation needed]
After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminium pusher, and less high explosive.[citation needed]
The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. This would allow the efficient fission of the fuel with less mass in the form of tamper or the fuel itself. The key to achieving faster fission would be to introduce more neutrons, and among the many ways to do this, adding a fusion reaction was relatively easy in the case of a hollow pit.[citation needed]
The easiest fusion reaction to achieve is found in a 50–50 mixture of tritium and deuterium.[25] For fusion power experiments this mixture must be held at high temperatures for relatively lengthy times in order to have an efficient reaction. For explosive use, however, the goal is not to produce efficient fusion, but simply provide extra neutrons early in the process.[citation needed] Since a nuclear explosion is supercritical, any extra neutrons will be multiplied by the chain reaction, so even tiny quantities introduced early can have a large effect on the outcome. For this reason, even the relatively low compression pressures and times (in fusion terms) found in the center of a hollow pit warhead are enough to create the desired effect.[citation needed]
In the boosted design, the fusion fuel in gas form is pumped into the pit during arming. This will fuse into helium and release free neutrons soon after fission begins.[citation needed] The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical. Once the hollow pit is perfected, there is little reason not to boost; deuterium and tritium are easily produced in the small quantities needed, and the technical aspects are trivial.[25]
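The leverage of a few grams of gas can be illustrated with a toy estimate. The 3-gram fill and the assumption of complete burn below are purely illustrative (real boost-gas quantities and burn fractions are not public):

```python
# Toy estimate of boosting (assumed: 3 g of 50-50 D-T gas, complete burn,
# one neutron per D-T fusion; ~1.3e24 fissions in a ~9 kt pure-fission burn).
AVOGADRO = 6.022e23

dt_grams = 3.0
dt_pairs = dt_grams / 5.0 * AVOGADRO          # one D (2 u) + one T (3 u) = 5 u
print(f"fusion neutrons from {dt_grams} g D-T: {dt_pairs:.2g}")         # ~3.6e23
print(f"fissions needed for 0.5 kg of Pu:      {0.5e3 / 239 * AVOGADRO:.2g}")  # ~1.3e24
# Even a partial burn injects a neutron population comparable to the entire
# chain reaction's output, equivalent to skipping dozens of doubling
# generations at the moment of peak compression.
```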
The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons.[citation needed]
Boosting reduces diameter in three ways, all the result of faster fission.
The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of 29 cm (11.6 in) and a length of 58 cm (22.8 in).[citation needed]
It was first tested standalone and then as the primary of a two-stage thermonuclear device during Operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed.[citation needed]
After the success of Swan, 28 or 30 centimetres (11 or 12 in) seemed to become the standard diameter of boosted single-stage devices tested during the 1950s.[citation needed] Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only 38 centimetres (15 in) long.
One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile. It had a diameter of just 28 centimetres (11 in), compared with 150 centimetres (60 in) for its Fat Man predecessor.
Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to predetonation.[citation needed] It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate problem).[citation needed] This radiation-induced predetonation was a particular problem before effective early warning radar systems, because a first strike attack might make retaliatory weapons useless. Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect.[citation needed]
Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to add a second independent stage, called a secondary.[citation needed]
In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it would not work, even with large amounts of expensive tritium added.[citation needed]
The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanislaw Ulam invented radiation implosion – for nearly three decades known publicly only as the Teller-Ulam H-bomb secret.[26][27]
The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons.[citation needed]
In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force[28] (much like an inside-out rocket engine), causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes, a fissile "spark plug" at its center ignites and provides neutrons and heat which enable the lithium deuteride fusion fuel to produce tritium and ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary despite its often not being much larger than the primary.[citation needed]
For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was 38 centimetres (15 in) in diameter and 59 centimetres (23.4 in) long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons vs 15 kilotons).[citation needed]
Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile.[citation needed]
A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons.[citation needed]
As with boosting, the advantages of the two-stage thermonuclear design are so great that there is little incentive not to use it, once a nation has mastered the technology.[citation needed]
In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which had heretofore eluded practical application.
In the ensuing fifty years, no one has come up with a more efficient way to build a thermonuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. On 3 September 2017 North Korea carried out what it reported as its first "two-stage thermo-nuclear weapon" test.[31] According to Dr. Theodore Taylor, after reviewing leaked photographs of disassembled weapons components taken before 1986, Israel possessed boosted weapons and would require supercomputers of that era to advance further toward full two-stage weapons in the megaton range without nuclear test detonations.[32] The other nuclear-armed nations, India and Pakistan, probably have single-stage weapons, possibly boosted.[30]
In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential[citation needed] energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fusion and fission reactions get a chance to start.[citation needed]
There is very little information in the open literature about the mechanism of the interstage.[citation needed] Its first mention in a U.S. government document formally released to the public appears to be a caption in a graphic promoting the Reliable Replacement Warhead Program in 2007. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage.[33] This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner.[34] There is also some speculation that this interstage material, which may be code-named Fogbank, might be an aerogel, possibly doped with beryllium and/or other substances.[35][36]
The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing.[37] The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens", "reflector/neutron gun carriage", and "reflector wrap". The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation.[38]
While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples:
The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon.[39]
The U.S. name, Alarm Clock, came from Teller: he called it that because it might "wake up the world" to the potential of the Super.[40] The Russian name for the same design was more descriptive: Sloika (Russian: Слойка), a layered pastry cake. A single-stage Soviet Sloika was tested as RDS-6s on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, April 26, 1954, was a two-stage thermonuclear device code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons.[citation needed]
Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that the USSR won the H-bomb race, even though the United States had developed and tested the first hydrogen bomb, Ivy Mike. The 1952 U.S. Ivy Mike test used cryogenically cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction. However, the first Soviet test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike. In fact, real work on the implosion scheme in the Soviet Union only commenced in the very early part of 1953, several months after the successful testing of Sloika.[citation needed]
On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Castle Bravo shot of Operation Castle at Bikini Atoll, delivered a promptly lethal dose of fission-product fallout to more than 6,000 square miles (16,000 km2) of Pacific Ocean surface.[41] Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs.
In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages in the Teller-Ulam design, can far exceed the energy released by fusion, as was the case in the Castle Bravo test. Replacing the fissionable material in the tamper with another material is essential to producing a "clean" bomb. In such a device, the tamper no longer contributes energy, so for any given weight, a clean bomb will have less yield. The earliest known instance of a three-stage device being tested, with the third stage, called the tertiary, being ignited by the secondary, was May 27, 1956, in the Bassoon device. This device was tested in the Zuni shot of Operation Redwing. This shot used non-fissionable tampers of an inert substitute material such as tungsten or lead. Its yield was 3.5 megatons, 85% fusion and only 15% fission.[citation needed]
The Ripple concept, which used ablation to achieve fusion using very little fission, was and still is by far the cleanest design. Unlike previous clean bombs, which were clean simply because fission fuel had been replaced with inert material, Ripple was clean by design. Ripple was also extremely efficient; plans for a device with a yield-to-weight ratio of 15 kt/kg were made during Operation Dominic. Shot Androscoggin featured a proof-of-concept Ripple design, resulting in a 63-kiloton fizzle (significantly lower than the predicted 15 megatons). It was repeated in shot Housatonic, which produced a 9.96-megaton explosion that was reportedly >99.9% fusion.[42]
The devices on public record that produced the highest proportion of their yield via fusion reactions are the peaceful nuclear explosions of the 1970s. Others include the 10-megaton Dominic Housatonic at over 99.9% fusion, the 50-megaton Tsar Bomba at 97% fusion,[43] the 9.3-megaton Hardtack Poplar test at 95%,[44] and the 4.5-megaton Redwing Navajo test at 95% fusion.[45]
The most ambitious peaceful application of nuclear explosions was pursued by the USSR with the aim of creating a 112 km (70 mi) long canal between the Pechora river basin and the Kama river basin, about half of which was to be constructed through a series of underground nuclear explosions. It was reported that about 250 nuclear devices might be needed to reach the final goal. The Taiga test was to demonstrate the feasibility of the project. Three of these "clean" devices of 15 kiloton yield each were placed in separate boreholes spaced about 165 metres (540 ft) apart at depths of 127 metres (417 ft). They were simultaneously detonated on March 23, 1971, catapulting a radioactive plume into the air that was carried eastward by the wind. The resulting trench was around 700 metres (2,300 ft) long and 340 metres (1,120 ft) wide, with an unimpressive depth of just 10 to 15 metres (30 to 50 ft).[46] Despite their "clean" nature, the devices left the area with a noticeably higher (albeit mostly harmless) concentration of fission products. In addition, the intense neutron bombardment of the soil, the devices themselves, and the support structures activated stable elements, creating a significant amount of man-made radioactive isotopes such as 60Co. The overall danger posed by the concentration of radioactive elements present at the site created by these three devices is still negligible, but a larger-scale project as envisioned would have had significant consequences, both from the fallout of the radioactive plume and from the radioactive elements created by the neutron bombardment.[47]
On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni shot clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement, the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. The Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test, and others, culminated in the eventual deployment of the highest-yielding US nuclear weapon known, and the highest yield-to-weight weapon ever made, a three-stage thermonuclear weapon with a maximum "dirty" yield of 25 megatons, designated as the B41 nuclear bomb, which was to be carried by U.S. Air Force bombers until it was decommissioned; this weapon was never fully tested.[citation needed]
First and second generation nuclear weapons release energy as omnidirectional blasts. Third generation[48][49][50] nuclear weapons are experimental special effect warheads and devices that can release energy in a directed manner, some of which were tested during the Cold War but were never deployed. These include:
The idea of "4th-generation" nuclear weapons has been proposed as a possible successor to the weapon designs listed above. These methods tend to revolve around using non-nuclear primaries to set off further fission or fusion reactions. For example, if antimatter were usable and controllable in macroscopic quantities, a reaction between a small amount of antimatter and an equivalent amount of matter could release energy comparable to a small fission weapon, and could in turn be used as the first stage of a very compact thermonuclear weapon. Extremely powerful lasers could also potentially be used this way, if they could be made powerful enough, and compact enough, to be viable as a weapon. Most of these ideas are versions of pure fusion weapons, and share the common property that they involve hitherto unrealized technologies as their "primary" stages.[52]
While many nations have invested significantly in inertial confinement fusion research programs, since the 1970s it has not been considered promising for direct weapons use, but rather as a tool for weapons- and energy-related research that can be used in the absence of full-scale nuclear testing. Whether any nations are aggressively pursuing "4th-generation" weapons is not clear. In many cases (as with antimatter) the underlying technology is presently thought to be very far from viable; and if it were viable, it would be a powerful weapon in and of itself, outside of a nuclear weapons context, without providing any significant advantage over existing nuclear weapons designs.[53]
Since the 1950s, the United States and the Soviet Union have investigated the possibility of releasing significant amounts of nuclear fusion energy without the use of a fission primary. Such "pure fusion weapons" were primarily imagined as low-yield, tactical nuclear weapons whose advantage would be their ability to be used without producing fallout on the scale of weapons that release fission products. In 1998, the United States Department of Energy declassified the following:
(1) Fact that the DOE made a substantial investment in the past to develop a pure fusion weapon
(2) That the U.S. does not have and is not developing a pure fusion weapon; and
(3) That no credible design for a pure fusion weapon resulted from the DOE investment.[54]
Red mercury, a likely hoax substance, has been hyped as a catalyst for a pure fusion weapon.[citation needed]
A doomsday bomb, made popular by Nevil Shute's 1957 novel On the Beach and the subsequent 1959 movie, the cobalt bomb is a hydrogen bomb with a jacket of cobalt. The neutron-activated cobalt would maximize the environmental damage from radioactive fallout. These bombs were popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb; the material added to the bombs is referred to in the film as "cobalt-thorium G".[citation needed]
Such "salted" weapons were investigated by the U.S. Department of Defense.[55] Fission products are as deadly as neutron-activated cobalt.
Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from 60Co: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission-product radiation drops off rapidly, so that 60Co fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years.[56]
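The crossover behavior can be sketched with a toy decay model: the empirical Way-Wigner t^-1.2 law for mixed fission products and simple exponential decay (5.27-year half-life) for 60Co, normalized so the two are equal at 6 months. This reproduces the trend above only qualitatively, not the exact ratios:

```python
# Toy model of fallout gamma activity: mixed fission products vs. cobalt-60.
# Assumptions: Way-Wigner t^-1.2 law for fission products; exponential decay
# with a 5.27-year half-life for 60Co; normalization chosen only to show the
# crossover, not calibrated to any real weapon.
import math

CO60_HALF_LIFE_YR = 5.27

def fission_activity(t_hours: float) -> float:
    return t_hours ** -1.2                  # Way-Wigner approximation

def co60_activity(t_hours: float) -> float:
    t_years = t_hours / (365.25 * 24)
    return math.exp(-math.log(2) * t_years / CO60_HALF_LIFE_YR)

# Normalize so the two are equal at 6 months, as in the text above.
six_months = 0.5 * 365.25 * 24
scale = co60_activity(six_months) / fission_activity(six_months)

for label, t in [("1 hour", 1), ("1 week", 168), ("1 month", 730),
                 ("6 months", six_months), ("1 year", 8766), ("5 years", 43830)]:
    ratio = scale * fission_activity(t) / co60_activity(t)
    print(f"{label:>8}: fission/Co-60 ratio ~ {ratio:.3g}")
```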
The triple "Taiga" nuclear salvo test, part of the preliminary March 1971 Pechora–Kama Canal project, produced a comparatively small amount of fission products, so activated case material, chiefly 60Co, is responsible for most of the residual activity at the site today. As of 2011, fusion-generated neutron activation was responsible for about half of the gamma dose at the test site. That dose is too small to cause deleterious effects, and normal green vegetation exists all around the lake that was formed.[57][58]
The idea of a device which has an arbitrarily large number of Teller-Ulam stages, with each driving a larger radiation-driven implosion than the preceding stage, is frequently suggested,[59][60] but technically disputed.[61] There are "well-known sketches and some reasonable-looking calculations in the open literature about two-stage weapons, but no similarly accurate descriptions of true three stage concepts."[61]
During the mid-1950s through early 1960s, scientists working in the weapons laboratories of the United States investigated weapons concepts as large as 1,000 megatons,[62] and Edward Teller announced the design of a 10,000-megaton weapon code-named SUNDIAL at a meeting of the General Advisory Committee of the Atomic Energy Commission.[63] Much of the information about these efforts remains classified,[64][65] but such "gigaton" range weapons do not appear to have made it beyond theoretical investigations.
While both the US and the Soviet Union investigated (and in the case of the Soviets, tested) "very high yield" (e.g. 50 to 100-megaton) weapon designs in the 1950s and early 1960s,[66] these appear to represent the upper limit of Cold War weapon yields pursued seriously, and they were so physically heavy and massive that they could not be carried entirely within the bomb bays of the largest bombers. Cold War warhead development trends from the mid-1960s onward, and especially after the Limited Test Ban Treaty, instead produced highly compact warheads with yields in the range from hundreds of kilotons to the low megatons, giving greater options for deliverability.
Following the concern caused by the estimated gigaton scale of the 1994 Comet Shoemaker-Levy 9 impacts on the planet Jupiter, in a 1995 meeting at Lawrence Livermore National Laboratory (LLNL), Edward Teller proposed to a collective of U.S. and Russian ex-Cold War weapons designers that they collaborate on designing a 1,000-megaton nuclear explosive device for diverting extinction-class asteroids (10+ km in diameter), which would be employed in the event that one of these asteroids were on an impact trajectory with Earth.[67][68][69]
A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase their overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated).[citation needed]
ERWs are more accurately described as suppressed yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, 700 m (2,300 ft), is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings. Blast pressures of upwards of 20 psi (140 kPa) are survivable, whereas most buildings will collapse with a pressure of only 5 psi (30 kPa).[citation needed]
Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews – tanks giving excellent protection against blast and heat, surviving (relatively) very close to a detonation. Given the Soviets' vast tank forces during the Cold War, this was the perfect weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance that the heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive, temporarily preventing its re-use by a fresh crew.[citation needed]
Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralising an incoming warhead at a greater range than heat or blast. Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux.[citation needed]
Approximate distribution of energy output for standard vs. enhanced radiation weapons:

| | Standard | Enhanced |
|---|---|---|
| Blast | 50% | 40% |
| Thermal energy | 35% | 25% |
| Instant radiation | 5% | 30% |
| Residual radiation | 10% | 5% |
ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s.[citation needed]
A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10 to 15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88.[70]
All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage.[71]
The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 cyclotron production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference.[citation needed]
By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer (classified at the time, but since fully declassified and widely available online as a PDF).[72] The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods", the one approach that turned out to be a dead end.[citation needed]
At Los Alamos, Emilio Segrè discovered in April 1944 that the proposed Thin Man gun-assembly bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford Site in Washington.[citation needed]
In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.[citation needed]
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved.[citation needed]
Because of Oppenheimer's early opposition to large thermonuclear weapons in the H-bomb debate, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided Teller needed his own laboratory in order to pursue H-bombs. By the time it opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do.[citation needed]
With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly.[citation needed]
Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles.[73]
In 1957 and 1958, both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos.[citation needed]
Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium-235 into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs.[citation needed]
Starting with the Nova facility at Livermore in the mid-1980s, nuclear design activity pertaining to radiation-driven implosion was informed by research with indirect drive laser fusion. This work was part of the effort to investigate Inertial Confinement Fusion. Similar work continues at the more powerful National Ignition Facility. The Stockpile Stewardship and Management Program also benefited from research performed at NIF.[citation needed]
Nuclear weapons are in large part designed by trial and error. The trial often involves the test explosion of a prototype.
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate.[74]
It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass-produced.[citation needed]
It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of the potential nuclear reaction had taken place.[citation needed]
An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded by instruments located at the far end of a long, very straight pipe.[citation needed]
The Shrimp device, detonated on March 1, 1954, at Bikini as the Castle Bravo test, produced a 15-megaton explosion, the largest ever by the United States. The pipes entering the shot cab ceiling, which appeared in photographs to be structural supports, were actually diagnostic light pipes. The eight pipes at one end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when X-rays from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel.[75]
From the shot cab, the pipes turned horizontally and traveled 2.3 km (7,500 ft) along a causeway built on the Bikini reef to a remote-controlled data collection bunker on Namu Island.[citation needed]
While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary creates a relatively opaque radiation front in the channel filler, which acts like a slow-moving logjam, retarding the passage of radiant energy. While the secondary is being compressed via radiation-induced ablation, neutrons from the primary catch up with the x-rays, penetrate into the secondary, and start breeding tritium via the third reaction noted in the first section above. This 6Li + n reaction is exothermic, producing 5 MeV per event. The spark plug has not yet been compressed and thus remains subcritical, so no significant fission or fusion takes place as a result. If enough neutrons arrive before implosion of the secondary is complete, though, the crucial temperature differential between the outer and inner parts of the secondary can be degraded, potentially causing the secondary to fail to ignite. The first Livermore-designed thermonuclear weapon, the Morgenstern device, failed in this manner when it was tested as Castle Koon on April 7, 1954. The primary ignited, but the secondary, preheated by the primary's neutron wave, suffered what was termed an inefficient detonation;[76]: 165 thus, a weapon with a predicted one-megaton yield produced only 110 kilotons, of which merely 10 kt were attributed to fusion.[77]: 316
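The tritium-breeding reaction in question can be written out explicitly (a standard textbook reaction, shown here for reference; the 5 MeV figure above is a rounding of the commonly published Q-value of about 4.8 MeV):

$$^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H}, \qquad Q \approx 4.8\ \mathrm{MeV}.$$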
The timing effects described above, and any problems they cause, are measured by light-pipe data. The mathematical simulations they calibrate are called radiation flow hydrodynamics codes, or channel codes, and are used to predict the effect of future design modifications.[citation needed]
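As a purely illustrative sketch of how the transit-time measurement reduces to arithmetic (all numbers below are hypothetical placeholders, not actual test data, and the function name is invented for this example), the effective speed of the radiation front is simply the channel length divided by the interval between the (2) and (3) signals:

```python
# Reduction of hypothetical light-pipe timing data.
# All values here are illustrative placeholders, not actual test data.

C = 2.998e8  # speed of light, m/s

def radiation_front_speed(t2_ns: float, t3_ns: float, channel_length_m: float) -> float:
    """Effective speed of the radiation front along the channel, given the
    arrival times (in nanoseconds) at light pipes (2) and (3)."""
    transit_s = (t3_ns - t2_ns) * 1e-9  # convert nanoseconds to seconds
    return channel_length_m / transit_s

# Hypothetical example: signals 100 ns apart across a 3 m channel.
v = radiation_front_speed(t2_ns=0.0, t3_ns=100.0, channel_length_m=3.0)
print(f"effective front speed: {v:.3e} m/s ({v / C:.3f} c)")
```

In this made-up example the front moves at about a tenth of the speed of light, the kind of slowdown the "logjam" opacity effect described above would produce.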
It is not clear from the public record how successful the Shrimp light pipes were. The unmanned data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times as powerful as expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were 32 kilometres (20 mi) farther away, in a bunker that survived intact.)[78]
The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high.[citation needed]
Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly.[79]
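For reference, the reaction described above, and the arithmetic behind the factor of two and a half (an illustrative reconstruction from the percentages quoted in this article, not a sourced calculation), are:

$$^{7}\mathrm{Li} + n \rightarrow {}^{6}\mathrm{Li} + 2n.$$

If the 60% of the fuel that was lithium-7 is converted to lithium-6 in place, the effective lithium-6 content rises from 40% toward 100% of the fuel, and $100/40 = 2.5$.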
As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, Daigo Fukuryū Maru, sailed home with enough fallout on her decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons.[citation needed]
The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing literally underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas.[citation needed]
The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive caverns created by nuclear explosions.
After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.[citation needed]
When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa.
Because primaries tend to be bulky, especially in diameter, plutonium, which has a smaller critical mass than uranium, is the fissile material of choice for pits, with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility.[citation needed]
The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy (plutonium and oralloy, i.e., highly enriched uranium) pits and spark plugs, but for mass production it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries.[citation needed]
Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries.[citation needed]
The Hanford Site near Richland, Washington, operated plutonium production reactors and separation facilities during World War II and the Cold War. Nine plutonium production reactors were built and operated there, the first being the B Reactor, which began operations in September 1944, and the last the N Reactor, which ceased operations in January 1987.[citation needed]
The Savannah River Site in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride.[citation needed]
Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognised the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation.[citation needed]
It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled.[citation needed]
If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe.[citation needed]
Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern.[citation needed]
On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits created a space between the pit and the tamper, it became feasible to use in-flight pit insertion: the bomber would take off with no fissile material in the bomb. Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system.[citation needed]
In-flight pit insertion will not work with a hollow pit in contact with its tamper.[citation needed]
One method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the hollow pit, and their presence prevented the symmetrical implosion needed for any nuclear yield in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs.[citation needed]
Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings as has happened in several weapons accidents, it could not cause a nuclear explosion.[citation needed]
While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern.[citation needed]
In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical.[citation needed]
It is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests.[citation needed]
Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes.[citation needed]
Of particular concern was Livermore's W47, which generated unacceptably high yields in one-point testing. To prevent an accidental detonation, Livermore decided to use mechanical safing on the W47. The wire safety scheme described below was the result.[citation needed]
When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing.[citation needed]
In the last test before the 1958 moratorium, the W47 warhead for the Polaris SLBM was found not to be one-point safe, producing an unacceptably high nuclear yield of 200 kg (440 lb) of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. A solution was devised consisting of a boron-coated wire inserted into the weapon's hollow pit at manufacture. The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. Once withdrawn, the wire could not be re-inserted.[80] The wire had a tendency to become brittle during storage and to break or get stuck during arming, preventing complete removal and rendering the warhead a dud.[81] It was estimated that 50–75% of warheads would fail. This required a complete rebuild of all W47 primaries.[82] The oil used for lubricating the wire also promoted corrosion of the pit.[83]
Under the strong link/weak link system, "weak links" are constructed between critical nuclear weapon components (the "hard links"). In the event of an accident, the weak links are designed to fail first, in a manner that precludes energy transfer between the critical components. Then, if a hard link fails in a manner that transfers or releases energy, that energy cannot propagate into other weapon systems and potentially start a nuclear detonation. Hard links are usually critical weapon components that have been hardened to survive extreme environments, while weak links can be either components deliberately inserted into the system to act as weak links or critical nuclear components that fail predictably.[citation needed]
An example of a weak link would be an electrical connector that contains electrical wires made from a low melting point alloy. During a fire, those wires would melt, breaking any electrical connection.[citation needed]
A permissive action link is an access control device designed to prevent unauthorised use of nuclear weapons. Early PALs were simple electromechanical switches and have evolved into complex arming systems that include integrated yield control options, lockout devices and anti-tamper devices.[citation needed]