College of Engineering, Chengannur
1. INTRODUCTION
In the past 40 years, the minimum dimension of integrated circuits (ICs) has been shrinking
at a rate of 30% smaller feature size every three years, following the so-called Moore’s law.
The International Technology Roadmap for Semiconductors (ITRS), an industrial consensus
of future technology largely based on Moore's law, is a 100-page document that suggests
that features of complementary metal–oxide–semiconductor (CMOS) circuitry will continue
to shrink down to at least the 22 nm node. At that node, half of the centre-to-centre
distance (pitch) of the first level of interconnect is 22 nm and the width of the resist feature
for the gate electrode is 15 nm; the etched gate electrode is even smaller, about 9 nm (Fig. 1.1).
Fig 1.1
There is very good reason for this continued drive to shrink dimensions. Until recently,
scaling down all linear dimensions L along with applied voltages led to a proportionate
increase in speed and a reduction in energy per computing function proportional to L^3 (the
switching energy CV^2 scales as the cube of the linear dimension). Although the
former advantage is the most frequently touted, the latter is probably the more significant;
even more so as electronics increasingly will be hand-carried. Although the classical
scaling laws may not apply quantitatively as we continue to scale down dimensions, it
appears that we will get more computing per unit time and per unit energy. Scaling to 22 nm
will bring advantages in terms of energy per computing function as well as speed. There
may be other ways to reduce power per computing function. For example, just about all
transistors operate by modulating the height of a thermal barrier. Thence it follows that to
change the current tenfold at room temperature, the change in voltage applied to the control
electrode must be at least 60 mV; in current jargon we say that we need 60 mV/decade. But
as we scale down dimensions we must also scale down signal and supply voltages to avoid
breakdown. Thus, at finer dimensions, the current in transistors that are 'off' will become
appreciable and will increase the dissipated power. Some newer devices operate on different
principles and may well not suffer this problem. These and other exotic devices (e.g.,
spintronics) and strategies [e.g., three-dimensional (3-D) integrated circuitry and quantum
computing] are being pursued and are described elsewhere as well as in separate papers in
this issue. But even with these exotic devices and structures, the power and speed
advantages of scaling down dimensions remain and compel us to invest in appropriate
patterning technologies. At least one prominent speaker (Shahidi, IBM) claims that the limit
to system performance is directly related to density and thus the limit to system
performance is set by the technology to generate and replicate dense patterns.
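The 60 mV/decade figure quoted above follows directly from Boltzmann statistics for carriers surmounting a thermal barrier; a minimal sketch of the arithmetic, assuming room temperature (T = 300 K):

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19     # electron charge, C
T = 300.0            # room temperature, K

# The current over a thermal barrier varies as exp(V / (kT/q)), so one decade
# of current change requires a gate swing of at least (kT/q) * ln(10).
V_T = k_B * T / q                    # thermal voltage, ~26 mV at 300 K
swing_mV = V_T * math.log(10) * 1e3  # minimum subthreshold swing, mV/decade

print(f"kT/q = {V_T * 1e3:.1f} mV")
print(f"Minimum subthreshold swing = {swing_mV:.1f} mV/decade")
```

At 300 K this evaluates to roughly 60 mV/decade, which is why supply voltages cannot be scaled down indefinitely without the off-state current rising appreciably.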
Fig 2.1. Original process for making photolithographic masks.
The daughter masks were used to contact print the required pattern onto the resist film on
the wafer. The underlying functional layer was then etched into the required pattern. The
resolution of this process was typically > 10 μm set by the emulsion. But the main problem
of contact lithography is that the repeated contact of the masks while overlaying the image
to the prior pattern layers gives rise to defects in the plate, thus lowering yield and limiting
the economic scale of integration that could be achieved. So the next step, proximity
printing, was developed in which there is a gap (of width g) between the emulsion and the
wafer. The resulting resolution, discussed more fully in the next section, is on the order of
√(gλ). So if λ = 436 nm (a strong line of the mercury arc), then for a blur of 2 μm the gap
should be no more than 9 µm. As the whole wafer was exposed simultaneously, this placed
a stringent requirement on the flatness of the wafer and limited the useful minimum features
to larger than 5 µm. To improve the lithography resolution and overlay while keeping no
contact with the wafer, projection printing lithography tools were developed. The Perkin
Elmer Micralign (circa 1973) was the first practical tool to employ focusing optics so that a
sharp image of the pattern on the mask could expose the resist-coated wafer without
mechanical contact, thus reducing defects and improving yield.
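The proximity-gap arithmetic above can be checked directly from the blur relation √(gλ) given in the text; a short sketch using the mercury g-line numbers:

```python
import math

wavelength = 436e-9     # m, strong mercury-arc line (g-line)
blur_target = 2e-6      # m, acceptable blur resolution

# blur ~ sqrt(g * lambda)  =>  maximum allowed gap  g = blur^2 / lambda
g_max = blur_target ** 2 / wavelength
print(f"Maximum gap for 2 um blur: {g_max * 1e6:.1f} um")  # ~9.2 um, as in the text

def proximity_blur(gap_m, lam_m=wavelength):
    """Approximate resolution blur for proximity printing across a gap g."""
    return math.sqrt(gap_m * lam_m)

print(f"Blur at a 25 um gap: {proximity_blur(25e-6) * 1e6:.2f} um")
```

This is why proximity printing was limited to features of several micrometres: wafer nonflatness makes gaps much below about 10 µm impractical.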
Fig 2.2. The first successful wafer exposure tool to use optical projection: the Perkin–Elmer Micralign (1973), which made possible very large-scale integration by exposing the wafer with no mechanical contact to either imaging surface.
The 3x increase in the cost of the tool (to $100 000) was easily justified on grounds of
yield alone. Furthermore, mask life was greatly improved, so there was no need to generate
cheap emulsion working plates; and the master, with its higher resolution chromium pattern,
could be used directly. To compensate for the lower contrast of the projected optical
(aerial) image (compared with contact or proximity printing), the resist used was a positive, novolac-
based resin with much higher contrast (gamma of about three; this is discussed more in
Section III-D) and better resolution than original negative photoresists. The optics of the
Micralign was an ingenious arrangement of three mirrors and one prism (Fig 2.2) that
allowed aberration-free imaging over a ring-shaped field of view 75 mm long by about 1
mm wide. Because the optics had unity magnification of +1 (i.e., an erect image), a simple
mechanical scanning mechanism allowed complete exposure of a (3-in diameter) wafer in
less than 1 min. This
patterning technology offered the winning combination of sub-5-µm resolution with high
yield and high throughput (1 wafer/min) and made possible the economical manufacturing
of very large-scale integrated circuits. Unfortunately, the optical arrangement of the
Micralign limited the numerical aperture to 0.167, which, as described in the next section,
limited the smallest features to about 2 or 3 µm. So the next family of exposure tools was to
use a step and repeat camera, employing a multiple-element refractive lens, to expose the
wafer directly from the reticle. The first such steppers were expensive ($500 000) and slow
(1 wafer/3 min) but were able to generate 1.25-µm features, and by 1985 steppers had
replaced the Micralign for the critical patterning steps. During the next 20-plus years,
steppers evolved such that the numerical aperture is now close to one, the wavelength
dropped from 436 to 193 nm (using an excimer laser), and the resist technology improved
so that features can be less than 100 nm. A further advantage of the stepper was improved
overlay because alignment could be carried out die-by-die or to some less dense array of
marks to correct for the most serious spatial frequency components of distortion of the
wafer. Moreover, the introduction of 'chemically amplified' resists allows a throughput
exceeding one 300-mm-diameter wafer per minute. The most recent versions employ
scanning as well as stepping to cover the large fields of ultra-large-scale integrated (ULSI)
chips, and liquid immersion optics are being introduced to reduce the effective wavelength
[i.e., increase the numerical aperture (NA)] further so that the minimum features will be less than
50 nm. All this comes at a price; the current scanners have a price tag of about $30 000 000.
But despite this high price, the cost per patterned minimum feature has continued to drop.
Needless to say, the mask making technology has also been changing. In the mid 1970s,
Bell Labs introduced and licensed to ETEC Corporation commercial electron beam
technology for mask making, and initially the product was the master masks used in the
Micralign; only one patterning step was needed to generate the master. With the
introduction of steppers, the e-beam tools were used to generate reticles (i.e., only one to
four chip patterns) whose features were four to ten times those on the wafer. So, despite the
shrinking of features on the wafer, the original electron beam tools continued to be used
with little modification until about 2000. Since then, new generations of electron beam tools
have been introduced, not simply because the wafer features have become smaller, but also
because of the need to put on the mask subresolution assist features (SRAFs) to enhance the
fidelity of the projected image to the design layout (Fig 3.4). Less publicized has been the
accompanying inspection, metrology, and repair tools needed to qualify the patterns on the
reticles, masks, and wafers. Inspection for defects is mostly done with highly automated
high-speed optical microscopy, whereas the scanning electron microscope (SEM) is used to
measure the feature sizes. Inspection, repair, and metrology of the reticle have now become
an appreciable factor in the total cost of patterning a wafer and must be considered when
assessing the introduction of a new patterning technology.
3. PHOTOLITHOGRAPHY
3.1 LIMIT TO RESOLUTION SET BY THE WAVELENGTH
If you focus an image of a point object using an aberration-free lens, the result will
be an Airy disk of radius 0.61λ/(n sin α), where λ is the wavelength (at the image) and
α the convergence semi-angle at the image. In light optics, this is frequently written
as 0.61λ/NA, where n is the refractive index of the medium at the image, λ the
free-space wavelength, and the quantity n sin α the numerical aperture (NA) of the
lens. Lord Rayleigh suggested that the criterion for resolution of a microscope
should be the radius of the Airy disk. That is, two (self-luminous) points separated
by 0.61λ/NA can just be resolved when using a perfect (i.e., aberration-free) lens.
This criterion is often referred to, incorrectly, as the Rayleigh limit, largely because
it is a practical limit for two points emitting radiation with no coherence between
the pencil beams from the different points.
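As a numerical check on the criterion just stated, the Airy-disk radius 0.61λ/NA can be evaluated for the tools mentioned earlier (the Micralign NA of 0.167 and the modern 193-nm, NA 0.9 scanner are the values quoted in this report):

```python
def rayleigh_radius_nm(wavelength_nm, numerical_aperture):
    """Airy-disk radius (Rayleigh resolution) for an aberration-free lens."""
    return 0.61 * wavelength_nm / numerical_aperture

# Perkin-Elmer Micralign: 436 nm mercury g-line at NA = 0.167
print(rayleigh_radius_nm(436, 0.167))  # ~1590 nm, consistent with 2-3 um features
# Modern ArF scanner: 193 nm at NA = 0.9
print(rayleigh_radius_nm(193, 0.9))    # ~131 nm
```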
Fig. 3.1
However, an IC pattern is very far from being a pair of points, and we can use many
different kinds of illumination to achieve different levels of coherence between rays
from neighbouring points in the object. Often we use an (equal line-space) grating
object for analyzing optical system performance. For example, if we illuminate
coherently a grating object and project its image through a perfect lens system (Fig
3.1 top), there will be two diffraction spots, as well as the undiffracted spot, at the
aperture plane (assume that higher order spots will be outside the aperture) and so
the resulting aerial image will be a periodic wave of unity (maximum) contrast
defined by (Imax- Imin)/(Imax + Imin). This value will hold as we shrink the grating
(increase spatial frequency) until the diffraction spots go outside the aperture, at
which point the contrast abruptly goes to zero. If the illumination is incoherent,
usually realized by impinging over a solid angle up to 2π steradians, from an
extended source, then the curve of contrast versus spatial frequency drops gradually
with increasing spatial frequency (Fig 3.1 bottom); note that the highest spatial
frequency for which there is nonzero contrast is twice that of the corresponding
value when coherent illumination is used. In practice, the illumination is partially
coherent, so the curves are somewhere between the two, but the trend of
monotonically decreasing contrast for smaller features is usually true. Thus, except
for perfectly coherent illumination, there is no sharp criterion for resolution of the
grating object, so today we replace the factor 0.61 with k1 for the minimum (half-
pitch) feature that is resolved by the projection system imaging into a resist film. As
we shall see below, k1 depends on several factors and can be much less than 0.61.
We should also point out that the optics of present-day scanners and steppers has
become much more involved than those used in the Perkin–Elmer Micralign because
of the need to cover large fields of view at high numerical apertures and with
negligible aberrations; an example is shown in Fig 3.2. Proximity printing, in which
a shadow of the mask pattern is cast onto the nearby resist film, is still used for
noncritical patterns, and here the wavelength is still key. The non-sharpness of the
edge of the shadow is caused not only by a noncollimated illumination but also by
Fresnel diffraction.
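The factor-of-two relation between the coherent and incoherent cutoffs described above can be made concrete; a small sketch (the 193-nm wavelength and NA = 0.9 are assumed for illustration):

```python
def cutoff_frequencies(wavelength_nm, na):
    """Grating spatial-frequency cutoffs (cycles per nm) for a perfect lens."""
    f_coherent = na / wavelength_nm        # contrast stays at 1, then drops abruptly to 0
    f_incoherent = 2 * na / wavelength_nm  # contrast falls gradually, reaching 0 here
    return f_coherent, f_incoherent

f_c, f_i = cutoff_frequencies(193, 0.9)
print(f"coherent cutoff pitch:   {1 / f_c:.0f} nm")  # ~214 nm
print(f"incoherent cutoff pitch: {1 / f_i:.0f} nm")  # ~107 nm (twice the frequency)
```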
Fig 3.2
For collimated illumination, the Fresnel diffraction width is approximately √(λg),
where g is the gap between the object (mask) and image (resist film on wafer).
Values of g < 5 µm are difficult in practice because of nonflatness of the wafer;
hence achieving sub-micrometer resolution with visible and ultraviolet wavelengths
is also difficult. However for X-rays, ions, and electrons, this limitation is much less
serious but, as described below, other factors limit the utility of proximity printing
with these technologies. From the above, it is clear that lowering the wavelength is
key to better resolution. Hence during the last 20 years, the industry has moved from
436–365 nm (two strong lines, g and i, of the mercury arc) to 248–193 nm (KrF and
ArF excimer lasers). It is proving extremely difficult to go further. Originally, the pattern
on the mask was essentially the same as the desired pattern on the wafer. For higher
resolution patterns, we have already seen that the contrast falls off as the lines and spaces shrink.
Fig 3.3
One of the first wavefront engineering techniques was the phase-shifting mask (Fig
3.3) in which alternate clear features have a phase change in the transmitted light so
that they destructively interfere and give rise to a region of zero intensity between
them, thus restoring contrast. This is very effective for line-space patterns but in
general there were pattern topologies (e.g., the letter M) that made the application of
alternating phase shifting challenging. There have been many variations on this
original idea. Related to this is the technique of concentrating the illumination at
certain angles (tilted illumination) so that contrast of the characteristic spatial
frequency of a periodic mask pattern is maximized. One literal shortcoming of
images for k1 < 0.5 is that the ends of lines become rounded and shortened, and this
can often be compensated by making the mask pattern for a short line in the form of
a dogbone. Such semi-empirical techniques were effective for a time, but as features
became smaller, k1 continued to drop, more and more effects due to nearby features
(hence the term proximity effects) became significant, and there were more
neighbours to take into consideration. The complexity of arriving at an acceptable
mask pattern soon became a computational nightmare. This was exacerbated by the
more complex physics that needs to be invoked for modeling the imaging process
when features are smaller than the wavelength of the radiation and the numerical
apertures are so high that the obliqueness of zonal rays becomes appreciable. Use of
the scalar model for optical imaging ceases to be adequate, and Maxwell’s equations
have to be solved for each case (vector model). This nightmare was partially
ameliorated by the introduction of model-based correction for these optical
proximity effects.
Fig 3.4
The principle of the model-based approach is that an IC pattern can be thought of as
made up of a finite menu of basic patterns, and knowing thoroughly the behaviour of
these basic patterns simplifies the task of computing the proximity effects for the
complete pattern; the evolution of optical masks is shown in Fig 3.4. In more recent
versions, the development response of the resists is also taken into account. Several
companies have been formed that devote their entire efforts to improving the speed
and accuracy of predicting completely the optical lithography process, including the
tolerance to variations in exposure level and focus level, and identifying hotspots
where such variations can reduce yield; the term computational lithography has
recently been coined to describe this activity. Again, this is a large topic, and more
complete descriptions can be found in. Applying such techniques has led to the
ability to manufacture ICs at the 65-nm node using 193-nm radiation at an NA of
0.9. This corresponds to a k1 of 0.3.
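The quoted k1 can be verified from the defining relation half-pitch = k1·λ/NA; a one-line check (the second, 45-nm immersion case uses an assumed NA of 1.35 purely for illustration):

```python
def k1_factor(half_pitch_nm, wavelength_nm, na):
    """half-pitch = k1 * lambda / NA  =>  k1 = half-pitch * NA / lambda"""
    return half_pitch_nm * na / wavelength_nm

print(k1_factor(65, 193, 0.9))    # matches the quoted k1 of about 0.3 for the 65-nm node
print(k1_factor(45, 193, 1.35))   # ~0.31 for 45-nm immersion (assumed NA)
```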
Fig 3.5
The arrangement is shown schematically in Fig 3.5. By using water for the fluid
filling the space between the lens and wafer surface, we can increase the numerical
aperture to more than one. Although the principle is simple and the technique has
been used for microscopy since the nineteenth century, the practical difficulties are
challenging. For example, in a modern scanner, the wafer is moving past the lens at
up to 1 m/s so droplets are often left on the surface and bubbles can form; but
despite these difficulties, this technique is now being introduced to manufacturing
for 45-nm (half-pitch) features and is slated to be extended to 32 nm. Some of the
arrangements for maximizing the numerical aperture with this technique are
illustrated in Fig 3.5.
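The claim that water immersion pushes the numerical aperture past one follows from NA = n·sin α; a sketch (the 70° semi-angle is an assumed illustrative value; n ≈ 1.44 is the approximate index of water at 193 nm):

```python
import math

n_water_193 = 1.44       # refractive index of water at 193 nm (approximate)
alpha_deg = 70.0         # assumed convergence semi-angle, for illustration only

sin_alpha = math.sin(math.radians(alpha_deg))
na_dry = 1.0 * sin_alpha            # in air (n = 1), NA can never exceed 1
na_immersion = n_water_193 * sin_alpha

print(f"dry NA = {na_dry:.2f}")              # ~0.94
print(f"immersion NA = {na_immersion:.2f}")  # ~1.35, i.e. greater than one
```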
3.5 HOW TO GO BEYOND k1 = 0.25
How low can k1 go? Looking at Fig 3.1 (bottom), we can see that the contrast drops to
zero at k1 = 0.25. However, that is for a single exposure, and we have already
pointed out
that we can over-etch to achieve arbitrarily finer features. So we could expose,
develop, and over-etch one grating pattern and then pattern an identical grating
pattern offset so that the combination is now a grating of double the spatial
frequency of the original. A huge number of variations of such double exposure
techniques are being described. In some, there is no need to remove the wafer from
the scanner, so registering the second exposure to the first is less of a problem.
However, most of the techniques do require a second registration operation, and
achieving sufficiently accurate overlay has now become a major issue. Whereas for
single exposure techniques an overlay accuracy of line width/three was regarded as
adequate, with double exposure, an accuracy of a few nanometers is now desirable
for 32-nm half-pitch. One particularly ambitious approach to overcoming the
wavelength limitation on resolution in optical lithography is absorbance modulation
optical lithography (AMOL). AMOL employs simultaneous exposure at two
different wavelengths. Twenty-nanometre lines and spaces have been reported
corresponding to a k1 value of 0.05. The bottom line is that ultraviolet lithography
has demonstrated features below 22 nm, so the issue is not fundamental physics but
technology (including economics). A particularly telling example is that a 9-nm
transistor was made using 248-nm radiation (Fig. 1.1).
At the energies used (> 2 keV), the wavelengths of both ions and electrons are so short as
to be of negligible concern. Much the most popular technique is electron beam direct write
(EBDW), essentially a computer-controlled SEM with means for blanking the beam. The
simplest versions employ the scanning circuitry of the SEM and a stationary work piece,
and rely on the computer pattern generator to blank the beam at the appropriate intervals.
More sophisticated systems use an interferometer to monitor continuously the position of
the stage and often write while the stage is moving (this was the key feature of the Bell
Labs EBES). With one pencil beam, the pattern is built up one picture element
(pel) at a time. A minimum feature has at least 5 × 5 = 25 pels. A current chip pattern
containing, say, 1 × 10^11 rectangles has 2.5 × 10^12 pels. If the maximum blanking rate is
100 MHz, then it will take about 8 h to write one chip pattern; this is unacceptable for writing
directly on the wafer but might be just acceptable for patterning a reticle. Even so, the
versatility and resolution (better than 10 nm) of EBDW has led to at least two commercial
systems (VISTEC and JEOL) that are used for prototyping a wide range of individual
devices. The slowness of this single pencil process was recognized from the start, and
various approaches have been used to mitigate the problem. The shaped beam approach is
the most successful. By imaging a square aperture as the object, we can project a complete
square simultaneously. When introduced in about 1976 by IBM for the personalization of
gate arrays, the minimum feature size was 2.5 µm and it was possible to expose 22 wafers
of 2.25-in diameter in 1 h. But today, with 65-nm minimum features and 300-mm wafers,
the same system would take several months to pattern one wafer.
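The single-beam write-time bottleneck described above is simple arithmetic on the pel count and blanking rate; a sketch using the figures quoted in the text:

```python
rectangles = 1e11              # features in a current chip pattern (from the text)
pels_per_feature = 25          # a minimum feature is at least 5 x 5 pels
total_pels = rectangles * pels_per_feature   # 2.5e12 pels, as quoted
blanking_rate_hz = 100e6       # maximum blanking rate, 100 MHz

write_time_h = total_pels / blanking_rate_hz / 3600
print(f"Single-beam write time: {write_time_h:.1f} h per chip pattern")
# ~7 h of pure flash time; the 8 h quoted in the text presumably allows for overhead
```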
Fig 4.1
So more elaborate approaches have been tried such as the variable shape and cell projection
( Fig 4.1). These schemes are faster than the fixed shape and are still being used for mask
making and prototyping complex circuits. A related system with the same advantages in
principle is the dot matrix configuration. As can be inferred from Fig 4.1, even though the
object and image are extended, all rays pass through a common aperture centered on a
single axis. At currents above about 1 µA, the Coulomb forces between electrons usually
introduce not only first-order defocusing but also third-order aberrations and stochastic
blurring of the beam, so large-scale manufacture of ULSICs is not economical. To get
around this, the idea of many electron beam columns has repeatedly surfaced but this has
not yet proved practical, largely because of the difficulty of matching the different columns.
Fig 4.2
Perhaps the most promising idea has been one that has its origins in the early days of
electron lithography; the principle is shown in Fig 4.2(a). That particular idea failed
because of contamination of the photoelectron emissive film by the resist being bombarded
and because any nonflatness of the wafer distorted the electric field and caused
misalignment of the projected image. However, its reincarnation [ Fig 4.2(b)] overcomes
those problems and is being researched by at least one institution. The main difficulty now
appears to be achieving an array of adequately intense, uniform, and stable photoelectron
sources. A similar scheme, MAPPER, ran into the same problem but now employs a
different approach that brings about the individual blanking of the beamlets using a
complicated micro-machined assembly that is activated optically to facilitate bringing a
large array of switching signals through the vacuum wall. Another European project is PML2, which
features electron optics optimized to minimize space charge interactions and also employs a
micro-machined assembly for blanking individual beamlets. One scenario that is becoming
more popular is a throwback to the original IBM work on personalizing gate arrays; this is a
hybrid approach in which EBL is used only for the high-resolution features and the
remainder is done with optical lithography. One company, e-ASIC, employs a process in
which only vias are patterned to personalize application-specific integrated circuits
(ASICs); in this way only about 1% of one level need be exposed. This not only saves
time, thus making EBL viable, but also greatly reduces design costs, without the overhead
of the switching and routing circuitry associated with field-programmable gate arrays.
Multiple ion beams and ion projection lithography have also been suggested because there
is reduced lateral scattering within the resist film and the resists are about 100 times more
sensitive to ions than to electrons; but because of shot noise in the beam, we cannot take
advantage of this increased sensitivity at features sizes below 100 nm. Present estimates are
that we need about 10 000 exposure quanta to achieve 10% feature area tolerance (6-sigma).
For 45-nm features, this is about 40 µC/cm2 and for 22-nm features 160 µC/cm2; both
figures are characteristic of the doses required by high-resolution resists exposed with
electrons, so there is no advantage in using ions under these conditions. To try to avoid
space-charge problems,
proximity printing through a stencil mask has been demonstrated, but despite the high
quality of patterns the difficulty of making qualified stencil masks has precluded serious
development. Thus ion beam patterning is now restricted to applications using a pencil
beam in which only very small throughput is needed such
as repair of photomasks and preparing samples for examination in the transmission electron
microscope and for making ambitious devices one at a time; some spectacular structures
have been built in this way. For researching new devices, this focused ion beam is a
wonderful tool especially when combined in the same instrument as a scanning electron
beam.
5. X-RAY LITHOGRAPHY
As mentioned earlier, X-ray lithography (XRL) has a choice of a large range of
wavelengths, from about 0.4 to 100 nm. In its first incarnation, proximity printing was
used with a wavelength of about 1 nm and the gap was about 20 μm, giving a Fresnel
diffraction blur of about 140 nm. At this wavelength membranes of, for example, silicon 4
µm thick were reasonably transparent and absorbers of, say, gold needed to be about 400
nm thick to be adequately opaque (these numbers are approximate as it is advantageous to
tune the wavelength on opposite sides of the absorption edges of the substrate and absorber
materials). Despite determined industrial efforts and significant technical success, XRL
never was able to demonstrate circuits that could not be made more economically by
ultraviolet lithography. The blame for this failure was laid at the door of the mask houses
who had failed to furnish adequate masks; in particular it was the fact that these masks had
to be unity magnification (rather than 4, 5, or 10 times larger, as in the case of reticles for
steppers) that was widely blamed. Others, including one of the authors, believe the real
problem was the membranous nature of the substrate and the thick absorber layers that were
needed; a 4x mask would have had to have 16x the area of a 1x mask. However a variant,
originally dubbed soft X-ray lithography, appeared in the early 1990s. It offers a way
around the difficulty of making X-ray lithographic masks by offering reduction optics and a
thick mask substrate. To make the introduction of this technology more palatable, it was
renamed extreme ultraviolet lithography (EUVL), which is now the favourite technology to
supplant optical lithography. The wavelength is 13 nm. For this wavelength, mirrors can be
made with up to 70% reflectivity by using a multilayer (e.g., about 70 alternating layers
several nanometres thick of Si and Mo) structure with tolerances of a few atomic layers.
All-reflective optics are used (including the mask substrate), and building the appropriate
focusing elements and defect-free mask blanks have been major challenges that, somewhat
to the surprise of several authorities, appear to have been successfully met. Coming up with
an appropriately powerful source now appears to be the main problem. At this wavelength,
the shot noise limitations begin to appear, which could make the source problem even more
daunting.
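The multilayer mirror geometry mentioned above follows the Bragg condition 2d·cos θ = mλ (refraction ignored); a sketch at normal incidence for the 13-nm wavelength quoted in the text:

```python
import math

wavelength_nm = 13.0     # EUV wavelength from the text
theta_deg = 0.0          # normal incidence (an idealization)
m = 1                    # first-order reflection

# Bragg condition: 2 * d * cos(theta) = m * lambda  =>  bilayer period d
d_bilayer = m * wavelength_nm / (2 * math.cos(math.radians(theta_deg)))
d_per_layer = d_bilayer / 2   # assuming roughly equal Mo and Si thicknesses

print(f"Mo/Si bilayer period: {d_bilayer:.2f} nm")    # 6.50 nm
print(f"per-layer thickness:  {d_per_layer:.2f} nm")  # ~3.25 nm: 'several nanometres'
```

With about 70 alternating layers, as the text states, the whole stack is only a few hundred nanometres thick, yet its period must be held to tolerances of a few atomic layers.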
Fig 5.1
Until now, the only published sub-30-nm features that look acceptable have been made in
very insensitive resists such as polymethylmethacrylate (PMMA) or hydrogen silsesquioxane
(HSQ) but it is possible that more sensitive resists will be adequate. Some early results are
shown in Fig 5.1. Two prototype EUV exposure tools have recently been delivered (Fig
5.1), and it will be interesting to see how well the technology performs. Assuming they are
successful, this approach may be more economical than immersion 193 nm using double
exposure because, at the 22-nm half-pitch, the relative simplicity of the process allows
greater tolerance for each step. The annual meeting on microlithography sponsored by the
International Society for Optical Engineering (SPIE) is the best source of current progress on this
and other patterning techniques for semiconductor manufacturing.
6. NON-RADIATION-BASED PATTERNING
Unlike the above forms of patterning that employ radiation (photons, electrons, or ions) to
delineate a pattern in a resist, non-radiation-based patterning delineates features in a resist
primarily using mechanical or chemical means or both. Mechanical patterning uses a
mechanical mold (also called a template) that shapes a material into features (i.e.,
nanoimprinting) or a stamp that transfers an ink onto a surface (nanoprinting or soft
lithography). Chemical patterning techniques use a local chemical energy minimum
of a material system, hence the name self-assembly, since in thermal equilibrium a system
always goes to its energy minimum and the patterns form themselves. Chemical patterning
techniques include self-assembly of a monolayer of molecules on the local surface and the
phase separation of diblock copolymers. Mixed patterning techniques combine mechanical
and chemical means, such as guided self-assembly, which uses larger mechanical or chemical
patterns to guide the self-assembly of much smaller patterns, so that the self-assembled
patterns have predetermined locations and large domain sizes rather than the random
locations and small domain sizes of unguided self-assembly. Compared with radiation-based
nanopatterning, non-radiation-based patterning has three major advantages arising from
fundamentally different physical principles. First, it has no diffraction limit on
resolution; second, it lends itself to 3-D patterning; and third, it can directly pattern
functional materials to reduce fabrication steps and cost. (It also eliminates the expensive,
complicated particle source and optical systems.) Although non-radiation-based patterning
is far less mature and has its drawbacks (as discussed below), it potentially offers a high
resolution and low cost unmatchable by radiation-based patterning. Non-radiation-based
patterning has its own challenges. For example, mechanical patterning (such as nanoimprint
or nanoprint) is a form of contact lithography, facing issues of defect density, 1x mask cost,
mask damage, and wafer throughput (for step-and-repeat). It remains to be seen whether all
of these issues can be solved, to what degree, and for what applications. Unlike radiation-
based patterning, which has been used by the semiconductor industry for more than
30 years and has had tens of billions of dollars of investment (in today's money) for research
and development, non-radiation-based patterning has only recently been introduced to industry
from laboratory research and has received several orders of magnitude less funding for research and
development. It is of great interest to see if, when given sufficient funding and time, the
previous issues will be solved and the potential of nonradiation patterning will be fully
utilized. Here our discussion focuses on nanoimprint, since it may be the most promising
non-radiation-based patterning for electronics applications due to its ultrahigh resolution
and high pattern transfer fidelity. Nanoimprint has been put on the roadmaps of many
industries, including ITRS, as a next-generation patterning method for manufacturing
semiconductor integrated circuits and the roadmap for manufacturing magnetic data storage
disks.
6.1. NANOIMPRINT TECHNOLOGIES
Fig 6.1
Principle: Nanoimprint patterns nanostructures by physical deformation of a
material using a mold, creating a pattern in the material (rather than by changing the
local chemical properties of the material using radiation) ( Fig 6.1). The imprinted
material can serve as a resist for pattern transfer (as in conventional lithography and
being removed later) or can be a part of the devices to be built and stay on the wafer
(direct imprint of functional materials).
A nanoimprint can be completed in a single step in less than 10 s. A 40-nm T-gate has been
fabricated by a single NIL step and liftoff of metal. Nanoimprint is also used to create 3-D
damascene oxide patterns for metal interconnect using step-and-flash imprint lithography (SFIL).
SAM is a process where only a single monolayer of molecules attaches to the
substrate surface. The principle of SAMs is based on the fact that each
molecule used for assembly has two end functional groups: one end group,
called the head functional group, preferentially attaches to the material on the
substrate surface, and the other end group, called the terminal functional
group, attaches neither to the substrate material nor to the head functional
group. Hence once one layer of molecules has attached to the substrate
surface, the rest of the molecules stop attaching to the surface, and hence
only one layer of molecules is attached. For example, thiol (SH) can be used as
the head functional group for attaching to a gold surface, and hydroxyl (OH)
can be used as a head functional group for attaching to SiO2. Methyl (CH3)
can be used as the terminal functional group. Often, a SAM is coated on
nanoparticles with its terminal functional group attractive to a substrate, so
that a monolayer of the nanoparticles can be coated on a surface and the
monolayer of nanoparticles can self-form certain patterns.
6.3.3 Phase Separation Of Block Copolymers:
Block copolymers are a special type of polymer in which different polymer
blocks are connected. For example, a diblock copolymer has a block A–A…A
connected to another block B–B…B, namely A–A…A–B–B…B, where A and B
are different monomers. An example is the polyisoprene (PI)–polystyrene (PS)
diblock copolymer, whose monomers are isoprene and styrene, respectively.
In general, block A mixes well with block B, and the film is uniform. But when
the copolymer is heated to the order–disorder temperature (e.g., 140 °C for the
PI–PS copolymer), the two polymer blocks phase-separate (because a
nonmixing state has a lower energy). However, since the two blocks are
connected at their ends, the phase separation can only occur in microscopic
regions, forming periodic patterns of A and B blocks, with the length of each A
and B block region about two times the block A and B molecular length,
respectively. By controlling the initial molecular length of each block, we can
vary each A and B block region from a few to hundreds of nanometers.
Furthermore, depending upon the relative size of the two blocks, the final
phase-separation structures can be spheres, cylinders (HEX), lamellae (LAM),
bicontinuous, or perforated layers. Because each polymer block has a different
reactive ion etching (RIE) rate, RIE of a phase-separated diblock copolymer
film will convert its original composition difference into a topography
difference, hence carving out the pattern formed by phase separation.
It is possible that these techniques will be used initially for patterning the less critical levels
and later migrate to more critical levels, i.e., behave as a classical disruptive technology.
Self-assembly of structures is an appealing approach, particularly when periodic patterns that do
not require overlay accuracy to a small fraction of a minimum feature are required. One
example might be to increase the surface area of capacitors through the fabrication of high
aspect ratio holes or pillars. Inspection and metrology of the fabricated patterns do not often
feature in the discussions of patterning technology, yet these steps are now becoming a
significant contributor to total cost and merit more attention from the research community.
In many cases, the techniques used for writing may well be used for high speed, high
resolution inspection. One example could be a multiple-beam SEM using multiplexed
secondary electron detectors.
8. REFERENCES
[1] R. Fabian Pease, Fellow IEEE, and Stephen Y. Chou, Fellow IEEE, "Lithography and
Other Patterning Techniques for Future Electronics," Proceedings of the IEEE, invited paper,
pp. 248-270, 2008.
[2] http://www.wikipedia.com
[3] http://www.pdfound.com
[4] Frank T. Hartley and Chantal Khan Malek, Louisiana State University, "Nanometre X-ray
Lithography," AVS Journal, April 2008.