Real-Time Soft Shadow Mapping by Backprojection

Conference Paper · June 2006


DOI: 10.2312/EGWR/EGSR06/227-234 · Source: DBLP



Eurographics Symposium on Rendering (2006)
Tomas Akenine-Möller and Wolfgang Heidrich (Editors)

Real-time soft shadow mapping by backprojection


Gaël Guennebaud, Loïc Barthe and Mathias Paulin†

IRIT - CNRS - Université Paul Sabatier - Toulouse - France

Figure 1: A scene including alpha-textured meshes (foliage and wire netting). Left: realistic soft shadows produced by averaging 1024 hard shadows (2.5 s per frame). Right: our new soft shadow algorithm rendering the same scene at 25 fps without any precomputation.

Abstract
We present a new real-time soft shadow algorithm using a single shadow map per light source. Therefore, our algorithm is well suited to render both complex and dynamic scenes, and it handles all rasterizable geometries. The key idea of our method is to use the shadow map as a simple and uniform discretized representation of the scene, thus allowing us to generate realistic soft shadows in most cases. In particular it naturally handles occluder fusion. Also, our algorithm deals with rectangular light sources as well as textured light sources with high precision, and it maps well to programmable graphics hardware.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Color, shading, shadowing, and texture

† e-mail: {guenneba | lbarthe | paulin}@irit.fr

1. Introduction

Rendering realistic soft shadows in real-time is a fundamental issue in computer graphics. In addition to increasing the realism of rendered images, they simplify the identification of spatial relationships between objects. From the practical point of view, point light sources generate so-called hard shadows where a sharp transition is seen between light and umbra. However, because most light sources are extended (area or volume), the intensity of the light smoothly varies from no shadow to full shadow, hence generating soft shadows with regions of penumbra. While rendering hard shadows only requires the computation of the visibility between two points (the shaded point and the light), soft shadows require the complicated evaluation of how much the light source is visible from the shaded point, usually expressed as a percentage of visibility.

Targeting real-time non-dedicated soft shadow rendering applications, a well suited algorithm should exhibit the following features:
1. handle dynamic and complex scenes in real-time,
2. be independent of both the receiver's and the occluder's geometry (such as meshes, point clouds, images...),
3. generate shadows as faithful as possible to real ones.

Related work: Recent optimized techniques based on object space silhouette detection provide approximate soft shadows in real-time for reasonably complex scenes (such as penumbra-wedges [AAM03, ADMAM03]). However, in addition to some shortcomings (e.g. wrong occluder fusion), they remain limited to manifold meshes and their complexity increases with the scene complexity, making both the first and the second criteria very difficult to

© The Eurographics Association 2006.



satisfy. On the other hand, shadow maps are image based techniques supporting any type of rasterizable geometry: meshes, point clouds or alpha-textured models (commonly used to represent foliage or wire netting). They are also less sensitive to the scene complexity and for these reasons, we focus on this second family of techniques (note that a recent survey on real-time soft shadows can be found in [HLHS03]).

The shadow map algorithm [Wil78] first renders a depth map of the scene from the light source. Using this depth map, called the shadow map, a simple depth comparison determines which pixels of the final image are lit or not. This inexpensive process allows the generation of shadows from scratch at each frame at real-time rates. Unfortunately, shadow maps also exhibit several drawbacks. One is the aliased boundary of hard shadows when the sampling resolution is insufficient. This problem can be solved by increasing the effective shadow map resolution [FFBG01, SD02, MT04, WSP04] or replaced by blur with the percentage closer filtering technique [RSC87]. Another fundamental issue is the limitation to hard shadows and hence, several recent real-time soft shadow methods have been built over the shadow map algorithm. Some require the rendering of multiple shadow maps per light [ARHM00, HBS00], limiting their use to static scenes only. Others, dealing with dynamic scenes, render soft shadows from a single light sample even though this generates well known artifacts because only the object parts visible from the light sample are considered as occluders. However, in addition to this shortcoming, such existing techniques suffer from several other important limitations. For instance, some are limited to planar receivers [SS98] while others improperly take into account the occluder's shape as well as occluder fusion [BS02], and also generate pop-up effects when an originally hidden shadow appears [AHT04]. Some hybrid methods [CD03, WH03], based on both shadow maps and silhouette extraction, can only compute a coarse approximation of the external penumbra parts (relative to the hard shadow boundaries). Image based algorithms still have to be improved in order to provide real-time rendering of dynamic scenes with more realistic soft shadows.

In a very recent work [AHL∗06], Atty et al. have presented a soft shadow algorithm with a percentage-of-visibility computation based, as in our approach, on the backprojection of the shadow map samples. However, unlike our algorithm, their method relies on a very restrictive assumption: occluders and receivers are disjoint sets. Moreover, they are limited to very small shadow map resolutions (they report 200×200 pixels) and, because all visibility computations are done in the discrete shadow map space, shadows are more aliased.

Contribution: We present a new soft shadow algorithm. Rather than limiting the use of the shadow map to simple depth queries, we consider the shadow map as a discretized representation of the scene, each sample being a small potential occluder. The key idea is to use this simplified scene representation to compute the percentage of visibility between a scene point and the extended light source. In order to provide real-time performance, our algorithm uses a single shadow map per light source. Despite this approximation, in most cases, our algorithm provides realistic soft shadows with almost correct occluder fusion. Moreover, it handles any type of rasterizable geometry, it deals with both rectangular and textured light sources, and it is easy to integrate into existing applications. The real-time performance can be guaranteed by using modular accuracy and finally it does not require any precomputation.

2. Our soft shadow algorithm

Even though our algorithm naturally deals with multiple rectangular light sources and with rectangular shadow map pixels, in order to simplify the explanations, we present the main procedure on a single square light source of width l and on square pixels.

Our algorithm computes for each light source a soft visibility buffer (V-buffer) modulating diffuse and specular lighting in the final rendering pass. The visibility buffer stores a visibility factor ν_p ∈ [0, 1] for each pixel of the screen, thus providing the percentage (ν_p ∗ 100) of light seen by a single pixel: fully lit if ν_p = 1, in umbra if ν_p = 0 and in penumbra when 0 < ν_p < 1.

The critical problem which our method solves is then how to perform a fast and accurate computation of the V-buffer. The algorithm is decomposed into two steps: first, it computes the shadow map from the light and then, for each pixel of the V-buffer, if its corresponding point p is in the penumbra, its visibility factor ν_p is evaluated by computing the light area occluded by a subset of the shadow map samples. This subset of shadow map samples is called the search area and it is denoted as A. The visibility computations are detailed in section 2.1 while optimizations, i.e. the penumbra classification and the selection of the search area A, are described in section 2.2.

2.1. Visibility computation

Shadow map acquisition

The first step of the visibility buffer computation is the acquisition of the shadow map storing linear light space depth values. This step requires the definition of a projection frustum (illustrated in figure 2a) having its origin at the light source center. The near plane and its borders are taken parallel to the light. It is at a distance n from the origin and its width is w. The other frustum parameters can be chosen arbitrarily or, better, dynamically adapted to the view point in order to optimize the effective shadow map resolution. Finally, the shadow map resolution r should be a power of two in order to simplify the construction of a hierarchical version (section 2.2).


Figure 2: (a) Shadow map parameters. (b) Projection of a sample onto the light source from the current point p. This projection is clipped by the light source and the rest is the occluded area.

Figure 3: (a) Illustration of gaps and overlaps. (b) Optimization of the search area.

Visibility pass

Given the shadow map, the goal of the visibility pass is to compute the visibility factor ν_p of each visible point p of the scene. Thus, given a point p that has to be shaded, let (u, v) and z be respectively the coordinates (in pixels) of its projection onto the shadow map and its light space depth value (figure 2b). The basic algorithm for computing its visibility factor ν_p is straightforward. We first assume that this point is fully visible (ν_p = 1) and we remove from ν_p the area occluded by every potential occluding sample stored in the shadow map.

The area occluded by a sample of coordinates (u_s, v_s) and depth z_s is computed as follows. Its object space representation, i.e. a square parallel to the light and the shadow map, is projected from the current point p onto the light source plane (figure 2b). This projection is a rectangle parallel to the light's borders. Let B be its bounds in the normalized light source space, i.e. the two-dimensional light space scaled such that the light size is 1 (figure 2b). Thus B is given by:

    B = \begin{pmatrix} b_{left} \\ b_{right} \\ b_{bottom} \\ b_{top} \end{pmatrix}
      = \frac{w}{n\,r}\, z_s \,\frac{z}{z - z_s}\, \frac{1}{l}
        \begin{pmatrix} u_s - u - 1/2 \\ u_s - u + 1/2 \\ v_s - v - 1/2 \\ v_s - v + 1/2 \end{pmatrix}    (1)

where (w/(n r)) z_s is the size of the sample in the object space and z/(z − z_s) is the scale between the sample plane and the light source plane. Finally, the intersection between the light and the sample is given by clamping the bounds B to [−1/2, 1/2] and the normalized occluded area (b_right − b_left) ∗ (b_top − b_bottom) is subtracted from ν_p.

Note that up to now, we consider every sample of the shadow map with a depth value less than z as the set of potential occluders. We show in section 2.2 how to drastically reduce this set of samples (i.e. the search area A).

Gap filling

Owing to the discontinuous representation of the shadow map, small overlaps and gaps may occur between samples (as illustrated in figure 3a). In overlapping regions, some light points are removed twice, hence generating slightly darker penumbrae. On the other hand, in gap regions, occluded light points are not removed and this may create unwanted light in umbra regions (see figure 7d). Whereas overlap errors are quite acceptable, gap artifacts have to be removed. In order to fill the gaps, we make the assumption that they occur between neighbor samples of the shadow map and then we extend samples' left and bottom boundaries such that all occluding samples join in the two-dimensional light space (figure 3a). Whereas the previous assumption is true in 1D, it is not always the case in 2D because samples are shifted in the two directions. This explains why we only extend the samples to fill gaps and why we cannot also clip the samples to remove overlapping. Hence, this procedure slightly increases the overlap error, but it remains rarely perceptible while it effectively fills the gaps. We also point out that the overlap error may become large as soon as there exist two occluders such that one is close to the light source and the other is close to the current shaded point. Fortunately, such extreme cases rarely occur in practice. To summarize, for a given occluding sample (u_s, v_s):

• compute its bounds B using equation 1,
• compute the bound b'_left (resp. b'_bottom) for its neighbor (u_s − 1, v_s) (resp. (u_s, v_s − 1)) by adapting equation 1,
• take as final value for b_left (resp. b_bottom) the minimal value between b'_left and b_left (resp. b'_bottom and b_bottom).
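As a concrete illustration, equation 1 and the clamping step can be sketched as follows (a CPU sketch in Python; the function names are ours, and all parameters follow the notation above):

```python
def sample_bounds(us, vs, zs, u, v, z, w, n, r, l):
    """Bounds B (equation 1) of the shadow map sample (us, vs, zs),
    backprojected from the shaded point (u, v, z) onto the light,
    in normalized light space (light size scaled to 1)."""
    # (w / (n r)) zs: object-space size of the sample;
    # z / (z - zs): scale between the sample plane and the light plane.
    k = (w / (n * r)) * zs * (z / (z - zs)) * (1.0 / l)
    return (k * (us - u - 0.5),   # b_left
            k * (us - u + 0.5),   # b_right
            k * (vs - v - 0.5),   # b_bottom
            k * (vs - v + 0.5))   # b_top

def occluded_light_area(bounds):
    """Clamp B to the light [-1/2, 1/2]^2 and return the normalized
    area it occludes; this is the quantity subtracted from nu_p."""
    clamp = lambda b: min(max(b, -0.5), 0.5)
    b_left, b_right, b_bottom, b_top = (clamp(b) for b in bounds)
    return max(b_right - b_left, 0.0) * max(b_top - b_bottom, 0.0)
```

For instance, a near-plane sample lying directly between the light center and p fully covers the light once its backprojection exceeds the light bounds, while a small sample close to the light removes only a fraction.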


Obviously, a sample is extended towards one of its neighbors if and only if this neighbor is also a potential occluder, i.e. if its depth is lower than the current receiver depth.

Textured light source

The method presented above uses a fast analytical computation of the area occluded by each sample. This method can easily be adapted to handle textured and animated light sources (like a fire) in a similar way to that in [AAM03]. Indeed, the four scalar values B, defining the light region occluded by a sample, can be directly used to index a 4D texture storing the normalized light source color of this region. This way, we can handle any kind of area light source. All details on this method can be found in [AAM03]. The use of 4D textures generates discretization artifacts and the textures are expensive to store. Since our occluding regions are rectangular quads parallel to the light, a dramatically lighter and more accurate alternative is to use Summed Area Tables (SAT) [Cro84]. Textured light sources are illustrated in figure 4.

Figure 4: Soft shadows from a simple textured light source. For this example we have used the SAT option.

2.2. Optimizations

Significant improvements in performance can be made by both reducing the number of potential occluding samples and performing the expensive penumbra calculations only on pixels potentially in the penumbra. In practice, the second optimization requires the first one.

In order to implement these optimizations, we first have to build a hierarchical shadow map (HSM) storing for each pixel both minimal and maximal depth values of occluder samples (level 0 is just the shadow map itself). As with mipmaps, other levels are built iteratively by reducing the resolution by a factor of two, but rather than average values, each low level pixel stores both the minimal and maximal covered depth values. In practice, our HSM is efficiently stored in a hardware mipmapped texture such that these reduction steps are efficiently performed by the GPU: at each step, the largest level is rendered into the lower one with a trivial fragment shader performing custom minimal and maximal operations.

Search area reduction

In order to reduce the number of potential occluding samples, we accurately evaluate the search area A. Since the occluding samples are inside the rectangular pyramid formed by the light quad and the current point p, a first coarse approximation is obtained by taking the intersection between the near plane of the shadow map and the pyramid (figure 3b). The subset A is then a square (or a rectangle if the light is rectangular) of width w_A = l (n/w) (1/n − 1/z), centered in (u, v) (the projection of p onto the shadow map). This approximation can be improved in two steps. First, the top level of the HSM gives us the minimal depth value z_min stored in the shadow map. Thus, the pyramid can be clamped by a plane of depth z_min (figure 3b) and the width of the new search area becomes:

    w_A = l \frac{n}{w} \left( \frac{1}{z_{min}} - \frac{1}{z} \right)    (2)

Then, we quickly find the local min depth value z'_min of the new search area (figure 3b) by accessing the appropriate level in the HSM (note that four texture fetches are required). The new value of w_A is computed with equation 2 using z'_min rather than z_min. This step can be repeated while the search area is significantly reduced, but in practice, we have found that one step is sufficient.

Penumbra classification

We now want to quickly classify potential penumbra pixels in order to reduce the number of expensive computations of visibility factors ν_p. For the current point p, once the accurate local depth bound values z'_min and z'_max are evaluated (during the search area reduction), we compare its depth value z:
1. if z ≤ z'_min then the pixel is fully lit and ν_p = 1,
2. if z > z'_max then the pixel is considered as occluded and ν_p = 0,
3. otherwise the pixel is potentially in penumbra and ν_p must be computed accurately.

Adaptive precision

Finally, the speed of our algorithm mainly depends on the size in pixels of the search areas, which can be important for a very large penumbra. In terms of visual quality, large penumbrae require less detail than thinner ones. Hence, when real-time performance requires faster computation, we reduce the precision of large penumbrae by dynamically selecting, for each point p, a level in the HSM so that the


Figure 5: In these pictures, the frame rate has been measured with a 768×768 screen resolution. Top row: our algorithm without
adaptive precision applied with two different resolutions (left and center) compared with standard hard shadow mapping (right).
Bottom row: illustration of the adaptive precision capabilities of our algorithm for two different threshold values of the search
area (center and right). Colors in the left picture indicate the locally selected level in the HSM. Colors red, green, blue and
yellow respectively correspond to levels of resolution 1024×1024, 512×512, 256×256 and 128×128.

search area does not exceed a given size threshold (figure 5). Therefore, for large penumbrae, rather than backprojecting many samples of the shadow map (level 0 of the HSM), we project fewer but larger samples from a coarser level of the HSM. Since we cannot store any coverage information per pixel of the HSM (coverage values depend on the receiver depth), we simply take as sample depth z_s the minimal depth value in the selected coarse level. When the scene is composed of large penumbra areas, this approximation leads to small visual quality degradations that are described and discussed in section 4.1.

3. Implementation

Our soft shadow mapping algorithm can be implemented in several ways, depending on the application and the hardware. Since there is no general best solution, we present our implementation and discuss the variants.

Deferred shading like strategy

In order to be independent of the scene complexity we use a deferred shading like strategy: at each frame, the scene is rendered a first time from the view point without any shading calculation but, as for the shadow map, into a single component floating point buffer storing linear depth values. In our implementation this depth buffer is only used by the visibility passes to compute the V-buffers (one per light source) and hence the whole geometry is rendered a second time for the final rendering pass (this pass can take advantage of early z-cull). We have opted for this approach rather than a full deferred shading approach because it is more flexible and it provides better hardware anti-aliasing support.

Dynamic branching versus multi-pass

The simplest way to compute the visibility buffer is to use a single pass with one fragment shader performing the three following steps:
1. compute the reduced search area (using the HSM),
2. classify the pixel as fully occluded, fully lit or in penumbra,
3. if the pixel is in penumbra, loop over the samples of the search area and accurately compute the visibility factors.

In this case, efficient dynamic branching support in the fragment shader stages is required to both loop over the selected samples and avoid complex computations on pixels outside the penumbra. However, on some hardware, branching is expensive.

The test "in penumbra?" can be attractively substituted by two passes taking advantage of early-z fragment rejection.
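Steps 1 and 2 above (search area reduction and penumbra classification, section 2.2) can be sketched on the CPU as follows (an illustrative Python sketch; all function names are ours, and on the GPU the pyramid lives in a mipmapped texture instead of nested lists):

```python
def build_hsm(shadow_map):
    """Hierarchical shadow map: level 0 is the shadow map itself; each
    coarser level stores the (min, max) depth of the 2x2 block it
    covers (a min/max mipmap pyramid, section 2.2)."""
    level = [[(z, z) for z in row] for row in shadow_map]
    levels = [level]
    while len(level) > 1:
        coarse = []
        for i in range(0, len(level), 2):
            row = []
            for j in range(0, len(level[0]), 2):
                block = (level[i][j], level[i][j + 1],
                         level[i + 1][j], level[i + 1][j + 1])
                row.append((min(b[0] for b in block),
                            max(b[1] for b in block)))
            coarse.append(row)
        levels.append(coarse)
        level = coarse
    return levels

def search_area_width(l, n, w, z_min, z):
    """Equation 2: width of the search area once the light/point
    pyramid is clamped at depth z_min."""
    return l * (n / w) * (1.0 / z_min - 1.0 / z)

def classify(z, z_min, z_max):
    """Penumbra classification against the local HSM depth bounds."""
    if z <= z_min:
        return "lit"       # nu_p = 1
    if z > z_max:
        return "umbra"     # nu_p = 0
    return "penumbra"      # nu_p must be computed accurately
```

The CPU version is only meant to make the data flow explicit; on the GPU, classify runs per fragment and its result drives either the dynamic branch or the early-z depth trick.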



Figure 6: Our algorithm (a), is compared to a reference image (b), the penumbra wedges technique (c) and the flood fill
algorithm (d).

Basically, for each pixel, the first pass computes the reduced search area and performs the classification (steps 1 and 2). Accordingly, fragment depth is set such that only pixels classified as in penumbra pass the depth test during the next pass (first part of step 3). Then, the dynamic loops can be replaced by static loops if we set the size of the search area. The choice of this size directly depends on the performance one wants to achieve: a small size leads to fast computations but to the selection of a coarse level in the HSM to evaluate the visibility factor of large penumbrae (section 2.2), while a large size produces accurate soft shadows but requires more expensive computations.

We have tested our algorithm on current NVidia GPUs (GeForce 6x00 and 7x00). Due to the coarse support of dynamic branching on these GPUs (groups of 64×64 fragments follow the same branch), it is difficult to determine the best options since performance can vary significantly according to the scene and the view point position. However, after many experiments and because the dynamic loops treat fewer samples, the two-pass approach combined with dynamic loops seems to be the best compromise. We believe that, on GPUs having efficient dynamic branching support, such as the latest ATI GPU generation (X1x00) which manages small groups of 4×4 pixels, the fully dynamic version would be the best choice in any case. Such hardware should also exhibit significantly better relative performance.

Scene                  Fig. 7   Fig. 1   Fig. 8
Shadow map               1.7      2.6      8.7
Camera depth map         0.7      1.3      7.6
HSM construction         3.1      3.1      3.1
Visibility pass 1        0.9      0.9      0.9
Visibility pass 2       39       28       15
Final rendering pass     0.8      1.6      8.2
Total (ms)              46.2     37.5     43.5
fps                     21.6     26.6     23

Table 1: Rendering times in ms for each step of our algorithm when it is applied on different scenes (without adaptive resolution).

4. Results

In this section, we present performance and visual results followed by a short discussion on limitations. The results presented here have been obtained with a GeForce 7800 graphics card.

4.1. Performance

Our algorithm is very efficient, especially for complex dynamic scenes. Indeed, with respect to classical shadow mapping algorithms, our additional visibility calculations do not depend on the scene complexity and they are only performed on visible points which are in or close to the penumbra. For instance, the scene in figure 8, composed of 800k polygons, is rendered at about 40 fps with classical shadow mapping and at 23 fps using our soft shadow algorithm without adaptive resolution. The rendering times of each part of our algorithm are summarized in table 1 for scenes of various complexities. These results clearly show that our visibility calculations do not depend on the scene complexity at all, but rather on the size of the penumbrae and the resolution of the shadow map.

In practice, the worst cases are close views on large penumbrae with a high shadow map resolution. In such cases, our performance can drop down to a few frames per second at full precision. For instance, the top left picture in figure 5 shows a simple scene rendered at 2.75 fps. The low performance is explained by the size of the search areas, which exceed 32×32 for pixels situated in large penumbrae. This figure also illustrates the effects of our adaptive precision strategy (section 2.2) on both performance and visual quality. As we can see, a threshold value setting the maximal size of the search area at 8×8 is sufficient to successfully meet real-time rates, and this with a low visual quality degradation. Indeed, the use of local low resolutions still provides a high quality penumbra except at the transition between different levels of the HSM (see the discontinuities in the penumbra of figure 5 bottom right). Note that, even though taking minimal depth values in the low levels of the HSM may seem to be a coarse approximation (section 2.2), for the same speed, this approximation clearly provides better visual quality than reducing the resolution of the whole shadow map (figure 5 central images).


Figure 7: (a) Our algorithm. (b) Reference image. (c) The penumbra wedges technique. (d) Our algorithm without gap filling. (e) The flood fill algorithm.

4.2. Visual results

In order to evaluate our visual results, we take as reference the average of several accurate hard shadows, e.g. high resolution hard shadow maps (2048×2048) computed from 1024 point light samples.

Figure 1 demonstrates the capability of our algorithm to deal with binary alpha-textured meshes (wire netting and foliage) without performance penalty and with the same visual quality as for any other rasterizable geometry. In figures 6 and 7 we compare our algorithm against the well known penumbra wedges technique [AAM03] and a recent flood fill algorithm [AHT04] on two difficult synthetic examples. As we can see, our algorithm correctly handles occluder fusion when the occluders are close to each other (figure 6), while the other methods do not. When the blend of occluders becomes more complex (figure 7), our algorithm exhibits darker penumbrae where the samples of the shadow map overlap too much. However, our results are still closer to the reference than those obtained with the other methods. Figure 7d also illustrates the importance of our gap filling method.

5. Discussion and conclusion

Our algorithm significantly improves the capability of image-based methods to generate dynamic soft shadows in real-time. In addition to providing all the advantages of classical shadow maps (such as independence from the geometry), it produces soft shadows with similar quality to real-time silhouette extraction based techniques but without scalability limitations. Furthermore, our algorithm can handle textured light sources, it is easy to integrate into applications and it does not require any precomputation.

In order to exhibit all these important features, our technique uses some approximations. Since we use a single shadow map, we consider as occluders only object parts that are visible from the center of the light. However, some hidden parts can be visible from other points of the light. Thus these object parts should be treated as occluders whereas they are not considered as such by our method (see figure 9). Artifacts due to this approximation are the same as the single silhouette artifacts of the penumbra-wedges technique [AAM03].

Figure 9: Illustration of the well known single light sample artifact. Left: reference image. Right: our algorithm.

Even though these artifacts are seldom perceptible, they can be significantly reduced by splitting the light source area as proposed in [ADMAM03] for the penumbra wedges technique. In this case it is important to notice that, in the case of simple scenes, almost the same performance is obtained when n small light sources are used rather than a single light source of the same area. Indeed, whereas the


Figure 8: A scene composed of 800k polygons (screen resolution: 768×768 pixels). From left to right: reference image obtained
from the average of 1024 light samples (17s per frame), our algorithm without adaptive resolution (23 fps) and standard shadow
mapping with hardware 2×2 percentage closer filtering showing the pixels of the shadow map (40 fps).

rendering time of the shadow maps will be multiplied by a factor of n, our visibility computations will not increase since they are linear with respect to the light source area.

In the future we intend to increase the accuracy of our soft shadows. This can be achieved by first removing sample overlappings, and then improving the quality of our adaptive precision strategy when the shadow map resolution is too high. Also, when the local shadow map resolution is too low, it becomes necessary to increase its effective resolution. Hence, it could be pertinent to investigate the integration of such existing methods for hard shadow mapping into our soft shadow mapping algorithm.

References

[AAM03] Assarsson U., Akenine-Möller T.: A geometry-based soft shadow volume algorithm using graphics hardware. Proceedings of ACM SIGGRAPH 2003 22, 3 (2003), 511–520.

[ADMAM03] Assarsson U., Dougherty M., Mounier M., Akenine-Möller T.: An optimized soft shadow volume algorithm with real-time performance. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware (2003), ACM Press.

[AHL∗06] Atty L., Holzschuch N., Lapierre M., Hasenfratz J.-M., Hansen C., Sillion F.: Soft shadow maps: Efficient sampling of light source visibility. Computer Graphics Forum (2006). (to appear).

[AHT04] Arvo J., Hirvikorpi M., Tyystjärvi J.: Approximate soft shadows using image-space flood-fill algorithm. In Proceedings of Eurographics 2004, Computer Graphics Forum (2004), vol. 23, pp. 271–280.

[ARHM00] Agrawala M., Ramamoorthi R., Heirich A., Moll L.: Efficient image-based methods for rendering soft shadows. In Proceedings of ACM SIGGRAPH 2000 (2000), ACM Press, pp. 375–384.

[BS02] Brabec S., Seidel H.-P.: Single sample soft shadows using depth maps. In Proceedings of Graphics Interface (2002).

[CD03] Chan E., Durand F.: Rendering fake soft shadows with smoothies. In Proceedings of the 14th Eurographics Workshop on Rendering (2003), Eurographics Association, pp. 208–218.

[Cro84] Crow F. C.: Summed-area tables for texture mapping. In Proceedings of ACM SIGGRAPH '84 (1984), ACM Press, pp. 207–212.

[FFBG01] Fernando R., Fernandez S., Bala K., Greenberg D. P.: Adaptive shadow maps. In Proceedings of ACM SIGGRAPH 2001 (2001), ACM Press, pp. 387–390.

[HBS00] Heidrich W., Brabec S., Seidel H.-P.: Soft shadow maps for linear lights. In Rendering Techniques '00 (Proceedings of the 11th EG Workshop on Rendering) (2000), Eurographics Association, pp. 269–280.

[HLHS03] Hasenfratz J.-M., Lapierre M., Holzschuch N., Sillion F.: A survey of real-time soft shadows algorithms. Computer Graphics Forum 22, 4 (2003).

[MT04] Martin T., Tan T.-S.: Anti-aliasing and continuity with trapezoidal shadow maps. In Proceedings of the 15th EG Symposium on Rendering (2004), Eurographics Association.

[RSC87] Reeves W. T., Salesin D. H., Cook R. L.: Rendering antialiased shadows with depth maps. In Proceedings of SIGGRAPH '87 (1987), ACM Press, pp. 283–291.

[SD02] Stamminger M., Drettakis G.: Perspective shadow maps. In Proceedings of ACM SIGGRAPH '02 (2002), ACM Press, pp. 557–562.

[SS98] Soler C., Sillion F. X.: Fast calculation of soft shadow textures using convolution. In Proceedings of ACM SIGGRAPH '98 (1998), ACM Press, pp. 321–332.

[WH03] Wyman C., Hansen C.: Penumbra maps: Approximate soft shadows in real-time. In Proceedings of the 14th Eurographics Workshop on Rendering (2003), Eurographics Association, pp. 202–207.

[Wil78] Williams L.: Casting curved shadows on curved surfaces. In Proceedings of ACM SIGGRAPH '78 (1978), ACM Press, pp. 270–274.

[WSP04] Wimmer M., Scherzer D., Purgathofer W.: Light space perspective shadow maps. In Proceedings of the 15th EG Symposium on Rendering (2004), Eurographics Association.
