Hybrid Rendering for Real-Time Ray Tracing
ABSTRACT
This chapter describes the rendering pipeline developed for PICA PICA, a real-time
ray tracing experiment featuring self-learning agents in a procedurally assembled
world. PICA PICA showcases a hybrid rendering pipeline in which rasterization,
compute, and ray tracing shaders work together to enable real-time visuals
approaching the quality of offline path tracing.
The design behind the various stages of such a pipeline is described, including
implementation details essential to the realization of PICA PICA’s hybrid ray tracing
techniques. Advice on implementing the various ray tracing stages is provided,
supplemented by pseudocode for ray traced shadows and ambient occlusion. A
replacement to exponential averaging in the form of a reactive multi-scale mean
estimator is also included. Even though PICA PICA’s world is lightly textured and
small, this chapter describes the necessary building blocks of a hybrid rendering
pipeline that could then be specialized for any AAA game. Ultimately, this chapter
provides the reader with a solid design for augmenting existing physically
based deferred rendering pipelines with ray tracing, in a modular fashion that is
compatible across visual styles.
By relying on the interaction of multiple graphical stages, and by using each stage’s
unique capabilities to solve the task at hand, modularization of the rendering
process allows for achieving each visual aspect optimally. The interoperability
of DirectX also allows for shared intermediary results between passes and,
ultimately, the combination of those techniques into the final rendered image.
Moreover, such a compartmentalized approach is scalable, where techniques
mentioned in Figure 25-2 can be adapted depending on the user’s hardware
capabilities. For example, primary visibility and shadows can be rasterized or ray
traced, and reflections and ambient occlusion can be ray traced or ray marched.
Global illumination, transparency, and translucency are the only features of PICA
PICA’s pipeline that fully require ray tracing. The various stages described in
Figure 25-2 are executed in the following order:
1. Object-space rendering.
2. Global illumination.
3. G-buffer layout.
4. Direct shadows.
5. Reflections.
6. Direct lighting.
7. Transparency and translucency.
8. Post-processing.
25.2.1 SHADOWS
Accurate shadowing undeniably improves the quality of a rendered image. As seen
in Figure 25-3, ray traced shadows are great because they perfectly ground objects
in a scene, handling both small- and large-scale shadowing at once.
440
Hybrid Rendering for Real-Time Ray Tracing
Figure 25-4. Hybrid ray traced shadows: hard (left) and soft and filtered (right).
With DirectX Raytracing (DXR), ray traced shadows can be achieved with a ray
generation shader and a miss shader.
The code also demonstrates the use of the cone angle function,
uniformSampleCone(), which drives the softness of the penumbra. The wider the
angle, the softer the penumbra, but more noise will be generated. This noise can
be mitigated by launching additional rays, but it can also be solved with filtering.
The latter is illustrated in Figure 25-5.
Figure 25-5. Hybrid ray traced shadows: unfiltered (left) and filtered (right).
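The penumbra softness described above follows from how directions are drawn within a cone around the light. A minimal sketch of a uniformSampleCone-style mapping is given below (illustrative Python, not the demo's HLSL; it assumes a cone around the +Z axis, to be rotated toward the light in practice):

```python
import math

def uniform_sample_cone(u1, u2, cos_theta_max):
    # Uniformly sample a direction inside a cone around +Z.
    # 'cos_theta_max' is the cosine of the cone's half-angle:
    # a wider cone (smaller cosine) yields softer penumbrae, at
    # the cost of more noise per sample.
    cos_theta = 1.0 - u1 * (1.0 - cos_theta_max)
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Every returned direction is unit length and lies within the cone, so averaging visibility over many such rays yields the soft-shadow estimate.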
Figure 25-6. Shadow filtering, inspired by the work of Schied et al. [24].
One should note that we implement shadows with closest-hit shaders. Shadows
can also be implemented with any-hit shaders, and we could specify that we only
care about the first unsorted hit. We did not have any alpha-tested geometry such
as vegetation in PICA PICA, therefore any-hit shaders were not necessary for this
demo.
Though our approach works for opaque shadows, it is possible to rely on a similar
approach for transparent shadows [4]. Transparency is a hard problem in real-time
graphics [20], especially if limited to rasterization. With ray tracing new alternatives
are possible. We achieve transparent shadows by replacing the regular shadow
tracing code with a recursive ray trace through transparent surfaces. Results are
showcased in Figure 25-7.
In the context of light transport inside thick media, proper tracking [11] in real time
is nontrivial. For performance reasons we follow a thin-film approximation, which
assumes that the color is on the surface of the objects. Implementing distance-
based absorption could be a future improvement.
For any surface that needs shadowing, we shoot a ray toward the light. If we
hit an opaque surface, or if we miss, we terminate the ray. However, if we hit a
transparent surface, we accumulate absorption based on the albedo of the object.
We keep tracing toward the light until all light is absorbed, the trace misses, or we
hit an opaque surface. See Figure 25-8.
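The accumulation loop just described can be sketched as follows (illustrative Python; the ordered list of hits and the per-channel albedo representation are hypothetical stand-ins for the recursive trace toward the light):

```python
def transparent_shadow(hits, eps=1e-3):
    # 'hits' is the ordered list of surfaces intersected toward the light;
    # each entry is (is_opaque, albedo). Returns per-channel light visibility.
    transmittance = [1.0, 1.0, 1.0]
    for is_opaque, albedo in hits:
        if is_opaque:
            return [0.0, 0.0, 0.0]          # blocked by an opaque surface
        # Accumulate absorption based on the transparent surface's albedo.
        transmittance = [t * a for t, a in zip(transmittance, albedo)]
        if max(transmittance) < eps:
            return [0.0, 0.0, 0.0]          # effectively all light absorbed
    return transmittance                     # ray reached the light
```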
Our approach ignores the complexity of caustic effects, though we do take the
Fresnel effect into account on interface transitions. Note that Schlick’s Fresnel
approximation [25] falls apart when the index of refraction on the incident side of
the medium is higher than on the far side. We therefore use a total internal
reflection modification [16] of Schlick’s model.
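One common form of such a modification evaluates Schlick's term with the transmitted angle when exiting the denser medium, and returns full reflection past the critical angle. The sketch below (illustrative Python) shows this idea, not necessarily the exact formulation of [16]:

```python
import math

def fresnel_schlick_tir(cos_i, n1, n2):
    # Schlick's approximation with a total-internal-reflection fix:
    # when exiting a denser medium (n1 > n2), evaluate Schlick with the
    # transmitted angle, and return full reflection past the critical angle.
    f0 = ((n1 - n2) / (n1 + n2)) ** 2
    if n1 > n2:
        eta = n1 / n2
        sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
        if sin_t2 > 1.0:
            return 1.0                       # total internal reflection
        cos_i = math.sqrt(1.0 - sin_t2)      # use the transmitted angle
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5
```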
Similar to opaque ray traced soft shadows, we filter transparent soft shadows
with our modified SVGF filter. One should note that we only compute transparent
shadows in the context of direct shadowing. In the event where any other pass
requires light visibility sampling, for performance reasons we approximate such
visibility by treating all surfaces as opaque.
25.2.2 REFLECTIONS
One of the main techniques that takes advantage of ray tracing is reflections.
Reflections are an essential part of a rendered image. If done properly, reflections
ground objects in the scene and significantly improve visual fidelity.
Lately, video games have relied on both local reflection volumes [17] and screen-
space reflections (SSR) [27] for computing reflections with real-time constraints.
While such techniques can generally provide convincing results, they are often not
robust. They can easily fall apart, either by lacking view-dependent information or
simply by not being able to capture the complexity of interreflections. As shown
in Figure 25-9, ray tracing enables fully dynamic complex reflections in a robust
fashion.
Similar to our approach for shadows and ambient occlusion, reflection rays are
launched from the G-buffer, thus eliminating the need for ray tracing of primary
visibility. Reflections are traced at half resolution, or at a quarter of a ray per pixel.
While this might sound limiting, a multistage reconstruction and filtering algorithm
brings reflections up to full resolution. By relying on both spatial and temporal
coherency, missing information can be filled and visually convincing reflections
can be computed while keeping performance in check. Our technique works on
arbitrary opaque surfaces, with varying normals, roughness, and material types.
Our initial approach combined this with SSR for performance, but in the end we
rely solely on ray traced reflections for simplicity and uniformity. Our approach
relies on stochastic sampling and spatiotemporal filtering, instead of post-trace
screen-space blurring. Therefore, we believe that our approach is closer to
ground-truth path tracing, as surface appearance is driven by the construction of
stochastic paths from the BRDF. Our approach also does not require special care
at object boundaries, where blurring issues may occur with screen-space filtering
approaches.
The reflection system comes with its own pipeline, as depicted in Figure 25-10.
The process begins by generating rays via material importance sampling. Given a
view direction, a reflected ray taking into account our layered BRDF is generated.
Inspired by Weidlich and Wilkie’s work [29], our material model combines multiple
layers into a single, unified, and expressive BRDF. This model works for all
lighting and rendering modes, conserves energy, and handles the Fresnel effect
between layers. Sampling the complete material is complex and costly, so we
instead sample it one layer at a time.
Since we have only a quarter of a ray per pixel, we must ensure a high-quality
distribution. We use the low-discrepancy quasi-random Halton sequence because
it is easy to calculate, and well distributed for low and high sample counts. We
couple it with Cranley-Patterson rotation [7] for additional per-pixel jittering, in
order to obtain a uniquely jittered sequence for every source pixel.
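A minimal sketch of the Halton radical inverse combined with Cranley-Patterson rotation follows (illustrative Python; in practice the per-pixel offset would come from a separate hash or noise texture):

```python
def halton(index, base):
    # Radical inverse of 'index' in the given base: digits of the index
    # are mirrored across the radix point, yielding a low-discrepancy
    # sequence in [0, 1).
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jittered_sample(index, base, pixel_offset):
    # Cranley-Patterson rotation: shift the whole sequence by a per-pixel
    # offset, wrapping back into [0, 1), so every pixel sees a uniquely
    # jittered copy of the same well-distributed sequence.
    return (halton(index, base) + pixel_offset) % 1.0
```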
From every point in the sample space, a reflected direction is generated. Because
we are sampling solely from the normal distribution, reflection rays that point
below the horizon are possible. We detect this undesirable case, as depicted by the
blue line in Figure 25-11, and compute an alternative reflection ray.
Figure 25-11. Left: BRDF reflection sampling. Right: Cranley-Patterson rotated Halton sequence. The
probability distribution (light gray area with dashed outline) contains valid BRDF importance-sampled
reflection rays (green) and reflection rays below the horizon (blue).
The simplest way to sample our material model is by choosing one of the layers
with uniform probability and then sampling that layer’s BRDF. This can be wasteful:
a smooth clear coat layer is barely visible head on yet dominates at grazing angles.
To improve the sampling scheme, we draw the layer from a probability mass
function based on each layer’s approximate visibility. See Figure 25-12.
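Drawing a layer from such a probability mass function can be sketched as follows (illustrative Python; the visibility-based weights themselves are assumed to be given):

```python
def pick_layer(weights, u):
    # Select a material layer from a probability mass function built from
    # approximate per-layer visibility weights, given a uniform random
    # number 'u' in [0, 1). Returns the layer index and its probability,
    # which the caller needs for importance-sampling weights.
    total = sum(weights)
    cdf = 0.0
    for i, w in enumerate(weights):
        cdf += w / total
        if u < cdf:
            return i, w / total
    return len(weights) - 1, weights[-1] / total
```

A clear coat layer viewed head on would receive a small weight here, so few rays are wasted on it, while at grazing angles its weight (and thus its sampling probability) grows.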
After selecting the material layer, we generate a reflection ray matching its
properties using the microfacet normal sampling algorithm mentioned earlier. In
addition to the reflection vector, we also need the probability with which it has been
sampled. We will later scale lighting contributions by the inverse of this value, as
dictated by the importance sampling algorithm. It is important to keep in mind that
multiple layers can potentially generate the same direction. Yet, we are interested
in the probability for the entire stack, not just an individual layer. We thus add
up the probabilities so that the final value corresponds to having sampled the
direction from the entire stack, rather than an individual layer. Doing so simplifies
the subsequent reconstruction pass, and allows using it to reason about the entire
material rather than its parts.
result = 0.0
weightSum = 0.0

for pixel in neighborhood:
    weight = localBrdf(pixel.hit) / pixel.hitPdf
    result += color(pixel.hit) * weight
    weightSum += weight

result /= weightSum
Figure 25-13. Hybrid ray traced reflections at a quarter ray per pixel.
Once the half-resolution results have been computed, the spatial filter is applied.
Results are shown in Figure 25-14. While the output is still noisy, it is now full
resolution. This filter also gives variance reduction comparable to actually shooting
16 rays per pixel, similar to the work of Stachowiak [27] and Heitz et al. [12]. Every
full-resolution pixel reconstructs its reflection from a set of neighboring ray hits,
using a weighted average in which the local pixel’s BRDF weighs each contribution.
Contributions are also scaled by the inverse PDF of the source rays, to account for
their distribution. This operation is biased, but it works well in practice.
The final step in the reflection pipeline is a simple bilateral filter that cleans up
some of the remaining noise. While this kind of filter can be a blunt instrument that
can overblur the image, it is needed for high-roughness reflections. Unlike SSR,
ray tracing cannot rely on a blurred version of the screen for prefiltered radiance,
and thus produces much more noise, requiring more aggressive filters.
Nonetheless, we can still control the filter’s effect.
We estimate variance in the image during the spatial reconstruction pass, as
shown in Figure 25-15, and use it to tune the bilateral kernel. Where variance is
low, we reduce the kernel size and sample count, which prevents overblurring.
Near the end of the frame, we apply temporal antialiasing and get a pretty clean
image. When looking at Figure 25-9, it is important to remember that it comes from
a quarter reflection ray per pixel per frame and works with a dynamic camera and
dynamic objects.
Figure 25-16. Top left: motion reprojection. Top right: hit point reprojection. Bottom left: motion and
hit point reprojection blending. Bottom right: with reprojection clamping.
Finally, as demonstrated by Karis [15], we can use local pixel statistics to reject or
clamp the reprojected values, and force them to fit the new distribution. While the
results are not perfect, it is certainly a step forward. This biases the result and can
create some flickering, but it nicely cleans up the ghosting and is sufficient for real-
time purposes.
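The statistics-based clamp can be sketched per channel as follows (illustrative Python; real implementations typically clamp against a min/max or mean-plus-variance box built from the local pixel neighborhood):

```python
def clamp_history(history, neighborhood):
    # Clamp the reprojected history value into the range spanned by the
    # current frame's local pixel neighborhood. Values outside this range
    # are likely stale (e.g., disoccluded), so forcing them back into the
    # new distribution trades a little bias for far less ghosting.
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(hi, history))
```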
25.2.3 AMBIENT OCCLUSION
        RAY_FLAG_SKIP_CLOSEST_HIT_SHADER |
        RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH,
        RaytracingInstanceMaskAll,
        HitType_Shadow,
        SbtRecordStride,
        MissType_Shadow,
        ray,
        shadowPayload);

    result += shadowPayload.miss ? 1 : 0;
}

result /= numRays;
The shader code for ray traced ambient occlusion is similar to that of shadows, and
as such we only list the part specific to AO here. As with shadows, we reconstruct
the world-space position and normal for each pixel visible on screen using the
G-buffer.
Since the miss flag in the shadow payload is initialized to false and is only set
to true in the miss shader, we can set RAY_FLAG_SKIP_CLOSEST_HIT_SHADER
to skip the hit shader, for performance. We also do not care about how far away
an intersection is. We just want to know if there is an intersection, so we use
RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH as well. Finally, the cosine-
weighted distribution of samples is generated on a unit hemisphere and rotated
into world space using a basis produced from the G-buffer normal.
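The cosine-weighted distribution can be generated via Malley's method, sampling a uniform disk and projecting up onto the hemisphere. A sketch in local space, before rotation into the world-space basis (illustrative Python):

```python
import math

def cosine_sample_hemisphere(u1, u2):
    # Cosine-weighted direction on the +Z hemisphere (Malley's method):
    # sample a point uniformly on the unit disk, then project it up.
    # The resulting PDF is cos(theta) / pi, matching the AO integrand.
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))
    return (x, y, z)
```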
Figure 25-17. Top left: ray traced AO (1000 spp). Top right: hybrid ray traced AO (1 spp). Bottom left:
filtered hybrid ray traced AO (1 spp). Bottom right: GTAO.
25.2.4 TRANSPARENCY
Unlike rasterization, where the rendering of transparent geometry is often treated
separately from opaque geometry, ray tracing can streamline and unify the
computation of light transport inside thick media with the rest of the scene. One
notable example is the rendering of realistic refractions for transparent surfaces
such as glass. See Figure 25-18.
Figure 25-18. Left: object-space ray traced transparency result. Right: texture-space output.
With ray tracing, interface transitions are easier to track because each transition
is part of a chain of intersections. As seen in Figure 25-19, as a ray travels inside
and then outside a medium, it can be altered based on the laws of optics and
parameters of that medium. Intermediary light transport information is modified
and carried with the ray, as part of its payload, which enables the computation
of visually convincing effects such as absorption and scattering. We describe the
latter in Section 25.2.5.
Figure 25-22. Object-space ray traced transparency: result (left) and texture-space output (right).
Using our parameterization and camera information, we drive ray origin and ray
direction during tracing. Clear glass refraction is achieved using Snell’s law,
whereas rough glass refraction is achieved via a physically based scattering
function [28]. The latter generates rays refracted off microfacets, spreading into
wider cones for rougher interfaces.
A feature of DXR that enables this technique is the ability to know if we have
transitioned from one medium to another. This information is provided by the
HitKind() function, which tells us whether we have hit the front or the back side of
the geometry.
With such information we can alter the index of refraction and correctly handle
media transitions. We can then trace a ray, sample lighting, and finish by
modulating the results by the medium’s absorption, approximated by Beer’s law.
Chromatic aberration can also be applied, to approximate wavelength-dependent
refraction.
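The Snell's-law transition and Beer's-law absorption can be sketched as follows (illustrative Python; the per-channel absorption coefficients are assumed inputs describing the medium):

```python
import math

def refract_cosine(cos_i, n1, n2):
    # Snell's law: cosine of the refracted angle for a transition from a
    # medium of index n1 into index n2, or None on total internal reflection.
    eta = n1 / n2
    sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None
    return math.sqrt(1.0 - sin_t2)

def beer_lambert(absorption, distance):
    # Beer's law: per-channel transmittance through a medium with the
    # given absorption coefficients over the traveled 'distance'.
    return [math.exp(-a * distance) for a in absorption]
```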
25.2.5 TRANSLUCENCY
Three ray traced images with translucency are shown in Figure 25-23. Similar
to transparency, we parameterize translucent objects in texture space. The
scattering process is represented in Figure 25-24: Starting with (a) a light source
and a surface, we consider valid vectors using (b) the surface normals. Focusing
on a single normal vector for now, (c) we then push the vector inside the surface.
Next, (d) we launch rays in a uniform sphere distribution similar to the work by
Christensen et al. [6]. Several rays can be launched at once, but we only launch
one per frame. Finally, (e) lighting is computed at the intersection, and (f) previous
results are gathered and blended with the current result.
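Step (d)'s uniform sphere distribution can be sketched as follows (illustrative Python):

```python
import math

def uniform_sample_sphere(u1, u2):
    # Uniform direction on the unit sphere: z is uniform in [-1, 1],
    # and the azimuth is uniform in [0, 2*pi).
    z = 1.0 - 2.0 * u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)
```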
We let results converge over multiple frames via temporal accumulation. See
Figure 25-25. Spatial filtering can be used as well, although we did not encounter
enough noise to make it worthwhile because of the diffuse nature of the effect.
Since lighting conditions can change when objects move, the temporal filter
needs to invalidate results and adapt to dynamism. A simple exponential moving
average here can be sufficient. For improved response and stability, we use an
adaptive temporal filter based on exponential averaging [26], which is described
further in the next section and which varies its hysteresis to quickly reconverge to
dynamically changing conditions.
PICA PICA features an indirect diffuse lighting solution that does not require any
precomputation or pre-generated parameterization, such as UV coordinates. This
reduces the mental burden on artists, and provides realistic results by default,
without them having to worry about implementation details of the GI system.
It supports dynamic and static scenes, is reactive, and refines over time to a
high-quality result. Since solving high-quality per-pixel GI every frame is not
currently possible at real-time rates, spatial or temporal accumulation is
required. For this project, 250,000 rays per frame are budgeted for diffuse
interreflections.
It is important to note that surfels are spawned probabilistically. In the event where
the camera moves close to a wall that is missing surfels, suddenly all pixels have
low coverage and will require surfels. This would end up creating a lot of surfels in
a small area, since screen tiles are independent of each other. To solve this issue,
the spawn heuristic is made proportional to the pixel’s projected area in world
space. This process runs every frame and continues spawning surfels wherever
coverage is low. Additionally, since surfels are allocated based on screen-space
constraints, sudden geometric or camera transitions to first-seen areas can show
missing diffuse interreflections. This “first frame” problem is common among
techniques that rely on temporal amortization, and it could be noticed by the user.
The latter was not an issue for PICA PICA, but it could be depending on the target
usage of this approach.
Once assigned, surfels are persistent in the array and scene. See Figure 25-28.
This is necessary for the incremental aspect of the diffuse interreflection
accumulation. Because of the simple nature of PICA PICA’s scene, we did not
have to manage complex surfel recycling. We simply reset the atomic counter
at scene reload. As shown in Section 25.3, performance on current ray tracing
hardware was quite manageable, at a cost of 0.35 ms for 250,000 surfels. We believe
surfel counts can be increased quite a bit before it becomes a performance issue.
A more advanced allocation and deallocation scheme might be necessary in case
one wants to use this technique for a more complex use case, such as a video
game. Further research here is required, especially with regards to level of detail
management for massive open-world games.
Surfels are rendered similarly to light sources when applied to the screen.
Similar to the approach by Lehtinen et al. [19], a smoothstep distance attenuation
function is used, along with the Mahalanobis metric to squash surfels in the
normal direction. Angular falloff is used as well, but each surfel’s payload is just
irradiance, without any directionality. For performance reasons, an additional
world-space data structure enables the query of indirect diffuse illumination in
three-dimensional space. This grid structure, in which every cell stores a list of
surfels, also serves as a culling mechanism. Each pixel or point in space can then
look up the containing cell and find all relevant surfels.
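The cell lookup can be sketched as follows (illustrative Python; the grid layout and single-cell query are simplifying assumptions, as a real implementation would also gather from neighboring cells within each surfel's radius):

```python
import math

def cell_of(position, cell_size):
    # Map a world-space position to integer grid-cell coordinates.
    return tuple(int(math.floor(p / cell_size)) for p in position)

def gather_surfels(position, grid, cell_size):
    # Look up the containing cell and return the surfel indices stored
    # there; 'grid' maps cell coordinates to lists of surfel indices,
    # acting as the culling mechanism described above.
    return grid.get(cell_of(position, cell_size), [])
```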
A downside of using surfels is, of course, the limited resolution and the lack
of high-frequency detail. To compensate, a colored multiple-bounce variant
of screen-space ambient occlusion [14] is applied to the calculated per-pixel
irradiance. The use of high-frequency AO here makes our technique diverge from
theory, but it is an aesthetic choice that compensates for the lack of high-frequency
detail. This colored multi-frequency approach also helps retain the warmth in our
toy-like scenes. See Figure 25-29.
Figure 25-29. Left: colored GTAO. Center: surfel GI. Right: surfel GI with colored GTAO.
Figure 25-30. Left: full recursive path tracing. Right: incremental previous frame path tracing.
Path tracing typically uses Monte Carlo integration. If expressed as a running mean
estimator, the integration is an average of contributions with linearly decaying
weights. Its convergence hinges on the integrand being immutable. In the case of
our dynamic GI, the integrand changes all the time. Interactive path tracers and
progressive light map bakers [8, 13] typically tackle this by resetting accumulation
on change. Their goals are different though, as they try to converge to the correct
solution over time, and do not allow error. As such, a hard reset is actually
desirable for them, but not for a real-time demo.
Since we cannot use proper Monte Carlo, we outright give up on ever converging.
Instead, we use a modified exponential mean estimator,
x̄₀ = 0,    x̄ₙ₊₁ = lerp(x̄ₙ, xₙ₊₁, k),    (1)
whose formulation is similar to that of plain Monte Carlo. The difference is in how
the blending factor k is defined. In exponential averaging, the weight for a new
sample is constant and typically set low, so that variance in input is scaled by a
small value and does not look jarring in the output.
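As a concrete scalar reading of Equation 1 (illustrative Python):

```python
def lerp(a, b, k):
    # Linear interpolation between a and b by factor k.
    return a + k * (b - a)

def exponential_average(samples, k):
    # Exponential mean estimator: each new sample is blended into the
    # running mean with a constant factor k, so older samples decay
    # geometrically rather than linearly as in plain Monte Carlo.
    mean = 0.0
    for x in samples:
        mean = lerp(mean, x, k)
    return mean
```

With a constant input, the estimate converges toward that value at a rate set entirely by k, which is exactly the hysteresis the adaptive scheme below varies.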
If the input does not have high variance, the output will not either. We can then use a
higher blending factor k. The specifics of our integrand change all the time though,
so we need to estimate that dynamically. We run short-term mean and variance
estimators, which we then use to inform our primary blending factor. The short-
term statistics also give us an idea of the plausible range of values into which the
input samples should fall. When they start to drift, we increase the blending factor.
This works well in practice and allows for a reactive indirect diffuse lighting solution,
as demonstrated by this demo.
struct MultiscaleMeanEstimatorData
{
    float3 mean;
    float3 shortMean;
    float vbbr;
    float3 variance;
    float inconsistency;
};

float3 MultiscaleMeanEstimator(float3 y,
    inout MultiscaleMeanEstimatorData data,
    float shortWindowBlend = 0.08f)
{
    float3 mean = data.mean;
    float3 shortMean = data.shortMean;
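A simplified scalar sketch of such a multi-scale estimator follows (illustrative Python, not the shader's exact heuristic; the mapping from drift to blend factor is an assumption made for the sketch):

```python
def multiscale_mean_estimator(y, state, short_blend=0.08):
    # Short-window mean and variance track the input distribution.
    state["short_mean"] += short_blend * (y - state["short_mean"])
    dev = y - state["short_mean"]
    state["variance"] += short_blend * (dev * dev - state["variance"])
    sigma = max(state["variance"], 0.0) ** 0.5
    # Distance of the sample from the short-term mean, in standard
    # deviations: large drift means the integrand likely changed.
    dist = abs(dev) / max(sigma, 1e-5)
    # Larger drift -> larger blend factor (lower hysteresis), clamped,
    # so the long-term mean reconverges quickly after a change.
    blend = min(1.0, max(0.01, 0.02 * dist))
    state["mean"] += blend * (y - state["mean"])
    return state["mean"]
```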
25.3 PERFORMANCE
Here we provide various performance numbers behind the ray tracing aspect of
our hybrid rendering pipeline. The numbers in Figure 25-31 were measured on
pre-release NVIDIA Turing hardware and drivers, for the scene and view shown in
Figure 25-32. When presented at SIGGRAPH 2018 [4], PICA PICA ran at 60 frames
per second (FPS), at a resolution of 1920 × 1080. Performance numbers were also
captured against the highest-end GPU at that time, the NVIDIA Titan V (Volta).
Figure 25-31. Performance measurements in milliseconds (ms). SIGGRAPH 2018 timings are
highlighted in green.
25.4 FUTURE
The techniques in PICA PICA’s hybrid rendering pipeline enable real-time visually
pleasing results with (almost) path traced quality, while being mostly free from
noise in spite of relatively few rays being traced per pixel and per frame. Real-time
ray tracing makes it possible to replace finicky hacks with unified approaches,
allowing for the phasing-out of artifact-prone algorithms such as screen-space ray
marching, along with all the artist time required to tune them. This opens the door
to truly effortless photorealism, where content creators do not need to be experts
in order to get high-quality results.
The surface has been barely scratched, and with real-time ray tracing a new world
of possibilities opens up. While developers will always keep asking for more power,
the hardware that we have today already allows for high-quality results at real-
time performance rates. If ray budgets are devised wisely, with hybrid rendering
we can approach the quality of offline path tracers in real time.
25.5 CODE
struct HaltonState
{
    uint dimension;
    uint sequenceIndex;
};

void haltonInit(inout HaltonState hState,
    int x, int y,
ACKNOWLEDGMENTS
The authors would like to thank the PICA PICA team at SEED, a technical and creative
research division of Electronic Arts. We would also like to thank our friends at
Frostbite and DICE, with whom we have had great discussions and collaboration as
we built this hybrid rendering pipeline. Moreover, this endeavor would not have been
possible without support from the folks at NVIDIA and the DirectX team at Microsoft.
The authors would also like to thank Morgan McGuire for his review of this chapter,
as well as Tomas Akenine-Möller and Eric Haines for their insight and support.
REFERENCES
[1] Akenine-Möller, T., Haines, E., Hoffman, N., Pesce, A., Iwanicki, M., and Hillaire, S. Real-Time
Rendering, fourth ed. A K Peters/CRC Press, 2018.
[2] Andersson, J., and Barré-Brisebois, C. DirectX: Evolving Microsoft’s Graphics Platform. Microsoft
Sponsored Session, Game Developers Conference, 2018.
[3] Andersson, J., and Barré-Brisebois, C. Shiny Pixels and Beyond: Real-Time Raytracing at SEED.
NVIDIA Sponsored Session, Game Developers Conference, 2018.
[4] Barré-Brisebois, C., and Halén, H. PICA PICA and NVIDIA Turing. NVIDIA Sponsored Session,
SIGGRAPH, 2018.
[5] Bavoil, L., Sainz, M., and Dimitrov, R. Image-Space Horizon-Based Ambient Occlusion. In ACM
SIGGRAPH Talks (2008), p. 22:1.
[6] Christensen, P., Harker, G., Shade, J., Schubert, B., and Batali, D. Multiresolution Radiosity Caching for Global Illumination in Movies. In ACM SIGGRAPH Talks (2012), p. 47:1.
[7] Cranley, R., and Patterson, T. Randomization of Number Theoretic Methods for Multiple Integration. SIAM Journal on Numerical Analysis 13, 6 (1976), 904–914.
[8] Dean, M., and Nordwall, J. Make It Shiny: Unity’s Progressive Lightmapper and Shader Graph. Game Developers Conference, 2016.
[9] Dutré, P., Bekaert, P., and Bala, K. Advanced Global Illumination. A K Peters, 2006.
[10] EA SEED. Project PICA PICA—Real-Time Raytracing Experiment Using DXR (DirectX Raytracing).
https://www.youtube.com/watch?v=LXo0WdlELJk, March 2018.
[11] Fong, J., Wrenninge, M., Kulla, C., and Habel, R. Production Volume Rendering. Production
Volume Rendering, SIGGRAPH Courses, 2017.
[12] Heitz, E., Hill, S., and McGuire, M. Combining Analytic Direct Illumination and Stochastic
Shadows. In Symposium on Interactive 3D Graphics and Games (2018), pp. 2:1–2:11.
[13] Hillaire, S. Real-Time Raytracing for Interactive Global Illumination Workflows in Frostbite.
NVIDIA Sponsored Session, Game Developers Conference, 2018.
[14] Jiménez, J., Wu, X., Pesce, A., and Jarabo, A. Practical Real-Time Strategies for Accurate
Indirect Occlusion. Physically Based Shading in Theory and Practice, SIGGRAPH Courses, 2016.
[17] Lagarde, S., and Zanuttini, A. Local Image-Based Lighting with Parallax-Corrected Cubemap. In
SIGGRAPH Talks (2012), p. 36:1.
[19] Lehtinen, J., Zwicker, M., Turquin, E., Kontkanen, J., Durand, F., Sillion, F., and Aila, T. A Meshless Hierarchical Representation for Light Transport. ACM Transactions on Graphics 27, 3 (2008), 37:1–37:10.
[20] McGuire, M., and Mara, M. Phenomenological Transparency. IEEE Transactions on Visualization
and Computer Graphics 23, 5 (2017), 1465–1478.
[21] Pharr, M., Jakob, W., and Humphreys, G. Physically Based Rendering: From Theory to
Implementation, third ed. Morgan Kaufmann, 2016.
[22] Salvi, M. An Excursion in Temporal Supersampling. From the Lab Bench: Real-Time Rendering
Advances from NVIDIA Research, Game Developers Conference, 2016.
[23] Sandy, M. Announcing Microsoft DirectX Raytracing! DirectX Developer Blog, https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/, March 2018.
[24] Schied, C., Kaplanyan, A., Wyman, C., Patney, A., Chaitanya, C. R. A., Burgess, J., Liu, S.,
Dachsbacher, C., Lefohn, A., and Salvi, M. Spatiotemporal Variance-Guided Filtering: Real-Time
Reconstruction for Path-Traced Global Illumination. In Proceedings of High-Performance Graphics
(2017), pp. 2:1–2:12.
[25] Schlick, C. An Inexpensive BRDF Model for Physically-based Rendering. Computer Graphics
Forum 13, 3 (1994), 233–246.
[26] Stachowiak, T. Stochastic All The Things: Raytracing in Hybrid Real-Time Rendering. Digital
Dragons Presentation, 2018.
[27] Stachowiak, T., and Uludag, Y. Stochastic Screen-Space Reflections. Advances in Real-Time
Rendering, SIGGRAPH Courses, 2015.
[28] Walter, B., Marschner, S. R., Li, H., and Torrance, K. E. Microfacet Models for Refraction through
Rough Surfaces. In Eurographics Symposium on Rendering (2007), pp. 195–206.
[29] Weidlich, A., and Wilkie, A. Arbitrary Layered Micro-Facet Surfaces. In GRAPHITE (2007),
pp. 171–178.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits any noncommercial use, sharing, distribution and
reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license and indicate if you modified the licensed material. You do
not have permission under this license to share adapted material derived from this chapter or parts of it.
The images or other third party material in this chapter are included in the chapter's Creative Commons license,
unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative
Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you
will need to obtain permission directly from the copyright holder.