{{short description|Simulation of light in computer graphics}}

'''Computer graphics lighting''' is the collection of techniques used to simulate light in [[computer graphics]] scenes. While [[lighting]] techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and [[Computational complexity|complexity]]. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

== Light sources ==
Light sources allow for different ways to introduce light into graphics scenes.<ref>{{Cite news|url=https://garagefarm.net/blog/light-the-art-of-exposure|title=Light: The art of exposure|date=2020-11-11|work=GarageFarm|access-date=2020-11-11|language=en-US}}</ref><ref name=":7">{{Cite web|url=https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/LightingAndShading.html|title=Intro to Computer Graphics: Lighting and Shading|website=www.cs.uic.edu|access-date=2019-11-05}}</ref>

=== Point ===
Point sources emit light from a single point in all directions, with the intensity of the light decreasing with distance.<ref name=":72">{{Cite web|url=https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/LightingAndShading.html|title=Intro to Computer Graphics: Lighting and Shading|website=www.cs.uic.edu|access-date=2019-11-05}}</ref> An example of a point source is a standalone light bulb.<ref name=":8">{{Cite web|url=http://www.bcchang.com/immersive/ygbasics/lighting.html|title=Lighting in 3D Graphics|website=www.bcchang.com|access-date=2019-11-05}}</ref>
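The distance falloff can be sketched as a simple inverse-square law (a minimal illustration; the function name is ours, and practical renderers often add constant and linear attenuation terms to tame the singularity at zero distance):

```python
import math

def point_light_intensity(source_intensity, light_pos, surface_pos):
    """Intensity arriving at surface_pos from a point light at light_pos,
    using the common inverse-square falloff."""
    delta = [s - l for s, l in zip(surface_pos, light_pos)]
    dist_sq = sum(c * c for c in delta)
    return source_intensity / dist_sq if dist_sq > 0 else source_intensity

# Doubling the distance quarters the received intensity.
near = point_light_intensity(100.0, (0, 0, 0), (1, 0, 0))  # 100.0
far = point_light_intensity(100.0, (0, 0, 0), (2, 0, 0))   # 25.0
```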
[[File:Real-time Raymarched Terrain.png|thumb|309x309px|A directional light source illuminating a terrain]]

=== Directional ===
A directional source (or distant source) uniformly lights a scene from one direction.<ref name=":8" /> Unlike a point source, the intensity of light produced by a directional source does not change with distance over the scale of the scene, as the directional source is treated as though it is extremely far away.<ref name=":8" /> An example of a directional source is sunlight on Earth.<ref name=":9">{{Cite web|url=https://www.pluralsight.com/blog/film-games/understanding-different-light-types|title=Understanding Different Light Types|website=www.pluralsight.com|language=en|access-date=2019-11-05}}</ref>

=== Spotlight ===
A spotlight produces a directed [[cone]] of light.<ref name=":73">{{Cite web|url=https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/LightingAndShading.html|title=Intro to Computer Graphics: Lighting and Shading|website=www.cs.uic.edu|access-date=2019-11-05}}</ref> The light becomes more intense as the viewer gets closer to the spotlight source and to the center of the light cone.<ref name=":73" /> An example of a spotlight is a flashlight.<ref name=":9" />
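The angular falloff toward the cone's edge can be sketched in the style of the classic fixed-function cone-and-exponent model (the cutoff angle, exponent, and function name here are illustrative, not a specific API):

```python
import math

def spot_factor(spot_dir, to_point, cutoff_deg=30.0, exponent=8.0):
    """Angular falloff of a spotlight: zero outside the cone, and
    brighter toward the cone axis via a cosine raised to a power."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    d, p = normalize(spot_dir), normalize(to_point)
    cos_angle = sum(a * b for a, b in zip(d, p))
    if cos_angle < math.cos(math.radians(cutoff_deg)):
        return 0.0  # outside the light cone
    return cos_angle ** exponent

center = spot_factor((0, 0, -1), (0, 0, -1))  # on-axis: 1.0
edge = spot_factor((0, 0, -1), (1, 0, -1))    # 45 degrees off-axis: 0.0
```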

=== Area ===
Area lights are 3D objects which emit light. Whereas point and spot light sources are treated as infinitesimally small points, area lights are treated as physical shapes.<ref>{{cite conference |last1=Lagarde |first1=Sebastien |last2=de Rousiers |first2=Charles |date=Summer 2014 |title=Moving Frostbite to Physically Based Rendering 3.0 |url=https://www.ea.com/frostbite/news/moving-frostbite-to-pb |conference=SIGGRAPH}}</ref> Area lights produce softer shadows and more realistic lighting than point and spot lights.<ref>{{Cite book |last1=Pharr |first1=Matt |title=Physically Based Rendering: From Theory to Implementation |last2=Humphreys |first2=Greg |last3=Wenzel |first3=Jakob |publisher=Morgan Kaufmann |year=2016 |isbn=978-0128006450 |edition=3rd |language=English}}</ref>

=== Ambient ===
Ambient light sources illuminate objects even when no other light source is present.<ref name=":73" /> The intensity of ambient light is independent of direction, distance, and other objects, meaning the effect is completely uniform throughout the scene.<ref name=":73" /> This source ensures that objects are visible even in complete darkness.<ref name=":9" />

=== Lightwarp ===
A lightwarp is a technique in which an object in the scene [[Refraction|refracts]] light based on the direction and [[Intensity (physics)|intensity]] of the light. The light is then warped using an ambient diffuse term with a range of the [[color spectrum]], and may be reflectively scattered or refracted. The technique is used to produce a [[Style (visual arts)#Stylization|stylized rendering]] and can limit [[Exposure (photography)|overexposure]] of objects. Games such as ''[[Team Fortress 2]]'' use the technique to create a cartoon, [[Cel shading|cel-shaded]] look.<ref>{{Cite book|chapter-url=https://hal.inria.fr/inria-00449828|chapter=Radiance Scaling for Versatile Surface Enhancement|first1=Romain|last1=Vergne|first2=Romain|last2=Pacanowski|first3=Pascal|last3=Barla|first4=Xavier|last4=Granier|first5=Christophe|last5=Schlick|title=Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games |date=February 19, 2010|pages=143–150 |publisher=ACM|via=hal.inria.fr|doi=10.1145/1730804.1730827|isbn=9781605589398 |s2cid=18291692 }}</ref>

===HDRI===
HDRI stands for high-dynamic-range image: a 360° image wrapped around a [[3D modeling|3D model]] as a setting, typically outdoors, with the sun in the sky acting as a light source. The [[Texture mapping|textures]] of the model can reflect the direct and [[Shading#Ambient lighting|ambient light]] and colors from the HDRI.<ref>https://visao.ca/what-is-hdri/#:~:text=High%20dynamic%20range%20images%20are,look%20cartoonish%20and%20less%20professional. {{Bare URL inline|date=August 2024}}</ref>

== Lighting interactions ==
In computer graphics, light usually consists of multiple components.<ref name=":82">{{Cite web|url=http://www.bcchang.com/immersive/ygbasics/lighting.html|title=Lighting in 3D Graphics|website=www.bcchang.com|access-date=2019-11-05}}</ref> The overall effect of a light source on an object is determined by the combination of the object's interactions with these components.<ref name=":82" /> The three primary lighting components (and corresponding interaction types) are diffuse, ambient, and specular.<ref name=":82" />
[[File:Phong components revised.png|thumb|544x544px|Decomposition of lighting interactions]]

=== Diffuse ===
Diffuse lighting (or [[diffuse reflection]]) is the direct illumination of an object by an even amount of light interacting with a [[Scattering|light-scattering]] surface.<ref name=":8"/><ref name=":10">{{Cite web|url=http://graphics.cs.cmu.edu/nsp/course/15-462/Spring04/slides/07-lighting.pdf|title=Lighting and Shading|last=Pollard|first=Nancy|author-link=Nancy Pollard|date=Spring 2004}}</ref> After light strikes an object, it is [[Reflection (computer graphics)|reflected]] as a function of the surface properties of the object as well as the angle of incoming light.<ref name=":10" /> This interaction is the primary contributor to the object's brightness and forms the basis for its color.<ref name=":83">{{Cite web|url=http://www.bcchang.com/immersive/ygbasics/lighting.html|title=Lighting in 3D Graphics|website=www.bcchang.com|access-date=2019-11-05}}</ref>
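The diffuse term is commonly modeled with Lambert's cosine law, in which brightness scales with the cosine of the angle between the surface normal and the light direction. A minimal sketch (the coefficient values and function name are illustrative):

```python
import math

def lambert_diffuse(normal, to_light, k_d=1.0, light_intensity=1.0):
    """Lambertian diffuse term: k_d * I * max(N.L, 0), clamped so
    back-facing surfaces receive no light."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    n, l = normalize(normal), normalize(to_light)
    n_dot_l = sum(a * b for a, b in zip(n, l))
    return k_d * light_intensity * max(n_dot_l, 0.0)

head_on = lambert_diffuse((0, 0, 1), (0, 0, 1))  # full brightness: 1.0
grazing = lambert_diffuse((0, 0, 1), (1, 0, 1))  # 45 degrees: ~0.707
behind = lambert_diffuse((0, 0, 1), (0, 0, -1))  # light behind surface: 0.0
```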

=== Ambient ===
As ambient light is directionless, it interacts uniformly across all surfaces, with its intensity determined by the strength of the ambient light sources and the properties of objects' surface materials, namely their ambient [[Reflection coefficient|reflection coefficients]].<ref name=":83" /><ref name=":10" />

=== Specular ===
The [[Specular highlight|specular lighting]] component gives objects shine and highlights.<ref name=":83" /> This is distinct from mirror effects because other objects in the environment are not visible in these reflections.<ref name=":10" /> Instead, specular lighting creates bright spots on objects based on the intensity of the specular lighting component and the specular reflection coefficient of the surface.<ref name=":10" />

== Illumination models ==
Lighting models are used to replicate lighting effects in [[Rendering (computer graphics)|rendered]] environments, where light is approximated based on the physics of light.<ref name=":1">{{Cite web|url=https://learnopengl.com/Lighting/Basic-Lighting|title=LearnOpenGL - Basic Lighting|website=learnopengl.com|access-date=2019-11-08}}</ref> Without lighting models, replicating lighting effects as they occur in the natural world would require more processing power than is practical for computer graphics.<ref name=":1" /> The purpose of a lighting, or illumination, model is to compute the color of every pixel or the amount of light reflected by the different surfaces in the scene.<ref>{{Cite web|url=https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/LightingAndShading.html|title=Intro to Computer Graphics: Lighting and Shading|website=www.cs.uic.edu|access-date=2019-11-08}}</ref> There are two main illumination models: object oriented lighting and global illumination.<ref name=":2">{{Cite web|url=https://www.cc.gatech.edu/classes/AY2003/cs4451a_fall/global-illumination.pdf|title=Global Illumination|date=2002|website=Georgia Tech Classes}}</ref> They differ in that object oriented lighting considers each object individually, whereas global illumination maps how light interacts between objects.<ref name=":2" /> Researchers are developing global illumination techniques to more accurately replicate how light interacts with its environment.<ref name=":2" />

=== Object oriented lighting ===
Object oriented lighting, also known as local illumination, is defined by mapping a single light source to a single object.<ref name=":3">{{Cite web|url=http://www.cs.kent.edu/~farrell/cg01/lectures/color/illum_local.html|title=Local Illumination|last=Farrell|website=Kent University}}</ref> This technique is fast to compute but is often an incomplete approximation of how light would behave in the scene in reality.<ref name=":3" /> It is often approximated by summing a combination of the specular, diffuse, and ambient light of a specific object.<ref name=":1" /> The two predominant local illumination models are the Phong and the Blinn-Phong illumination models.<ref name=":4" />

==== Phong illumination model ====
{{Main|Phong reflection model}}
One of the most common reflection models is the Phong model.<ref name=":1" /> The Phong model assumes that the intensity of each [[pixel]] is the sum of the intensities due to diffuse, specular, and ambient lighting.<ref name=":3" /> This model takes into account the location of the viewer to determine the specular light, using the angle at which light reflects off the object.<ref name=":4" /> The [[Trigonometric functions|cosine]] of the angle is taken and raised to a power chosen by the designer.<ref name=":3" /> With this, the designer can decide how wide a highlight they want on an object; because of this, the power is called the shininess value.<ref name=":4" /> The shininess value is determined by the roughness of the surface: a mirror would have a value of infinity, while the roughest surface might have a value of one.<ref name=":3" /> This model creates a more realistic-looking white highlight based on the perspective of the viewer.<ref name=":1" />
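The Phong sum described above can be sketched as follows (the coefficients and shininess value are illustrative choices, not canonical ones):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, to_light, to_viewer, k_a=0.1, k_d=0.6, k_s=0.3, shininess=32):
    """Per-pixel intensity as the Phong sum of ambient, diffuse,
    and specular contributions."""
    n, l, v = _normalize(normal), _normalize(to_light), _normalize(to_viewer)
    n_dot_l = max(_dot(n, l), 0.0)
    # Reflect the light direction about the normal: R = 2(N.L)N - L.
    r = tuple(2 * n_dot_l * nc - lc for nc, lc in zip(n, l))
    r_dot_v = max(_dot(r, v), 0.0)
    return k_a + k_d * n_dot_l + k_s * (r_dot_v ** shininess)

# A viewer aligned with the mirror reflection sees the full highlight.
peak = phong((0, 0, 1), (0, 0, 1), (0, 0, 1))  # 0.1 + 0.6 + 0.3 = 1.0
```

Raising the shininess value narrows the highlight, since the cosine term decays faster away from the reflection direction.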

==== Blinn-Phong illumination model ====
{{Main|Blinn–Phong reflection model}}
The Blinn-Phong illumination model is similar to the Phong model in that it uses specular light to create a highlight on an object based on its shininess.<ref name=":16">James F. Blinn (1977). "Models of light reflection for computer synthesized pictures". ''Proc. 4th annual conference on computer graphics and interactive techniques'': 192–198. [[CiteSeerX]] 10.1.1.131.7741. {{doi|10.1145/563858.563893}}</ref> The Blinn-Phong model differs in that it replaces the reflected light vector with a halfway vector, midway between the directions to the light source and the viewer, which is compared against the surface normal.<ref name=":1" /> This model is used to obtain accurate specular lighting with reduced computation time.<ref name=":1" /> The process takes less time because finding the reflected light vector's direction is a more involved computation than calculating the halfway [[Normal (geometry)|normal vector]].<ref name=":16" /> While this is similar to the Phong model, it produces different visual results, and the specular reflection exponent, or shininess, might need modification in order to produce a similar specular reflection.<ref>Jacob's University, "[http://www.faculty.jacobs-university.de/llinsen/teaching/320322_Fall2010/lecture12.pdf Blinn-Phong Reflection Model]", 2010.</ref>
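The halfway-vector substitution can be sketched as follows (illustrative coefficients; the function name is ours):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(normal, to_light, to_viewer, k_s=1.0, shininess=32):
    """Blinn-Phong specular term: instead of reflecting the light vector,
    use the halfway vector H = normalize(L + V) and shade with (N.H)^n,
    which avoids computing the reflection direction."""
    n, l, v = _normalize(normal), _normalize(to_light), _normalize(to_viewer)
    h = _normalize(tuple(a + b for a, b in zip(l, v)))
    n_dot_h = max(sum(a * b for a, b in zip(n, h)), 0.0)
    return k_s * (n_dot_h ** shininess)

# Light and viewer mirrored about the normal: H coincides with N,
# giving the peak highlight.
peak = blinn_phong_specular((0, 0, 1), (1, 0, 1), (-1, 0, 1))
```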

=== Global illumination ===
{{Main articles|Global illumination}}
Global illumination differs from local illumination because it calculates light as it would travel throughout the entire scene.<ref name=":2" /> This lighting is grounded more heavily in physics and optics, with light rays scattering, reflecting, and bouncing indefinitely throughout the scene.<ref name=":13" /> Global illumination remains an area of active research, as it requires more computational power than local illumination.<ref name=":18">Li, Hao (Fall 2018). "Global Illumination" (PDF).</ref>

==== Ray tracing ====
{{Main articles|Ray tracing (graphics)}}
[[File:Ray-traced steel balls.jpg|thumb|Image rendered using ray tracing]]
Light sources emit rays that interact with various surfaces through absorption, reflection, or refraction.<ref name=":72" /> An observer of the scene sees any light ray that reaches their eyes; a ray that does not reach the observer goes unnoticed.<ref>{{Cite web|url=https://developer.nvidia.com/rtx/raytracing|title=Introducing the NVIDIA RTX Ray Tracing Platform|date=2018-03-06|website=NVIDIA Developer|language=en|access-date=2019-11-08}}</ref> It is possible to simulate this by having all of the light sources emit rays and then computing how each of them interacts with all of the objects in the scene.<ref name=":17">Reif, J. H. (1994). "[https://users.cs.duke.edu/~reif/paper/tygar/raytracing.pdf Computability and Complexity of Ray Tracing]" (PDF). ''Discrete and Computational Geometry''.</ref> However, this process is inefficient, as most of the light rays would never reach the observer and would waste processing time.<ref name=":21">Wallace, John R.; Cohen, Michael F.; Greenberg, Donald P. (1987). "A Two-pass Solution to the Rendering Equation: A Synthesis of Ray Tracing and Radiosity Methods". ''Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques''. SIGGRAPH '87. New York, NY, USA: ACM: 311–320. {{doi|10.1145/37401.37438}}. {{ISBN|9780897912273}}.</ref> Ray tracing solves this problem by reversing the process, instead sending view rays from the observer and calculating how they interact until they reach a light source.<ref name=":17" /> Although this approach uses processing time more effectively and produces a light simulation closely imitating natural lighting, ray tracing still has high computation costs due to the large number of rays that reach the viewer's eyes.<ref name=":0">{{Cite journal|last=Greenberg|first=Donald P.|date=1989-04-14|title=Light Reflection Models for Computer Graphics|journal=Science|language=en|volume=244|issue=4901|pages=166–173|doi=10.1126/science.244.4901.166|issn=0036-8075|pmid=17835348|bibcode=1989Sci...244..166G |s2cid=46575183 }}</ref>
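The core geometric step of following a view ray from the observer can be illustrated with a ray-sphere intersection test (a toy sketch, not a full renderer; the function name and scene are ours):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive parameter t where the ray origin + t*direction
    hits a sphere, or None on a miss. Rays start at the observer
    (backward tracing), so no work is spent on rays that would never
    reach the eye."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A view ray straight down -z hits a unit sphere 5 units away at t = 4.
hit = ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)  # 4.0
miss = ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, -5), 1.0)  # None
```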

==== Radiosity ====
{{Main articles|Radiosity (computer graphics)}}
Radiosity takes into account the energy given off by surrounding objects as well as the light source.<ref name=":2" /> Unlike ray tracing, which depends on the position and orientation of the observer, radiosity lighting is independent of view position.<ref name=":21" /> Radiosity requires more computational power than ray tracing but can be more useful for scenes with static lighting, because it only has to be computed once.<ref>Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, "[http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S07/lectures/goral.pdf Modeling the interaction of light between diffuse surfaces]", ''[[Computer Graphics (Publication)|Computer Graphics]]'', Vol. 18, No. 3. ([[PDF]])</ref> The surfaces of a scene can be divided into a large number of patches; each patch radiates some light and affects the other patches, and a large set of equations must be solved simultaneously to obtain the final radiosity of each patch.<ref name=":0" />
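The patch system B_i = E_i + ρ_i Σ_j F_ij B_j can be sketched as an iterative solve over a toy two-patch "scene" (the emission, reflectance, and form-factor numbers are made up for illustration):

```python
def solve_radiosity(emission, reflectance, form_factors, iters=100):
    """Jacobi iteration on the radiosity system
    B_i = E_i + rho_i * sum_j F_ij * B_j."""
    n = len(emission)
    b = list(emission)
    for _ in range(iters):
        b = [emission[i]
             + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits, patch 1 only reflects.
emission = [1.0, 0.0]
reflectance = [0.5, 0.5]
form_factors = [[0.0, 1.0],  # all light leaving patch 0 reaches patch 1
                [1.0, 0.0]]
b = solve_radiosity(emission, reflectance, form_factors)
# Closed form: B0 = 1 + 0.25*B0, so B0 = 4/3 and B1 = 2/3.
```

Because the result depends only on the scene geometry and materials, it can be computed once and reused from any viewpoint, which is the view-independence the text describes.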

==== Photon mapping ====
{{Main|Photon mapping}}
[[Photon]] mapping was created as a two-pass global illumination algorithm that is more efficient than raytracing<ref name=":19">Wann Jensen, Henrik (1996). "[http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf Global Illumination using Photon Maps] {{Webarchive|url=https://web.archive.org/web/20080808140048/http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf |date=2008-08-08 }}" (PDF). ''Rendering Techniques ’96'': 21–30.</ref>. It is the basic principle of tracking photons released from a light source through a series of stages<ref name=":19" />. The first pass includes the photons being released from a light source and bouncing off their first object; this map of where are the photons are located is then recorded<ref name=":18" />. The photon map contains both the position and direction of each photon which either bounce or are absorbed<ref name=":19" />. The second pass happens with [[Rendering (computer graphics)|rendering]] where the reflections are calculated for different surfaces<ref name=":20">{{Cite web|url=https://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html|title=Photon Mapping - Zack Waters|website=web.cs.wpi.edu|access-date=2019-11-08}}</ref>. In this process, the photon map is decoupled from the geometry of the scene, meaning rendering can be calculated separately<ref name=":18" />. It is a useful technique because it can simulate caustics, and pre-processing steps do not need to be repeated if the view or objects change<ref name=":20" />.
[[Photon]] mapping was created as a two-pass global illumination algorithm that is more efficient than ray tracing.<ref name=":19">Wann Jensen, Henrik (1996). "[http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf Global Illumination using Photon Maps] {{Webarchive|url=https://web.archive.org/web/20080808140048/http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf |date=2008-08-08 }}" (PDF). ''Rendering Techniques ’96'': 21–30.</ref> It is the basic principle of tracking photons released from a light source through a series of stages.<ref name=":19" /> The first pass includes the photons being released from a light source and bouncing off their first object; this map of where the photons are located is then recorded.<ref name=":18" /> The photon map contains both the position and direction of each photon which either bounce or are absorbed.<ref name=":19" /> The second pass happens with [[Rendering (computer graphics)|rendering]] where the reflections are calculated for different surfaces.<ref name=":20">{{Cite web|url=https://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html|title=Photon Mapping - Zack Waters|website=web.cs.wpi.edu|access-date=2019-11-08}}</ref> In this process, the photon map is decoupled from the geometry of the scene, meaning rendering can be calculated separately.<ref name=":18" /> It is a useful technique because it can simulate caustics, and pre-processing steps do not need to be repeated if the view or objects change.<ref name=":20" />
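The two passes can be sketched in a deliberately toy setting: a single point light above a flat floor plane. The emission sampling and gathering radius below are illustrative choices, not from the cited papers:

```python
import math
import random

def emit_photons(light_pos, n_photons):
    """Pass 1: shoot photons from a point light toward the floor plane y = 0
    and record where each one lands (a stand-in for the first bounce)."""
    photon_map = []
    for _ in range(n_photons):
        dx, dz = random.uniform(-1, 1), random.uniform(-1, 1)  # toy sampling
        dy = -1.0
        t = light_pos[1] / -dy            # ray parameter where y reaches 0
        hit = (light_pos[0] + t * dx, 0.0, light_pos[2] + t * dz)
        power = 1.0 / n_photons           # each photon carries an equal share
        photon_map.append((hit, (dx, dy, dz), power))
    return photon_map

def radiance_estimate(photon_map, point, radius=0.5):
    """Pass 2: estimate brightness at a surface point by gathering the power
    of stored photons within `radius` of it."""
    total = sum(power for hit, _direction, power in photon_map
                if (hit[0] - point[0]) ** 2 + (hit[2] - point[2]) ** 2 <= radius ** 2)
    return total / (math.pi * radius ** 2)   # photon density -> irradiance

random.seed(0)
pmap = emit_photons(light_pos=(0.0, 2.0, 0.0), n_photons=5000)
bright = radiance_estimate(pmap, (0.0, 0.0, 0.0))  # directly under the light
```

Because the map stores only photon positions, directions, and powers, the second pass can query it for any viewpoint without re-running the first pass.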


== Polygonal shading ==
{{Main|Shading}}

Polygonal [[shading]] is part of the [[Rasterisation|rasterization]] process where [[3D computer graphics|3D]] models are drawn as [[2D computer graphics|2D]] pixel images.<ref name=":4">{{Cite web|url=https://cglearn.codelight.eu/pub/computer-graphics/shading-and-lighting|title=Computer Graphics: Shading and Lighting|website=cglearn.codelight.eu|access-date=2019-10-30}}</ref> Shading applies a lighting model, in conjunction with the geometric attributes of the 3D model, to determine how lighting should be represented at each [[Fragment (computer graphics)|fragment]] (or pixel) of the resulting image.<ref name=":4" /> The [[Polygon mesh|polygons]] of the 3D model store the geometric values needed for the shading process.<ref name=":11">{{Cite web|url=http://math.hws.edu/graphicsbook/c4/s1.html|title=Introduction to Computer Graphics, Section 4.1 -- Introduction to Lighting|website=math.hws.edu}}</ref> This information includes [[Vertex (geometry)|vertex]] positional values and [[Normal (geometry)|surface normals]], but can contain optional data, such as [[Texture mapping|texture]] and [[Bump mapping|bump]] maps.<ref>{{Cite web|url=https://www.khronos.org/opengl/wiki/Vertex_Specification#Primitives|title=Vertex Specification - OpenGL Wiki|website=www.khronos.org|access-date=2019-11-06}}</ref>
[[File:Flatshading00.png|alt=|thumb|165x165px|An example of flat shading]]
[[File:Gouraudshading01.png|alt=|thumb|165x165px|An example of Gouraud shading]]
[[File:Phongshading00.png|alt=|thumb|165x165px|An example of Phong shading]]


=== Flat shading ===
Flat shading is a simple shading model with a uniform application of lighting and color per polygon.<ref name=":12">{{Cite web|url=https://www.cs.brandeis.edu/~cs155/Lecture_16.pdf|title=Illumination Models and Shading|last=Foley}}</ref> The color and normal of one vertex are used to calculate the shading of the entire polygon.<ref name=":4" /> Flat shading is inexpensive, as lighting for each polygon only needs to be calculated once per render.<ref name=":12" />
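A minimal sketch of the per-polygon calculation, assuming a simple Lambert diffuse term plus an arbitrary ambient constant:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def flat_shade(polygon_normal, base_color, light_dir, ambient=0.1):
    """One lighting calculation per polygon: a Lambert term from a single
    normal, applied uniformly to every pixel of the polygon."""
    diffuse = max(0.0, dot(polygon_normal, light_dir))
    return tuple(min(1.0, c * (ambient + diffuse)) for c in base_color)

# Light shining straight down onto an upward-facing polygon vs. a grazing light.
lit = flat_shade((0, 1, 0), (0.8, 0.2, 0.2), (0, 1, 0))
grazing = flat_shade((0, 1, 0), (0.8, 0.2, 0.2), (1, 0, 0))
```

Every fragment of the polygon receives the same color, which is what produces the faceted look in the flat-shading example image.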


=== Gouraud shading ===
[[Gouraud shading]] is a type of interpolated shading where the values inside of each polygon are a blend of its vertex values.<ref name=":4" /> Each vertex is given its own normal consisting of the average of the surface normals of the surrounding polygons.<ref name=":12" /> The lighting and shading at that vertex are then calculated using the average normal and the lighting model of choice.<ref name=":12" /> This process is repeated for all the vertices in the 3D model.<ref name=":7"/> Next, the shading of the edges between the vertices is calculated by [[Interpolation|interpolating]] between the vertex values.<ref name=":7" /> Finally, the shading inside of the polygon is calculated as an interpolation of the surrounding edge values.<ref name=":7" /> Gouraud shading generates a smooth lighting effect across the 3D model's surface.<ref name=":7" />
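The vertex-normal averaging and value interpolation can be sketched as follows, assuming Lambert lighting for simplicity:

```python
def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def vertex_normal(adjacent_face_normals):
    """Average the normals of the faces sharing a vertex, then renormalize."""
    summed = [sum(components) for components in zip(*adjacent_face_normals)]
    return normalize(summed)

def lambert(normal, light_dir):
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def lerp(a, b, t):
    return a + (b - a) * t

# Shade two vertices, then interpolate the *shaded values* along the edge.
n0 = vertex_normal([(0, 1, 0), (1, 0, 0)])  # vertex shared by a top and a side face
n1 = vertex_normal([(0, 1, 0), (0, 1, 0)])  # vertex surrounded by top faces
light = (0, 1, 0)
i0, i1 = lambert(n0, light), lambert(n1, light)
midpoint_intensity = lerp(i0, i1, 0.5)
```

Only the vertices are lit with the full lighting model; everything in between is cheap linear interpolation, which is what smooths the facets.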


=== Phong shading ===
[[Phong shading]], similar to Gouraud shading, is another type of interpolative shading that blends between vertex values to shade polygons.<ref name=":13">{{Cite web|url=http://www.hao-li.com/cs420-fs2018/slides/Lecture05.2.pdf|title=Shading in OpenGL|last=Li|first=Hao|date=2018}}</ref> The key difference between the two is that Phong shading interpolates the [[vertex normal]] values over the whole polygon before it calculates its shading.<ref name=":12" /> This contrasts with Gouraud shading which interpolates the already shaded vertex values over the whole polygon.<ref name=":13" /> Once Phong shading has calculated the normal of a fragment (pixel) inside the polygon, it can then apply a lighting model, shading that fragment.<ref name=":12" /> This process is repeated until each polygon of the 3D model is shaded.<ref name=":13" />
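The difference is easy to see in a sketch: interpolating normals recovers a bright spot in the middle of a polygon that interpolating already-shaded values misses. Lambert lighting and a single edge are assumed for simplicity:

```python
def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert(normal, light_dir):
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def phong_fragment(n0, n1, t, light_dir):
    """Interpolate the vertex *normals* first, renormalize, then apply the
    lighting model at the fragment."""
    n = normalize(tuple(a + (b - a) * t for a, b in zip(n0, n1)))
    return lambert(n, light_dir)

s = 2 ** -0.5
n0, n1 = (s, s, 0.0), (-s, s, 0.0)   # vertex normals tilted away on either side
light = (0.0, 1.0, 0.0)

# Gouraud interpolates the shaded values: the midpoint stays dim.
gouraud_mid = (lambert(n0, light) + lambert(n1, light)) / 2
# Phong interpolates normals: the midpoint normal points straight at the light.
phong_mid = phong_fragment(n0, n1, 0.5, light)
```

This per-fragment normal is why Phong shading captures highlights that fall inside a polygon, at the cost of running the lighting model for every pixel.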


== Lighting effects ==
[[File:Miroir-cercle.jpg|thumb|A reflective material demonstrating caustics]]


=== Caustics ===
{{Main articles|Caustic (optics)}}
[[Caustic (optics)|Caustics]] are an effect of light reflected and refracted in a medium with curved interfaces or reflected off a curved surface.<ref name=":14">{{Cite web|url=https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch02.html|title=GPU Gems|website=NVIDIA Developer|language=en|access-date=2019-10-30}}</ref> They appear as ribbons of concentrated light and are often seen when looking at bodies of water or glass.<ref name=":15">{{Cite web|url=https://www.dualheights.se/caustics/caustics-water-texturing-using-unity3d.shtml|title=Caustics water texturing using Unity 3D|website=www.dualheights.se|access-date=2019-11-06}}</ref> Caustics can be implemented in 3D graphics by blending a caustic [[Texture mapping|texture map]] with the texture map of the affected objects.<ref name=":15" /> The caustics texture can either be a static image that is animated to mimic the effects of caustics, or a [[Real-time computing|real-time]] calculation of caustics onto a blank image.<ref name=":15" /> The latter is more complicated and requires backwards [[Ray tracing (graphics)|ray tracing]] to simulate photons moving through the environment of the 3D render.<ref name=":14" /> In a photon mapping illumination model, [[Monte Carlo sampling|Monte Carlo]] sampling is used in conjunction with the ray tracing to compute the intensity of light caused by the caustics.<ref name=":14" />


=== Reflection mapping ===
{{Main articles|Reflection mapping}}
Reflection mapping (also known as environment mapping) is a technique which uses 2D environment maps to create the effect of [[Reflectance|reflectivity]] without using ray tracing.<ref name=":5">{{Cite web|url=https://cglearn.codelight.eu/pub/computer-graphics/environment-mapping|title=Computer Graphics: Environment Mapping|website=cglearn.codelight.eu|access-date=2019-11-01}}</ref> Since the appearances of reflective objects depend on the relative positions of the viewers, the objects, and the surrounding environments, graphics algorithms produce reflection vectors to determine how to color the objects based on these elements.<ref>{{Cite web|url=http://web.cse.ohio-state.edu/~wang.3602/courses/cse5542-2013-spring/17-env.pdf|title=Environment Mapping|last=Shen|first=Han-Wei}}</ref> By using 2D environment maps rather than fully rendered 3D objects to represent surroundings, reflections on objects can be determined using simple, computationally inexpensive algorithms.<ref name=":5" />
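A minimal sketch of the reflection-vector computation and a lookup into a latitude-longitude environment map; the lat-long parameterization is one common choice, assumed here for illustration:

```python
import math

def reflect(incident, normal):
    """R = I - 2 (N·I) N, with N unit length: the mirror direction used to
    index the environment map."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

def latlong_uv(direction):
    """Map a unit direction to (u, v) coordinates in a 2D lat-long map."""
    x, y, z = direction
    u = (math.atan2(z, x) / (2 * math.pi)) + 0.5
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# Looking straight down at an upward-facing mirror: the reflected direction
# points straight up, i.e. at the top of the environment map.
r = reflect((0, -1, 0), (0, 1, 0))
uv = latlong_uv(r)
```

The object's color is then read from the 2D image at `uv` instead of tracing a ray into the scene, which is what makes the technique cheap.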


=== Particle systems ===
{{Main articles|Particle system}}
Particle systems use collections of small [[Particle|particles]] to model chaotic, high-complexity events, such as fire, moving liquids, explosions, and moving hair.<ref name=":6">{{Cite web|url=http://web.engr.oregonstate.edu/~mjb/cs491/Handouts/particlesystems.2pp.pdf|title=Particle Systems|last=Bailey|first=Mike}}</ref> Particles which make up the complex animation are distributed by an emitter, which gives each particle its properties, such as speed, lifespan, and color.<ref name=":6" /> Over time, these particles may move, change color, or vary other properties, depending on the effect.<ref name=":6" /> Typically, particle systems incorporate [[randomness]], such as in the initial properties the emitter gives each particle, to make the effect realistic and non-uniform.<ref name=":6" /><ref>{{Cite web|url=https://web.cs.wpi.edu/~matt/courses/cs563/talks/psys.html|title=Particle Systems|website=web.cs.wpi.edu|access-date=2019-11-01}}</ref>
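A minimal emitter-and-update sketch for a fire-like effect; the property ranges and gravity value are illustrative choices:

```python
import random

def emit(n, seed=None):
    """The emitter hands each particle randomized starting properties."""
    rng = random.Random(seed)
    return [{
        "pos": [0.0, 0.0],
        "vel": [rng.uniform(-1, 1), rng.uniform(2, 5)],  # spray upward
        "life": rng.uniform(1.0, 3.0),                   # seconds to live
        "color": (1.0, rng.uniform(0.3, 0.8), 0.0),      # fiery orange hues
    } for _ in range(n)]

def update(particles, dt, gravity=-9.8):
    """Advance every live particle by one time step and cull expired ones."""
    for p in particles:
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["vel"][1] += gravity * dt
        p["life"] -= dt
    return [p for p in particles if p["life"] > 0]

flames = emit(100, seed=42)
for _ in range(30):           # simulate one second at 30 steps per second
    flames = update(flames, 1 / 30)
```

Rendering would then draw each surviving particle as a small sprite or point using its current position and color; the per-particle randomness is what keeps the effect from looking uniform.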


== See also ==
*[[Per-pixel lighting]]
*[[Computer graphics]]

==References==
{{reflist}}


{{Computer graphics}}
{{DEFAULTSORT:Computer Graphics Lighting}}
[[Category:3D rendering]]
[[Category:Lighting]]
[[Category:Shading]]

Latest revision as of 13:30, 19 August 2024

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

Light sources

Light sources allow for different ways to introduce light into graphics scenes.[1][2]

Point

Point sources emit light from a single point in all directions, with the intensity of the light decreasing with distance.[3] An example of a point source is a standalone light bulb.[4]
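The falloff with distance is commonly approximated with constant, linear, and quadratic attenuation terms. A minimal sketch; the coefficient values below are illustrative defaults, not values from the article:

```python
def point_light_intensity(source_intensity, light_pos, surface_pos,
                          k_c=1.0, k_l=0.09, k_q=0.032):
    """Intensity of a point light at a surface point, divided by an
    attenuation polynomial in the distance d (illustrative coefficients)."""
    d = sum((a - b) ** 2 for a, b in zip(light_pos, surface_pos)) ** 0.5
    return source_intensity / (k_c + k_l * d + k_q * d * d)

near = point_light_intensity(1.0, (0, 5, 0), (0, 4, 0))  # 1 unit from the bulb
far = point_light_intensity(1.0, (0, 5, 0), (0, 0, 0))   # 5 units from the bulb
```

The quadratic term dominates at large distances, approximating the physical inverse-square falloff, while the constant term keeps the intensity finite at the source.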

A directional light source illuminating a terrain

Directional

A directional source (or distant source) uniformly lights a scene from one direction.[4] Unlike a point source, the intensity of light produced by a directional source does not change with distance over the scale of the scene, as the directional source is treated as though it is extremely far away.[4] An example of a directional source is sunlight on Earth.[5]

Spotlight

A spotlight produces a directed cone of light.[6] The light becomes more intense closer to the spotlight source and toward the center of the light cone.[6] An example of a spotlight is a flashlight.[5]
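The cone test and angular falloff can be sketched as follows; the cutoff angle and falloff exponent are illustrative choices:

```python
import math

def spotlight_factor(spot_dir, to_surface, cutoff_deg=30.0, exponent=8.0):
    """Zero outside the cone; inside it, brightness rises toward the cone
    axis as cos(angle) raised to a falloff exponent."""
    cos_angle = sum(a * b for a, b in zip(spot_dir, to_surface))
    if cos_angle < math.cos(math.radians(cutoff_deg)):
        return 0.0                       # the point lies outside the cone
    return cos_angle ** exponent         # brightest along the cone axis

on_axis = spotlight_factor((0, -1, 0), (0, -1, 0))   # straight down the beam
outside = spotlight_factor((0, -1, 0), (1, 0, 0))    # perpendicular to it
```

This factor multiplies whatever point-light intensity the surface would otherwise receive, so the result is a bright circle that dims toward the cone's edge.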

Area

Area lights are 3D objects which emit light. Whereas point light and spotlight sources are treated as infinitesimally small points, area lights have physical shape and size.[7] Area lights produce softer shadows and more realistic lighting than point lights and spotlights.[8]

Ambient

Ambient light sources illuminate objects even when no other light source is present.[6] The intensity of ambient light is independent of direction, distance, and other objects, meaning the effect is completely uniform throughout the scene.[6] This source ensures that objects are visible even in complete darkness.[5]

Lightwarp

A lightwarp is a technique in which an object in the scene refracts light based on the direction and intensity of the light. The light is then warped using an ambient diffuse term across a range of the color spectrum. The light may then be reflectively scattered to produce a higher depth of field, as well as refracted. The technique is used to produce a unique rendering style and can be used to limit overexposure of objects. Games such as Team Fortress 2 use this rendering technique to create a cartoonish, cel-shaded stylized look.[9]

HDRI

HDRI stands for high-dynamic-range image: a 360° image wrapped around a 3D model as an outdoor setting, typically using the sun in the sky as the light source. The textures of the model can reflect the direct and ambient light and colors from the HDRI.[10]

Lighting interactions

In computer graphics, the overall effect of a light source on an object is determined by the combination of the object's interactions with it, usually described by at least three main components.[11] The three primary lighting components (and subsequent interaction types) are diffuse, ambient, and specular.[11]

Decomposition of lighting interactions

Diffuse

Diffuse lighting (or diffuse reflection) is the direct illumination of an object by an even amount of light interacting with a light-scattering surface.[4][12] After light strikes an object, it is reflected as a function of the surface properties of the object as well as the angle of incoming light.[12] This interaction is the primary contributor to the object's brightness and forms the basis for its color.[13]

Ambient

As ambient light is directionless, it interacts uniformly across all surfaces, with its intensity determined by the strength of the ambient light sources and the properties of objects' surface materials, namely their ambient reflection coefficients.[13][12]

Specular

The specular lighting component gives objects shine and highlights.[13] This is distinct from mirror effects because other objects in the environment are not visible in these reflections.[12] Instead, specular lighting creates bright spots on objects based on the intensity of the specular lighting component and the specular reflection coefficient of the surface.[12]

Illumination models

Lighting models are used to replicate lighting effects in rendered environments where light is approximated based on the physics of light.[14] Without lighting models, replicating lighting effects as they occur in the natural world would require more processing power than is practical for computer graphics.[14] The purpose of a lighting, or illumination, model is to compute the color of every pixel, or the amount of light reflected by the different surfaces in the scene.[15] There are two main illumination models, object oriented lighting and global illumination.[16] They differ in that object oriented lighting considers each object individually, whereas global illumination maps how light interacts between objects.[16] Currently, researchers are developing global illumination techniques to more accurately replicate how light interacts with its environment.[16]

Object oriented lighting

Object oriented lighting, also known as local illumination, is defined by mapping a single light source to a single object.[17] This technique is fast to compute, but often is an incomplete approximation of how light would behave in the scene in reality.[17] It is often approximated by summing a combination of specular, diffuse, and ambient light of a specific object.[14] The two predominant local illumination models are the Phong and the Blinn-Phong illumination models.[18]

Phong illumination model

One of the most common reflection models is the Phong model.[14] The Phong model assumes that the intensity of each pixel is the sum of the intensity due to diffuse, specular, and ambient lighting.[17] This model takes into account the location of a viewer to determine specular light using the angle of light reflecting off an object.[18] The cosine of the angle is taken and raised to a power decided by the designer.[17] With this, the designer can decide how wide a highlight they want on an object; because of this, the power is called the shininess value.[18] The shininess value is determined by the roughness of the surface where a mirror would have a value of infinity and the roughest surface might have a value of one.[17] This model creates a more realistic looking white highlight based on the perspective of the viewer.[14]
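The sum described above can be sketched directly, with the specular term raised to the designer-chosen shininess value; the ambient, diffuse, and specular coefficients below are illustrative:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(light_dir, normal):
    # mirror the surface-to-light direction about the normal: R = 2(N·L)N - L
    d = dot(normal, light_dir)
    return tuple(2 * d * n - l for n, l in zip(normal, light_dir))

def phong(normal, light_dir, view_dir, k_a=0.1, k_d=0.6, k_s=0.3, shininess=32):
    """Pixel intensity = ambient + diffuse + specular; the specular term
    depends on the angle between the reflected ray and the viewer."""
    diffuse = max(0.0, dot(normal, light_dir))
    specular = max(0.0, dot(reflect(light_dir, normal), view_dir)) ** shininess
    return k_a + k_d * diffuse + k_s * specular

# Viewer aligned with the mirror direction: full highlight.
head_on = phong((0, 1, 0), (0, 1, 0), (0, 1, 0))
# Viewer off to the side: the highlight vanishes, diffuse and ambient remain.
off_axis = phong((0, 1, 0), (0, 1, 0), (1, 0, 0))
```

Raising `shininess` narrows the highlight (a mirror-like surface), while lowering it toward one spreads the highlight across a rough surface, matching the description above.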

Blinn-Phong illumination model

The Blinn-Phong illumination model is similar to the Phong model in that it uses specular light to create a highlight on an object based on its shininess.[19] It differs from the Phong model in that it compares the surface normal against a halfway vector between the light source and the viewer, rather than computing a reflected light vector.[14] This model is used in order to have accurate specular lighting and reduced computation time.[14] The process takes less time because finding the reflected light vector's direction is a more involved computation than calculating the halfway vector.[19] While this is similar to the Phong model, it produces different visual results, and the specular reflection exponent, or shininess, might need modification in order to produce a similar specular reflection.[20]
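The halfway-vector specular term can be sketched as:

```python
def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess=32):
    """Use the halfway vector H = normalize(L + V) instead of the reflected
    ray; only an addition and a normalization are needed per fragment."""
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, sum(n * x for n, x in zip(normal, h))) ** shininess

peak = blinn_phong_specular((0, 1, 0), (0, 1, 0), (0, 1, 0))     # H == N
glancing = blinn_phong_specular((0, 1, 0), (0, 1, 0), (1, 0, 0))
```

At the same shininess value the halfway-vector term yields a broader highlight than Phong's reflected-ray term, which is why the exponent often needs adjusting to match a given Phong result.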

Global illumination

Global illumination differs from local illumination because it calculates light as it would travel throughout the entire scene.[16] This lighting is based more heavily in physics and optics, with light rays scattering, reflecting, and indefinitely bouncing throughout the scene.[21] There is still active research being done on global illumination as it requires more computational power than local illumination.[22]

Ray tracing

Image rendered using ray tracing

Light sources emit rays that interact with various surfaces through absorption, reflection, or refraction.[3] An observer of the scene sees any light that reaches their eyes; a ray that does not reach the observer goes unnoticed.[23] It is possible to simulate this by having all of the light sources emit rays and then computing how each of them interacts with all of the objects in the scene.[24] However, this process is inefficient, as most of the light rays would never reach the observer and would waste processing time.[25] Ray tracing solves this problem by reversing the process, instead sending view rays from the observer and calculating how they interact until they reach a light source.[24] Although this approach uses processing time more effectively and produces a light simulation closely imitating natural lighting, ray tracing still has high computation costs due to the high amount of light that reaches the viewer's eyes.[26]
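The reversed process can be sketched for a single view ray against a deliberately minimal scene: one sphere and Lambert shading at the hit point (the scene and shading are illustrative simplifications):

```python
def trace(ray_origin, ray_dir, sphere_center, sphere_radius, light_dir):
    """Backward tracing of one view ray: intersect it with a sphere and, on a
    hit, return a Lambert-shaded intensity; otherwise the background (0)."""
    # Solve |o + t*d - c|^2 = r^2 for the nearest t > 0 (d assumed unit length).
    oc = tuple(o - c for o, c in zip(ray_origin, sphere_center))
    b = 2 * sum(d * x for d, x in zip(ray_dir, oc))
    c = sum(x * x for x in oc) - sphere_radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return 0.0                      # the ray misses every object
    t = (-b - disc ** 0.5) / 2
    if t <= 0:
        return 0.0                      # intersection is behind the observer
    hit = tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
    normal = tuple((h - s) / sphere_radius for h, s in zip(hit, sphere_center))
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

hit_val = trace((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0, (0, 0, -1))
miss_val = trace((0, 0, -5), (0, 1, 0), (0, 0, 0), 1.0, (0, 0, -1))
```

A full ray tracer repeats this for one or more rays per pixel and recursively spawns reflection, refraction, and shadow rays at each hit, which is where the high computation cost comes from.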

Radiosity

[edit]

Radiosity takes into account the energy given off by surrounding objects and the light source.[16] Unlike ray tracing, which is dependent on the position and orientation of the observer, radiosity lighting is independent of view position.[25] Radiosity requires more computational power than ray tracing, but can be more useful for scenes with static lighting because it would only have to be computed once.[27] The surfaces of a scene can be divided into a large amount of patches; each patch radiates some light and affects the other patches, then a large set of equations needs to be solved simultaneously in order to get the final radiosity of each patch.[26]

Photon mapping

[edit]

Photon mapping was created as a two-pass global illumination algorithm that is more efficient than ray tracing.[28] It is the basic principle of tracking photons released from a light source through a series of stages.[28] The first pass includes the photons being released from a light source and bouncing off their first object; this map of where the photons are located is then recorded.[22] The photon map contains both the position and direction of each photon which either bounce or are absorbed.[28] The second pass happens with rendering where the reflections are calculated for different surfaces.[29] In this process, the photon map is decoupled from the geometry of the scene, meaning rendering can be calculated separately.[22] It is a useful technique because it can simulate caustics, and pre-processing steps do not need to be repeated if the view or objects change.[29]

Polygonal shading

[edit]

Polygonal shading is part of the rasterization process where 3D models are drawn as 2D pixel images.[18] Shading applies a lighting model, in conjunction with the geometric attributes of the 3D model, to determine how lighting should be represented at each fragment (or pixel) of the resulting image.[18] The polygons of the 3D model store the geometric values needed for the shading process.[30] This information includes vertex positional values and surface normals, but can contain optional data, such as texture and bump maps.[31]

An example of flat shading
An example of Gouraud shading
An example of Phong shading

Flat shading

[edit]

Flat shading is a simple shading model with a uniform application of lighting and color per polygon.[32] The color and normal of one vertex is used to calculate the shading of the entire polygon.[18] Flat shading is inexpensive, as lighting for each polygon only needs to be calculated once per render.[32]

Gouraud shading

[edit]

Gouraud shading is a type of interpolated shading where the values inside of each polygon are a blend of its vertex values.[18] Each vertex is given its own normal consisting of the average of the surface normals of the surrounding polygons.[32] The lighting and shading at that vertex is then calculated using the average normal and the lighting model of choice.[32] This process is repeated for all the vertices in the 3D model.[2] Next, the shading of the edges between the vertices is calculated by interpolating between the vertex values.[2] Finally, the shading inside of the polygon is calculated as an interpolation of the surrounding edge values.[2] Gouraud shading generates a smooth lighting effect across the 3D model's surface.[2]

Phong shading

[edit]

Phong shading, similar to Gouraud shading, is another type of interpolative shading that blends between vertex values to shade polygons.[21] The key difference between the two is that Phong shading interpolates the vertex normal values over the whole polygon before it calculates its shading.[32] This contrasts with Gouraud shading which interpolates the already shaded vertex values over the whole polygon.[21] Once Phong shading has calculated the normal of a fragment (pixel) inside the polygon, it can then apply a lighting model, shading that fragment.[32] This process is repeated until each polygon of the 3D model is shaded.[21]

Lighting effects

[edit]
A reflective material demonstrating caustics

Caustics

[edit]

Caustics are an effect of light reflected and refracted in a medium with curved interfaces or reflected off a curved surface.[33] They appear as ribbons of concentrated light and are often seen when looking at bodies of water or glass.[34] Caustics can be implemented in 3D graphics by blending a caustic texture map with the texture map of the affected objects.[34] The caustics texture can either be a static image that is animated to mimic the effects of caustics, or a Real-time calculation of caustics onto a blank image.[34] The latter is more complicated and requires backwards ray tracing to simulate photons moving through the environment of the 3D render.[33] In a photon mapping illumination model, Monte Carlo sampling is used in conjunction with the ray tracing to compute the intensity of light caused by the caustics.[33]

Reflection mapping

[edit]

Reflection mapping (also known as environment mapping) is a technique which uses 2D environment maps to create the effect of reflectivity without using ray tracing.[35] Since the appearances of reflective objects depend on the relative positions of the viewers, the objects, and the surrounding environments, graphics algorithms produce reflection vectors to determine how to color the objects based on these elements.[36] Using 2D environment maps rather than fully rendered, 3D objects to represent surroundings, reflections on objects can be determined using simple, computationally inexpensive algorithms.[35]

Particle systems

[edit]

Particle systems use collections of small particles to model chaotic, high-complexity events, such as fire, moving liquids, explosions, and moving hair.[37] Particles which make up the complex animation are distributed by an emitter, which gives each particle its properties, such as speed, lifespan, and color.[37] Over time, these particles may move, change color, or vary other properties, depending on the effect.[37] Typically, particle systems incorporate randomness, such as in the initial properties the emitter gives each particle, to make the effect realistic and non-uniform.[37][38]

References

  1. ^ "Light: The art of exposure". GarageFarm. 2020-11-11. Retrieved 2020-11-11.
  2. ^ a b c d e "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  3. ^ a b "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  4. ^ a b c d "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  5. ^ a b c "Understanding Different Light Types". www.pluralsight.com. Retrieved 2019-11-05.
  6. ^ a b c d "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-05.
  7. ^ Lagarde, Sebastien; de Rousiers, Charles (Summer 2014). Moving Frostbite to Physically Based Rendering 3.0. SIGGRAPH.
  8. ^ Pharr, Matt; Humphreys, Greg; Wenzel, Jakob (2016). Physically Based Rendering: From Theory to Implementation (3rd ed.). Morgan Kaufmann. ISBN 978-0128006450.
  9. ^ Vergne, Romain; Pacanowski, Romain; Barla, Pascal; Granier, Xavier; Schlick, Christophe (February 19, 2010). "Radiance Scaling for Versatile Surface Enhancement". Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games. ACM. pp. 143–150. doi:10.1145/1730804.1730827. ISBN 9781605589398. S2CID 18291692 – via hal.inria.fr.
  10. ^ https://visao.ca/what-is-hdri/#:~:text=High%20dynamic%20range%20images%20are,look%20cartoonish%20and%20less%20professional. [bare URL]
  11. ^ a b "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  12. ^ a b c d e Pollard, Nancy (Spring 2004). "Lighting and Shading" (PDF).
  13. ^ a b c "Lighting in 3D Graphics". www.bcchang.com. Retrieved 2019-11-05.
  14. ^ a b c d e f g "LearnOpenGL - Basic Lighting". learnopengl.com. Retrieved 2019-11-08.
  15. ^ "Intro to Computer Graphics: Lighting and Shading". www.cs.uic.edu. Retrieved 2019-11-08.
  16. ^ a b c d e "Global Illumination" (PDF). Georgia Tech Classes. 2002.
  17. ^ a b c d e Farrell. "Local Illumination". Kent University.
  18. ^ a b c d e f g "Computer Graphics: Shading and Lighting". cglearn.codelight.eu. Retrieved 2019-10-30.
  19. ^ a b James F. Blinn (1977). "Models of light reflection for computer synthesized pictures". Proc. 4th annual conference on computer graphics and interactive techniques: 192–198. CiteSeerX 10.1.1.131.7741. doi:10.1145/563858.563893.
  20. ^ Jacob's University, "Blinn-Phong Reflection Model", 2010.
  21. ^ a b c d Li, Hao (2018). "Shading in OpenGL" (PDF).
  22. ^ a b c Li, Hao (Fall 2018). "Global Illumination" (PDF).
  23. ^ "Introducing the NVIDIA RTX Ray Tracing Platform". NVIDIA Developer. 2018-03-06. Retrieved 2019-11-08.
  24. ^ a b Reif, J. H. (1994). "Computability and Complexity of Ray Tracing" (PDF). Discrete and Computational Geometry.
  25. ^ a b Wallace, John R.; Cohen, Michael F.; Greenberg, Donald P. (1987). "A Two-pass Solution to the Rendering Equation: A Synthesis of Ray Tracing and Radiosity Methods". Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '87. New York, NY, USA: ACM: 311–320. doi:10.1145/37401.37438. ISBN 9780897912273.
  26. ^ a b Greenberg, Donald P. (1989-04-14). "Light Reflection Models for Computer Graphics". Science. 244 (4901): 166–173. Bibcode:1989Sci...244..166G. doi:10.1126/science.244.4901.166. ISSN 0036-8075. PMID 17835348. S2CID 46575183.
  27. ^ Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, "Modeling the interaction of light between diffuse surfaces", Computer Graphics, Vol. 18, No. 3. (PDF)
  28. ^ a b c Wann Jensen, Henrik (1996). "Global Illumination using Photon Maps" (PDF). Rendering Techniques ’96: 21–30. Archived 2008-08-08 at the Wayback Machine.
  29. ^ a b "Photon Mapping - Zack Waters". web.cs.wpi.edu. Retrieved 2019-11-08.
  30. ^ "Introduction to Computer Graphics, Section 4.1 -- Introduction to Lighting". math.hws.edu.
  31. ^ "Vertex Specification - OpenGL Wiki". www.khronos.org. Retrieved 2019-11-06.
  32. ^ a b c d e f Foley. "Illumination Models and Shading" (PDF).
  33. ^ a b c "GPU Gems". NVIDIA Developer. Retrieved 2019-10-30.
  34. ^ a b c "Caustics water texturing using Unity 3D". www.dualheights.se. Retrieved 2019-11-06.
  35. ^ a b "Computer Graphics: Environment Mapping". cglearn.codelight.eu. Retrieved 2019-11-01.
  36. ^ Shen, Han-Wei. "Environment Mapping" (PDF).
  37. ^ a b c d Bailey, Mike. "Particle Systems" (PDF).
  38. ^ "Particle Systems". web.cs.wpi.edu. Retrieved 2019-11-01.