==Detailed description of ray tracing computer algorithm and its genesis==

===What happens in nature (simplified)===
{{see also|Electromagnetism|Quantum electrodynamics}}

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of [[photon]]s traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring [[general relativity|relativistic effects]]). Any combination of four things might happen with this light ray: [[Absorption (electromagnetic radiation)|absorption]], [[Reflection (physics)|reflection]], [[refraction]] and [[fluorescence]]. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any [[Transparency (optics)|transparent]] or [[Transparency (optics)|translucent]] properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the [[Visible spectrum|spectrum]] (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.

===Ray casting algorithm===
{{Main|Ray casting}}

The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen door, with each square in the screen being a pixel; the closest object is then the one the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the [[shading]] of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3-D computer graphics shading models. One important advantage ray casting offered over older [[scanline rendering|scanline algorithms]] was its ability to easily deal with non-planar surfaces and solids, such as [[cone (geometry)|cones]] and [[sphere]]s. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using [[solid modeling]] techniques and easily rendered.

===Volume ray casting algorithm===
{{Main|Volume ray casting}}

In volume ray casting, each ray is traced so that color and/or density can be sampled along the ray and then combined into a final pixel color. This is often used when objects cannot be easily represented by explicit surfaces (such as triangles), for example when rendering clouds or 3D medical scans.
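As an illustration of the per-pixel loop described in the ray casting subsection above, the following is a minimal sketch in Python. The camera, scene and shading helpers (<code>camera.generate_ray</code>, <code>closest_intersection</code>, <code>local_shading</code>) are hypothetical placeholders standing in for a concrete scene representation and shading model, not part of any particular library.

<syntaxhighlight lang="python">
def render(camera, scene, lights, width, height):
    """One eye ray per pixel; no recursion (plain ray casting)."""
    image = [[[0.0, 0.0, 0.0] for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ray = camera.generate_ray(x, y)         # hypothetical: eye ray through pixel (x, y)
            hit = closest_intersection(ray, scene)  # hypothetical: nearest surface along the ray
            if hit is not None:
                # Local shading only; every light is assumed to reach the surface
                # (the "no shadows" simplification described above).
                image[y][x] = local_shading(hit, lights)
    return image
</syntaxhighlight>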
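A minimal sketch of the volume ray casting loop described above, assuming a hypothetical scalar <code>density</code> field and a <code>transfer</code> function that maps density to a color and an opacity; front-to-back alpha compositing is one common way to combine the samples.

<syntaxhighlight lang="python">
def volume_cast(ray_origin, ray_dir, density, transfer, t_max, step=0.01):
    """Front-to-back compositing of samples taken along one ray."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    t = 0.0
    while t < t_max and alpha < 0.99:              # early exit once nearly opaque
        p = [ray_origin[i] + t * ray_dir[i] for i in range(3)]
        sample_color, sample_alpha = transfer(density(p))
        weight = (1.0 - alpha) * sample_alpha      # contribution of this sample
        for i in range(3):
            color[i] += weight * sample_color[i]
        alpha += weight
        t += step
    return color
</syntaxhighlight>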
[[File:Visualization of SDF ray marching algorithm.png|thumb|Visualization of SDF ray marching algorithm]]

===SDF ray marching algorithm===
{{main|Ray marching#Distance-aided ray marching}}

In SDF ray marching, or sphere tracing,<ref>{{Citation | last1 = Hart | first1 = John C. | title = Sphere Tracing: A Geometric Method for the Antialiased Ray Tracing of Implicit Surfaces | journal = The Visual Computer | date = June 1995 | url = http://graphics.stanford.edu/courses/cs348b-20-spring-content/uploads/hart.pdf}}</ref> each ray is traced in multiple steps to approximate an intersection point between the ray and a surface defined by a [[signed distance function]] (SDF). The SDF is evaluated at each iteration in order to take as large a step as possible without missing any part of the surface. A threshold is used to stop further iteration once a point is reached that is close enough to the surface. This method is often used for 3-D fractal rendering.<ref>{{Citation | last1 = Hart | first1 = John C. | last2 = Sandin | first2 = Daniel J. | last3 = Kauffman | first3 = Louis H. | title = Ray Tracing Deterministic 3-D Fractals | journal = Computer Graphics | date = July 1989 | volume = 23 | issue = 3 | pages = 289–296 | doi = 10.1145/74334.74363 | url = http://graphics.stanford.edu/courses/cs348b-20-spring-content/uploads/hart.pdf}}</ref>

===Recursive ray tracing algorithm===
[[File:Glasses 800 edit.png|right|thumb|300px|Ray tracing can create photorealistic images.]]
[[File:BallsRender.png|right|thumb|300px|In addition to the high degree of realism, ray tracing can simulate the [[Camera#Mechanics|effects of a camera]] due to [[depth of field]] and [[aperture]] shape (in this case a [[hexagon]]).]]
[[File:Ray-traced steel balls.jpg|right|thumb|300px|The number of reflections, or bounces, a "ray" can make, and how it is affected each time it encounters a surface, is controlled by settings in the software. In this image, each ray was allowed to reflect up to 16 times. Multiple "reflections of reflections" can thus be seen in these spheres. (Image created with [[Cobalt (CAD program)|Cobalt]].)]]
[[File:Glass ochem.png|right|thumb|300px|The number of [[refraction]]s a "ray" can make, and how it is affected each time it encounters a surface that permits the [[Transparency and translucency|transmission of light]], is controlled by settings in the software. Here, each ray was set to refract or reflect (the "depth") ''up to 9 times''. [[Fresnel reflection]]s were used and [[Caustic (optics)|caustics]] are visible. (Image created with [[V-Ray]].)]]

Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, additional rays may be cast because of reflection, refraction, and shadowing:<ref>{{cite journal | url = https://dip.felk.cvut.cz/browse/pdfcache/nikodtom_2010bach.pdf | title = Ray Tracing Algorithm For Interactive Applications | author = Tomas Nikodym | journal = Czech Technical University, FEE | date = June 2010 | archive-url = https://web.archive.org/web/20160303180450/https://dip.felk.cvut.cz/browse/pdfcache/nikodtom_2010bach.pdf | archive-date = March 3, 2016 }}</ref>
* A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection.
* A refraction ray traveling through transparent material works similarly, with the addition that a refractive ray could be entering or exiting a material. [[Turner Whitted]] extended the mathematical logic for rays passing through a transparent solid to include the effects of refraction.<ref>{{cite book |last=Whitted |first=T. |year=1979 |chapter=An Improved Illumination Model for Shaded Display |title=Proceedings of the 6th annual conference on Computer graphics and interactive techniques |publisher=Association for Computing Machinery |citeseerx=10.1.1.156.1534 |isbn=0-89791-004-4 |chapter-url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534 }}</ref>
* A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it.
These recursive rays add more realism to ray traced images (a minimal code sketch of this recursion is given below, after the ''Disadvantages'' subsection).

===Advantages over other rendering methods===

Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of [[Computer graphics lighting|light transport]], as compared to other rendering methods, such as [[rasterisation|rasterization]], which focuses more on the realistic simulation of geometry. Effects such as reflections and [[shadow]]s, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of [[parallelization]],<ref>{{cite book |first1=A. |last1=Chalmers |first2=T. |last2=Davis |first3=E. |last3=Reinhard |title=Practical Parallel Rendering |isbn=1-56881-179-9 |publisher=AK Peters |year=2002 }}</ref> but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.<ref>{{cite book |last1=Aila |first1=Timo |first2=Samuli |last2=Laine |year=2009 |chapter=Understanding the Efficiency of Ray Traversal on GPUs |title=HPG '09: Proceedings of the Conference on High Performance Graphics 2009 |pages=145–149 |doi=10.1145/1572769.1572792 |isbn=9781605586038 |s2cid=15392840 }}</ref>

===Disadvantages===

A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering, depending on scene complexity versus the number of pixels on screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform [[spatial anti-aliasing]] and improve image quality where needed. Whitted-style recursive ray tracing handles interreflection and optical effects such as refraction, but is not generally [[photorealistic rendering|photorealistic]]. Improved realism occurs when the [[rendering equation]] is fully evaluated, as the equation conceptually includes every physical effect of light flow. However, this is infeasible given the computing resources required, and the limitations on geometric and material modeling fidelity. [[Path tracing]] is an algorithm for evaluating the rendering equation and thus gives higher-fidelity simulations of real-world lighting.
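For reference, the rendering equation mentioned above is commonly written as
:<math>L_o(\mathbf x, \omega_o) = L_e(\mathbf x, \omega_o) + \int_{\Omega} f_r(\mathbf x, \omega_i, \omega_o)\, L_i(\mathbf x, \omega_i)\, (\omega_i \cdot \mathbf n)\, \mathrm{d}\omega_i,</math>
where <math>L_o</math> is the outgoing radiance at point <math>\mathbf x</math> in direction <math>\omega_o</math>, <math>L_e</math> is the emitted radiance, <math>f_r</math> is the [[bidirectional reflectance distribution function]], <math>L_i</math> is the incoming radiance over the hemisphere <math>\Omega</math>, and <math>\mathbf n</math> is the surface normal. Path tracing estimates this integral by [[Monte Carlo method|Monte Carlo]] sampling of ray paths.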
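Returning to the recursive scheme described earlier in this section (reflection, refraction and shadow rays), the following is a minimal Whitted-style sketch in Python, not a complete renderer. The scene and material helpers (<code>Ray</code>, <code>closest_intersection</code>, <code>occluded</code>, <code>local_shading</code>, <code>reflect</code>, <code>refract_direction</code>) are hypothetical placeholders.

<syntaxhighlight lang="python">
def add(a, b):
    return [a[i] + b[i] for i in range(3)]

def scale(k, a):
    return [k * a[i] for i in range(3)]

def trace(ray, scene, lights, depth, max_depth=5):
    """Whitted-style recursion: local shading plus reflection, refraction and shadow rays."""
    if depth > max_depth:
        return [0.0, 0.0, 0.0]                     # stop the recursion at a fixed depth
    hit = closest_intersection(ray, scene)         # hypothetical nearest-hit query
    if hit is None:
        return scene.background_color
    color = [0.0, 0.0, 0.0]
    # Shadow rays: a light contributes only if no opaque object blocks it.
    for light in lights:
        if not occluded(hit.point, light, scene):  # hypothetical visibility test
            color = add(color, local_shading(hit, light))
    # Reflection ray in the mirror direction r = d - 2(n.d)n.
    if hit.material.reflectance > 0:
        reflected = Ray(hit.point, reflect(ray.direction, hit.normal))
        color = add(color, scale(hit.material.reflectance,
                                 trace(reflected, scene, lights, depth + 1, max_depth)))
    # Refraction ray through transparent material (it may be entering or exiting).
    if hit.material.transmittance > 0:
        t_dir = refract_direction(ray.direction, hit.normal, hit.material.ior)
        if t_dir is not None:                      # None signals total internal reflection
            refracted = Ray(hit.point, t_dir)
            color = add(color, scale(hit.material.transmittance,
                                     trace(refracted, scene, lights, depth + 1, max_depth)))
    return color
</syntaxhighlight>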
===Reversed direction of traversal of scene by the rays===

The process of shooting rays from the eye to the light source to render an image is sometimes called ''backwards ray tracing'', since it is the opposite of the direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term ''backwards ray tracing'' to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish ''eye-based'' versus ''light-based'' ray tracing.

While direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. [[Caustic (optics)|Caustics]] are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths are subsequently joined by a connecting ray after some length.<ref>{{cite journal | url = http://www.graphics.cornell.edu/~eric/Portugal.html | title = Bi-Directional Path Tracing | author = Eric P. Lafortune and Yves D. Willems | journal = Proceedings of Compugraphics '93 | date = December 1993 | pages = 145–153}}</ref><ref>{{cite web | url = https://old.cescg.org/CESCG98/PDornbach/paper.pdf | title = Implementation of bidirectional ray tracing algorithm | author = Péter Dornbach | access-date = 2008-06-11 |date=1998 }}</ref>

[[Photon mapping]] is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.<ref>[http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf Global Illumination using Photon Maps] {{webarchive|url=https://web.archive.org/web/20080808140048/http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf |date=2008-08-08 }}</ref><ref>{{cite web| url = http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html| title = Photon Mapping - Zack Waters}}</ref> The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or [[torchiere]]s).
In such cases, only a very small subset of paths will transport energy; [[Metropolis light transport]] is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.<ref>{{cite book |first1=Eric |last1=Veach |first2=Leonidas J. |last2=Guibas |chapter=Metropolis Light Transport |title=SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques |year=1997 |pages=65–76 |doi=10.1145/258734.258775 |isbn=0897918967 |s2cid=1832504 }}</ref>

[[File:PathOfRays.svg|thumb|Image showing recursively generated rays from the "eye" (and through an image plane) to a light source after encountering two [[diffuse surface]]s]]

To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions. First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surfaces and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.

===Example===
[[File:Parametric surface illustration (trefoil knot).png|thumb|[[Trefoil knot]], created with a [[parametric equation]] and ray traced in [[Python (programming language)|Python]].]]

As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the [[line–sphere intersection]] and the subsequent determination of the color of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.

In [[vector notation]], the equation of a sphere with center <math>\mathbf c</math> and radius <math>r</math> is
:<math>\left\Vert \mathbf x - \mathbf c \right\Vert^2=r^2.</math>
Any point on a ray starting from point <math>\mathbf s</math> with direction <math>\mathbf d</math> (here <math>\mathbf d</math> is a [[unit vector]]) can be written as
:<math>\mathbf x=\mathbf s+t\mathbf d,</math>
where <math>t</math> is the distance between <math>\mathbf x</math> and <math>\mathbf s</math>. In our problem, we know <math>\mathbf c</math>, <math>r</math>, <math>\mathbf s</math> (e.g. the position of a light source) and <math>\mathbf d</math>, and we need to find <math>t</math>.
Therefore, we substitute for <math>\mathbf x</math>:
:<math>\left\Vert\mathbf{s}+t\mathbf{d}-\mathbf{c}\right\Vert^{2}=r^2.</math>
Let <math>\mathbf{v}\ \stackrel{\mathrm{def}}{=}\ \mathbf{s}-\mathbf{c}</math> for simplicity; then
:<math>\left\Vert\mathbf{v}+t\mathbf{d}\right\Vert^{2}=r^{2}</math>
:<math>\mathbf{v}^2+t^2\mathbf{d}^2+2\mathbf{v}\cdot t\mathbf{d}=r^2</math>
:<math>(\mathbf{d}^2)t^2+(2\mathbf{v}\cdot\mathbf{d})t+(\mathbf{v}^2-r^2)=0.</math>
Knowing that <math>\mathbf d</math> is a unit vector allows us this minor simplification:
:<math>t^2+(2\mathbf{v}\cdot\mathbf{d})t+(\mathbf{v}^2-r^2)=0.</math>
This [[quadratic equation]] has solutions
:<math>t=\frac{-(2\mathbf{v}\cdot\mathbf{d})\pm\sqrt{(2\mathbf{v}\cdot\mathbf{d})^2-4(\mathbf{v}^2-r^2)}}{2}=-(\mathbf{v}\cdot\mathbf{d})\pm\sqrt{(\mathbf{v}\cdot\mathbf{d})^2-(\mathbf{v}^2-r^2)}.</math>
The two values of <math>t</math> found by solving this equation are those for which <math>\mathbf s+t\mathbf d</math> are the points where the ray intersects the sphere. Any value which is negative does not lie on the ray, but rather in the opposite [[Line (mathematics)|half-line]] (i.e. the one starting from <math>\mathbf s</math> with the opposite direction). If the quantity under the square root (the [[quadratic equation#Discriminant|discriminant]]) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least one positive solution, and let <math>t</math> be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of [[Reflection (physics)|reflection]] state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the [[surface normal|normal]] to the sphere. The normal to the sphere is simply
:<math>\mathbf n=\frac{\mathbf y- \mathbf c}{\left\Vert\mathbf y- \mathbf c\right\Vert},</math>
where <math>\mathbf y=\mathbf s+t\mathbf d</math> is the intersection point found before. The reflection direction can be found by a [[Reflection (mathematics)|reflection]] of <math>\mathbf d</math> with respect to <math>\mathbf n</math>, that is
:<math>\mathbf r = \mathbf d - 2(\mathbf n \cdot \mathbf d ) \mathbf n.</math>
Thus the reflected ray has equation
:<math>\mathbf x = \mathbf y + u \mathbf r. \, </math>
Now we only need to compute the intersection of the latter ray with our [[field of view]], to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
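The computation above translates directly into code. The following is a minimal Python sketch of the intersection test and the reflection formula; vectors are represented as 3-tuples and <math>\mathbf d</math> is assumed to be a unit vector.

<syntaxhighlight lang="python">
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_sphere_intersection(s, d, c, r):
    """Return the smallest positive t with ||s + t*d - c|| = r, or None.

    s: ray origin, d: unit ray direction, c: sphere center, r: sphere radius.
    """
    v = tuple(s[i] - c[i] for i in range(3))       # v = s - c
    b = dot(v, d)                                  # v . d
    discriminant = b * b - (dot(v, v) - r * r)
    if discriminant < 0:
        return None                                # the ray misses the sphere
    root = math.sqrt(discriminant)
    for t in (-b - root, -b + root):               # smaller candidate first
        if t > 0:
            return t
    return None                                    # the sphere is behind the ray origin

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(n.d)n."""
    k = 2.0 * dot(n, d)
    return tuple(d[i] - k * n[i] for i in range(3))

# Example: a ray from the origin along +z toward a unit sphere centered at (0, 0, 5).
s, d, c, r = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0
t = ray_sphere_intersection(s, d, c, r)
if t is not None:
    y = tuple(s[i] + t * d[i] for i in range(3))   # intersection point y = s + t*d
    n = tuple((y[i] - c[i]) / r for i in range(3)) # unit normal (y - c)/||y - c||
    r_dir = reflect(d, n)                          # t == 4.0; the ray reflects straight back along -z
</syntaxhighlight>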