Participating Media in Ray Tracing
Author: Samuel Fournier
Student ID: 20218212
Course: IFT 3150 - Cours Projet
Professor: Dr. Pierre Poulin
Date: 22 December 2024
For the French version of this document, please visit the following link: Résumé du Projet Final
For the project website, please visit the following link: Project Website
For the full source code, please visit the following link: GitHub Repository
For the full report, please visit the following link: Full Report
This project is part of the IFT 3150 course and aims to extend the capabilities of a rudimentary ray tracer by adding the ability to render participating media. In other words, it enables the realistic representation of smoke (or any other volumetric medium). The work builds upon the concepts covered during the Winter 2024 term, such as the implementation of basic primitives (spheres, planes, triangles, etc.) and lighting models (e.g., Blinn-Phong).
The main challenge here is handling the interaction of light with a medium that is not a rigid object but rather a collection of voxels—each voxel representing a sample of volumetric density (e.g., smoke). The complexity lies in modeling absorption, scattering, and potentially the emission of light within this volume.
A wide range of methods exists for rendering 3D scenes such as the ones used here. Among them, the ray tracing algorithm is a popular choice for its ability to produce realistic images.
Ray tracing simulates the propagation of light rays within a 3D scene. Instead of casting rays from the light source, we trace them from the camera, which is justified by Helmholtz reciprocity. For each pixel of the final image, we compute the intersection between the ray and the objects in the scene, then apply a lighting model to determine the visible color. In this project, the lighting model is primarily Blinn-Phong.
The basic algorithm proceeds as follows: for each pixel, cast a ray from the camera through that pixel; find the nearest intersection with the scene; evaluate the lighting model at that point; and, for reflective or refractive materials, recursively trace secondary rays up to a maximum recursion depth (ray_depth).
In this project, the main extension is accounting for a participating medium along the ray’s path, in order to compute absorption and scattering within this volume.
The participating medium implemented here can be considered as smoke, represented in a cube subdivided into a grid of voxels. Each voxel has a density that can be generated in various ways: constant density, linear or exponential gradient, Perlin noise, etc. Depending on the density value, the amount of light passing through that voxel is attenuated more or less strongly.
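As an illustration, the density modes mentioned above could be generated as follows (a sketch with hypothetical names; the Perlin-noise mode is omitted for brevity):

```python
import math

def make_density_grid(n, mode="constant"):
    """Fill an n x n x n voxel grid with densities in [0, 1]."""
    grid = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if mode == "constant":
                    grid[x][y][z] = 1.0
                elif mode == "linear":        # gradient along the y axis
                    grid[x][y][z] = y / (n - 1)
                elif mode == "exponential":   # densest at the top
                    grid[x][y][z] = math.exp(3.0 * (y / (n - 1) - 1.0))
    return grid
```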
To determine if a ray enters the volume (and where), we calculate its intersection with the cube's faces (x=0, x=1, y=0, y=1, z=0, z=1). The parameter values t that correspond to these faces let us identify the entry point (tMin) and the exit point (tMax) of the ray within the cube.
One edge case must be handled: when the camera is already inside the cube, tMin is negative. In that case we start sampling from the ray origin, inside the volume directly.
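The entry/exit computation described above is the classic slab method. A minimal sketch for the unit cube (hypothetical helper; it assumes no ray-direction component is exactly zero):

```python
def intersect_unit_cube(origin, direction):
    """Slab test against the cube [0,1]^3.

    Returns (t_min, t_max) if the ray hits the cube, else None.
    Assumes no direction component is exactly zero.
    """
    t_min, t_max = float("-inf"), float("inf")
    for o, d in zip(origin, direction):
        t0, t1 = (0.0 - o) / d, (1.0 - o) / d
        if t0 > t1:
            t0, t1 = t1, t0                  # order the two slab hits
        t_min, t_max = max(t_min, t0), min(t_max, t1)
    if t_max < max(t_min, 0.0):              # missed, or cube behind the ray
        return None
    return t_min, t_max                      # t_min < 0 means we start inside
```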
Once the volume is intersected, we need to sample it to compute absorption and scattering, which requires traversing the grid. Two approaches were implemented: DDA, which steps voxel by voxel through the grid, and ray marching, which samples at fixed intervals along the ray.
In both cases, at each sample (or step), we compute the density and model the interaction with light.
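The ray-marching variant can be sketched as a fixed-step loop that samples the density and accumulates exponential (Beer-Lambert) attenuation; names and the mid-step sampling choice are illustrative:

```python
import math

def march(origin, direction, t_min, t_max, density_at, sigma_t, step=0.01):
    """Fixed-step ray marching between t_min and t_max.

    Samples the volumetric density with density_at(point) and accumulates
    Beer-Lambert attenuation; sigma_t is the extinction coefficient.
    """
    transmittance = 1.0
    t = max(t_min, 0.0) + 0.5 * step          # sample at the middle of each step
    while t < t_max:
        point = tuple(o + t * d for o, d in zip(origin, direction))
        transmittance *= math.exp(-step * sigma_t * density_at(point))
        t += step
    return transmittance
```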
Light attenuation in a medium is primarily based on the Beer-Lambert law:
T = exp(-d · σ · density), where d is the distance traveled in the medium and σ is the extinction coefficient.
To make the results more realistic, we also account for three interactions:
- Absorption (σa): part of the light is absorbed and transformed (e.g., into heat).
- Out-scattering (σs): light is scattered out of the original ray direction.
- In-scattering (σs): light coming from other directions is scattered towards the camera.
The attenuation formula thus becomes T = exp(-d · (σa + σs) · density), where density is the volumetric density at the sampled point.
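This formula translates directly into code (a one-line sketch; argument names are assumptions):

```python
import math

def transmittance(distance, sigma_a, sigma_s, density):
    """Beer-Lambert attenuation: T = exp(-d * (sigma_a + sigma_s) * density)."""
    return math.exp(-distance * (sigma_a + sigma_s) * density)
```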
A phase function is used to model the direction of scattering.
The simplest type of phase function is the isotropic one, which scatters light in all directions equally. Its formula is p_iso = 1 / (4π), where p_iso is the probability of scattering in a given direction.
The other type of phase function is the anisotropic one, which scatters light in a preferred direction. The function used in this project is the Henyey-Greenstein phase function: p_HG = (1 − g²) / (4π (1 + g² − 2g cos θ)^(3/2)), where p_HG is the probability of scattering in a given direction, g is the asymmetry factor, taking any value between −1 and 1, and θ is the angle between the camera ray and the light ray.
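The Henyey-Greenstein function is straightforward to implement; for g = 0 it reduces to the isotropic value 1/(4π) (a minimal sketch):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function p_HG(theta).

    g in (-1, 1): g > 0 favors forward scattering, g < 0 backward,
    and g = 0 recovers the isotropic phase function 1 / (4*pi).
    """
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```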
At each step along the ray, we must evaluate the amount of light arriving from the various sources. To do this, we cast a light ray toward each light and compute the transmittance (lightTransmittance) at the considered point. We then apply the phase function to determine how much of that light is redirected towards the eye.
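Putting the march loop, the attenuation formula, and the phase function together, a per-step in-scattering estimate for a single light might look like this (a sketch with hypothetical names; light color is treated as a scalar intensity for brevity):

```python
import math

def scatter_march(origin, direction, t_max, density_at, sigma_a, sigma_s,
                  phase_p, light_transmittance, light_intensity, step=0.01):
    """March a ray through the volume, accumulating in-scattered light.

    At each step: attenuate the view-ray transmittance (Beer-Lambert with
    sigma_a + sigma_s), then add light scattered toward the eye, weighted by
    the phase value phase_p and the shadow-ray transmittance returned by
    light_transmittance(point). Returns (transmittance, scattered_light).
    """
    T = 1.0
    scattered = 0.0
    t = 0.5 * step
    while t < t_max:
        point = tuple(o + t * d for o, d in zip(origin, direction))
        rho = density_at(point)
        T *= math.exp(-step * (sigma_a + sigma_s) * rho)
        scattered += (T * step * sigma_s * rho * phase_p *
                      light_transmittance(point) * light_intensity)
        t += step
    return T, scattered
```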
Russian Roulette can be used to stop computations when transmittance becomes very low, avoiding wasted time in nearly opaque regions. Rays that survive the random test have their remaining transmittance divided by the survival probability to compensate, keeping the estimate unbiased.
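A sketch of this termination test (the threshold and survival probability are illustrative choices, not the project's actual values):

```python
import random

def roulette(transmittance, threshold=1e-3, p_survive=0.5, rng=random.random):
    """Russian-roulette termination once transmittance is very low.

    Returns (alive, transmittance). Surviving rays are reweighted by
    1 / p_survive so the expected value is unchanged.
    """
    if transmittance >= threshold:
        return True, transmittance           # still significant: keep going
    if rng() < p_survive:
        return True, transmittance / p_survive
    return False, 0.0                        # terminated
```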
The shade() Function
Previously, the shade()
function handled illumination in a
binary fashion (an object either blocked the light or not). With the
participating medium, we now weight the light contribution
by the transmittance measured along the shadow ray in the volume.
Thus, the light source contribution can range between 0 and 1,
reflecting partial absorption within the medium.
Once we've computed the shading at our intersection point, the final colour that gets displayed is evaluated as:
finalColor = shade() * transmittance + scatteringColor
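Per color channel, this compositing step is just a multiply-add (a sketch; tuples stand in for the project's color type):

```python
def composite(shaded, transmittance, scattering):
    """finalColor = shade() * transmittance + scatteringColor, per channel."""
    return tuple(s * transmittance + sc for s, sc in zip(shaded, scattering))
```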
This project provides a deeper understanding of volume rendering concepts applied to ray tracing. The implementation covers absorption, out-scattering, in-scattering, and phase functions (notably Henyey-Greenstein), and relies on either DDA or Ray Marching to traverse the volume.
Possible improvements include adding emission (e.g., fire) and porting the code to a shader language for real-time rendering on the GPU (e.g., via OpenGL, Vulkan, or compute shaders in Unity). One might also implement other fluid types (water, fog, etc.) using the same volume sampling logic.
The results (static images) show that the simulation of smoke, mist, or clouds is convincing, demonstrating the effectiveness of the implemented methods for rendering light scattering within a medium.
Here are some examples of images generated by the program:
Voxel grid with a uniform density of 1.
Sphere with a uniform density of 0.5.
Grid with Perlin densities.
Sphere with Perlin densities.
Grid with g = -0.65.
Mirror cube in the volume.
Mirror sphere in the volume.
Mirror cube in the volume (view 3).