Summary of Final Project

Participating Media in Ray Tracing

Author: Samuel Fournier
Student ID: 20218212
Course: IFT 3150 - Cours Projet
Professor: Dr. Pierre Poulin
Date: 22 December 2024

For the French version of this document, please visit the following link: Résumé du Projet Final

For the project website, please visit the following link: Project Website

For the full source code, please visit the following link: GitHub Repository

For the full report, please visit the following link: Full Report

1. Introduction

This project is part of the IFT 3150 course and aims to extend the capabilities of a rudimentary ray tracer by adding the ability to render participating media. In other words, it enables the realistic representation of smoke (or any other volumetric medium). The work builds upon the concepts covered during the Winter 2024 term, such as the implementation of basic primitives (spheres, planes, triangles, etc.) and lighting models (e.g., Blinn-Phong).

The main challenge here is handling the interaction of light with a medium that is not a rigid object but rather a collection of voxels—each voxel representing a sample of volumetric density (e.g., smoke). The complexity lies in modeling absorption, scattering, and potentially the emission of light within this volume.

2. Basic Ray Tracing Principles

There exists a wide range of methods for rendering 3D scenes such as the ones used in this project. Among them, ray tracing is a popular choice for its ability to produce realistic images.

Ray tracing simulates the propagation of light rays within a 3D scene. Instead of casting rays from the light source, we trace them from the camera, which is justified by the Helmholtz reciprocity principle. For each pixel of the final image, we compute the intersection between the ray and the objects in the scene, then apply a lighting model to determine the visible color. In this project, the lighting model is primarily Blinn-Phong.

The basic algorithm proceeds as follows:

  1. Cast a ray from the camera for each pixel.
  2. Check for intersections with objects (triangles, spheres, etc.).
  3. Compute the color at the intersection point using the Blinn-Phong model or an equivalent approach.
  4. Handle reflection and refraction based on a maximum ray depth (ray_depth).
    1. For reflection, cast a new ray in the reflected direction.
    2. For refraction, cast a new ray in the refracted direction (e.g., using Snell’s law).
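As a rough illustration of this recursion, here is a minimal sketch in C++. The types and helper functions (Vec3, Ray, Hit, intersectScene, shadeBlinnPhong, reflect) are hypothetical placeholders for what the existing ray tracer already provides; only the control flow of steps 2 to 4 is the point.

    // Minimal sketch of the recursive tracing loop; all types and helpers are placeholders.
    struct Vec3 {
        double x = 0, y = 0, z = 0;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };
    struct Ray { Vec3 origin, direction; };
    struct Hit { bool found = false; Vec3 position, normal; double reflectance = 0; };

    Hit  intersectScene(const Ray& ray);                   // step 2: spheres, planes, triangles, ...
    Vec3 shadeBlinnPhong(const Hit& hit, const Ray& ray);  // step 3: local illumination
    Vec3 reflect(const Vec3& d, const Vec3& n);            // step 4.1: mirror direction

    Vec3 trace(const Ray& ray, int depth, int maxDepth /* ray_depth */) {
        Hit hit = intersectScene(ray);
        if (!hit.found)
            return Vec3{};                                  // background colour

        Vec3 color = shadeBlinnPhong(hit, ray);

        // Step 4: secondary rays, bounded by the maximum ray depth.
        if (depth < maxDepth && hit.reflectance > 0) {
            Ray reflected{hit.position, reflect(ray.direction, hit.normal)};
            color = color + trace(reflected, depth + 1, maxDepth) * hit.reflectance;
        }
        // Refraction (step 4.2, Snell's law) would follow the same pattern with a refracted ray.
        return color;
    }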

In this project, the main extension is accounting for a participating medium along the ray’s path, in order to compute absorption and scattering within this volume.

3. Participating Media

The participating medium implemented here can be considered as smoke, represented in a cube subdivided into a grid of voxels. Each voxel has a density that can be generated in various ways: constant density, linear or exponential gradient, Perlin noise, etc. Depending on the density value, the amount of light passing through that voxel is attenuated more or less strongly.
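As a small sketch of how such a grid could be filled, the snippet below stores densities in a flat std::vector and shows the constant and linear-gradient cases; a Perlin noise generator would simply replace the per-voxel expression. The VoxelGrid type and its layout are illustrative, not the exact structures of the project.

    #include <cstddef>
    #include <vector>

    // Illustrative voxel grid: resolution^3 density values stored in a flat vector.
    struct VoxelGrid {
        std::size_t resolution;
        std::vector<double> density;

        explicit VoxelGrid(std::size_t n) : resolution(n), density(n * n * n, 0.0) {}

        double& at(std::size_t x, std::size_t y, std::size_t z) {
            return density[(z * resolution + y) * resolution + x];
        }
    };

    // Constant density everywhere.
    void fillConstant(VoxelGrid& grid, double value) {
        for (double& d : grid.density) d = value;
    }

    // Linear gradient along y: 0 at the bottom layer, 1 at the top (assumes resolution > 1).
    void fillLinearGradient(VoxelGrid& grid) {
        for (std::size_t z = 0; z < grid.resolution; ++z)
            for (std::size_t y = 0; y < grid.resolution; ++y)
                for (std::size_t x = 0; x < grid.resolution; ++x)
                    grid.at(x, y, z) = static_cast<double>(y) / (grid.resolution - 1);
    }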

3.1 Cube-Ray Intersection

To determine if a ray enters the volume (and where), we calculate its intersection with the cube’s faces (x=0, x=1, y=0, y=1, z=0, z=1). The parameter values t that correspond to these faces let us identify the entry point (tMin) and the exit point (tMax) of the ray within the cube.

A few edge cases must be handled:

  • Ray origin inside the volume: there is no entry point in front of the origin (tMin would be negative), so the traversal starts directly from the ray origin inside the volume.
  • Zero direction component: computing t for the corresponding pair of faces requires dividing by that component. With IEEE 754 floating-point arithmetic (the usual case for C++ doubles), dividing by zero yields ±infinity, which the subsequent min/max comparisons handle naturally. One subtlety is the signed zero -0.0: C++ treats it as equal to 0.0, yet dividing by -0.0 produces -infinity instead of +infinity, so the two face parameters come out in the opposite order and must be swapped.
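A minimal sketch of this intersection test against the unit cube is shown below, assuming IEEE 754 behaviour for the divisions. The names and the clamping of tMin to 0 (to handle an origin inside the volume) are illustrative choices, not the exact code of the project.

    #include <algorithm>
    #include <limits>

    struct Vec3d { double x, y, z; };

    // Returns true if the ray hits the unit cube [0,1]^3; tMin/tMax bracket the traversed segment.
    bool intersectUnitCube(const Vec3d& origin, const Vec3d& direction,
                           double& tMin, double& tMax) {
        tMin = 0.0;                                        // start at the ray origin (covers an origin inside the volume)
        tMax = std::numeric_limits<double>::infinity();

        const double o[3] = {origin.x, origin.y, origin.z};
        const double d[3] = {direction.x, direction.y, direction.z};

        for (int axis = 0; axis < 3; ++axis) {
            double inv = 1.0 / d[axis];                    // +/- infinity when d[axis] is +/- 0.0
            double t0 = (0.0 - o[axis]) * inv;             // face at coordinate 0
            double t1 = (1.0 - o[axis]) * inv;             // face at coordinate 1
            if (t0 > t1) std::swap(t0, t1);                // reorder bounds flipped by a negative (or -0.0) direction
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMax < tMin) return false;                 // intervals no longer overlap: the ray misses the cube
        }
        return true;
    }

One degenerate case remains: an origin lying exactly on a face combined with a zero direction component produces 0 × ∞ = NaN and would need separate handling.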

4. Traversing the Participating Medium

Once the volume is intersected, it must be sampled to compute absorption and scattering, which requires traversing the voxel grid. Two approaches were implemented:

  • DDA (Digital Differential Analyzer)
    This method advances through the volume by moving from one voxel-plane intersection to the next. It guarantees that no voxel in the path of the ray is skipped, even if the ray barely touches a corner of a voxel. A consequence is that each voxel along the ray is sampled only once.
  • Ray Marching
    Here, we move forward using a constant step size. The samples are thus regularly spaced along the ray, sometimes with a slight jitter to reduce repeating artifacts. Unlike the DDA approach, with a small enough step size we can sample individual voxels multiple times. However, this method can be slower than DDA (especially when the step size is small), and we are not guaranteed to sample every voxel in the path of the ray.

In both cases, at each sample (or step), we compute the density and model the interaction with light.
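As an illustration of the ray-marching variant, here is a minimal sketch that steps from tMin to tMax with a constant step size and a jittered start, accumulating the density seen along the ray. The callable densityAt stands in for whatever lookup the voxel grid provides; all names are illustrative.

    #include <functional>
    #include <random>

    struct Point3 { double x, y, z; };

    // Integrates density along the ray segment [tMin, tMax] with a fixed step.
    double marchDensity(const Point3& origin, const Point3& dir,
                        double tMin, double tMax, double stepSize,
                        const std::function<double(const Point3&)>& densityAt,
                        std::mt19937& rng) {
        std::uniform_real_distribution<double> jitter(0.0, 1.0);
        double accumulated = 0.0;
        double t = tMin + jitter(rng) * stepSize;          // jittered start reduces banding artifacts
        while (t < tMax) {
            Point3 p{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
            accumulated += densityAt(p) * stepSize;        // density integrated over this step
            t += stepSize;
        }
        return accumulated;                                // used by the attenuation of Section 5
    }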

5. Light Attenuation

Light attenuation in a medium is primarily based on the Beer-Lambert law:

T = exp(-d · σ · density)

where d is the distance travelled through the medium and σ is the extinction coefficient (split below into absorption and scattering).
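For example, with σ = 1, a uniform density of 0.5, and a travelled distance d = 2, the transmittance is T = exp(-1) ≈ 0.37, i.e. roughly 37% of the light makes it through that stretch of the medium.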

To make the results more realistic, we also account for:

  • Absorption (σa): part of the light is absorbed and transformed (e.g., into heat).
  • Out-Scattering (σs): light is scattered out of the original ray direction.
  • In-Scattering (σs): light coming from other directions is scattered towards the camera.
  • Emission: not implemented here (e.g., fire or flames) but could be added in a future version.

The attenuation formula thus becomes T = exp(-d · (σa + σs) · density), where density is the volumetric density at the sampled point. A phase function is used to model the direction of scattering. The simplest phase function is the isotropic one, which scatters light equally in all directions; its formula is

p_iso = 1 / (4π)

where p_iso is the probability of scattering in a given direction.

The other type of phase function is anisotropic: these functions scatter light in a preferred direction. The one used in this project is the Henyey-Greenstein phase function, whose formula is

p_HG = (1 - g²) / (4π (1 + g² - 2g·cosθ)^(3/2))

where p_HG is the probability of scattering in a given direction, g is the asymmetry factor, which can take any value between -1 and 1, and θ is the angle between the camera ray and the light ray.
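The two phase functions translate directly into code. The sketch below is a straightforward transcription of the formulas above; note that g = 0 reduces the Henyey-Greenstein function to the isotropic case.

    #include <cmath>

    constexpr double kPi = 3.14159265358979323846;

    // Isotropic phase function: equal probability in every direction.
    double phaseIsotropic() {
        return 1.0 / (4.0 * kPi);
    }

    // Henyey-Greenstein phase function; cosTheta is the cosine of the angle between
    // the camera ray and the light ray, g the asymmetry factor in (-1, 1).
    double phaseHenyeyGreenstein(double cosTheta, double g) {
        double g2 = g * g;
        double denom = 1.0 + g2 - 2.0 * g * cosTheta;
        return (1.0 - g2) / (4.0 * kPi * std::pow(denom, 1.5));
    }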

5.1 Computing Light Inside the Volume

At each step along the ray, we must evaluate the amount of light from the various sources. To do this, we cast a light ray toward each light and compute the transmittance (lightTransmittance) at the considered point. We then apply the phase function to determine how much of that light is redirected towards the eye.

Russian Roulette can be used to stop the computation when the transmittance becomes very low, avoiding wasted time in nearly opaque regions. Once the transmittance drops below a threshold, the ray is terminated with some probability; if it survives the random test instead, its remaining transmittance is scaled up by the inverse of the survival probability to compensate, and the traversal continues.
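Here is a sketch of what happens at one sample point: the light ray's transmittance is combined with the phase function to accumulate in-scattering, the camera ray's transmittance is attenuated, and Russian Roulette decides whether to continue. Radiance is kept as a single scalar channel for brevity, and every name (transmittanceToLight, the threshold, the survival probability, ...) is an illustrative placeholder rather than the project's exact code.

    #include <cmath>
    #include <random>

    double phaseHenyeyGreenstein(double cosTheta, double g);  // as sketched in Section 5
    double transmittanceToLight(const double samplePos[3]);   // shadow ray marched through the grid (placeholder)

    struct SampleState {
        double transmittance = 1.0;   // along the camera ray
        double inScattered   = 0.0;   // light redirected towards the eye so far
    };

    // Processes one ray-marching sample; returns false when Russian Roulette terminates the walk.
    bool processSample(SampleState& s, const double samplePos[3], double density,
                       double sigmaA, double sigmaS, double stepSize,
                       double cosTheta, double g, double lightIntensity,
                       std::mt19937& rng) {
        // In-scattering: light reaching this point, redirected towards the camera.
        double lightT = transmittanceToLight(samplePos);
        s.inScattered += s.transmittance * lightT * lightIntensity *
                         phaseHenyeyGreenstein(cosTheta, g) * sigmaS * density * stepSize;

        // Beer-Lambert attenuation of the camera ray over this step.
        s.transmittance *= std::exp(-stepSize * (sigmaA + sigmaS) * density);

        // Russian Roulette: stop in nearly opaque regions, compensating the survivors.
        const double threshold = 1e-3, survivalProb = 0.5;    // illustrative values
        if (s.transmittance < threshold) {
            std::uniform_real_distribution<double> u(0.0, 1.0);
            if (u(rng) > survivalProb) return false;          // terminated
            s.transmittance /= survivalProb;                  // keeps the estimate unbiased
        }
        return true;
    }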

6. shade() Function

Previously, the shade() function handled illumination in a binary fashion (an object either blocked the light or not). With the participating medium, we now weight the light contribution by the transmittance measured along the shadow ray in the volume. Thus, the light source contribution can range between 0 and 1, reflecting partial absorption within the medium.

Once the shading at the intersection point has been computed, the final colour that gets displayed is evaluated as:

finalColor = shade() * transmittance + scatteringColor
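A compact sketch of how these pieces fit together is given below, with colours reduced to single scalars for brevity; the helper names are illustrative placeholders, not the project's actual functions.

    // The shadow-ray transmittance replaces the old binary visibility test.
    double shadowRayTransmittance();   // marched through the volume towards the light (placeholder)

    // One light's Blinn-Phong contribution, now weighted by a value in [0, 1].
    double shadeOneLight(double blinnPhongTerm) {
        return blinnPhongTerm * shadowRayTransmittance();
    }

    // Final composite: surface shading attenuated by the medium, plus the in-scattered light.
    double compositeFinalColor(double shadedSurface, double transmittance,
                               double scatteringColor) {
        return shadedSurface * transmittance + scatteringColor;
    }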

7. Conclusion

This project provides a deeper understanding of volume rendering concepts applied to ray tracing. The implementation covers absorption, out-scattering, in-scattering, and phase functions (notably Henyey-Greenstein), and relies on either DDA or Ray Marching to traverse the volume.

Possible improvements include adding emission (e.g., fire) and porting the code to a shader language for real-time rendering on the GPU (e.g., via OpenGL, Vulkan, or compute shaders in Unity). One might also implement other fluid types (water, fog, etc.) using the same volume sampling logic.

The results (static images) show that the simulation of smoke, mist, or clouds is convincing, demonstrating the effectiveness of the methods implemented to render light scattering within a medium.

8. Images

Here are some examples of images generated by the program:

  • Voxel grid with a uniform density of 1.
  • Sphere with a uniform density of 0.5.
  • Grid with Perlin densities.
  • Sphere with Perlin densities.
  • Grid with g = -0.65.
  • Mirror cube in the volume.
  • Mirror sphere in the volume.
  • Mirror cube in the volume (view 3).

Bibliography

1. Blinn, James F. (1977). Models of light reflection for computer synthesized pictures.
Journal Article

Reference: SIGGRAPH Comput. Graph., vol. 11, no. 2, pp. 192–198, July 1977.

DOI: 10.1145/965141.563893

Abstract: In the production of computer generated pictures of three dimensional objects, one stage ... [omitted for brevity].

2. Blinn, James F. (1977). Models of light reflection for computer synthesized pictures.
Conference Proceeding

Reference: Proceedings of the 4th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '77), ACM, New York, NY, USA, pp. 192–198.

DOI: 10.1145/563858.563893

Abstract: In the production of computer generated pictures of three dimensional objects, one stage ... [omitted for brevity].

3. Shirley, Peter; Black, Trevor David; Hollasch, Steve (2024). Ray Tracing in One Weekend.
Miscellaneous
4. Wikipedia contributors (2024). Ray tracing (graphics).
Miscellaneous
5. Josh's Channel (2022). How Ray Tracing (Modern CGI) Works And How To Do It 600x Faster.
Miscellaneous
6. Lague, Sebastian (2023). Coding Adventure: Ray Tracing.
Miscellaneous
7. Wikipedia contributors (2024). Perlin noise.
Miscellaneous
8. Wikipedia contributors (2024). Voxel.
Miscellaneous

URL: https://en.wikipedia.org/w/index.php?title=Voxel&oldid=1260713506

Note: [Online; accessed 3-December-2024]

9. Wikipedia contributors (2024). Digital differential analyzer (graphics algorithm).
Miscellaneous
10. Amanatides, John; Woo, Andrew (1987). A Fast Voxel Traversal Algorithm for Ray Tracing.
Article

Reference: Proceedings of EuroGraphics, vol. 87, August 1987.

11. Gyurgyik, C.; Kellison, A. (2022). An Overview of the Fast Voxel Traversal Algorithm.
Miscellaneous
13. Wikipedia contributors (2024). Beer–Lambert law.
Miscellaneous
14. Lague, Sebastian (2024). Coding Adventure: Rendering Fluids.
Miscellaneous