May 04, 2024
12 min read

Lumière - A powerful new path tracer

The start of my own path tracer, used for didactical purposes in implementing various PBR papers.

What is Lumière

Lumière is the name I have given to what is to be my personal, didactical path tracer. In the past a variety of path tracers have been available for production use-cases (think about Arnold, Octane and Cycles just to name a few). These path tracers are often available for commercial use and even though some of these are open-sourced, most of them are quite limited in a number of ways:

  • Most open-source path tracers are written in C++. While this choice is often motivated by performance and easy integration with existing infrastructure such as CUDA, Embree and other libraries, for new users or for people just getting started with path tracing, it is often quite cumbersome to set up, extend and debug.
  • Path tracers such as MoonRay and RenderMan are production-ready path tracers (the in-house path tracers of DreamWorks and Pixar respectively). This allows them to be tailored to very specific requirements and highly specialized studio pipelines. The problem is that they are not suited for didactical purposes or easy extension.
  • Didactical path tracers such as PBRT and Mitsuba offer easy extendability, but suffer from other inherent drawbacks. PBRT is a didactical path tracing system that is entirely written in C++. While easy to understand if you follow along with the book, people who aren’t at home with the C++ language will most likely struggle with extending the system for a variety of reasons. Mitsuba, on the other hand, while offering a Python interface, still relies on C++ to implement its integration schemes and is therefore also not the easiest to understand as a complete beginner.

Python & PyCUDA

The goal of this series of blog posts is to explain the fundamental ideas behind path tracing and implement them step by step using Python with GPU acceleration, provided by PyCUDA. PyCUDA will allow us to write our path tracer in Python, while still being able to leverage the power of the GPU. This will allow us to render images in a reasonable amount of time while still being able to understand and extend the system in a didactical way.

Here is a small example taken from the PyCUDA documentation to show how easy it is to use the GPU with Python:

import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda

from pycuda.compiler import SourceModule

# Create a 4x4 matrix of random numbers
array = np.random.randn(4, 4)
array = array.astype(np.float32)

# Allocate memory on the GPU
array_gpu = cuda.mem_alloc(array.nbytes)

# Transfer data to the GPU
cuda.memcpy_htod(array_gpu, array)

# Example of how to execute a CUDA kernel using PyCUDA
module = SourceModule("""
    __global__ void doublify(float *array)
    {
        // Calculate the index of the element. The block is 4 threads
        // wide, so each row of the 4x4 block is offset by 4 elements.
        int idx = threadIdx.x + threadIdx.y * 4;
                      
        // Double the value at the index
        array[idx] *= 2;
    }
""")

# Get the function from the module
func = module.get_function("doublify")
func(array_gpu, block=(4, 4, 1))

# Load the result back from the GPU
array_result = np.empty_like(array)
cuda.memcpy_dtoh(array_result, array_gpu)

# Print the original and modified arrays
print(f'Original array: {array}')
print(f'Modified array: {array_result}')

More information can be found in the official PyCUDA documentation.

The plan

In this section I will briefly introduce my plan for Lumière, what is to come in this blog post and in the following posts. First of all, in this blog post, I will introduce Monte Carlo integration, which is a mathematical technique that will be used extensively in Lumière. It will get a little math-heavy, but I will try to explain everything as clearly as possible. After introducing Monte Carlo integration, the blog posts will slowly start working on core features of a renderer. Some of these features are:

  • Scene description: A core part of any renderer is the scene description. To keep it simple, we will use JSON to describe the scene and load it into Lumière. In the beginning we will be able to render simple scenes with spheres and planes, and we will later (hopefully) extend Lumière to support more complex geometry. Together with this JSON file format, I will also be creating a simple Blender plugin that will allow us to export scenes from Blender to the Lumière format. An example scene description might look like this:
{
    "scene": {
        "geometry": [
            {
                "type": "sphere",
                "position": [0, 0, 0],
                "radius": 2
            }
        ],
        "lights": [
            {
                "type": "point",
                "position": [0, 5, 0],
                "intensity": [1, 1, 1]
            }
        ]
    }
}
  • Camera: The camera is the eye of the renderer. It is responsible for generating rays that represent the light paths in the scene. Without a camera, we wouldn’t be able to render anything. We will quickly extend Lumière to support more complex cameras and camera features (e.g. different camera films).
  • Materials: Once we are able to describe and render our basic scene, we will start working on materials. Materials are responsible for describing how light interacts with the surfaces or volumes in the scene. We will start with simple materials like Lambertian and Phong but expand to more complex materials like microfacet models and subsurface scattering. Hopefully, we will also introduce a material-system such as the Disney BRDF, allowing for easy material creation and extension.
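The JSON scene description above can already be loaded with very little code. Here is a minimal sketch of such a loader; the key names ("scene", "geometry", "lights") follow the example, but Lumière's final format may differ:

```python
import json

def load_scene(text):
    # Parse the scene description and return its geometry and light lists.
    scene = json.loads(text)["scene"]
    return scene.get("geometry", []), scene.get("lights", [])

scene_text = """
{
    "scene": {
        "geometry": [
            {"type": "sphere", "position": [0, 0, 0], "radius": 2}
        ],
        "lights": [
            {"type": "point", "position": [0, 5, 0], "intensity": [1, 1, 1]}
        ]
    }
}
"""

geometry, lights = load_scene(scene_text)
print(geometry[0]["type"], lights[0]["type"])
```

Because the format is plain JSON, the Blender exporter only needs to serialize a dictionary in this shape.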

Once we have created these basic features, we can start working on more advanced features such as optimal sampling strategies, global illumination and more. The goal of Lumière is to be a didactical path tracer that is easy to understand and extend, while still being performant enough to render complex scenes in a reasonable amount of time.
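To make the camera's job from the list above concrete, here is a hedged sketch of a pinhole camera that maps a pixel to a primary ray. The interface is a placeholder, not Lumière's final API:

```python
import math

def primary_ray(px, py, w, h, fov=60.0):
    # Hypothetical pinhole camera sitting at the origin, looking down -z.
    # Maps pixel (px, py) on a w x h image to a unit ray direction;
    # fov is the vertical field of view in degrees.
    aspect = w / h
    scale = math.tan(math.radians(fov) * 0.5)
    # Map the pixel center to normalized device coordinates in [-1, 1]
    x = (2.0 * (px + 0.5) / w - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / h) * scale
    length = math.sqrt(x * x + y * y + 1.0)
    origin = (0.0, 0.0, 0.0)
    direction = (x / length, y / length, -1.0 / length)
    return origin, direction

# The center pixel of an odd-sized image looks straight down -z
print(primary_ray(50, 50, 101, 101)[1])
```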

The Rendering Equation

Monte Carlo Integration

Path tracers are complex machines that require a fair amount of math and programming knowledge to understand. Before we start, I will assume all readers have a basic knowledge of linear algebra, calculus and programming. If you are not familiar with these topics, I would strongly recommend reading up on them before continuing with this blog post. Of course, you can still follow along with the code, which I will explain in detail, while skipping over the math.

Monte Carlo integration is a way to approximate the value of an integral. For example, say we have a function f(x) which we want to integrate. Sometimes, f(x) is not analytically integrable, which means we cannot find a closed-form solution for the integral. In this case, if we can sample from the function, we can approximate the integral using Monte Carlo integration. The idea is to compute N estimates of the integrand at random points x_i and average them to get an approximation of the integral:

\int f(x)\,dx \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i)

Take for example the normal distribution below (where \mu = 0 and \sigma^2 = 1).

Monte Carlo Estimator for Normal Distribution

As you can see in the images below, the Monte Carlo estimator for the normal distribution converges to the true value of the integral as we increase the number of samples N. Since we are integrating the normal distribution, which is a probability density function, we know that the integral of the normal distribution over the entire real line is equal to 1. Luckily, the Monte Carlo estimator converges to this value as we increase the number of samples!

N = 10 (Estimate = 0.7685)
N = 10 (Estimate = 0.8789)
N = 100 (Estimate = 0.9436)
N = 1000 (Estimate = 0.9760)
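The estimator behind these plots can be sketched as follows. This is a minimal stand-in for the actual plotting code, assuming uniform sampling on a finite interval [a, b], which adds a factor (b - a) to the average:

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Probability density of the normal distribution N(mu, sigma^2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mc_integrate(f, a, b, n, seed=0):
    # Monte Carlo estimate of the integral of f over [a, b] using
    # n uniform samples; the interval width (b - a) rescales the average.
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The pdf integrates to 1 over the real line; [-5, 5] captures
# essentially all of its mass, so the estimate should approach 1.
for n in (10, 100, 1000, 100000):
    print(n, mc_integrate(normal_pdf, -5.0, 5.0, n))
```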

The code that was used to generate the plots above can be found on the Lumière Project Page.

The Rendering Equation

You might ask “What does Monte Carlo integration have to do with path tracing?”. The answer is: everything! To render images, we will use the famous rendering equation (Kajiya, 1986), a fundamental equation in computer graphics that describes how light interacts with surfaces in a scene. The rendering equation is given by:

L_o(x, \omega) = L_e(x, \omega) + \int_{\Omega} f_r(x, \omega_i, \omega)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

In this equation, the factors and terms have the following meaning:

  • L_o(x, \omega) is the radiance leaving point x in direction \omega.
  • L_e(x, \omega) is the emitted radiance at point x in direction \omega.
  • f_r(x, \omega_i, \omega) is the bidirectional reflectance distribution function (BRDF) at point x. The BRDF describes how light is reflected at a point on a surface. It takes the incoming light direction \omega_i and the outgoing light direction \omega as input and returns the ratio of the reflected radiance to the incident radiance.
  • L_i(x, \omega_i) is the incoming radiance at point x from direction \omega_i.
  • \omega_i \cdot n is the cosine term that accounts for the angle between the incoming light direction and the surface normal n.
  • \Omega is the hemisphere of incoming light directions.

In other words, to compute the radiance leaving a point x in direction \omega, we need to consider the emitted radiance at that point and the reflected radiance from all incoming light directions. In the explanation of these terms and factors, we used the term “radiance”, but what is radiance?

Radiance is a measure of the amount of light that passes through or is emitted from a surface in a given direction. In path tracers, radiance is most often used to describe the amount of light that leaves a point in a scene in a given direction. This quantity can then be used to compute the color of a pixel in an image, as we will see later.

The rendering equation is a powerful tool that allows us to simulate the physical behavior of light in a scene. It allows us to simulate a variety of real-world lighting effects such as reflections, refractions, (hard and soft) shadows and global illumination. The meaning of these terms might not make a lot of sense yet, and their relation to the rendering equation might be unclear. However, as we progress through this blog post series, we will explain these terms in more detail and show how they relate to the rendering equation.

Solving the Rendering Equation

As you can see, the rendering equation is a recursive integral equation. This means it is practically impossible to solve analytically, as we would other integrals. Luckily, we have already seen a tool that can help us solve this equation: Monte Carlo integration!

We don’t know the exact solution to the rendering equation, but we can compute samples of this equation. As seen before, we can use these samples in a Monte Carlo estimator to approximate the solution to the rendering equation. This is the basic idea behind ray tracing:

  1. We shoot rays from the camera into the scene.
  2. At the intersection point of the ray with the scene, we evaluate the rendering equation (for example, at the intersection point we might choose to evaluate one or multiple incoming directions).
  3. Per-pixel, we do this for a number of samples and average the results to get an approximation of the radiance arriving at the camera.
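The per-pixel averaging in step 3 can be sketched as below. Here `trace` is a placeholder for steps 1 and 2 (shooting a ray and evaluating the rendering equation at the intersection), stubbed out with random radiance so the structure is runnable on its own:

```python
import random

def trace(px, py, rng):
    # Placeholder for steps 1 and 2: shoot a ray through pixel (px, py)
    # and evaluate the rendering equation at the intersection point.
    # Stubbed with random radiance so the averaging below can run.
    return rng.random()

def render_pixel(px, py, samples, seed=0):
    # Step 3: average many radiance samples for one pixel.
    rng = random.Random(seed)
    return sum(trace(px, py, rng) for _ in range(samples)) / samples

print(render_pixel(0, 0, samples=1000))
```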

Seems simple enough, right? In the next blog post, we will start implementing the basic building blocks of such a ray tracing system. For now, I think it would be helpful to make the distinction between ray tracing and path tracing. Ray tracing is a general term that describes the process of shooting rays into a scene and evaluating the rendering equation at the intersection points. Path tracing builds on ray tracing by constructing a path through the scene. This means that step 2 of the ray tracing process is repeated multiple times: we evaluate the rendering equation at each intersection point and accumulate the results. This allows us to simulate more complex lighting effects such as global illumination and caustics.

For the math-nerds among us, a Monte Carlo estimator of the rendering equation would look like this:

L_o(x, \omega) \approx L_e(x, \omega) + \frac{1}{N} \sum_{i=1}^{N} f_r(x, \omega_i, \omega)\, L_i(x, \omega_i)\, (\omega_i \cdot n)
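As a sanity check of this estimator, consider a toy setting: a Lambertian BRDF f_r = albedo / \pi under constant incoming radiance L_i, for which the hemisphere integral has the closed form L_o = L_e + albedo * L_i. One detail the formula above glosses over: when directions are drawn from a probability density p(\omega_i), each sample must also be divided by p(\omega_i). Below we sample the hemisphere uniformly, so p = 1 / (2\pi):

```python
import math
import random

def estimate_outgoing(albedo, li, le, n, seed=0):
    # Monte Carlo estimate of the outgoing radiance for a Lambertian
    # surface (f_r = albedo / pi) lit by constant incoming radiance li.
    rng = random.Random(seed)
    f_r = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)  # uniform hemisphere sampling
    total = 0.0
    for _ in range(n):
        # For uniform hemisphere samples, cos(theta) is uniform in [0, 1]
        cos_theta = rng.random()
        total += f_r * li * cos_theta / pdf
    return le + total / n

# Should approach the closed-form answer albedo * li = 0.8
print(estimate_outgoing(albedo=0.8, li=1.0, le=0.0, n=100000))
```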

Disclaimer

As some of you might have noticed, this blog series will jump over quite a lot of detail. The goal is to make a path tracer that allows users to easily understand the underlying concepts of path tracing, while also being able to extend the system to support complex and state-of-the-art rendering techniques. This means that many details will be left out. In the sources section, I will provide links to papers and books that explain these concepts in more detail.

Sources