CS 184: Computer Graphics and Imaging, Spring 2018

Assignment 3: PathTracer


A short foreword, in celebration of fixing a major bug in the previous project: per Dillon's suggestion, the dimness turned out to be a floating-point precision issue. I was able to fix it by using doubles rather than floats in triangle intersection. After weeks and weeks of dealing with this dimness, it is a great relief to finally have properly lit scenes!


The purpose of this project is to extend the raytracer I developed in the previous project, adding new material-light interactions, environment lighting, and even real-time rendering. The first two parts deal with reflective and refractive materials, using experimental data and transformations to best simulate their appearance. The next part enables 3D scenes (especially unlit ones) to be illuminated by the brightness of a background image rather than a pre-determined area or point light. I also explore simulating light exposure through a camera aperture. Finally, and separately from the main renderer, there is an in-browser real-time renderer using Node.js and OpenGL libraries, available here.

Part 1: Mirror and Glass Materials

The mirror BSDF uses pure specular reflection, while the glass BSDF probabilistically mixes reflection and refraction, with the split determined by the reflectance coefficient R from Schlick's approximation. Reflection itself is a simple redirection of the ray about the surface normal, while refraction bends the ray according to Snell's law.
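The reflect/refract logic above can be sketched as follows. This is a minimal sketch, assuming the class skeleton's convention of a local shading frame with the surface normal at (0, 0, 1); the names `reflect`, `refract`, and `schlick` are illustrative, not the skeleton's exact signatures:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Reflect about the surface normal, assuming the local shading frame
// where the normal is (0, 0, 1).
Vec3 reflect(const Vec3& wo) { return { -wo.x, -wo.y, wo.z }; }

// Refract using Snell's law; returns false on total internal reflection.
// ior is the material's index of refraction.
bool refract(const Vec3& wo, Vec3* wi, double ior) {
    // Entering the surface if wo.z > 0 (ray arrives from outside).
    double eta = (wo.z > 0) ? 1.0 / ior : ior;
    double sin2_wi = eta * eta * (1.0 - wo.z * wo.z);
    if (sin2_wi > 1.0) return false;  // total internal reflection
    double cos_wi = std::sqrt(1.0 - sin2_wi);
    *wi = { -eta * wo.x, -eta * wo.y, (wo.z > 0) ? -cos_wi : cos_wi };
    return true;
}

// Schlick's approximation of the Fresnel reflectance R, used to choose
// probabilistically between reflection and refraction.
double schlick(double cos_theta, double ior) {
    double r0 = (1.0 - ior) / (1.0 + ior);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - std::fabs(cos_theta), 5.0);
}
```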

The following images were generated using mirror and glass bsdfs at 1028 samples per pixel and 4 samples per light.

Max ray depth = 0
Max ray depth = 1
Max ray depth = 2
Max ray depth = 3
Max ray depth = 4
Max ray depth = 5
Max ray depth = 100

At each step, you can observe changes:

  1. 0 bounces only includes the original light source.
  2. 1 bounce includes all direct lighting (so the mirror and glass spheres remain black).
  3. 2 bounces includes reflections off objects and most indirect lighting (the mirror now works and the glass is moderately reflective).
  4. 3 bounces includes the transparency of the glass (a ray needs two bounces to enter and exit it).
  5. 4 bounces includes the light from the source bouncing through the glass onto the floor.
  6. 5 bounces includes a small amount of light from the source refracting onto the blue wall.
  7. 100 bounces includes no new features, since 5 bounces is the minimum needed for all of the effects above. It is slightly noisier because of the longer paths.

One of the most difficult aspects of this part was getting rid of noise. In the end, this required normalizing the incoming vectors calculated in many of the BSDFs.

Part 2: Microfacet Material

Microfacet materials are implemented using data from real measurements. The final reflectance returned by the microfacet BSDF is the product of the shadowing-masking term and the following, normalized by the cosines of the incoming and outgoing directions:

  1. Fresnel term: the reflectance at the interface between two materials (usually air and a conductor)
  2. Normal distribution term: describes how the microfacet normals are distributed, evaluated at the half vector between the incoming and outgoing rays
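As a sketch of how these terms combine, here is the Beckmann normal distribution and the standard microfacet assembly F*G*D / (4 cos(wi) cos(wo)). The Fresnel and shadowing-masking terms are passed in precomputed to keep this short; the function names are my own, not the skeleton's:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Beckmann normal distribution function, evaluated at the half vector's
// angle theta_h; alpha is the roughness parameter.
double beckmann_D(double cos_theta_h, double alpha) {
    double cos2 = cos_theta_h * cos_theta_h;
    double tan2 = (1.0 - cos2) / cos2;
    double a2 = alpha * alpha;
    return std::exp(-tan2 / a2) / (PI * a2 * cos2 * cos2);
}

// Microfacet BRDF: product of Fresnel (F), shadowing-masking (G), and
// the NDF (D), normalized by the cosines of the two directions.
double microfacet_f(double F, double G, double D,
                    double cos_wi, double cos_wo) {
    return (F * G * D) / (4.0 * cos_wi * cos_wo);
}
```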

Below are renderings of the dragon at 256 samples per pixel, 1 sample per light, and max bounce 7, comparing alpha factors for glossiness:

alpha = 0.5
alpha = 0.25
alpha = 0.05
alpha = 0.005

As expected, as alpha grows smaller, the dragon's surface grows glossier, since alpha represents the material's roughness. Note that noise does naturally increase with reflectivity, and after a lot of troubleshooting, I couldn't fully eliminate the pronounced white noise for very small values of alpha.

When sampling the incoming direction, we also importance-sample the Beckmann distribution so the microfacet surfaces converge faster to their proper appearance. Below is a comparison with and without Beckmann importance sampling, at the low sampling rate of 64 samples/pixel. As you can observe, importance sampling causes the bronze appearance to converge much faster:
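The Beckmann importance sampling step can be sketched via the inversion method: sample the half vector's angles from two uniform random numbers, then build the half vector in the local frame. This is a sketch under the z-up local-frame assumption; the name `sample_beckmann_h` is mine:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Importance-sample a microfacet half vector h from the Beckmann
// distribution using the inversion method. r1, r2 are uniform in [0, 1).
// h is returned in the local frame where the macro normal is (0, 0, 1).
void sample_beckmann_h(double alpha, double r1, double r2,
                       double* h_x, double* h_y, double* h_z) {
    double theta = std::atan(std::sqrt(-alpha * alpha * std::log(1.0 - r1)));
    double phi = 2.0 * PI * r2;
    *h_x = std::sin(theta) * std::cos(phi);
    *h_y = std::sin(theta) * std::sin(phi);
    *h_z = std::cos(theta);
}
```

The incoming direction is then the outgoing direction reflected about this sampled half vector, with the sample weighted by the corresponding PDF.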

Importance (Beckmann)

Finally, using the reflectance values from experimental data, we can simulate different types of metal:

Cobalt, Glossy
Diesel Soot, Diffuse

Part 3: Environment Map Lights

In this part, instead of receiving illumination from an area or point light, the mesh is illuminated by the contents of a photograph, with the brighter portions of the image given precedence in lighting the object. A probability distribution is built over the image's pixels, giving more weight to bright light sources. The environment map used is doge.exr, and its probability distribution is shown below.
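Constructing that distribution can be sketched as follows: weight each texel by its luminance times sin(theta), since rows near the poles of the sphere cover less solid angle, then normalize. This is a sketch with an assumed function name and a flat luminance array:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Build the normalized 2D PDF over environment-map texels:
// p(x, y) proportional to luminance(x, y) * sin(theta), where the sin
// factor accounts for rows near the poles covering less solid angle.
std::vector<double> build_env_pdf(const std::vector<double>& luminance,
                                  int w, int h) {
    std::vector<double> pdf(w * h);
    double total = 0.0;
    for (int y = 0; y < h; ++y) {
        double sin_theta = std::sin(PI * (y + 0.5) / h);
        for (int x = 0; x < w; ++x) {
            pdf[y * w + x] = luminance[y * w + x] * sin_theta;
            total += pdf[y * w + x];
        }
    }
    for (double& p : pdf) p /= total;  // normalize so the PDF sums to 1
    return pdf;
}
```

Sampling then proceeds by inverting the marginal distribution over rows and the conditional distribution along each row.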

Original image
PDF image

Let's compare what happens when we use the PDF (importance) vs. when we choose a random ray (hemisphere) on a Lambertian bunny. These samples are taken at 4 per pixel and 64 per light.


There seems to be a slight difference in the noise levels, though not a considerable one, which is roughly what I expected. On the other hand, the noise levels do seem uniformly a little higher than I would like, but I wasn't able to get a cleaner convergence at 4 samples/pixel. And now on a microfacet bronze bunny:


Interestingly, the noise difference on bronze is more noticeable than for the Lambertian. This is probably due to the higher reflectivity of microfacets -- for example, view the bunnies in Part 2, all taken at low sampling rates.

I tried a higher sampling rate on the bunny to see if the noise goes away.

Copper bunny sampled at 128 samples/pixel.

Looks like it did, at the cost of some artifacts in the shadows, likely due to random readings from the bright sky in doge.exr throwing off the brightness.

Part 4: Depth of Field

The purpose of this part is to simulate the effect of a thin lens to get an artistic, camera-esque effect. The default will be set to a focal depth of 1.7 and an aperture size of 0.0883883 per Piazza, with a sampling rate of 256 samples per pixel, 4 samples per light, and a max of 5 ray bounces.

The following series of images change with focal depth:

depth = 1.4
depth = 1.7
depth = 2.0
depth = 2.3

These instead vary the aperture size.

size = 0.02
size = 0.08
size = 0.12
size = 0.18

Notice how as the aperture size increases, the depth of field narrows, so that regions away from the focal plane become much blurrier than in the original. The smallest chosen aperture size appears quite sharp (though going even smaller might cause blur issues in an actual camera due to diffraction). Increasing the focal distance, of course, makes objects that are further in the scene sharper than their surroundings.
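The thin-lens camera ray generation behind this effect can be sketched as: jitter the ray origin across the aperture disk and aim the ray at the point where the original pinhole ray pierces the plane of focus, so that plane stays sharp while everything else blurs. A sketch in camera space (sensor plane at z = -1); the struct and function names are assumptions:

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Thin-lens camera ray. (px, py) gives the pinhole ray's direction
// (px, py, -1); r1, r2 are uniform samples in [0, 1).
Ray thin_lens_ray(double px, double py,
                  double lens_radius, double focal_dist,
                  double r1, double r2) {
    // Sample a point on the lens disk (uniform over area).
    double lx = lens_radius * std::sqrt(r1) * std::cos(2.0 * PI * r2);
    double ly = lens_radius * std::sqrt(r1) * std::sin(2.0 * PI * r2);
    // Point of focus along the original pinhole ray.
    double fx = px * focal_dist, fy = py * focal_dist, fz = -focal_dist;
    Ray r = { lx, ly, 0.0, fx - lx, fy - ly, fz };
    double len = std::sqrt(r.dx * r.dx + r.dy * r.dy + r.dz * r.dz);
    r.dx /= len; r.dy /= len; r.dz /= len;  // normalize the direction
    return r;
}
```

With lens_radius = 0 this degenerates to the ordinary pinhole camera, which is a handy sanity check.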

For funzies, I rendered my cobalt microfacet at 4096 samples/pixel and 64 samples/light, and this is how it turned out:

Dragons are honestly the coolest.

Part 5: GLSL Shaders

Before anything else, check out the interactive OpenGL page!

All of the following part is rendered in real time in the browser using GLSL shaders, served via Node.js. The shader program computes values very efficiently in parallel, taking advantage of GPU power. To provide the program with its algorithm, I used two kinds of files: vertex shaders, which transform each vertex and pass along per-vertex attributes, and fragment shaders, which compute the final color of each pixel.

The Blinn-Phong shading model uses three components to achieve a plasticky appearance: ambient, diffuse, and specular lighting. Ambient lighting is global for the object and prevents shadows from being too dark. Diffuse, or Lambertian, lighting simulates the natural bounce of light off of a non-glossy material directly from the light source. Specular lighting results from the reflection of light towards the viewer (thus, it is the only component that changes as the viewpoint changes, as you can observe in the interactive view).

Ambient only.
Lambertian only.
Specular only.
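The three components above sum as follows. This is a scalar sketch (vector quantities reduced to their dot products, written in C++ rather than GLSL for brevity); ka, kd, ks, and the exponent p are the usual material parameter names, assumed here:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Scalar Blinn-Phong: total shading is the sum of ambient, diffuse,
// and specular terms. irradiance is the light intensity after the
// inverse-square falloff (I / r^2); n_dot_l and n_dot_h are the
// cosines with the light direction and the half vector.
double blinn_phong(double ka, double Ia,
                   double kd, double ks, double p,
                   double irradiance,
                   double n_dot_l, double n_dot_h) {
    double ambient  = ka * Ia;
    double diffuse  = kd * irradiance * std::max(0.0, n_dot_l);
    double specular = ks * irradiance * std::pow(std::max(0.0, n_dot_h), p);
    return ambient + diffuse + specular;
}
```

Zeroing out any two of the three coefficients reproduces the single-component renders shown above.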

As for texture mapping, I used a CC0 Creative Commons stock photo of yellow and orange triangles and mapped it onto a sphere to produce a beachball-like effect.

Beachball texture-map.

Bump shading and displacement had an interesting effect on the shape of the sphere. Because the weighting function h uses the red channel of the texture to determine intensity, the red triangles were not included as much in the bump map. Displacing this texture by a factor of 2 produced a "scaly" texture.
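The bump-mapped normal can be sketched via finite differences of the height function h (here, the texture's red channel): tilt the local normal (0, 0, 1) by the height gradient, scaled by the height and normal factors. A sketch with assumed parameter names k_h and k_n:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Bump-mapped normal in tangent space: finite differences of the
// height function h tilt the flat local normal (0, 0, 1).
// du, dv are one-texel offsets; k_h and k_n scale the effect.
void bump_normal(const std::function<double(double, double)>& h,
                 double u, double v, double du, double dv,
                 double k_h, double k_n,
                 double* nx, double* ny, double* nz) {
    double dU = (h(u + du, v) - h(u, v)) * k_h * k_n;
    double dV = (h(u, v + dv) - h(u, v)) * k_h * k_n;
    double len = std::sqrt(dU * dU + dV * dV + 1.0);
    // Normalized tangent-space normal; a flat h leaves (0, 0, 1).
    *nx = -dU / len; *ny = -dV / len; *nz = 1.0 / len;
}
```

Displacement mapping additionally moves each vertex along its normal by the same h, which is what produced the scaly look.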

Instead, I used the stock displacement of the ABC characters, and below are the results:


Upon coarsening the mesh by reducing the number of primitives, the result becomes blurrier and more jagged. For example, with the orange triangle ball, the triangles are less discernible, and horizontal striation artifacts appear, probably from aliasing. Increasing the number of mesh elements makes the triangles smoother, but at the cost of performance. In fact, with too many mesh elements, some artifacts appear in the displacement, probably as a result of the texture-to-geometry ratio.

For the last part, I was able to create my own "orange light" effect by taking the hexadecimal value for orange, 0xFF7630, and converting that into a 3D GLSL vector. I multiplied this vector component-wise with the ambient and diffuse lighting of the Blinn-Phong model (not the specular, since I figured pure white highlights would make for a nice contrast) and used this shading algorithm on the cube below.

Orange is a creative color.