
Pathtracing on CPU

October 2011


The data used to demonstrate this software were taken from The Princeton Shape Benchmark and should only be used for academic purposes unless permission is obtained from Princeton University to do otherwise.


"Beam" - A Pathtracer
  • Diffuse, specular reflective, specular refractive and glossy surfaces
  • Depth-of-field, arbitrary camera configuration
  • Instanced triangle meshes organized into an acceleration structure
  • Texturing with linear or bicubic filtering
  • Cubemaps for scene environment
  • Heightmaps and synthesised textures
  • Support for OBJ, OFF, MTL, and PPM file formats
  • Multithreaded with OpenMP
After writing the GPU-based brute-force (but highly parallel) pathtracer listed elsewhere on this site ("Pathtracing on GPU"), I wrote a CPU-based one with greater support for complex scenes and, most importantly, support for arbitrary triangle meshes. It traces rays against instanced triangle meshes under any affine transformation. Instances are partitioned into a BIH (Bounding Interval Hierarchy) tree, and a KD-tree is constructed for the triangles themselves, one per mesh. A superset of the material parameters from my GPU implementation of pathtracing is provided; in fact they have turned out to be more numerous than I had planned, since mid-way through developing this pathtracer I started reading Physically Based Rendering. This is an excellent and informative book, and I could certainly have done things a lot better had I read it before embarking on this project. For that reason, rather than make further improvements to Beam where they are needed, I think I will begin afresh in future with a completely new renderer.
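As an illustration of the two-level scheme, a world-space ray can be mapped into a mesh's local space using the instance's precomputed inverse transform before the per-mesh KD-tree is traversed. This is only a sketch - the Vec3, Affine, and Ray types here are hypothetical stand-ins, not Beam's actual classes:

```cpp
#include <cmath>

// Minimal 3D vector and 3x4 affine transform; hypothetical stand-ins
// for whatever Beam uses internally.
struct Vec3 { double x, y, z; };

struct Affine
{
    double m[3][4]; // rows of a 3x4 matrix: rotation/scale plus translation

    Vec3 transformPoint(const Vec3 &p) const
    {
        return {
            m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
            m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
            m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3]
        };
    }

    // Directions ignore the translation column.
    Vec3 transformDirection(const Vec3 &d) const
    {
        return {
            m[0][0]*d.x + m[0][1]*d.y + m[0][2]*d.z,
            m[1][0]*d.x + m[1][1]*d.y + m[1][2]*d.z,
            m[2][0]*d.x + m[2][1]*d.y + m[2][2]*d.z
        };
    }
};

struct Ray { Vec3 origin, direction; };

// Map a world-space ray into an instance's local space using the
// instance's precomputed inverse transform; the per-mesh KD-tree is
// then traversed entirely in local space.
Ray toLocalSpace(const Ray &worldRay, const Affine &inverseTransform)
{
    return { inverseTransform.transformPoint(worldRay.origin),
             inverseTransform.transformDirection(worldRay.direction) };
}
```

One convenient property: if the transformed direction is not renormalised, hit distances found in local space remain directly comparable with world-space distances from other instances.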

Extended lightsources, refraction, and diffuse inter-reflection

Probably the best thing about Beam is the explicit light sampling, which alleviates the slow convergence in the presence of small emissive surfaces that is a well-known characteristic of brute-force pathtracing. Importance sampling is applied for these direct light sources, and is also employed in the lambertian reflection of rays. The scene format is plain text and provides what I hope is an intuitive interface with easily nestable transformations (similar to the OpenGL matrix stack). A number of commandline switches provide further interesting and useful features.
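For the lambertian case, the standard importance-sampling scheme is cosine-weighted hemisphere sampling; here is a minimal sketch (not necessarily the exact sampling code Beam uses):

```cpp
#include <cmath>

struct Dir { double x, y, z; };

const double kPi = 3.14159265358979323846;

// Cosine-weighted hemisphere sampling: directions near the surface
// normal, which contribute most to lambertian reflection, are sampled
// more often. u1 and u2 are uniform random numbers in [0, 1). The
// returned direction is in a local frame with the normal along +Z.
Dir sampleCosineHemisphere(double u1, double u2)
{
    // Sample a point on the unit disk, then project it up onto the
    // hemisphere ("Malley's method"); the resulting directions have a
    // density proportional to cos(theta).
    const double r   = std::sqrt(u1);
    const double phi = 2.0 * kPi * u2;
    const double z   = std::sqrt(std::fmax(0.0, 1.0 - u1));
    return { r * std::cos(phi), r * std::sin(phi), z };
}

// The pdf is cos(theta) / pi, so the cosine factor in the rendering
// equation cancels and each diffuse bounce is weighted by albedo alone.
double cosineHemispherePdf(double cosTheta)
{
    return cosTheta / kPi;
}
```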

The core raytracing algorithm originally used mailboxing to improve efficiency by skipping repeated tests of rays against triangles. However, this complicated the promotion to multi-threaded rendering, so mailboxing was dropped completely to allow a simple implementation and unlimited scalability with increasing processor cores. Mailboxing is the technique of assigning a unique ID to each ray and allocating enough storage with each triangle to store a copy of one ray ID. An intersection test is only performed if the ray ID does not match the stored one, and once a test is done a copy is stored. This only makes sense where a triangle may be referenced by more than one voxel of the acceleration structure (as is typically the case within a KD-tree).

Here is a pseudo-code example of mailboxing in action:

if(triangle.mailbox_ray_id != ray_id)
{
    // Only test if this ray has not yet been tested against this triangle
    testRayVersusTriangle(ray, triangle);
    // Record the ray ID so the test is not repeated from another voxel
    triangle.mailbox_ray_id = ray_id;
}

This lets us know when a specific test has already been performed, while also saving us from having to visit every triangle to reset an 'already tested' flag.

I started working on this some time before I started reading PBR. Having now read most of the book, I've realised some of the mistakes I made with Beam. I would like to produce a completely new version of Beam which would offer tangent-space normal vector textures to increase detail without increasing geometric complexity, a much better integration method (bidirectional pathtracing, for example), and HDR formats (OpenEXR looks very useful). It would be interesting to allow custom shaders for surface appearance; these could be implemented as dynamically-linked modules - an approach already suggested and tested in Ingo Wald's PhD thesis (linked below), which proposed the SaarCOR realtime global illumination engine. An alternative would be to allow source-text shader files which are parsed at runtime. JIT compilation is suitable here, especially if the shading language is designed so that parallelisation through SIMD instructions is possible. Alternatively, the shaders could be interpreted, but this could be slow depending on the granularity of the shading language. At a basic level, this is a compromise between pre-compiled (fast) and extensible (slow).

Depth-of-field and sharp reflections

Caustics are producible with Beam, although convergence is extremely slow because caustics are only produced by emissive materials, and NOT by explicit light sources. I believe this would not be a problem if bidirectional pathtracing were used, or the newer energy redistribution pathtracing, which seems to behave very well in the presence of LSDE paths (here I'm using the regexp style of path classification - see Paul S. Heckbert's "Adaptive Radiosity Textures for Bidirectional Ray Tracing" for more information).

I created a video using this tool, a Perl script, a bash shell on my laptop, and mencoder. At one point during the animation's generation, my laptop ground to a halt from what I can only assume was a depletion of resources. However, I was able to resume processing from the last completed frame, because the Perl script created each frame as a separate image file from a separate instance of the Beam executable. This made me very glad that I had not opted to link a video encoding library, which could have meant restarting the entire process from the first frame. Perhaps this is an example of doing things the Unix way.

I have also begun work on a Python module which would bind Beam natively, but it has a pretty low priority at the time of writing.

http://openmp.org/wp/ - Supported by most modern compilers, OpenMP is the multithreading API which was used to make Beam scale on multicore processors.

http://www.pbrt.org/ - Brilliant fully-featured physically-based rendering system. Luxrender is an open-source project which began as a branch of the pbrt codebase.

http://www.sci.utah.edu/~wald/PhD/index.html - Ingo Wald. Fascinating discussion on adapting global illumination to realtime application. Also introduces OpenRT, which is an API designed to be similar to OpenGL with the purpose being to ease a transition from forward rendering (such as rasterization, which is still prevalent in videogames at the time of writing) to raytracing.



An Ugly Problem, and a Performance Deficiency

Some of the images produced with Beam appear to have very bright specular highlights on glossy surfaces. This may be an issue of BRDF design - it is possible to create physically implausible reflectance functions which do not conserve energy. This can be considered an oversight of the interface design, and is easily accounted for when writing scene files. Something that may be nice to add to my future efforts is a testing suite for BRDFs - reciprocity and stability being important properties to ensure.

Shown here are high amounts of noise produced by LSDS?E paths

The multithreading in Beam could be better: the screen is divided into one horizontal tile per thread, and when a thread completes its assigned tile it does not take on more work. This leaves most threads idle towards the end of the render. A better approach would be to divide the screen into more tiles than there are threads and have each thread take an unallocated tile whenever it finishes one.
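The improved scheme maps naturally onto OpenMP's dynamic loop scheduling. A sketch, with renderTile() standing in for the real per-pixel sampling loop:

```cpp
#include <vector>

// Stand-in for the real per-pixel sampling loop: fills one tile.
void renderTile(int x0, int y0, int x1, int y1, int width,
                std::vector<float> &pixels)
{
    for(int y = y0; y < y1; ++y)
        for(int x = x0; x < x1; ++x)
            pixels[y * width + x] = 1.0f; // placeholder "rendered" value
}

// Divide the screen into many small tiles and hand them out dynamically:
// a thread that finishes one tile immediately takes the next unallocated
// one, so no thread idles until almost all tiles are done.
void renderImage(int width, int height, int tileSize,
                 std::vector<float> &pixels)
{
    const int tilesX = (width  + tileSize - 1) / tileSize;
    const int tilesY = (height + tileSize - 1) / tileSize;
    const int tileCount = tilesX * tilesY;

    // schedule(dynamic) assigns one tile at a time on demand, unlike a
    // static split into one horizontal strip per thread. The pragma is
    // ignored (and the loop runs serially) without OpenMP enabled.
    #pragma omp parallel for schedule(dynamic)
    for(int t = 0; t < tileCount; ++t)
    {
        const int x0 = (t % tilesX) * tileSize;
        const int y0 = (t / tilesX) * tileSize;
        const int x1 = (x0 + tileSize < width)  ? x0 + tileSize : width;
        const int y1 = (y0 + tileSize < height) ? y0 + tileSize : height;
        renderTile(x0, y0, x1, y1, width, pixels);
    }
}
```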

Antialiasing is performed; however, only one type of filter is available - the box filter - and it is currently not adjustable.


How to use Beam
What follows is all of the information required to use Beam to create images from a scene description using any meshes and images you want (so long as they are in a supported format!).

Commandline switches:

-s  Sets the sample count (required)
-h  Sets the viewport height (required)
-w  Sets the viewport width (required)
-o  Specifies the output file name (required)
-i  Specifies the input file name (required)
-r  Specifies the range of pixel components in the image
-a  Generates an alpha mask image for primary rays. Useful for compositing.
-p  Specifies a maximum number of passes to make. Good for automation.
-m  Specifies a (scalar) factor to apply to the image's colours before clamping and quantisation
-q  Suppresses the printing of continuous progress update messages

For example:

Beam -i my_scene.txt -o my_image.ppm -w 800 -h 600 -s 16 -a my_image_alpha.ppm -p 10 -m 2

This will generate an 800x600 image my_image.ppm from the scene description my_scene.txt. Each pixel will be sampled 16 times per pass, and 10 passes will be made (each pass is averaged, and the entire image is written ONLY immediately after the end of each pass). The colour values of this image will be multiplied by 2. In addition, an image my_image_alpha.ppm will be generated containing the anti-aliased alpha mask.

You can select different types of output image by changing the filename extension of the output image file. PPM is supported, and a text file containing float values in ASCII is obtained by using a TXT extension. This does not apply to the alpha mask output, which is always in PPM format.


Information on the plaintext scene file format:
  • Command arguments must be separated by a comma
  • Anywhere a colour is accepted, there are keywords which can be used: "white", "black"
  • Anywhere a positional vector is accepted, there are keywords which can be used: "origin" is the point { 0.0 0.0 0.0 }
  • Indentation may be used.
  • To indicate the start of a single-line comment, use the '#' character.
  • 3D arguments such as colours and positions can either be one of the appropriate keywords, or 3 decimal values inside curly braces, for example { 1.0 0.5 0.5 } makes a reddish colour.
  • Angles are always given in radians.
  • Names cannot have spaces in them, but filenames can be delimited by double quotes (speech marks) which allows them to contain spaces.

The following commands may be used in the scene file:


point_light position, colour
Creates a point light and positions it according to the current matrix. It is translated in local space by position.

quad_light position, colour, width, height
Creates a light-emitting quadrilateral with dimensions width by height and positions and orientates it according to the current matrix. It is translated in local space by position. Light is only emitted from one side, and this is the side that points towards negative Z (in the light's local space).

rotate angle, axis
Rotates about the given 3D vector axis by angle radians.

camera distance, radius, angle
Sets the camera's focal distance (any object at exactly this distance from the viewing plane will be in perfect focus), circle of confusion, and field-of-view. A higher radius will increase the amount of blur applied to objects beyond or behind the focal distance.
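The usual thin-lens model behind this kind of depth of field can be sketched as follows (an illustration only, not Beam's actual camera code; camera space is assumed to have the camera at the origin looking down +Z):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 normalise(V3 v)
{
    const double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// Thin-lens depth of field: jitter the ray origin over a lens disk of
// the given radius and aim every jittered ray at the same point on the
// focal plane. Points at exactly the focal distance are hit identically
// by all lens samples (perfect focus); nearer or farther points smear.
// u1, u2 are uniform random numbers in [0, 1).
void applyDepthOfField(V3 &origin, V3 &direction,
                       double focalDistance, double lensRadius,
                       double u1, double u2)
{
    // Point this pinhole ray hits on the focal plane z = focalDistance.
    const double t = focalDistance / direction.z;
    const V3 focus = { origin.x + t * direction.x,
                       origin.y + t * direction.y,
                       origin.z + t * direction.z };

    // Jitter the origin over a disk of the lens radius.
    const double kPi = 3.14159265358979323846;
    const double r   = lensRadius * std::sqrt(u1);
    const double phi = 2.0 * kPi * u2;
    origin.x += r * std::cos(phi);
    origin.y += r * std::sin(phi);

    // All lens samples converge on the same focal-plane point.
    direction = normalise(sub(focus, origin));
}
```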

environment colour
Sets the colour to be used for rays which escape the scene. This will show up in reflections, or in areas of the rendered image where no object is present. If an environment cube is set, then this colour modulates the texels from that cube.

translate vector
Appends a translation by the given vector to the current matrix.

scale vector
Appends a scaling by the given factors to the current matrix.

material_refractive name, surface_colour, specular_colour, reflectivity, glossiness, inner, outer
Creates a refractive material. Inner is the refractive index inside the mesh, and outer is the refractive index outside the mesh. Windings must match the notions of inside and outside. NULL is a reserved material name.

material name, diffuse_colour, specular_colour, emissive_colour, reflectivity, glossiness
Creates a non-refractive material with lambertian reflectance and a specular / jittered specular component. NULL is a reserved material name.

push
Push the current matrix onto the matrix stack.

pop
Restore a matrix state from the matrix stack and remove it from the stack. The current matrix is replaced by the one restored from the stack.
 
identity
Replace the current matrix with the identity matrix, which means that there is effectively no transformation applied.

texture name, filtering_mode, filename
Loads the specified texture file and assigns it the given name. The available filtering modes are "cubic", "linear", and "nearest". Texture files must be in 3-channel binary (raw) PPM format.

environment_cube <6 textures>
Creates a cube map with the 6 referenced textures as follows: 1st - negative x, 2nd - positive x, 3rd - negative y, 4th - positive y, 5th - negative z, 6th - positive z. It is oriented according to the current matrix. Note that the texture references must be names and not filenames.
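For reference, the usual way to sample a cube map is to pick the face by the dominant axis of the direction vector and derive (u, v) from the other two components. A sketch using the face order above - though the exact orientation conventions here are an assumption, not necessarily Beam's:

```cpp
#include <cmath>

// Map a direction vector to a cube-map face index and (u, v) in [0, 1].
// Faces are numbered in the order the environment_cube command takes
// them: 0 -x, 1 +x, 2 -y, 3 +y, 4 -z, 5 +z.
struct CubeSample { int face; double u, v; };

CubeSample directionToCubeFace(double x, double y, double z)
{
    const double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    CubeSample s;
    if(ax >= ay && ax >= az)        // dominant X axis
    {
        s.face = (x < 0.0) ? 0 : 1;
        s.u = 0.5 * (z / ax + 1.0);
        s.v = 0.5 * (y / ax + 1.0);
    }
    else if(ay >= az)               // dominant Y axis
    {
        s.face = (y < 0.0) ? 2 : 3;
        s.u = 0.5 * (x / ay + 1.0);
        s.v = 0.5 * (z / ay + 1.0);
    }
    else                            // dominant Z axis
    {
        s.face = (z < 0.0) ? 4 : 5;
        s.u = 0.5 * (x / az + 1.0);
        s.v = 0.5 * (y / az + 1.0);
    }
    return s;
}
```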

mesh name, filename
Loads the file and assigns it the given name.

instance mesh, material
Creates an instance of the given mesh with the given material applied. This instance is given the orientation, position, and scale represented by the current matrix. If material is NULL then the mesh's internally-configured materials will be used. There is a built-in mesh that can be used: "cube", which is a cube with a diagonal from { -1.0 -1.0 -1.0 } to { +1.0 +1.0 +1.0 }.

compose_texture name, filtering_mode, width, height, operation, source0, source1
Creates a new texture by composing two others. The new texture has the given dimensions and filtering mode. Texels are produced by applying the binary (dyadic) operation to the corresponding texels from each of the source textures. The following operators are available: '+' (sum), '*' (product), '-' (difference). The operation argument must be a single character and must not contain quotes. See texture for a list of available filtering modes.

set_material_texture material, type, texture
Sets a texture for a material. The type can be "diffuse", "specular", or "emission".
  
mesh_heightfield name, texture
Creates a mesh which is a grid of vertices, the X and Z coordinates being defined by the grid position, and the Y coordinate being sampled from a texture which is mapped over the grid. Texture coordinates are also generated, which match the coordinates used to sample the heightfield texture. Note that the red channel R is used as the height value.
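The grid generation this command performs can be sketched roughly like this (sampleHeight() is a hypothetical stand-in for Beam's texture sampling, reading the red channel):

```cpp
#include <vector>

// Build a heightfield grid: vertex (i, j) sits at X = i, Z = j, with Y
// sampled from a height texture mapped over the grid. The same (u, v)
// used to sample the height is stored as the vertex texture coordinate.
struct Vertex { float x, y, z, u, v; };

std::vector<Vertex> buildHeightfield(int gridW, int gridH,
                                     float (*sampleHeight)(float u, float v))
{
    std::vector<Vertex> vertices;
    vertices.reserve(gridW * gridH);
    for(int j = 0; j < gridH; ++j)
    {
        for(int i = 0; i < gridW; ++i)
        {
            // Texture coordinates span [0, 1] across the grid.
            const float u = float(i) / float(gridW - 1);
            const float v = float(j) / float(gridH - 1);
            vertices.push_back({ float(i), sampleHeight(u, v), float(j), u, v });
        }
    }
    return vertices;
}
```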

If anything else appears on a line, the scene file is considered invalid and will not be loaded.
If you are interested in finding out more about how these commands are used or how they work, please see either the example scenes or SceneFile.cpp.
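To tie the commands together, here is a small hypothetical example scene (the texture filename and the argument values are made up for illustration; only "cube" is a built-in mesh):

```
# A small example scene: one textured cube lit by a quad light.
texture     wood, linear, "textures/wood.ppm"
material    wood_mat, white, { 0.2 0.2 0.2 }, black, 0.1, 0.5
set_material_texture wood_mat, diffuse, wood

environment { 0.1 0.1 0.2 }
camera      5.0, 0.05, 0.7

quad_light  { 0.0 3.0 0.0 }, white, 2.0, 2.0

push
    translate { 0.0 0.0 5.0 }
    rotate    0.5, { 0.0 1.0 0.0 }
    instance  cube, wood_mat
pop
```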

Download (23/Aug/2011) - Includes source, Windows 32-bit executable, example scenes, and a couple of Perl scripts which I used to produce some of the example scene files.
