Dennis Liang and Faith Rivera
For our final project, we want to add volumetric rendering and photon mapping to our existing path tracer. Within volumetric rendering, some of the effects we want to achieve or explore are fog/smoke and rendering a sunset/sky. With photon mapping, we aim to achieve effects such as caustics and reflections.
We first plan on implementing volumetric rendering. From reading the slides and papers on volumetric rendering, we understand that it involves ray marching instead of only checking the closest intersection. We need to perform ray marching because we treat clouds/fog differently from solid objects: light can hit fog and still pass through. There are also two types of volumes, homogeneous and heterogeneous, and we plan to render scenes with both kinds of substances (e.g., fog or smoke). Finally, we would try to add implementations for atmospheric effects. If we are able to develop these aspects individually, we hope to render a more complex scene that combines them.
To go more in depth on how we want to implement volumetric rendering, we plan to start with single scattering. Single scattering is easier to implement than multiple scattering because the light interacts with the medium only once before reaching the viewer. However, not all volumetric objects exhibit single-scattering behavior. Clouds exhibit multiple-scattering behavior, in which the light interacts with the medium many times before being absorbed. This scattering behavior relates to whether a medium has a high or low albedo.
While researching volumetric rendering, we also came across the delta-tracking method in the 2017 SIGGRAPH course by Pixar. Delta tracking is said to improve on ray marching, but at the cost of adding complexity to our code. We plan to start with ray marching (as explained on reference 7) as a beginning milestone for our exploration of the topic.
Afterwards, if we have time, we plan on implementing caustics through photon mapping. To do so, we need to implement a kd-tree to store the photons. Then we can attempt different effects such as caustics, and eventually build a more complicated scene with both volumetric rendering and caustics. Our implementation would first construct a photon map in a first pass. In a second pass, we would use the photon map to estimate the radiance at every pixel of the output image, using the rendering equation to calculate surface radiance and performing further ray-tracing calculations.
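To make that plan concrete, here is a minimal sketch of the two-pass structure we have in mind. Everything below is hypothetical pseudocode against our own types (Vec3, Scene); PhotonMap, tracePhoton, and kNearest are placeholder names, not an existing library's API.

```cpp
#include <cmath>
#include <vector>

// Hypothetical two-pass photon mapping sketch; Vec3, Scene, Light, and
// PhotonMap are placeholders for our own renderer types.
struct Photon {
    Vec3 position;   // where the photon was deposited
    Vec3 power;      // flux carried by the photon
    Vec3 direction;  // incoming direction at the deposit point
};

// Pass 1: shoot photons from the lights; tracePhoton() bounces each one
// through the scene and stores diffuse hits in the map. The map is then
// balanced into a kd-tree for fast nearest-neighbor queries.
void buildPhotonMap(const Scene& scene, PhotonMap& map, int numPhotons) {
    for (int i = 0; i < numPhotons; ++i) {
        Photon p = scene.emitPhotonFromRandomLight();
        tracePhoton(scene, p, map);
    }
    map.balance();  // kd-tree construction over the stored photons
}

// Pass 2: at a shading point x, estimate radiance by density estimation
// over the k nearest photons: sum their flux and divide by the area of
// the disc that contains them. (BRDF factor omitted for brevity.)
Vec3 estimateRadiance(const PhotonMap& map, const Vec3& x, int k) {
    float radius2 = 0.0f;  // squared radius of the k-nearest neighborhood
    std::vector<Photon> nearest = map.kNearest(x, k, &radius2);
    Vec3 flux(0.0f, 0.0f, 0.0f);
    for (const Photon& p : nearest) flux += p.power;
    return flux / (float(M_PI) * radius2);
}
```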
Finally, we hope to combine volumetric rendering and caustics to see how rendering behaves with both effects enabled. Scenes such as an underwater lighting scene should give us an opportunity to see both effects at once.
By May 29, we had worked on developing custom scenes and rendering them with our HW 3 path tracer. We chose tinyobjloader to load OBJ files, and ended up integrating various commands into the Embree scene-loading code.
The scene loader currently parses a test file whose commands can include obj commands. From there, we use the tinyobjloader methods to parse, organize, and store "faces", which at this point are only triangles. Thus, we can now combine different OBJ files into one test file to render.
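For reference, the core of the loading path looks roughly like the snippet below. The tinyobjloader ObjReader API shown here is real; how we hand the triangles to Embree is elided and assumed.

```cpp
#define TINYOBJLOADER_IMPLEMENTATION  // in exactly one translation unit
#include "tiny_obj_loader.h"
#include <iostream>
#include <string>

// Load an OBJ file and walk its faces; with triangulation enabled,
// every face comes back as a triangle.
bool loadObj(const std::string& path) {
    tinyobj::ObjReaderConfig config;
    config.triangulate = true;  // split polygons into triangles for us

    tinyobj::ObjReader reader;
    if (!reader.ParseFromFile(path, config)) {
        std::cerr << reader.Error() << std::endl;
        return false;
    }

    const tinyobj::attrib_t& attrib = reader.GetAttrib();
    for (const tinyobj::shape_t& shape : reader.GetShapes()) {
        size_t indexOffset = 0;
        for (size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f) {
            size_t fv = shape.mesh.num_face_vertices[f];  // 3 after triangulation
            for (size_t v = 0; v < fv; ++v) {
                tinyobj::index_t idx = shape.mesh.indices[indexOffset + v];
                float vx = attrib.vertices[3 * idx.vertex_index + 0];
                float vy = attrib.vertices[3 * idx.vertex_index + 1];
                float vz = attrib.vertices[3 * idx.vertex_index + 2];
                // ...apply our scene transform, then append (vx, vy, vz)
                // to the Embree geometry buffers...
            }
            indexOffset += fv;
        }
    }
    return true;
}
```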
You can see our progress in transforming objects and rendering them into our NEE Cornell box (the progress goes from bottom to top). At the top, we have a solid cloud occluding the box and bunny.
Our next step is to implement ray marching in our path tracer to create a more realistic cloud, since a cloud is not a solid object and the ray should not terminate at the closest intersection. This involves incrementally stepping the ray's t value through the medium instead of only checking intersection values.
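As a sketch, a basic ray marcher for a homogeneous absorbing medium would look something like this (Vec3 is our own type; sigmaA is an assumed absorption coefficient):

```cpp
#include <cmath>

// Minimal ray-marching sketch: instead of terminating at the closest hit,
// step t from where the ray enters the medium (tNear) to where it exits
// (tFar), attenuating by the Beer-Lambert factor at each step.
Vec3 rayMarchAbsorption(const Vec3& background, float tNear, float tFar,
                        float sigmaA, float stepSize) {
    float transmittance = 1.0f;
    for (float t = tNear; t < tFar; t += stepSize) {
        transmittance *= std::exp(-sigmaA * stepSize);
        // in-scattering and emission would be accumulated here
    }
    return background * transmittance;
}
```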
Given our time constraints, we ended up choosing to explore volumetric rendering.
After reading through several papers and the CSE 168 website, we began by trying to implement ray marching so that we could step through our mediums. However, at office hours our TA guided us toward the CSE 272 slides and homework assignment on volumetric rendering. Interestingly, there was no specific mention of ray marching; the implementation instead uses transmittance, probabilistic distance sampling, and integration along the ray.
We were able to render an emissive sphere within a homogeneous medium, using absorption only. The gradient is very smooth and subtle, and the sphere appears brighter toward its center due to the exponential nature of transmittance.
Notice how the RGB values differ at different distances from the center of the sphere.
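That falloff comes from the Beer-Lambert law: radiance emitted at distance t through the medium arrives attenuated by exp(-sigmaA * t). With a per-channel (RGB) sigmaA, each channel decays at its own rate, which is exactly the color shift visible around the sphere. A small sketch (Vec3 is our own type):

```cpp
#include <cmath>

// Absorption-only shading: attenuate emitted radiance by the Beer-Lambert
// transmittance exp(-sigmaA * t), computed per RGB channel so the color
// shifts with distance through the medium.
Vec3 attenuate(const Vec3& emitted, const Vec3& sigmaA, float t) {
    return Vec3(emitted.x * std::exp(-sigmaA.x * t),
                emitted.y * std::exp(-sigmaA.y * t),
                emitted.z * std::exp(-sigmaA.z * t));
}
```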
Single scattering: correct scattering pattern
Single scattering: incorrect scattering pattern
Our next step was to develop a volumetric path tracer able to compute single scattering. In single scattering, we sample points along the ray shot from the camera and check the transmittance toward each light source from each point. There are known techniques in which you sample only one light source and multiply its contribution by the total number of lights, which reduces computation. However, for our volume path tracer, we decided to add the contribution of each light source individually to enhance the image realism.
Along the way, we encountered several problems, such as our single scattering appearing in the wrong location. We later discovered this was caused by storing each volumetric object without first applying its transformation.
In single scattering, the light scatters once in the volume, and we need to integrate the radiative transfer equation using Monte Carlo sampling. To sample the integral, we importance-sample the transmittance, choosing a point on the ray with probability proportional to the transmittance. We also added an option to average over multiple sampled points along the ray, since one point might be visible to several light sources while another is visible to only one. In cases where our sample lands beyond the intersected object, we know the ray hit a surface and account for the surface emission instead.
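Putting those pieces together, our single-scattering estimator looks roughly like the sketch below (monochromatic homogeneous medium; sampleLight and visibilityTransmittance are hypothetical stand-ins for our scene code). Note the convenient cancellations: the distance pdf sigmaT * exp(-sigmaT * t) cancels the transmittance to the scatter point, and the probability exp(-sigmaT * tHit) of sampling past the surface cancels the transmittance to the surface.

```cpp
#include <cmath>

// Sketch of a single-scattering estimate along one camera ray.
// sigmaS / sigmaT are the scattering / extinction coefficients;
// tHit is the distance to the closest surface intersection.
Vec3 singleScatter(const Scene& scene, const Ray& ray, float tHit,
                   float sigmaS, float sigmaT, Rng& rng) {
    // Importance-sample a distance t with pdf sigmaT * exp(-sigmaT * t),
    // i.e. proportional to transmittance along the ray.
    float t = -std::log(1.0f - rng.uniform()) / sigmaT;

    if (t >= tHit) {
        // Sampled past the object: we hit a surface, so account for its
        // emission (the event probability cancels the transmittance).
        return scene.surfaceEmission(ray, tHit);
    }

    Vec3 p = ray.origin + t * ray.direction;
    Vec3 L(0.0f, 0.0f, 0.0f);
    float phase = 1.0f / (4.0f * float(M_PI));  // isotropic phase function
    // Add every light's contribution individually (no one-light shortcut).
    for (const Light& light : scene.lights) {
        LightSample ls = sampleLight(light, p, rng);
        float trToLight = visibilityTransmittance(scene, p, ls, sigmaT);
        L += ls.radiance * (trToLight * phase * ls.geometryTerm / ls.pdf);
    }
    // Transmittance to p cancels with the distance pdf, leaving sigmaS/sigmaT.
    return L * (sigmaS / sigmaT);
}
```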
Next, we moved on to "Multiple monochromatic homogeneous volumes with absorption and multiple-scattering using only phase function sampling, no surface lighting".
We saw that this was very similar to indirect lighting from previous homeworks in the sense that you could have multiple bounces.
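With an isotropic phase function, sampling each bounce direction is just picking a uniform direction on the unit sphere; the 1/(4π) pdf cancels the phase function value, so each bounce simply scales the throughput by the single-scattering albedo sigmaS/sigmaT. A minimal sketch (Vec3 and Rng are our own types):

```cpp
#include <algorithm>
#include <cmath>

// Uniformly sample a direction on the unit sphere (isotropic phase
// function); the pdf is 1 / (4 * pi).
Vec3 sampleUniformSphere(Rng& rng) {
    float z = 1.0f - 2.0f * rng.uniform();               // cos(theta)
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * float(M_PI) * rng.uniform();
    return Vec3(r * std::cos(phi), r * std::sin(phi), z);
}
```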
To the left is a comparison between the homework tester's reference image (top) and our rendering (bottom). You can see that the smaller sphere in our rendering is also transparent.
Although the TA said we did not need to worry about medium IDs, we found them very helpful for rendering more transparent spheres. This is how we handle index-matching mediums.
The image below shows the difference when we didn't track medium IDs: we did not check whether the smaller sphere was index-matching (meaning light passes through without changing its energy or direction).
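In our bounce loop, the index-matching case reduces to a few lines. This is a sketch; the field names (interiorMediumId, exteriorMediumId) are illustrative, not our exact code.

```cpp
// Inside the bounce loop: an index-matching interface has no BSDF, so the
// ray continues straight through; only the current medium changes.
if (hit.material == nullptr) {  // index-matching surface
    bool entering = dot(ray.direction, hit.normal) < 0.0f;
    currentMediumId = entering ? hit.interiorMediumId
                               : hit.exteriorMediumId;
    ray.origin = hit.position + ray.direction * 1e-4f;  // offset past surface
    continue;  // no change to energy or direction
}
```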
At this point, we actually spent the most time trying to render custom scenes with volumetric rendering / multiple scattering. The medium properties (absorption, scattering, etc.) meant that you had to be relatively close to emissive objects to see them in our scenes.
Here are some custom images we rendered and some lessons/reflections we gained:
We then proceeded to add triangles into the scene, as we had previously only rendered spheres. This required altering our parsing commands to attach mediums to the triangle materials.
As you can see, the triangle is emissive and also scatters light, since other mediums are visible behind it. We hoped to move on to more complicated scenes now that we could render triangles.
The color wheel image is composed of 8 emissive sphere lights and 1 non-emissive transparent sphere. Around the objects, you can see scattering from the light of each sphere. The sphere in the center of the color wheel demonstrates the transparent properties: some light passes through the sphere and some is scattered. The striping inside the center sphere is likely due to our isotropic scattering, which uniformly samples a direction on the sphere.
We attempted to render the Stanford dragon scene with our volumetric path tracer, and were mostly successful. Even though the image is a bit hard to see, the dragon is partially see-through. We did have difficulty increasing the brightness to make the effects more visible. There are two sphere lights in this scene: one in the center of the image and another forming the arc in the background. The second, larger sphere is needed to adequately illuminate the scene.
We also rendered the Cornell box scene using our volumetric renderer. The only light source in the scene is the emissive quad light. Initially, this scene was very noisy; we were able to mitigate the noise by increasing the samples per pixel as well as the number of samples taken along each ray. This scene showcases a room filled with fog, where particles in the air scatter and absorb light.
We hope to expand our current renderer in several ways:
Exploring different phase functions: We want to implement phase functions other than the isotropic one, including Rayleigh scattering and Henyey-Greenstein scattering (see the sampling sketch after this list). This would change how directions are sampled, as mentioned before (i.e., the color wheel), and could let us take on more complex scenes, such as rendering the sky.
Adding NEE and MIS: Not every object is a medium, so we would merge this renderer into our NEE renderer to combine NEE sampling and allow different kinds of objects to coexist (e.g., rendering the Cornell box with a sphere medium).
Heterogeneous mediums: Continuing on, we would attempt to render heterogeneous volumes, such as smoke and fire. This means absorption and scattering can vary across different parts of the medium.
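As a starting point for the phase-function item above, Henyey-Greenstein sampling boils down to inverting a 1D CDF for cos θ. This is a sketch, not anything we have implemented yet: g in (-1, 1) controls forward versus backward scattering, and g = 0 recovers the isotropic case.

```cpp
#include <cmath>

// Sample cos(theta) from the Henyey-Greenstein phase function, where
// theta is measured from the current ray direction and u is uniform
// in [0, 1). g > 0 favors forward scattering, g < 0 backward.
float sampleHGCosTheta(float g, float u) {
    if (std::fabs(g) < 1e-3f)
        return 1.0f - 2.0f * u;  // isotropic limit
    float s = (1.0f - g * g) / (1.0f - g + 2.0f * g * u);
    return (1.0f + g * g - s * s) / (2.0f * g);
}
```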
Lecture 15 slides: https://cseweb.ucsd.edu/~viscomp/classes/cse168/sp24/lectures/168-lecture15.pdf
Photon mapping overview (WPI CS 563): https://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html
Production Volume Rendering (SIGGRAPH 2017 course, Pixar): https://graphics.pixar.com/library/ProductionVolumeRendering/paper.pdf
Global Illumination Using Photon Maps (Jensen, EGWR 1996): http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf
Karl Wang's final project (as a source of inspiration for developing a timeline and goals): https://cse168.blogspot.com/2020/06/final-project-single-scattering.html
Cornell CS 6630 volume path tracing notes: https://www.cs.cornell.edu/courses/cs6630/2015fa/notes/10volpath.pdf
CSE 272 homework 2: https://cseweb.ucsd.edu/~tzli/cse272/wi2023/homework2.pdf
CGLearn, Volumetric Rendering: https://cglearn.eu/pub/advanced-computer-graphics/volumetric-rendering
Thank you to Professor Ramamoorthi and TA Wesley Chang for all your guidance, encouragement, and advice! This is no easy class, and even with the ups and downs we have found great reward in our work. We hope to learn more about computer graphics as our journeys move forward!
~ Dennis and Faith
<-- Professor Ramamoorthi rendered with NEE in the Cornell box!