In the project proposal, I proposed to implement the following features:
1. Two additional primitives: cylinder and cone.
2. Color bleeding achieved using radiosity.
3. Penumbras (soft shadows) achieved using distributed ray tracing.
4. Depth of field using distributed ray tracing.
5. Texture mapping.
6. Bump mapping.
7. Reflection rays.
8. Refraction rays.
9. Grid acceleration.
10. Final ray traced scene.
11. Extra: supersampling.
The ray tracer features I submitted for the CS488 course project were 1, 5, 7, 8, 10, and 11.
I finished features 2 and 3 after the course. Instead of 9 (grid acceleration), I implemented bounding-box acceleration.
I also made the ray tracer run in parallel using C++11 threads.
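The C++11 threading can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: `renderRow` stands in for the per-pixel tracing loop, and the interleaved row split is one reasonable way to divide the work.

```cpp
#include <thread>
#include <vector>
#include <functional>

// Sketch: each C++11 thread renders an interleaved set of image rows.
// renderRow is a stand-in for the per-pixel tracing loop; all names here
// are illustrative, not the project's actual interface.
void renderParallel(int width, int height,
                    const std::function<void(int, int)>& renderRow) {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                        // fallback when unknown
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        workers.emplace_back([&, i] {
            // Interleave rows so the threads share the work evenly.
            for (int y = static_cast<int>(i); y < height;
                 y += static_cast<int>(n))
                renderRow(y, width);
        });
    }
    for (auto& w : workers) w.join();         // wait before writing the image
}
```

Ray tracing parallelizes well this way because each pixel is computed independently, so the threads never need to synchronize except at the final join.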
Back to the original project post.
-------------------------------------------------------------------------
So, a lesson learned after A4: I needed to write better debug messages. I now find myself writing debug cerr statements alongside the code itself.
Objective 1: Cone and Cylinder primitives
The first thing I worked on was the unit cylinder. I found on the internet that the equation of a cylinder is
x^2 + y^2 = 1, zmin < z < zmax.
Equating it to the parametric equations of a line in 3D space,
x(t) = x_0 + (x_1 - x_0)t
y(t) = y_0 + (y_1 - y_0)t
z(t) = z_0 + (z_1 - z_0)t
Plugging into the equation of the cylinder and rearranging gives
(x_0 + (x_1 - x_0)t)^2 + (y_0 + (y_1 - y_0)t)^2 = 1
((x_1 - x_0)^2 + (y_1 - y_0)^2)t^2 + (2x_0(x_1 - x_0) + 2y_0(y_1 - y_0))t + (x_0^2 + y_0^2 - 1) = 0, which is in a form that can be plugged into the quadratic formula.
The cone is similar. At first I implemented a non-hierarchical cylinder, but much later, because texturing needed unit coordinates (so I didn't need to call normalize), I changed all my primitives to hierarchical.
I got this image.
Objective 5: Texturing
I did this objective last, right before the deadline. From the LUA file, I created a new material that takes the reflection and refraction coefficients as before, but also takes the filename of a texture. In the ray tracer so far, the color of an intersection point depends solely on the material's kd and ks coefficients interacting with the color of light arriving from the light sources. One primitive has a single material color, and the shade of that color is determined by how much light it receives. With texturing, a primitive can take on many colors. The idea of texture mapping is that the material color at each surface point on the primitive, with coordinates in x, y, z, can be looked up in the texture file at coordinate u in the horizontal and v in the vertical.

To implement this for the sphere primitive, it was easier to work on a unit sphere centered at (0, 0, 0). For a point A on the sphere, find the polar angle phi (the angle A makes with the north vector) by taking the arccosine of the negative dot product of A with the north vector. This angle is in the range 0 to PI, so dividing by PI maps it to [0, 1]; this is the v-coordinate. To get the u-coordinate, find the azimuthal angle theta (the angle A makes around the pole): take the dot product of A with an equator vector, divide by the sine of phi to account for the varying length of the circles of latitude, and take the arccosine.
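The sphere mapping can be sketched as below. The axis choices are assumptions for illustration: the north vector is taken as (0, 0, 1) and the equator vector as (1, 0, 0), which may not match the project's conventions.

```cpp
#include <cmath>
#include <algorithm>

// Sketch of the sphere texture mapping for a point (x, y, z) on the unit
// sphere centered at the origin. North vector assumed (0,0,1), equator
// vector assumed (1,0,0) -- illustrative choices only.
void sphereUV(double x, double y, double z, double& u, double& v) {
    const double PI = 3.14159265358979323846;
    // v: arccos of the negative dot product with the north vector gives
    // an angle in [0, PI]; dividing by PI maps it to [0, 1].
    double phi = std::acos(-z);
    v = phi / PI;
    double sinPhi = std::sin(phi);
    if (sinPhi < 1e-9) { u = 0.0; return; }   // at a pole, u is arbitrary
    // u: dot product with the equator vector, divided by sin(phi) to
    // account for the shrinking circles of latitude, then arccos.
    double cosTheta = std::max(-1.0, std::min(1.0, x / sinPhi));
    double theta = std::acos(cosTheta) / (2.0 * PI);
    u = (y < 0.0) ? 1.0 - theta : theta;      // pick hemisphere by sign of y
}
```

The final branch on the sign of y is needed because arccos alone only covers half the circle of latitude; without it, the texture would mirror on the back hemisphere.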
Objective 7: Reflections
As I started writing the code, I realized I needed to pass more parameters to my trace function. For A4, the trace function only needed the origin position, the direction vector, the stack of lights, and a color pointer. Now I needed to get the direction of the specular reflection and pass a pointer to the intersection point. Looking back, I don't think I needed the intersection point when the trace function returned, because I had already computed the direction of the reflection ray in A4 for the specular color contribution. I got reflections fully working without too much fuss after realizing that the traced reflection ray acts like a light source: its color is added into the contribution exactly like the shadow ray's.
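The reflection direction mentioned above is the standard mirror formula r = d - 2(d . n)n. A minimal sketch, with a stand-in Vec3 type rather than the project's vector class:

```cpp
// Sketch: reflect incoming direction d about surface normal n using
// r = d - 2 (d . n) n. Vec3 is a stand-in for the project's vector type.
struct Vec3 { double x, y, z; };

Vec3 reflect(const Vec3& d, const Vec3& n) {
    double dn = d.x * n.x + d.y * n.y + d.z * n.z;
    return { d.x - 2.0 * dn * n.x,
             d.y - 2.0 * dn * n.y,
             d.z - 2.0 * dn * n.z };
}
```

The recursive part is then just calling trace again with the hit point and this direction, scaling the returned color by the material's reflectivity.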
Objective 8: Refraction
I made a time-consuming mistake here. I kept adding parameters to my trace function, and each addition required updates in many places, so I thought: why not bundle them all into an object and pass it by reference? It was a nightmare to debug afterwards. I had to keep track of which parameters to save locally, because changing a parameter through the reference changed it throughout the whole call stack. In the end, I only put constant things like the lights list in the parameters object and went back to ordinary function parameter passing for everything else, as I had been doing all along.
After all the headache of parameter passing, the other key thing I learned from implementing refraction is that when a ray enters a primitive, let's say a cube, if the normals are only specified for the outside of the cube, the refraction ray direction will be flipped. This leads to a fully opaque object. So what I did was pass in the name of the previous object the ray intersected. If the current and previous intersected objects are the same, then we are inside the object and the normal should be flipped.
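Putting the normal flip together with Snell's law looks roughly like this. It is a sketch under assumptions: `inside` is the result of the same-object test described above, `n1`/`n2` are the indices of refraction outside and inside the material, and `d` and `n` are assumed to be unit vectors. None of the names come from the project's code.

```cpp
#include <cmath>
#include <utility>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sketch: refract unit direction d at a surface with outward unit normal n.
// 'inside' is the same-object test from the text: true when the ray is
// exiting the primitive, in which case the normal is flipped and the
// indices swapped. Returns false on total internal reflection.
bool refract(Vec3 d, Vec3 n, bool inside, double n1, double n2, Vec3& out) {
    if (inside) {                  // exiting: flip normal, swap indices
        n = { -n.x, -n.y, -n.z };
        std::swap(n1, n2);
    }
    double eta = n1 / n2;
    double cosI = -dot(d, n);
    double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
    if (k < 0.0) return false;     // total internal reflection
    double s = eta * cosI - std::sqrt(k);
    out = { eta * d.x + s * n.x,
            eta * d.y + s * n.y,
            eta * d.z + s * n.z };
    return true;
}
```

Without the flip, cosI comes out negative when the ray is leaving the object, and the computed direction bends the wrong way, which is exactly the opaque-object symptom described above.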
Extra Objective
Because I didn't implement supersampling as the extra objective for A4, I implemented it for A5.
It is very simple. Instead of sending one ray through every pixel on the screen, send, say, 9 samples in a 3x3 grid and average the color values. This reduces jaggies: a curve sampled with only one ray per pixel shows up as a series of horizontal stair steps. With more samples, the stair-step lengths become subpixel and the average smooths them out.
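The averaging step can be sketched as below. `tracePixel` is a hypothetical stand-in for shooting one ray through a continuous image coordinate and returning an intensity; the real tracer returns a full RGB color, but one channel is enough to show the idea.

```cpp
#include <functional>
#include <cmath>

// Sketch: supersample pixel (px, py) with a grid x grid pattern of
// subpixel samples and return the average. tracePixel is a stand-in for
// tracing one ray through a continuous image coordinate.
double supersample(int px, int py, int grid,
                   const std::function<double(double, double)>& tracePixel) {
    double sum = 0.0;
    for (int i = 0; i < grid; ++i)
        for (int j = 0; j < grid; ++j) {
            // Sample at the center of each subpixel cell.
            double sx = px + (i + 0.5) / grid;
            double sy = py + (j + 0.5) / grid;
            sum += tracePixel(sx, sy);
        }
    return sum / (grid * grid);
}
```

With grid = 3 this is the 3x3 pattern from the text; an edge crossing a pixel then contributes a fractional grey instead of a hard jagged step.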
This is the final scene after applying 3x3 supersampling to 5.png