At first I thought a ray tracer would be really simple: it is just light traced in reverse, instead of from light source to object to eye. In my first attempt, I mixed up a local variable with a member one while setting a pointer, and the resulting bad boolean flag (whether or not there is an intersection) produced this:
I spent some more time debugging. After tons of debugging, convinced my ray tracing code was wrong, I found the bug: I test the ray from the point against every object in the scene, and the later comparisons that did not intersect were flipping my found-intersection boolean back to false.
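To illustrate the fix, here is a minimal sketch of a closest-hit loop; the names and the callback-based object representation are hypothetical stand-ins for the real tracer's classes. The point is that the found flag is only ever set on a closer hit, never cleared by a later miss:

```cpp
#include <cfloat>
#include <functional>
#include <vector>

struct Ray { double ox, oy, oz, dx, dy, dz; };
struct Hit { double t = DBL_MAX; };

// Each scene object is represented here as an intersection callback
// (a stand-in for a virtual intersect() method).
using IntersectFn = std::function<bool(const Ray&, Hit&)>;

// Buggy version did `found = intersect(ray, hit);` for every object,
// so a later miss overwrote the flag set by an earlier hit.
// Fixed version: only update the flag and record on a *closer* hit.
bool closestHit(const std::vector<IntersectFn>& scene, const Ray& ray, Hit& best) {
    bool found = false;
    best.t = DBL_MAX;
    for (const IntersectFn& intersect : scene) {
        Hit h;
        if (intersect(ray, h) && h.t < best.t) {
            best = h;      // keep the nearest intersection so far
            found = true;  // never reset to false by a later miss
        }
    }
    return found;
}
```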
So I arrived at:
Quickly I narrowed the mistake down to summing the light components wrongly. I had initialized the color contribution to the kd and ks components, thinking that kd and ks were material colors I needed to add to the color from the various light sources. Kd and Ks actually represent the diffuse and specular material reflection coefficients, which scale the color of the incoming light when it is reflected.
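A hedged sketch of the corrected accumulation (names and the precomputed geometry terms are assumptions, not the tracer's actual interfaces): the color starts at black and each light's contribution is scaled by kd and ks, rather than starting the sum at kd + ks.

```cpp
#include <cmath>
#include <vector>

struct Color {
    double r, g, b;
    Color operator+(const Color& o) const { return {r + o.r, g + o.g, b + o.b}; }
    Color operator*(const Color& o) const { return {r * o.r, g * o.g, b * o.b}; }
};

// Per-light sample: light color plus precomputed N.L and R.V terms.
struct LightSample { Color intensity; double nDotL, rDotV; };

// kd and ks scale each light's contribution; they are NOT a base color.
Color shade(const Color& kd, const Color& ks, double shininess,
            const std::vector<LightSample>& lights) {
    Color result{0, 0, 0};  // the wrong version started this at kd + ks
    for (const LightSample& l : lights) {
        if (l.nDotL > 0)  // diffuse: kd scales the light color
            result = result + kd * l.intensity * Color{l.nDotL, l.nDotL, l.nDotL};
        if (l.rDotV > 0) {  // specular: ks scales the light color
            double s = std::pow(l.rDotV, shininess);
            result = result + ks * l.intensity * Color{s, s, s};
        }
    }
    return result;
}
```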
After adding a self-intersection epsilon, I arrived at this:
Next, thinking that I had more double-precision problems, I added an epsilon to the comparisons in the sphere intersection.
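A sketch of a sphere intersection with epsilon-guarded comparisons; the function shape and the epsilon value are assumptions for illustration, not my exact code:

```cpp
#include <cmath>

const double EPSILON = 1e-9;  // tolerance; the exact value is a tuning choice

// Solve |o + t*d - c|^2 = r^2 for the smallest t in front of the origin.
// Epsilon guards both the discriminant test (grazing hits) and the
// "t > 0" test (self-intersection) against double rounding error.
bool intersectSphere(const double o[3], const double d[3],
                     const double c[3], double radius, double& t) {
    double oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    double a  = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
    double b  = 2.0 * (oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
    double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - radius*radius;
    double disc = b*b - 4.0*a*cc;
    if (disc < -EPSILON) return false;        // clearly no real root
    disc = std::sqrt(std::fmax(disc, 0.0));   // clamp a grazing hit to zero
    double t0 = (-b - disc) / (2.0 * a);
    double t1 = (-b + disc) / (2.0 * a);
    t = (t0 > EPSILON) ? t0 : t1;             // nearest root in front of origin
    return t > EPSILON;
}
```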
Another mistake here was forgetting to set the shininess value, so I was using an uninitialized shininess. I was surprised I didn't get different results every run; it was simply initialized to whatever happened to be in memory.
Now I arrived at:
A while later, I realized I had used the current location instead of the intersected object's location when calculating the distance used to find the closest object.
Finally, after adding the ambient component, I arrived at the final scene for this Lua file:
Hierarchical Ray Tracer - Updated October 18, 2015
The next step was to make the tracer hierarchical. After shuffling code around, I managed just that.
I also fixed a bug where I did not normalize the primary ray; I always ended up with no image because the objects were closer than the length of the un-normalized primary ray.
Below you can see that the two pillars and the sphere are defined in relation to the 'arc' object.
Now, the hierarchical scene I want to render requires a polygon mesh primitive. First, I tried implementing polygon intersection by finding the intersection with the plane of the polygon, then sending a ray in an arbitrary direction within that plane and counting how many times it crosses an edge between two consecutive vertices of the polygon. But due to some floating-point error somewhere, I was never able to successfully get it right. I'm sure I have a bug somewhere.
I wrote a second method that breaks the convex polygon down into triangles: starting from an arbitrary vertex, it forms a triangle with each pair of consecutive vertices of the polygon.
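This decomposition is a triangle fan. A minimal sketch (index-based, with hypothetical names), anchoring every triangle at vertex 0:

```cpp
#include <vector>

struct Tri { int a, b, c; };

// Break a convex polygon (vertices indexed 0..n-1 in order) into a fan
// of triangles anchored at vertex 0: (0,1,2), (0,2,3), ..., (0,n-2,n-1).
std::vector<Tri> fanTriangulate(int vertexCount) {
    std::vector<Tri> tris;
    for (int i = 1; i + 1 < vertexCount; ++i)
        tris.push_back({0, i, i + 1});
    return tris;
}
```

An n-gon always yields n - 2 triangles this way, and convexity guarantees none of them overlap.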
The idea works as shown below:
source: COSC 3P98: Ray Tracing Basics Dept. of Computer Science Brock University
Given a triangle with points T1, T2, T3, calculate alpha and beta for the point P where the ray intersects the triangle's plane:
P = alpha*(T2 - T1) + beta*(T3 - T1) + T1
If 0 <= alpha <= 1, 0 <= beta <= 1, and P is closer to T1 than the projection of P onto the line T3-T2, then P is inside the triangle.
Edit: Feb 26
I had previously assumed that P is inside the triangle if 0 <= alpha <= 1, 0 <= beta <= 1, and alpha + beta <= 1. This resulted in holes in the middle of triangles.
Edit: sometime before I restarted Rasterization
There were some cases where the above failed on the dodecahedron. So I ended up projecting P onto the line through T2 and T3; call that point gamma. If the distance from T1 to P is less than the distance from T1 to gamma, then the point P is inside the triangle.
With x, y, and z information, we have three equations and two unknowns. At first, I just naively substituted the x equation into the y equation to solve for beta and then alpha. I arrived at the 'hole-ly' image below.
Looking back at my failed first method, which had multiple code paths to account for division by zero, I realized the problem: if, for example, T2 and T1 have the same x coordinate, I would be dividing by zero in the alpha or beta calculation.
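One way to avoid the axis-specific division-by-zero paths entirely is to solve for alpha and beta with dot products: dot both sides of P - T1 = alpha*e1 + beta*e2 with e1 and e2, then apply Cramer's rule to the resulting 2x2 system, whose determinant vanishes only for a degenerate (zero-area) triangle. This is a hedged sketch of that standard technique, using the alpha + beta <= 1 containment form rather than my projection-onto-T2T3 test:

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Solve P - T1 = alpha*(T2-T1) + beta*(T3-T1) by dotting both sides with
// e1 and e2 and using Cramer's rule. No coordinate axis is singled out,
// so there are no per-axis division-by-zero special cases.
bool insideTriangle(const Vec3& P, const Vec3& T1, const Vec3& T2, const Vec3& T3) {
    Vec3 e1 = T2 - T1, e2 = T3 - T1, q = P - T1;
    double d11 = dot(e1, e1), d12 = dot(e1, e2), d22 = dot(e2, e2);
    double q1 = dot(q, e1), q2 = dot(q, e2);
    double det = d11 * d22 - d12 * d12;
    if (std::fabs(det) < 1e-12) return false;  // degenerate triangle
    double alpha = (q1 * d22 - q2 * d12) / det;
    double beta  = (d11 * q2 - d12 * q1) / det;
    const double eps = 1e-9;  // tolerance for points exactly on an edge
    return alpha >= -eps && beta >= -eps && alpha + beta <= 1.0 + eps;
}
```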
I arrived at this; the dodecahedron works as intended.
Now the problem is the stretched normals. I apply the inverse transformation matrix to go from world coordinates to model coordinates, do the intersection there, and then apply the transformation again to both the physical coordinates of the hit and its normal.
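The usual cure for stretched normals is that normals transform by the transpose of the model-to-world matrix's inverse, not by the matrix itself; under a non-uniform scale, the plain matrix tilts them off-perpendicular. A sketch covering only the 3x3 linear part, with hypothetical names, where the caller already has the inverse matrix on its stack:

```cpp
#include <cmath>

// 3x3 row-major matrix: the linear part of the model-to-world transform.
struct Mat3 { double m[3][3]; };
struct Vec3 { double x, y, z; };

Vec3 mul(const Mat3& A, const Vec3& v) {
    return { A.m[0][0]*v.x + A.m[0][1]*v.y + A.m[0][2]*v.z,
             A.m[1][0]*v.x + A.m[1][1]*v.y + A.m[1][2]*v.z,
             A.m[2][0]*v.x + A.m[2][1]*v.y + A.m[2][2]*v.z };
}

Mat3 transpose(const Mat3& A) {
    Mat3 T;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T.m[i][j] = A.m[j][i];
    return T;
}

// Transform a surface normal by (M^-1)^T. The caller supplies M^-1
// (already available from the inverse matrix stack); we transpose it,
// multiply, and renormalize the result.
Vec3 transformNormal(const Mat3& Minv, const Vec3& n) {
    Vec3 r = mul(transpose(Minv), n);
    double len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.x / len, r.y / len, r.z / len };
}
```

For example, under a scale of 2 along x, a model-space normal (1, 1, 0) must become proportional to (1, 2, 0) in world space to stay perpendicular to the stretched surface; multiplying by the scale matrix directly would instead give the wrong (2, 1, 0).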
Another problem was that the sphere is supposed to be in front of the pillars. After refactoring the code to maintain an inverse-transformation matrix stack that reverses the transformations of the hierarchical objects, I now have the final completed scene.
Let's call the spot light behind the viewer light source 1 and the one in front of the viewer light source 2, both on the right side from the viewer's perspective.
The reason tracing a shadow ray to light source 1 results in two shades is that the back of the pillar is a diffuse surface and re-emits light from spot light source 2. Below is the image without this code; it is akin to the reference A4 image provided by CS488.
Test Scenes from the Ray Tracer
simple-cows.lua
Specular coefficient = 1, antialiasing = 4x4 = 16 samples.
macho-cows.lua
There are some horizontal lines behind the buckyball because I didn't check P = A(T2-T1) + B(T3-T1) + T1 and relied on the hack assumption that A + B <= 1 means the point is within the triangle.