Real-time ray tracing (personal project, 2020)


I wrote a real-time ray tracer with a denoiser (no RTX support).

The aim of this project is to understand recent developments in "1 ray per pixel" ray tracing and the accompanying noise-reduction techniques, as seen in the real-time ray-traced versions of Quake2 and Minecraft.

Traditional offline ray tracing renderers generate hundreds or thousands of rays per pixel to produce a high-quality image. Each ray is expensive, and computing an image can take minutes or hours.

To generate a real-time image at 30 fps or above, the number of rays per pixel has to be reduced drastically, down to a few or even a single primary ray per pixel.

In the following demo, a noisy ray-traced image is generated each frame using one primary ray per screen-space pixel. The image is then processed to reduce the noise to almost imperceptible levels.

The geometry is rasterized, and the ray tracing occurs in an HLSL fragment shader.

After the primary ray hit, secondary reflection rays and bounced indirect-lighting rays are generated. Area lights and the sky provide the light sources.
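In outline, the per-pixel shading flow looks something like this. This is a simplified sketch, not the actual code: the Hit struct and the TraceScene, SampleSky, SampleAreaLight and RandomHemisphereDir helpers are all illustrative.

```hlsl
// Illustrative sketch of the per-pixel flow: one primary ray, then one
// shadow ray, one indirect bounce ray and one reflection ray.
struct Hit
{
    bool   valid;     // did the ray hit anything?
    float3 position;
    float3 normal;
    float3 albedo;
    float3 emission;
    float  specular;
};

// Assumed helpers: scene intersection, sky and area-light sampling, and a
// random hemisphere direction driven by a per-pixel RNG state.
Hit    TraceScene(float3 origin, float3 dir);
float3 SampleSky(float3 dir);
float3 SampleAreaLight(Hit hit, inout uint rngState);
float3 RandomHemisphereDir(float3 normal, inout uint rngState);

float3 ShadePixel(float3 rayOrigin, float3 rayDir, inout uint rngState)
{
    Hit hit = TraceScene(rayOrigin, rayDir);          // primary ray
    if (!hit.valid)
        return SampleSky(rayDir);

    // Direct light: one shadow ray towards a point on an area light.
    float3 direct = SampleAreaLight(hit, rngState);

    // Indirect light: one bounced ray in a random hemisphere direction.
    float3 bounceDir = RandomHemisphereDir(hit.normal, rngState);
    Hit bounce = TraceScene(hit.position + hit.normal * 1e-3, bounceDir);
    float3 indirect = bounce.valid ? bounce.emission : SampleSky(bounceDir);

    // Reflection: one mirror ray.
    float3 reflDir = reflect(rayDir, hit.normal);
    Hit refl = TraceScene(hit.position + hit.normal * 1e-3, reflDir);
    float3 reflection = refl.valid ? refl.emission : SampleSky(reflDir);

    return hit.albedo * (direct + indirect) + hit.specular * reflection;
}
```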

Denoising filters are finally applied as a post process.

This test scene has low-complexity geometry. Low complexity means fewer ray/triangle intersection tests and an improved frame rate. For a highly complex scene I would use a ray acceleration data structure such as a BVH, or ray-accelerating hardware such as an NVIDIA RTX GPU.
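For reference, the per-triangle test that scene complexity multiplies is a ray/triangle intersection. A common formulation is Moller-Trumbore, sketched below; this is a standard version, not necessarily the one used in the demo.

```hlsl
// Moller-Trumbore ray/triangle intersection (standard formulation).
// Returns true and the hit distance t if the ray hits triangle (v0, v1, v2).
bool RayTriangle(float3 origin, float3 dir,
                 float3 v0, float3 v1, float3 v2, out float t)
{
    t = 0.0;
    float3 e1 = v1 - v0;
    float3 e2 = v2 - v0;
    float3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-8)
        return false;                       // ray parallel to triangle
    float invDet = 1.0 / det;

    float3 tv = origin - v0;
    float u = dot(tv, p) * invDet;          // first barycentric coordinate
    if (u < 0.0 || u > 1.0)
        return false;

    float3 q = cross(tv, e1);
    float v = dot(dir, q) * invDet;         // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0)
        return false;

    t = dot(e2, q) * invDet;
    return t > 1e-6;                        // hit in front of the origin
}
```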

This ray tracer does not use RTX and runs on most graphics cards.

Running in real time 

The videos on this page show a Cornell Box test scene ray tracing in real time at 30 fps/1080p on a 2017 iMac.

Lots of Noise

With only 1 ray per pixel, we start with a very noisy image. 

The stages in denoising the image are outlined below.

White Noise

I initially sampled using white noise (uncorrelated random numbers).

The generated images are very noisy.
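A typical way to generate white noise in a shader, and a reasonable guess at the approach here, is an integer hash seeded by pixel coordinate and frame index. This PCG-style hash is one common choice:

```hlsl
// PCG-style integer hash (Jarzynski & Olano): one common way to produce
// per-pixel, per-frame white noise in a shader.
uint PcgHash(uint v)
{
    uint state = v * 747796405u + 2891336453u;
    uint word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// Uniform float in [0, 1), seeded by pixel coordinate, frame index and
// sample index so every ray gets a fresh random number.
float WhiteNoise(uint2 pixel, uint frame, uint sampleIndex)
{
    uint seed = PcgHash(pixel.x + PcgHash(pixel.y + PcgHash(frame + sampleIndex)));
    return seed * (1.0 / 4294967296.0);
}
```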


Blue Noise

Noise is then reduced by switching to blue noise sampling. 

Blue noise has a more evenly spaced distribution, resulting in reduced clumping of samples.
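A common implementation, and an assumption here since the post shows no code, is to tile a small precomputed blue-noise texture across the screen and offset it each frame by the golden ratio, so the values also decorrelate over time:

```hlsl
// Precomputed blue-noise texture (assumed 64x64), tiled across the screen.
Texture2D<float> BlueNoiseTex;

// Fetch a blue-noise value for this pixel, decorrelated per frame with a
// golden-ratio offset so successive frames use different values.
float BlueNoise(uint2 pixel, uint frame)
{
    float n = BlueNoiseTex[pixel % 64];
    return frac(n + frame * 0.61803398875); // golden-ratio sequence
}
```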


Temporal Accumulation

The effective ray count is then increased by recovering light samples from previous frames.

The previous frame's screen position is calculated using per-pixel motion vectors (temporal reprojection).

Diffuse and reflection light samples are recovered from a history buffer at the previous screen position and accumulated with the current frame's samples in a feedback loop.

When reprojecting the diffuse lighting, we track the motion of the surface.

When reprojecting the reflection, we track the motion of the reflected image, not the motion of the reflecting surface.
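Here is a minimal sketch of the diffuse accumulation path (reflections would use the reflected-image motion described above). The texture names and the 0.1 blend factor are illustrative:

```hlsl
Texture2D<float4> HistoryTex;    // accumulated light from previous frames
Texture2D<float2> MotionTex;     // per-pixel motion vectors (uv offset)
SamplerState LinearClamp;

// Reproject into the previous frame and blend with the current sample.
// alpha is the weight of the new sample; smaller values keep more history.
float3 AccumulateTemporal(uint2 pixel, float2 uv, float3 current)
{
    float2 prevUV = uv - MotionTex[pixel];
    if (any(prevUV < 0.0) || any(prevUV > 1.0))
        return current;          // off-screen last frame: no history

    float3 history = HistoryTex.SampleLevel(LinearClamp, prevUV, 0).rgb;
    const float alpha = 0.1;
    return lerp(history, current, alpha);  // 90% history, 10% new sample
}
```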


Denoising filter (SVGF)

An edge-aware noise-reduction filter is then applied.

Here I have implemented Spatiotemporal Variance-Guided Filtering (SVGF).

This technique uses the variance of the light history buffers to estimate how noisy each region of the image is. The variance drives how strongly each sample is averaged with its neighbours: samples in noisy areas of the image receive a higher weighting than those in less noisy areas.

Contact shadows will remain sharp, while noisy shadow penumbras will become soft.
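A simplified sketch of one SVGF-style filter-tap weight: the depth and normal terms reject samples across geometric edges, while the luminance term is relaxed where the variance says the pixel is noisy. The sigma constants here are illustrative, not the paper's exact formulation:

```hlsl
// Weight of one neighbour tap (suffix C = centre pixel, N = neighbour).
float FilterWeight(float depthC, float depthN,
                   float3 normalC, float3 normalN,
                   float lumC, float lumN, float variance)
{
    // Reject taps across depth discontinuities.
    float wDepth  = exp(-abs(depthC - depthN) / 0.01);

    // Reject taps whose normals disagree with the centre pixel.
    float wNormal = pow(max(dot(normalC, normalN), 0.0), 128.0);

    // Luminance term: a high variance widens the tolerance, so noisy
    // regions accept more neighbours and get averaged more aggressively.
    float wLum = exp(-abs(lumC - lumN) /
                     (4.0 * sqrt(max(variance, 0.0)) + 1e-4));

    return wDepth * wNormal * wLum;
}
```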


Fast movement

Here fast camera movement reveals disoccluded regions. 

These are regions that were hidden or off-screen in the last frame, so temporal reprojection fails to recover any meaningful samples.

They are clearly visible around the screen edges (shown in green).
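A minimal validity test for reprojected history might look like this; the texture name and depth tolerance are illustrative:

```hlsl
Texture2D<float> PrevDepthTex;   // depth buffer from the previous frame

// Decide whether a reprojected sample is usable history or a disocclusion.
bool IsDisoccluded(float2 prevUV, float expectedPrevDepth)
{
    if (any(prevUV < 0.0) || any(prevUV > 1.0))
        return true;             // was off-screen last frame

    uint2 dim;
    PrevDepthTex.GetDimensions(dim.x, dim.y);
    float prevDepth = PrevDepthTex[uint2(prevUV * dim)];

    // Reject history when the reprojected position lands on a different
    // surface than the one we expected (depth mismatch).
    return abs(prevDepth - expectedPrevDepth) > 0.01 * expectedPrevDepth;
}
```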

Filling in the gaps

In these regions, maximum weight is given to the light samples computed in the current frame, and an additional pass fills them in with a wide convolution (49 taps) over neighbouring samples before passing the result to the SVGF filter.
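In sketch form, using illustrative names and a plain 7x7 box kernel (49 taps, matching the count above; the post doesn't specify the actual kernel shape):

```hlsl
Texture2D<float4> LightTex;       // current-frame light samples
Texture2D<float>  DisoccludedTex; // 1 where history was rejected

// Fill disoccluded pixels with a 7x7 (49-tap) average of neighbouring
// current-frame samples; pixels with valid history pass through unchanged.
float3 FillDisocclusion(uint2 pixel, uint2 dim)
{
    if (DisoccludedTex[pixel] < 0.5)
        return LightTex[pixel].rgb;

    float3 sum = 0.0;
    float count = 0.0;
    for (int y = -3; y <= 3; ++y)
    for (int x = -3; x <= 3; ++x)
    {
        // Clamp taps to the screen so edge pixels stay well defined.
        int2 p = clamp(int2(pixel) + int2(x, y), int2(0, 0), int2(dim) - 1);
        sum += LightTex[uint2(p)].rgb;
        count += 1.0;
    }
    return sum / count;
}
```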