Gargantua: Raytracing Schwarzschild Black Holes

Aseem Doriwala, Sarthak Kamat, Vincent Lim, Jeremy Ferguson


Abstract

We implement an efficient pipeline for creating realistic visual renders of Schwarzschild black holes, which have negligible charge and zero angular momentum, while modeling several physical effects that shape the black hole's structure and appearance.

Our approach augments the traditional raytracing pipeline by using Euler integration to trace the path of light through the immensely strong gravitational field induced by the black hole. Additionally, we simulate the appearance of an accretion disk radiating as an ideal black body by calculating the density and temperature of matter at every intersection point and converting these to the color and relative intensity of the emitted thermal radiation. Furthermore, we model the effects of redshift, in which light is shifted toward longer wavelengths by the Doppler effect of the disk's orbital motion and by the black hole's gravitational field.

We further improve the quality and speed of the renders using various computer graphics methods such as supersampling for anti-aliasing, adaptive sampling, Bézier curve interpolation, texture mapping, and, most importantly, Euler integration.

Technical Approach

We iterate over each pixel in the image and trace a ray emitted from the camera's location. Because of the black hole's strong gravitational field, we simulate the nonlinear path of each light ray over a fixed number of steps using Euler integration. At each step, we update the photon's position using its prior velocity and position, and compute the force acting on the particle at that point in space to update the velocity. We define the step size and total ray depth before the start of the render, and at the end of the ray depth we use the last computed velocity to extend the ray and intersect it with the background texture. As a result, rays curve as they propagate, an effect commonly referred to as gravitational lensing: the gravitational field acts as a lens in space, spatially distorting the light incident on it. The region around the black hole with the most extreme distortion is called the Einstein ring, which is actually a superimposed circular image of a single point far behind the black hole.
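
As a concrete illustration, here is a minimal sketch of one such integration step in C++, assuming geometric units (GM = c = 1) and the Schwarzschild bending acceleration a = -(3/2) h^2 x / |x|^5 used in the Raytracing a Black Hole blog post we reference, where h = |x × v| is the photon's conserved angular momentum; the names (Vec3, stepPhoton) are illustrative rather than our exact code.

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double norm2(Vec3 v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

// One Euler step: move the photon with its old velocity, then update the
// velocity with the gravitational bending acceleration at the old position.
void stepPhoton(Vec3& pos, Vec3& vel, double h2, double dt) {
    double r2 = norm2(pos);
    double r5 = r2 * r2 * std::sqrt(r2);    // |x|^5
    Vec3 accel = pos * (-1.5 * h2 / r5);    // -(3/2) h^2 x / |x|^5
    pos = pos + vel * dt;
    vel = vel + accel * dt;
}
```

Since the angular momentum is conserved along the geodesic, h2 would be computed once per ray as norm2(cross(pos, vel)) before integration begins.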

As our scene contains essentially only two objects, the black hole and the accretion disk, we only need two intersection tests at each point along the ray integration. At each intersection, we calculate the light emitted at that point in space in order to integrate over the ray's path and compute the total amount of light received at the camera sensor. If the ray crosses the event horizon, it has entered the black hole and has no chance of escaping the gravitational pull, so it is displayed as pure black from that point onward. If it crosses the accretion disk, we calculate the color according to the material style we have set: with blackbody radiation, we compute the temperature at the intersection and map it to its corresponding color; with texture mapping, we compute the point's texture coordinates and look them up in a given texture. Additionally, we implemented an optional redshift, which shifts the wavelength of the emitted light toward the red end of the spectrum according to the velocity of the emitting matter.
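
A hedged sketch of this per-step classification, reusing the Vec3 helpers from the sketch above: it assumes the disk lies in the y = 0 plane between an inner and outer radius and that the event horizon is a sphere of Schwarzschild radius r_s. The names (HitKind, classifyStep) and the plane-crossing test are our illustration, not necessarily the project's exact logic.

```cpp
// Disk and horizon tests for one integration step (illustrative sketch).
enum class HitKind { None, Horizon, Disk };

HitKind classifyStep(const Vec3& prev, const Vec3& cur,
                     double r_s, double r_inner, double r_outer) {
    // Inside the event horizon: the ray is trapped, render pure black.
    if (norm2(cur) < r_s * r_s) return HitKind::Horizon;
    // The path crossed the disk plane (y = 0) during this step.
    if ((prev.y > 0.0) != (cur.y > 0.0)) {
        double t = prev.y / (prev.y - cur.y);        // fraction of the step at y = 0
        Vec3 hit = prev + (cur + prev * -1.0) * t;   // approximate crossing point
        double r2 = hit.x * hit.x + hit.z * hit.z;   // squared radius in the plane
        if (r2 > r_inner * r_inner && r2 < r_outer * r_outer)
            return HitKind::Disk;
    }
    return HitKind::None;
}
```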

Figures: gravitational lensing; solid white accretion disk; texture mapping; blackbody temperature with redshift.

To create high-quality renders, we used supersampling to mitigate jaggies on the edges of the accretion disk and on the thin superimposed ring at the inner edge of the event horizon/photon sphere. We used up to 64x supersampling in the highest-frequency parts of the image to obtain a smoother, more gradual transition between the disk and the space behind it. This greatly improved image quality but also led to prohibitively long render times at large resolutions. We therefore used adaptive sampling, forcing high supersampling rates only where the individual samples showed high variance and terminating the supersampling early when the values converged quickly. The sampling-rate image is shown below the original render.
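
The sketch below shows one common form of this early-termination test, stopping once the 95% confidence interval of the per-pixel luminance falls within a tolerance of the running mean; the batching scheme, tolerance parameter, and function names are assumptions for illustration.

```cpp
#include <cmath>
#include <functional>

// Average up to maxSamples luminance samples for one pixel, stopping early
// once the 95% confidence interval is within tol of the running mean.
// batch should be >= 2 so the variance estimate is defined.
double adaptiveSample(const std::function<double()>& sampleLuminance,
                      int maxSamples, int batch, double tol) {
    double s1 = 0.0, s2 = 0.0;   // running sums of x and x^2
    int n = 0;
    while (n < maxSamples) {
        for (int i = 0; i < batch; ++i, ++n) {
            double x = sampleLuminance();
            s1 += x;
            s2 += x * x;
        }
        double mean = s1 / n;
        double var  = (s2 - s1 * s1 / n) / (n - 1);
        if (1.96 * std::sqrt(var / n) <= tol * mean)
            break;               // converged: terminate supersampling early
    }
    return s1 / n;
}
```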

Another quality improvement came from bilinear texel interpolation during the lookup from pixels to corresponding texels. This was needed because, on the Einstein ring, many pixels were all mapped to the same handful of texels, producing blocky, non-smooth portions of the image. Instead of a direct lookup, we query the 4 texels closest to each sample and take a bilinear weighted sum of them to compute the color at that point, which smooths out the Einstein ring considerably.
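
A sketch of such a bilinear lookup, assuming a row-major RGB float texture; the Texture struct and sampleBilinear name are illustrative rather than the project's exact interface.

```cpp
#include <algorithm>

// Row-major RGB float texture (illustrative layout).
struct Texture {
    int width, height;
    const float* rgb;   // width * height * 3 floats
};

// Bilinear lookup: blend the 4 texels nearest to (u, v) in [0,1]^2.
void sampleBilinear(const Texture& tex, double u, double v, float out[3]) {
    double x = u * (tex.width - 1), y = v * (tex.height - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, tex.width - 1);
    int y1 = std::min(y0 + 1, tex.height - 1);
    double fx = x - x0, fy = y - y0;  // fractional position inside the cell
    for (int c = 0; c < 3; ++c) {
        double top = (1 - fx) * tex.rgb[(y0 * tex.width + x0) * 3 + c]
                   +      fx  * tex.rgb[(y0 * tex.width + x1) * 3 + c];
        double bot = (1 - fx) * tex.rgb[(y1 * tex.width + x0) * 3 + c]
                   +      fx  * tex.rgb[(y1 * tex.width + x1) * 3 + c];
        out[c] = (float)((1 - fy) * top + fy * bot);
    }
}
```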

To render the video at the top of this webpage, we had to define a path for our camera in 3D space. One of our implementations defines key points along the desired trajectory and uses Bézier interpolation to obtain a parametric equation for the camera position as a function of time. Additionally, we relaxed some of the rendering parameters so that each frame would render a bit faster, as otherwise the full render would have taken over 24 hours.
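
For reference, one cubic Bézier segment of such a camera path could be evaluated as below (reusing the Vec3 helpers above); the control points p0..p3 are key points of the trajectory and t in [0, 1] is normalized time along the segment.

```cpp
// Camera position on one cubic Bezier segment, in Bernstein form.
Vec3 bezier(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, double t) {
    double s = 1.0 - t;
    return p0 * (s * s * s) + p1 * (3 * s * s * t)
         + p2 * (3 * s * t * t) + p3 * (t * t * t);
}
```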

We also tweaked various parameters, such as the physical size of the black hole and accretion disk, the intensity and opacity of the light emitted by the black body, the presence of dust in the scene, and the intensity of the background sky texture, to achieve the results we deemed most visually pleasing.

Several further changes allowed us to render high-quality 4K stills like the one below in under 5 minutes: 1) early termination of the ray integration once a ray crosses the event horizon, since its light would be perpetually trapped; 2) multithreaded rendering, splitting the image into small chunks and spreading the work across all available cores; and 3) an adaptive step size during Euler integration based on distance from the black hole (sketched below).
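
A sketch of the adaptive step size from item 3, reusing the helpers above; the linear scaling of the step with distance from the horizon, clamped to [dtMin, dtMax], is our assumption of one reasonable rule rather than the project's exact formula.

```cpp
#include <algorithm>
#include <cmath>

// Take fine integration steps near the black hole and coarse steps far away.
double stepSize(const Vec3& pos, double r_s, double dtMin, double dtMax) {
    double r  = std::sqrt(norm2(pos));
    double dt = dtMin * (r / r_s);   // grows with distance from the horizon
    return std::min(std::max(dt, dtMin), dtMax);
}
```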

























Problems and Lessons

The initial challenge we encountered was understanding the technical details of the paper we used as a reference. Our team's lack of a physics background meant it took a considerable amount of time to parse all the dense information and fully understand the key parts of the paper while setting aside the more complicated and unnecessary parts. We additionally used the blog post Raytracing a Black Hole, which explains the overall concepts simply enough that we were able to use it as a starting point for our implementation.

Other problems with the quality and speed of our renders are addressed in the above section.

Additionally, we encountered some issues in implementing the GUI. From the start, we knew we wanted sliders and input fields on screen that would allow the user to change different simulation properties. However, adding the NanoGui library to our existing code proved quite difficult due to dependency issues, so we decided to use the ImGui library instead. There were issues here as well when trying to add it to the main window, so we instead built our controls into the visual debugging code, since that section of the codebase was already using ImGui.

Overall, we gained a strong grasp of the physical attributes and properties of black holes, along with a keen understanding of how to create high-quality visual renders of them. Throughout the process, integrating various computer graphics algorithms and tools to improve the quality and efficiency of our pipeline gave us an appreciation for how these concepts are applied in practice, in ways we would not necessarily have imagined before.

Real Black Hole Image

This is the direct image of the supermassive black hole at the center of Messier 87, released by the Event Horizon Telescope team in 2019; it is currently the only publicly available image of what a black hole actually looks like.

The images we produced are quite similar to this one and provide a rough confirmation that our rendering pipeline behaves correctly. The only major difference is the apparent lack of the "cross-bar" seen across the center of our renders. Physicist Kip Thorne, who was the scientific consultant for the movie Interstellar, explains that such an image is taken from above the black hole rather than from one side: if the accretion disk lies flat in the XY plane, the image is taken along the Z axis, so the "cross-bar" is not visible.



GUI Demonstration

This is a video demonstrating the various features of the GUI that we built to make rendering images more convenient. The GUI allows the user to change the camera position, the size and material properties of the accretion disk, and save the rendered output to a file.

gui demo.mp4

References

James, Oliver, et al. “Gravitational Lensing by Spinning Black Holes in Astrophysics, and in the Movie Interstellar.” Classical and Quantum Gravity, vol. 32, no. 6, 2015. IOP Science, https://iopscience.iop.org/article/10.1088/0264-9381/32/6/065001/meta. 

Antonelli, Riccardo. Raytracing a Black Hole, 2017, http://rantonels.github.io/starless/.

Contributions

Aseem: I worked on the whole rendering pipeline along with Sarthak. Additionally, I created the video render and high-quality images on this website. I also worked on the final presentation video and this website.

Sarthak: Along with Aseem, I implemented the main ray-tracing algorithm in C++ and wrote the bash script for producing the video render. Besides this, I wrote the code for the animations for the teaser video submission using the Manim library in Python.

Jeremy: I set up the basic infrastructure for the project, starting from the Project 3 code and stripping out the unnecessary parts, then adding the skeleton for the raytracing algorithm. Additionally, I implemented the GUI by investigating and evaluating multiple GUI libraries, choosing the ImGui library, selecting which simulation properties to add to the GUI, and adding various interactive components to the GUI so that users could selectively change those properties.

Vincent: I helped debug the black hole ray tracing & simulation implementation, and created the final presentation along with the intermediate renders.