All of this was to lead up to my personal project for the past few months: creating a way to explore these 3D fractals in real time using a game engine, Unreal Engine. There hasn't really been software like this before. While there is plenty of fractal raytracing software out there, it is all offline renderers, which take seconds to produce a coherent image, so movement feels slow and chunky. I wanted a way to move through these fractals at 60 frames per second and control the experience in a new way. In my research, I came across a plugin for the engine I'm using called "Machina Fractals"(Link), but it sat behind a hefty paywall and did not have the features I wanted: its performance was lower than I needed, and it couldn't display the true depth of the fractals. The plugin is oriented more toward creating beautiful still images, which is fine in its own regard, but it didn't suit my needs. For this reason, I opted to code my own raymarcher and controller.
Prior to the full start of this project, I dabbled in generating fractals in Python with point clouds, and then in Blender with meshing algorithms. It soon became clear, however, that these methods wouldn't produce the results I was looking for. Here are the results from that project:
Here is the code to plot the set in matplotlib (which is admittedly terribly inefficient, but it was more of a proof of concept). Ignore the questionable code practices used here.
import numpy as np
import matplotlib.pyplot as plt

precision = 100  # voxels per axis
iterations = 50
scale = 0.05 * (50 / precision)
offset = (scale * precision) / 2
n = 8  # the power in z -> z^n + c
progress = 0

class vector:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

def sum_neighbors(array, x, y, z):
    # sums the 2x2x2 block of voxels at (x, y, z); a sum of 8 means the
    # block is completely filled, so the voxel is not on the surface
    total = 0
    if x == 0 or x == array.shape[0] - 1:
        return 8
    if y == 0 or y == array.shape[1] - 1:
        return 8
    if z == 0 or z == array.shape[2] - 1:
        return 8
    for xb in range(x - 1, x + 1):
        for yb in range(y - 1, y + 1):
            for zb in range(z - 1, z + 1):
                total += array[xb, yb, zb]
    return total

def convert_coords(indexx, indexy, indexz):
    # map voxel indices to coordinates centered on the origin
    x = indexx * scale - offset
    y = indexy * scale - offset
    z = indexz * scale - offset
    return (x, y, z)

Tensor3 = np.zeros((precision, precision, precision))
vertices = []
edge_vertices = []

for indexx, indexy, indexz in np.ndindex(Tensor3.shape):
    progress += 1
    if progress % (precision ** 3 // 100) == 0:
        print(f"{100 * progress // precision ** 3}%")
    x, y, z = convert_coords(indexx, indexy, indexz)
    coord = vector(x, y, z)
    c = vector(x, y, z)
    radius = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(np.sqrt(x**2 + y**2), z)
    phi = np.arctan2(y, x)
    for i in range(iterations + 1):
        # one step of z -> z^n + c in spherical coordinates
        newx = radius**n * np.sin(theta * n) * np.cos(phi * n)
        newy = radius**n * np.sin(theta * n) * np.sin(phi * n)
        newz = radius**n * np.cos(theta * n)
        zv = vector(newx + c.x, newy + c.y, newz + c.z)
        radius = np.sqrt(zv.x**2 + zv.y**2 + zv.z**2)
        theta = np.arctan2(np.sqrt(zv.x**2 + zv.y**2), zv.z)
        phi = np.arctan2(zv.y, zv.x)
        if radius > 2:  # the point escaped, so it is outside the set
            break
    else:
        # never escaped after all iterations: the point is in the set
        Tensor3[indexx, indexy, indexz] = 1
        vertices.append((coord.x, coord.y, coord.z))

# only plot voxels on the surface of the set
for x, y, z in np.ndindex(Tensor3.shape):
    if Tensor3[x, y, z] != 0 and sum_neighbors(Tensor3, x, y, z) != 8:
        xb, yb, zb = convert_coords(x, y, z)
        edge_vertices.append((xb, yb, zb))

ax = plt.axes(projection="3d")
x_data = [v[0] for v in edge_vertices]
y_data = [v[1] for v in edge_vertices]
z_data = [v[2] for v in edge_vertices]
ax.scatter(x_data, y_data, z_data, alpha=0.2)
plt.show()
This code ended up taking way too long to run, which is why I opted for raymarching. The original plan was to rewrite this in C++ for use in Unreal Engine, but it clearly was not going to work.
At this point I was familiar with raymarching, and decided to implement the algorithm in Unreal Engine in the form of a shader. I ended up using a single custom node, with inputs for world position and user-defined parameters. Before I figured this out, however, it was two custom nodes: one holding global functions and one with the actual raymarching calculation. One article that was invaluable for this was by Void Goat (link here). However, I ended up reorganizing the functions into a struct so that I could control parameters more easily. The global-functions method is very clean, but it makes it challenging to adjust parameters the way I wanted to. I also ended up adjusting many of the functions, like the normal calculation, to better suit my needs.
The main component of the program was a material I set up to render the Mandelbulb through raymarching. It can be applied to any shape, but I had it on a plane in front of the camera. This was the most efficient approach, since once the shader is compiled, the GPU runs it in real time. That makes it much faster, because the sheer number of processors in a GPU allows what's called parallelization: running many instances of the same code at the same time. GPU architecture is designed for exactly this, since per-pixel rendering work is highly parallelizable. This is what lets the computer run such an intensive script in real time at 30-60 fps.
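To get a feel for why this maps so well to the GPU, note that every pixel runs the exact same marching loop independently of its neighbors. Here is a toy Python sketch (mine, not from the project) that uses numpy broadcasting to step a whole grid of rays at once against a made-up sphere distance field; the function name, camera setup, and constants are all illustrative:

import numpy as np

# hypothetical toy scene: a unit sphere at the origin
def sphere_distance(points):
    return np.linalg.norm(points, axis=-1) - 1.0

# one ray per pixel; every pixel marches independently,
# which is exactly the shape of work a GPU parallelizes
height, width = 270, 480
ys, xs = np.mgrid[-1:1:height*1j, -1.78:1.78:width*1j]
directions = np.dstack([xs, ys, np.ones_like(xs)])
directions /= np.linalg.norm(directions, axis=-1, keepdims=True)
origins = np.zeros_like(directions) + np.array([0.0, 0.0, -3.0])

depth = np.zeros((height, width))
for _ in range(64):
    positions = origins + directions * depth[..., None]
    depth += sphere_distance(positions)  # all pixels step at once

hit = depth < 100.0  # mask of pixels whose ray reached a surface

The inner loop body is the same work the shader below does per pixel; the GPU just runs millions of those loops side by side.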
The material itself went through many iterations before I settled on the final version, and I encountered many bugs along the way. One early mistake was using a fixed step size: instead of marching based on the distance to the scene, the ray would just advance a constant amount each step, which produced a visible "stepping" artifact. I also had to rewrite the normal calculation, the part that tells the computer which direction the surface rendered at a pixel is facing. That calculation enables Phong shading, a basic shading model where shadows and specular highlights are computed from the surface direction. Toward the end, I also implemented ray-traced shadows: a second ray marches from the intersection point toward the light and checks for an intersection. If it hits something, the point is in shadow, so it renders darker. The final graphical improvement I made was adding a "sun" light, lens flares, and glare, just to make it look a little nicer.
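To make the fixed-step bug concrete, here is a rough Python sketch of the two marching strategies (an illustration, not the shader code; the function names and `scene_sdf` are mine):

import numpy as np

def march_fixed(origin, direction, scene_sdf, step=0.1, max_steps=1000):
    # naive version: advance a constant amount each step, so surfaces
    # land between steps and render with visible "stepping" artifacts
    depth = 0.0
    for _ in range(max_steps):
        if scene_sdf(origin + direction * depth) <= 0.0:
            return depth
        depth += step
    return None

def march_sphere_traced(origin, direction, scene_sdf,
                        epsilon=1e-4, max_dist=100.0, max_steps=100):
    # sphere tracing: the distance estimate is the largest step that is
    # guaranteed not to overshoot the surface, so take exactly that step
    depth = 0.0
    for _ in range(max_steps):
        dist = scene_sdf(origin + direction * depth)
        if dist <= epsilon:
            return depth  # close enough: call it a hit
        depth += dist
        if depth > max_dist:
            break  # ray escaped the scene
    return None

# e.g. against a toy unit sphere:
# march_sphere_traced(np.array([0, 0, -3.0]), np.array([0, 0, 1.0]),
#                     lambda p: np.linalg.norm(p) - 1.0)

The second version takes big steps in open space and small steps near surfaces, which is what makes the shader below fast without the stepping artifact.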
The finished material is below:
The "Custom" node is what actually uses the raymarching algorithm. It uses a language called "HLSL" or high level shader language. Its designed for shaders, so its reasonably fast and very easy to read and code in. The code in that node is below:
float MAX_DIST = 100000.0; //variable declarations
float MAX_STEPS = 100;
float3 lightDir = normalize(LightDirection);
double epsilonDouble = EPSILON / 10; //mostly redundant rescale, but the rest of the code is written around it

struct sdfShapes
{
    float sceneSDF( float3 p, float power, float iterations ) { //distance estimator function
        float3 w = p;
        float m = dot(w,w);
        float dz = 1.0;
        for( int i = 0; i < iterations; i++ ) //iterate the recursive formula z -> z^power + c
        {
            dz = power*pow(sqrt(m), power - 1.0)*dz + 1.0;
            float r = length(w);
            float b = power*acos( w.y/r );
            float a = power*atan2( w.x, w.z );
            w = p + pow(r,power) * float3( sin(b)*sin(a), cos(b), sin(b)*cos(a) );
            m = dot(w,w);
            if( m > 2 ) //bail out once the orbit escapes (m is the radius squared)
                break;
        }
        //0.25*log(m)*sqrt(m)/dz is the usual estimate 0.5*r*log(r)/dz, since m = r^2
        return 0.25*log(m)*sqrt(m)/dz;
    }
};

sdfShapes sdf;
float depth = 0;
float3 positionRaymarch = float3(0,0,0); //for shadows later
for ( int i = 0; i < MAX_STEPS; i++ ) {
    double effective_epsilon = epsilonDouble * ((depth / EPSILON) / 100); //scale epsilon with distance for optimization
    // calculate the next position along the view ray
    float3 pos = CameraPosition + ( RayDirection * depth );
    positionRaymarch = pos; //keep updating the end position
    // calculate the shortest distance to the scene at this position
    float dist = sdf.sceneSDF( pos, power, iterations );
    depth += dist;
    // finish when very close to a surface or when the ray has travelled too far
    if ( dist <= effective_epsilon || depth > MAX_DIST ) {
        break;
    }
}
if (depth >= MAX_DIST) {
    opacityMask = 0;
    return float3( 0.0, 0.0, 0.0 ); //ray travelled too far: set the opacity to clear and return black
}
float shadowDepth = 0.0;
float shadowVal = 0.0;
if (ColorBlend > 0.5) { //shadow setup
    for ( int j = 0; j < 2; j++ ) { //take a couple of fixed steps to escape the surface (otherwise the shadow ray self-intersects and everything renders black)
        float3 pos = positionRaymarch + ( lightDir * shadowDepth );
        float shadowDist = sdf.sceneSDF( pos, power, iterations );
        shadowDepth += shadowDist;
        if ( shadowDist <= epsilonDouble * 0.5 ) {
            shadowVal = 0.1;
            break;
        }
        if ( shadowDist >= MAX_DIST ) {
            shadowVal = 0;
            break;
        }
    }
}
if (ColorBlend > 0.5) { //actual shadow calculation
    for ( int k = 0; k < MAX_STEPS; k++ ) {
        float3 pos = positionRaymarch + ( lightDir * shadowDepth );
        // calculate the shortest distance to the scene at this position
        float shadowDist = sdf.sceneSDF( pos, power, iterations );
        if ( shadowDist <= epsilonDouble ) {
            shadowVal = 10; //the ray toward the light hit the set, so this point is in shadow
            break;
        }
        if ( shadowDist >= MAX_DIST ) {
            shadowVal = 0;
            break;
        }
        shadowDepth += shadowDist;
    }
}
opacityMask = 1;
float3 pos = CameraPosition + RayDirection*depth; //for the normal calculation
//estimate the surface direction by sampling the distance field around the hit point
float3 normal = normalize(float3(
    sdf.sceneSDF(float3(pos.x + EPSILON, pos.y, pos.z), power, iterations) - sdf.sceneSDF(float3(pos.x - EPSILON, pos.y, pos.z), power, iterations),
    sdf.sceneSDF(float3(pos.x, pos.y + EPSILON, pos.z), power, iterations) - sdf.sceneSDF(float3(pos.x, pos.y - EPSILON, pos.z), power, iterations),
    sdf.sceneSDF(float3(pos.x, pos.y, pos.z + EPSILON), power, iterations) - sdf.sceneSDF(float3(pos.x, pos.y, pos.z - EPSILON), power, iterations)));
//use the normal to calculate basic shadows/shading
float diffuseLight = clamp( dot( lightDir, normal ), 0, 1 );
// calculate specular reflections, i.e. the shiny areas
float3 reflectDir = reflect( lightDir, normal );
float specular = pow( clamp( dot( reflectDir, RayDirection ), 0.0, 1.0 ), 16.0 );
//combine all the parameters we precalculated
float3 result = (diffuseLight + ambientLight) * diffuseColor + specular * roughness * (EPSILON * 10000 / depth) / 100;
result = result * ((1 / depth) * (epsilonDouble * 500)); //darken with distance, scaled by the epsilon value
//the closer you get, the smaller the epsilon, so the visible range gets shallower for a sense of scale
if (ColorBlend > 0.5) { //if it's the shaded version, not the depth filter
    if (shadowVal > 0) {
        return (diffuseLight + ambientLight) * diffuseColor / 100;
    }
    return result;
}
return ((1 / depth) * (epsilonDouble * 500)); //otherwise just return the depth
I also had a separate, slightly modified script and material to render Julia sets. Julia sets are made by changing the "C" component to an arbitrary fixed vector instead of the original sample position. This allows even more variation than before, and creates some interesting patterns.
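To make the change concrete, here is an illustrative Python translation of the shader's distance estimator (a sketch, not the actual HLSL; the function name and defaults are mine). The only difference between the normal set and a Julia set is which vector gets added back each iteration:

import numpy as np

def mandelbulb_de(p, power=8.0, iterations=10, julia_c=None):
    # for the normal set, c is the sample point itself;
    # for a Julia set, c is an arbitrary fixed vector
    c = p if julia_c is None else julia_c
    w = p.astype(float)
    m = np.dot(w, w)
    dz = 1.0
    for _ in range(iterations):
        dz = power * np.sqrt(m) ** (power - 1.0) * dz + 1.0
        r = np.sqrt(m)
        b = power * np.arccos(w[1] / r)
        a = power * np.arctan2(w[0], w[2])
        w = c + r ** power * np.array(
            [np.sin(b) * np.sin(a), np.cos(b), np.sin(b) * np.cos(a)])
        m = np.dot(w, w)
        if m > 2:  # bail out once the orbit escapes
            break
    return 0.25 * np.log(m) * np.sqrt(m) / dz

# normal set: mandelbulb_de(np.array([0.4, 0.2, 0.1]))
# Julia set:  mandelbulb_de(np.array([0.4, 0.2, 0.1]), julia_c=np.array([0.2, 0.5, 0.3]))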
Again, this went through many iterations. Originally it was a VR-controlled viewer, but for performance reasons I switched to a normal display. I used Unreal's built-in visual scripting language, Blueprints, to create a player controller. At first it was mouse and keyboard, but I ended up switching to an Xbox controller, since it was smoother and more intuitive. The blueprint consisted of a few components. The first, and most important, was movement: I mapped the left stick to forward/backward and left/right, and the right stick to view direction, with left/right as yaw and up/down as pitch. The second component, which made most of the raymarching script work, was a distance estimator. Each frame it evaluated the distance to the closest point on the set in one iteration, and I used that value for dynamic scaling: the closer you get to the set, the slower you move, which gives a dynamic sense of scale. I also had the quality (the epsilon value) adjust dynamically to the distance, so that the apparent quality would always remain the same no matter the depth; a rough sketch of that scaling logic follows the blueprint captures below. The last components were all quality-of-life things, like menus, switching between filters, and switching between the Julia and normal sets. The whole script is shown below:
Distance estimator
Setup
Right stick: view
Left stick: movement
Bumpers: iteration control (how many times the recursive function runs)
Filter change
Reset switch, in case you get stuck or lost
Julia/brot switch
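Here is the promised sketch of the dynamic-scaling logic, in Python for brevity (the blueprint itself is visual; names like `base_speed` and `scene_sdf` are mine for illustration):

def scale_controls(distance_to_set, base_speed=100.0, base_epsilon=0.001):
    # one distance-estimator evaluation per frame drives both controls:
    # move slower the closer you are, for a dynamic sense of scale
    speed = base_speed * distance_to_set
    # shrink epsilon with distance so the detail level stays constant at any depth
    epsilon = base_epsilon * distance_to_set
    return speed, epsilon

# each frame (standing in for the blueprint logic):
# dist = scene_sdf(camera_position)
# speed, epsilon = scale_controls(dist)

Tying both values to the same distance estimate is what makes zooming feel seamless: the world never appears to get coarser or faster as you dive in.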
Development video, showing all of the iterations discussed above: