After experimenting with low-level game engine stuff in Love2D, I still felt like I could learn a lot more about how videogames and 3D graphics in general work by going further under the hood. So in late 2021, I decided to learn how 3D graphics are created with OpenGL (a graphics API that many games still use today).
In realtime 3D graphics, shaders determine what gets rendered on the screen. Shaders are small programs that run on the GPU, which is hardware specifically designed for realtime graphics. And since computer graphics are built out of polygons, that's exactly what shaders operate on, in two main stages.
A vertex shader runs once for each vertex of a polygon, processing per-vertex values such as position and color.
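Here's a minimal sketch of what a vertex shader can look like in GLSL (the attribute names like `aPos` and `aColor` are just placeholders of my own, not anything OpenGL requires):

```glsl
#version 330 core

// Per-vertex inputs supplied by the application (names are placeholders)
layout (location = 0) in vec3 aPos;    // vertex position
layout (location = 1) in vec3 aColor;  // vertex color

// Value passed along to the fragment shader
out vec3 vertexColor;

void main()
{
    // gl_Position is the built-in output for the final vertex position
    gl_Position = vec4(aPos, 1.0);
    vertexColor = aColor;
}
```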
The vertex shader then passes its outputs to the fragment shader, which runs for every pixel the polygon covers on the screen and calculates that pixel's final color.
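And a matching fragment shader sketch, which simply outputs the color it received from the vertex shader:

```glsl
#version 330 core

// Interpolated color received from the vertex shader
in vec3 vertexColor;

// Final color written for this fragment
out vec4 FragColor;

void main()
{
    FragColor = vec4(vertexColor, 1.0);
}
```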
And that's basically the core of rendering realtime graphics with OpenGL. There are other steps involved in drawing polygons, such as rasterization, but OpenGL thankfully handles those for you.
So now that rendering polygons is covered, how do we actually get to 3D? That's where matrices come in.
Just like in geometry, matrices in computer graphics are used to apply transformations to models, or more specifically to the vertices of polygons in 3D space, though they have other uses as well.
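As a quick worked example (the numbers here are my own), translating a point by (2, 3, 0) is just a matrix-vector multiplication using homogeneous coordinates:

$$
\begin{bmatrix}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \end{bmatrix}
=
\begin{bmatrix} 3 \\ 4 \\ 0 \\ 1 \end{bmatrix}
$$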
In a vertex shader, the model matrix contains the transformations applied to a model, which includes translations, rotations, and scaling. These transformations are composed by multiplying the model matrix with additional matrices, one for each translation, rotation, or scale.
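Here's a sketch extending the earlier vertex shader, assuming the application composes the transformations on the CPU and uploads the result as a uniform I've called `model`:

```glsl
#version 330 core

layout (location = 0) in vec3 aPos;

// The composed model matrix (e.g. translation * rotation * scale),
// uploaded from the application as a uniform (name is a placeholder)
uniform mat4 model;

void main()
{
    // Apply the model's transformations to this vertex's position
    gl_Position = model * vec4(aPos, 1.0);
}
```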