The Final Stretch.
Previously, we began using high dynamic range (HDR) images for our environment and for the framebuffer, which significantly improved visual quality. However, we were still missing two important pieces of the HDR workflow - bloom and auto-adjusting exposure - so we set about adding them.
Bloom is the effect where especially bright areas of the image brighten nearby areas in a glow-like effect, making those points seem even brighter. HDR lends itself naturally to this technique: with its ability to capture, well, a high range of colors, we can easily find which areas of the image are bright. Taking the source image, we do two rounds of Gaussian blur, each of which is split into separate horizontal and vertical passes. Splitting the blur this way produces the same result but costs only O(2n) time per pixel instead of O(n^2), a nice 3.5x speedup for our kernel size of 7. We then darken the blurred image a bit and add it back on top of the original, which creates a nice bloom effect.
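Roughly, a single one of those blur passes looks like the following sketch. It's written on the CPU for clarity with illustrative 7-tap weights; in the engine each pass runs as a fullscreen fragment shader over the framebuffer.

```cpp
#include <algorithm>
#include <vector>

// One pass of the separable Gaussian blur on a single-channel image.
// Running it once horizontally and once vertically gives the full 2D blur.
void blurPass(const std::vector<float>& src, std::vector<float>& dst,
              int width, int height, bool horizontal)
{
    // Normalized 7-tap Gaussian kernel (illustrative values, sums to 1).
    static const float kWeights[7] = { 0.028f, 0.103f, 0.223f, 0.292f,
                                       0.223f, 0.103f, 0.028f };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int t = -3; t <= 3; ++t) {
                // Step along one axis only; the other pass handles the rest.
                int sx = horizontal ? std::clamp(x + t, 0, width - 1)  : x;
                int sy = horizontal ? y : std::clamp(y + t, 0, height - 1);
                sum += src[sy * width + sx] * kWeights[t + 3];
            }
            dst[y * width + x] = sum;
        }
    }
}

// Usage: blurPass(img, tmp, w, h, true); blurPass(tmp, img, w, h, false);
```

Each pixel touches 2n = 14 texels across the two passes instead of n^2 = 49 for a full 2D kernel, which is where the 3.5x figure comes from.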
In addition, we would also like the system to automatically adjust the exposure as the scene gets brighter and darker, as opposed to our previous hard-coded exposure of 0.5. To do this, we take the output framebuffer image and run it through glGenerateMipmap. This averages out all the colors in the image, which lets us obtain the average brightness. Furthermore, this method runs entirely on the GPU and is very simple to implement. With the average brightness in hand (extracted from the smallest mipmap level using textureLod()), we can calculate exposure accordingly.
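A sketch of that step, assuming a GL context and a bound HDR color texture; the CPU readback here stands in for the textureLod() fetch our tonemapping shader performs, and the target luminance is illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <GL/glew.h>

// Compute an exposure value from the frame's average brightness.
float computeExposure(GLuint hdrTexture, int width, int height)
{
    // Average the whole frame down to a 1x1 mip entirely on the GPU.
    glBindTexture(GL_TEXTURE_2D, hdrTexture);
    glGenerateMipmap(GL_TEXTURE_2D);

    // The smallest mip level holds the average color of the frame.
    int topLevel = (int)std::floor(std::log2((float)std::max(width, height)));
    float avg[3] = { 0.0f, 0.0f, 0.0f };
    glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGB, GL_FLOAT, avg);

    // Convert the average color to luminance, then scale exposure
    // toward a chosen target brightness.
    float avgLum = 0.2126f * avg[0] + 0.7152f * avg[1] + 0.0722f * avg[2];
    const float targetLum = 0.5f;
    return targetLum / std::max(avgLum, 0.0001f);
}
```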
Getting gameplay to feel right was important to us. Our gameplay consists of roll, pitch, yaw, thrust, afterburner, shooting, and zoom. Roll, pitch, yaw, thrust, and afterburner each map a joystick or keyboard axis to a delta, and each delta is automatically interpolated for us by the input system. However, to smooth things out further and make the ship feel like it has weight, we added another sensitivity level to rotation, thrust, and afterburner, which gives the ship a natural feeling when maneuvering.
For rotation, we calculate our delta angles and perform an angle-axis rotation using quaternions for each axis. We then multiply the quaternions together to get an overall delta rotation and multiply that with the ship's current world-space rotation. For thrust and afterburner, we take our axis input and multiply it by deltaTime and a per-axis sensitivity. Then we cap both values at a max speed and a max afterburner effect.
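A minimal sketch of that per-frame rotation update using glm; the input and sensitivity names here are placeholders rather than our exact engine code.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Build a delta rotation from the three axis inputs and apply it to the
// ship's current world-space rotation.
glm::quat updateRotation(const glm::quat& worldRotation,
                         float rollInput, float pitchInput, float yawInput,
                         float sensitivity, float deltaTime)
{
    // Convert each axis input into a small delta angle for this frame.
    float roll  = rollInput  * sensitivity * deltaTime;
    float pitch = pitchInput * sensitivity * deltaTime;
    float yaw   = yawInput   * sensitivity * deltaTime;

    // Angle-axis quaternion for each local axis.
    glm::quat qRoll  = glm::angleAxis(roll,  glm::vec3(0, 0, 1));
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(yaw,   glm::vec3(0, 1, 0));

    // Combine into one overall delta and compose with the current rotation.
    glm::quat delta = qYaw * qPitch * qRoll;
    return worldRotation * delta;
}
```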
To make thrust feel right, we implemented a basic momentum system. The player has a travel velocity and a forward velocity. The travel velocity represents the direction the ship is actually moving at that moment, and the forward velocity represents the desired direction of travel. We interpolate between them with a curve that looks similar to the function 1/x, where changes happen rapidly at first and slowly diminish as you get close to the target vector. This represents our flight model well, since our concept of the fighter only had a single set of thrusters for forward velocity.
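In code, that interpolation can be as simple as lerping a fixed fraction of the way toward the target each frame, which naturally produces the rapid-then-diminishing curve. This is a sketch of the idea with hypothetical names, not the exact engine code.

```cpp
#include <glm/glm.hpp>

// Each frame, move the travel velocity a fraction of the remaining distance
// toward the desired forward velocity. Because the step is proportional to
// the remaining gap, changes are fast at first and taper off near the target.
void updateMomentum(glm::vec3& travelVelocity, const glm::vec3& forwardVelocity,
                    float convergenceRate, float deltaTime)
{
    float t = glm::clamp(convergenceRate * deltaTime, 0.0f, 1.0f);
    travelVelocity = glm::mix(travelVelocity, forwardVelocity, t);
}
```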
Afterburner plays off the idea of thrust, understandably. It represents allocating resources away from other systems (in our case, guns) and towards the engines so you can catch up to enemy fighters or make tighter turns. It works by multiplying the velocity by a certain factor (we set ours to +20%) to make the ship faster, and also making the travel direction change quicker, by a factor of up to 2x. By making velocity converge towards the ship's direction faster, you can maneuver the ship much better in tight quarters and around obstacles.
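Layered on top of the momentum update above, the afterburner might look like this; the names are placeholders and the scaling mirrors the factors described.

```cpp
#include <glm/glm.hpp>

// 'burn' is the afterburner axis in [0, 1]. It boosts top speed by up to
// 20% and lets the travel direction converge up to 2x faster.
void applyAfterburner(glm::vec3& travelVelocity, const glm::vec3& forwardVelocity,
                      float burn, float convergenceRate, float deltaTime)
{
    float speedBoost = 1.0f + 0.2f * burn;   // up to +20% speed
    float turnBoost  = 1.0f + 1.0f * burn;   // direction change up to 2x faster
    float t = glm::clamp(convergenceRate * turnBoost * deltaTime, 0.0f, 1.0f);
    travelVelocity = glm::mix(travelVelocity, forwardVelocity * speedBoost, t);
}
```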
We made a fairly simple collision system compared to some of our other features, due to the time crunch and our desire to get a full game implemented by the due date. We assign approximate bounding boxes by passing an offset and dimensions to our collider. The collider attaches itself to its parent game object and, as a result, moves around with it. This is our standard setup for game objects and components, laid out very similarly to Unity's game engine and scene graph.
A collider approximates an axis-aligned bounding box (AABB), which it uses for rough collision checks. A static function then loops through each collider and checks it against all other colliders for intersection. Sadly, we probably won't have time to implement sweep and prune or octrees for this project, but we do have a quick and effective way to lower the computation.
Each collider has a boolean member called "passive," which marks it as either dynamic or passive. In code, this means a passive collider ignores all other colliders and simply exists, while a dynamic collider runs the naive algorithm, checking for collisions against both dynamic and passive colliders. This lifts a lot of CPU overhead, since our game consists mostly of passive colliders.
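A sketch of the AABB test and the passive/dynamic split, with hypothetical field names; in the engine the colliders derive their world-space bounds from their parent game objects.

```cpp
#include <vector>
#include <glm/glm.hpp>

struct Collider {
    glm::vec3 min, max;   // world-space AABB corners
    bool passive;         // passive colliders never initiate checks
};

// Standard AABB overlap test on all three axes.
bool intersects(const Collider& a, const Collider& b) {
    return a.min.x <= b.max.x && a.max.x >= b.min.x &&
           a.min.y <= b.max.y && a.max.y >= b.min.y &&
           a.min.z <= b.max.z && a.max.z >= b.min.z;
}

void checkCollisions(std::vector<Collider>& colliders) {
    for (size_t i = 0; i < colliders.size(); ++i) {
        if (colliders[i].passive) continue;   // passive: skip entirely
        for (size_t j = 0; j < colliders.size(); ++j) {
            if (i == j) continue;
            if (intersects(colliders[i], colliders[j])) {
                // dispatch a collision event here
            }
        }
    }
}
```

Since only dynamic colliders run the inner loop, the cost scales with the handful of bullets rather than with every asteroid and fighter in the scene.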
Using passive collision in the engine will look something like this, and it's why this method works for our demo. We will have a field of asteroids, a bunch of fighter ships, and just a few bullets at a time. Making the asteroids passive is an obvious optimization, but we're going a step further and making the fighters passive as well. It seems counterintuitive, but it makes sense because we already have boids in place. Because of boids, ships actively avoid the other ships in their squadron and all of the asteroids. If a fighter does collide with another fighter, it won't look very weird, and the boid code will resolve it naturally. The only exception is when one squadron flies through another, which should only last a split second and will likely not be noticed during the demo. Adding collision detection there is unnecessary and would create more problems than it solves.
What this leaves us with is bullets having the playground to themselves. And since we only have a few bullets at a time, collision won't take very long. This means we'll be able to crank up the number of asteroids and fighters even more!
We have support for real-time shadow maps for directional lights. While it would have been nice to have shadows for all lights, it would have been prohibitively costly performance-wise. To render the shadow map, we gather a list of all fully opaque objects in the scene and render their depth into a framebuffer. Transparent objects are not supported, because they would only cast a partial shadow, which would be too difficult to simulate for this project. The shadow camera uses an orthographic projection matrix, since the shadows are cast by a directional light and so can all be assumed to be parallel. The camera that renders the shadow map is always centered on the main camera, so that anything outside the frustum is far away from the camera and less likely to have noticeable shadows.
To actually apply the shadow map once it is rendered, everything in the deferred pass binds it as a texture and transforms its position into shadow space using another matrix that is also passed in. We use a slope-scaled bias plus a very small constant to avoid shadow acne without introducing severe peter-panning. The texture uses clamp-to-border, and we check whether the z depth falls outside the shadow map before checking shadows, so that anything outside the shadow frustum is lit instead of in shadow (which looks much better than the alternative).
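The shadow test looks roughly like this, mirrored in C++/glm for readability; in the engine it lives in the deferred shader, and the bias constants shown are illustrative rather than our exact tuned values.

```cpp
#include <algorithm>
#include <glm/glm.hpp>

// Returns 1.0 for lit, 0.0 for shadowed.
float shadowFactor(glm::vec3 shadowSpacePos,   // position after the shadow matrix, in [0,1]
                   float shadowMapDepth,       // depth sampled from the shadow map
                   glm::vec3 normal, glm::vec3 lightDir)
{
    // Anything beyond the shadow frustum's far plane is treated as lit.
    if (shadowSpacePos.z > 1.0f) return 1.0f;

    // Slope-scaled bias: surfaces at a steep angle to the light get more
    // bias, plus a small constant term, to avoid acne without heavy
    // peter-panning.
    float slope = 1.0f - glm::dot(normal, lightDir);
    float bias  = std::min(0.005f * slope + 0.0005f, 0.01f);

    return (shadowSpacePos.z - bias) > shadowMapDepth ? 0.0f : 1.0f;
}
```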
We experimented with various types of filtering, but found that the best quality/performance ratio was easily regular four-tap Poisson disk sampling combined with hardware PCF. Anything that produced better results (randomized or stratified Poisson, etc.) required far more samples and computational overhead. Unfortunately, we probably will not have time to implement cascaded shadow maps, although they would have improved the quality/performance ratio further.
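The four-tap filter is roughly the following; the offsets are a commonly published Poisson disk set rather than necessarily our exact values, and the sampler callback stands in for the sampler2DShadow lookup that provides the hardware PCF.

```cpp
#include <glm/glm.hpp>

// Widely used four-sample Poisson disk offsets (illustrative).
static const glm::vec2 kPoisson[4] = {
    { -0.94201624f, -0.39906216f },
    {  0.94558609f, -0.76890725f },
    { -0.09418410f, -0.92938870f },
    {  0.34495938f,  0.29387760f },
};

// 'hardwarePCF' is a stand-in for a sampler2DShadow comparison fetch;
// each tap already benefits from the hardware's bilinear depth compare.
template <typename ShadowSampler>
float filteredShadow(ShadowSampler hardwarePCF, glm::vec2 uv,
                     float depth, float texelSize)
{
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i)
        sum += hardwarePCF(uv + kPoisson[i] * texelSize, depth);
    return sum * 0.25f;   // average of the four taps
}
```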
We wanted some cool sound to go with our cool game, so we decided to implement a sound wrapper for it. We went with FMOD as our platform for its ease of use and wide adoption in the games industry. The sound API includes looping, playback, volume control, and 2D/3D audio, depending on the sound. We use 2D audio for things like cabin ambiance and music playback, and 3D for things like ship explosions. The wrapper also features a sound map that is initialized on startup, which loads in every sound that could possibly be used in the game.
The map acts as a great way to abstract file loading and manage each sound's life cycle, and since all sounds are pre-loaded at startup, we don't need to make any disk accesses during runtime. To play a sound, we create a copy of its instance inside the map and attach it to a channel. Once it's attached, we can manipulate that instance individually without reading from the disk or worrying about accidentally changing another sound. We use a single listener, at the camera location, to tell FMOD where sounds are heard from. This lets us attach unique 3D sounds to objects, which update their 3D attributes for FMOD to process.
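A minimal sketch of that flow against the FMOD Core API; the file name is a placeholder and error checking is omitted for brevity.

```cpp
#include <fmod.hpp>

int main() {
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);
    system->init(512, FMOD_INIT_NORMAL, nullptr);

    // Pre-load at startup; FMOD_3D marks this as a positional sound.
    FMOD::Sound* explosion = nullptr;
    system->createSound("explosion.wav", FMOD_3D, nullptr, &explosion);

    // Playing attaches the sound to a channel we can manipulate individually.
    FMOD::Channel* channel = nullptr;
    system->playSound(explosion, nullptr, false, &channel);

    // Position the sound instance in the world.
    FMOD_VECTOR pos{ 10.0f, 0.0f, 0.0f }, vel{ 0.0f, 0.0f, 0.0f };
    channel->set3DAttributes(&pos, &vel);

    // The single listener follows the camera.
    FMOD_VECTOR camPos{ 0, 0, 0 }, camVel{ 0, 0, 0 };
    FMOD_VECTOR forward{ 0, 0, 1 }, up{ 0, 1, 0 };
    system->set3DListenerAttributes(0, &camPos, &camVel, &forward, &up);

    system->update();   // called once per frame in the real engine
    return 0;
}
```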
I've always been a fan of the long engine trails fighters emit in some games as they zip around in complex maneuvers across the battlefield. The most notable example is the Homeworld series, where the long trails play an important gameplay role by making small fighters more visible from the zoomed-out RTS perspective. While our game isn't an RTS, engine trails, even if not quite so long, help give ships a sense of movement (since there is little parallax with the background), help players track enemy ships, and just look really cool. That's why we decided to implement them in our game, and I think the results came out quite nicely.
We created engine trails by recording the position of the ship every few milliseconds, keeping a fixed-size queue of points and throwing out the oldest points when adding more. Then, every time we render, we create a new Vertex Buffer Object containing two duplicates of each point's data (position, direction to the next point, and which of the two duplicates it is), and render it on the GPU as a triangle strip. On the GPU, we calculate the screen-space direction, cross it with the z vector to get the screen-space tangent, and then move the duplicated points in opposite directions along this tangent. While this approach has problems when the trail points directly into the screen, on the whole the trails look great and scale correctly. By billboarding in this way, we are able to render seamless engine trails with a small number of triangles.
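The per-vertex expansion looks roughly like this, mirrored in C++/glm for readability; in the engine it runs in the vertex shader, and 'side' marks which of the two duplicated vertices is being processed.

```cpp
#include <glm/glm.hpp>

// Expand one trail vertex sideways in screen space. 'side' is -1 or +1,
// identifying which of the two duplicates this vertex is.
glm::vec4 expandTrailVertex(const glm::mat4& viewProj, glm::vec3 position,
                            glm::vec3 dirToNext, float side, float halfWidth)
{
    glm::vec4 clipPos  = viewProj * glm::vec4(position, 1.0f);
    glm::vec4 clipNext = viewProj * glm::vec4(position + dirToNext, 1.0f);

    // Screen-space direction of the trail at this point.
    glm::vec2 screenDir = glm::normalize(glm::vec2(clipNext) / clipNext.w -
                                         glm::vec2(clipPos)  / clipPos.w);

    // Crossing with the z vector reduces to a 2D perpendicular: the
    // screen-space tangent along which the two duplicates are pushed apart.
    glm::vec2 tangent(-screenDir.y, screenDir.x);

    // Scale by w so the offset stays constant after perspective divide.
    clipPos.x += tangent.x * side * halfWidth * clipPos.w;
    clipPos.y += tangent.y * side * halfWidth * clipPos.w;
    return clipPos;
}
```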