Ivorfall is a third-person shooter where the player combines different types of mods to change their firearm's behaviour. The game was made in Unreal for my Capstone project with a team of 26 people.
Two of my most notable contributions were the game's environmental features and dialogue system, which are discussed below.
Early in development, I was assigned the role of implementing interactable environmental elements for Ivorfall. In particular, designers wanted many destructible objects with different destruction mechanics and effects. For example, we needed barrels that would explode - dealing damage once, or dropping mods or money - and we also needed power line poles that would fall over and deal continuous damage at the cut ends of the power lines.
Though both of these are destructible objects, the mechanics of their destruction are different, along with the effects post-destruction. I decided that the best way to approach this problem would be to leverage component-based design, as loosely-coupled components would allow for a variety of behaviours while also reusing a lot of code. The components are implemented in C++, while actors using the components are implemented in blueprints, as blueprints facilitate rapid iteration and most of the logic connecting these components is simple.
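As a sketch of the component half of this design, the shared health component could look something like the following; the delegate name and exact signatures here are illustrative, not the shipped code:

```cpp
#include "Components/ActorComponent.h"
#include "HealthComponent.generated.h"

// Dynamic delegates must be declared at file scope for Unreal's header tool.
DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnHealthDepleted);

UCLASS(ClassGroup=(Custom), meta=(BlueprintSpawnableComponent))
class UHealthComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Broadcast once when health runs out, letting each owning actor
    // bind its own destruction behaviour (explode, fall over, etc.).
    UPROPERTY(BlueprintAssignable)
    FOnHealthDepleted OnHealthDepleted;

    UFUNCTION(BlueprintCallable)
    void ApplyDamage(float Amount)
    {
        if (CurrentHealth <= 0.f) return;
        CurrentHealth -= Amount;
        if (CurrentHealth <= 0.f)
        {
            OnHealthDepleted.Broadcast();
        }
    }

private:
    UPROPERTY(EditAnywhere, Category = "Health")
    float CurrentHealth = 100.f;
};
```

Because the component only broadcasts an event rather than deciding what destruction means, any actor - crate, barrel, or pole - can reuse it unchanged.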
As designed, having many independent components made environmental hazard implementation relatively fast. Crates that dropped mods simply inherit from ADestructibleEnvironment, spawn a child of the BP_Debris class, call SpawnRandomMod() on a URandomModSpawnerComponent, and destroy themselves when they run out of health. Ivorfall's power line poles use the same UHealthComponent as the crates, but perform a different set of operations when they run out of health - namely, turning on physics for the top portion of the pole, detaching the ends of cable components, and turning on UContinuousDamageComponents at the ends of the cables.
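For illustration, here is roughly how a pole-like actor might bind its own handler to that shared component; the handler and member names below are assumptions:

```cpp
void APowerLinePole::BeginPlay()
{
    Super::BeginPlay();
    // HandlePoleDestroyed is declared as a UFUNCTION() so it can bind
    // to the dynamic delegate.
    HealthComponent->OnHealthDepleted.AddDynamic(
        this, &APowerLinePole::HandlePoleDestroyed);
}

void APowerLinePole::HandlePoleDestroyed()
{
    // Let the top of the pole topple under physics.
    TopMesh->SetSimulatePhysics(true);

    // Detach the cable ends and arm the continuous damage
    // components at the cut ends.
    for (UContinuousDamageComponent* Damager : CableEndDamagers)
    {
        Damager->Activate();
    }
}
```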
This system is not perfect, however. One issue with this architecture is the prevalence of redundant code connecting components. For example, though both the crates and the power line poles use the same UHealthComponents, the power line poles do not inherit from ADestructibleEnvironment like the crates do. This is because ADestructibleEnvironment requires that the root component of the actor be a static mesh, while the power line poles need a scene component as their root. So, the power line poles have to repeat the work that ADestructibleEnvironment does to hook the IDamageableInterface up to the actor's UHealthComponent. This is an artifact of short-sighted design: it would be relatively simple to give ADestructibleEnvironment a parameter, such as a TSubclassOf<USceneComponent>, that decides the type of the root component, supporting both the crates and the power line poles.
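As a sketch of what that fix could look like - using Unreal's FObjectInitializer::SetDefaultSubobjectClass idiom rather than a literal TSubclassOf property, with illustrative class names - the base class can create a generic scene root that subclasses swap out:

```cpp
// Base class: a plain scene component as the root.
ADestructibleEnvironment::ADestructibleEnvironment(const FObjectInitializer& ObjectInitializer)
    : Super(ObjectInitializer)
{
    SetRootComponent(CreateDefaultSubobject<USceneComponent>(TEXT("Root")));
}

// A crate swaps in the static mesh root it needs...
ADestructibleCrate::ADestructibleCrate(const FObjectInitializer& ObjectInitializer)
    : Super(ObjectInitializer.SetDefaultSubobjectClass<UStaticMeshComponent>(TEXT("Root")))
{
}

// ...while a pole-style subclass keeps the plain scene root,
// and both inherit the IDamageableInterface-to-UHealthComponent wiring.
```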
However, this isn't the only source of redundant code. Though it never came up in development, if we wanted an object with the behaviours of both ADestructibleEnvironment and ADamagePusheableEnvironment (i.e. a health component that takes damage, plus damage converted into a force that pushes the object), we would have to inherit from one of the two classes and re-implement the other, as Unreal does not allow multiple inheritance in UObjects.
For Ivorfall, designers asked me to implement a system that would support playing dialogue from a variety of triggers. Some of the events that needed to play dialogue were when the player picks up a mod, when the player enters a level, and at random times over the course of a level with enemies. At the same time, they asked that the system offer control over the number of lines of dialogue playing concurrently.
Thus, we needed both a generic system to handle the variety of use-cases and a centralized system to keep track of how much dialogue is currently playing. With this in mind, I designed a dialogue system with many layers for dependency injection to handle the generic requirement, and a pipeline for playing dialogue that passes through an Unreal Subsystem to satisfy the centralized requirement.
The system can be divided into two parts: dialogue sequence generation and data, and dialogue sequence execution. The former consists of objects that store the files making up a dialogue sequence (e.g. sound files - UDialogueSequenceData), pick actors at runtime to emit lines of dialogue (UDialogueTargetBase), and store which in-game events trigger which dialogue sequences (UDialogueData). For Ivorfall, we used a messaging subsystem with FGameplayTags to notify actors globally when a game event occurs, so mapping events to dialogue sequences was straightforward. Executing the dialogue sequences involves workers that monitor the execution of a single dialogue sequence (UDialogueSequenceWorker), containers that limit the number of workers running at the same time (UDialogueChannel), and a subsystem that all publicly-facing dialogue queries go through (UInquiryDialogueSubsystem).
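To make the concurrency cap concrete, here is a minimal sketch of a channel refusing to start a worker once full; the method and field names are assumptions about the real implementation:

```cpp
bool UDialogueChannel::TryStartWorker(UDialogueSequenceWorker* Worker)
{
    // Drop workers whose sequences have already finished.
    ActiveWorkers.RemoveAll([](const UDialogueSequenceWorker* W)
    {
        return W == nullptr || W->IsFinished();
    });

    if (ActiveWorkers.Num() >= MaxConcurrentSequences)
    {
        // Channel is full; the subsystem can queue or discard the request.
        return false;
    }

    ActiveWorkers.Add(Worker);
    Worker->Start();
    return true;
}
```

Because every request flows through the subsystem and into a channel like this, the designers' limit on concurrent dialogue is enforced in exactly one place.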
This system was sufficiently generic, and offered enough control over which dialogue sequences were playing, to cover unexpected hitches in Ivorfall's development. For example, it turned out that the event triggered when the player picks up a mod was also fired when the player drops one. However, since a UDialogueSequenceGetter accepts the payload of the message, and the message contains the equipped mod - or nullptr when a mod is unequipped - I could easily create a custom getter that uses the existing message to only return a dialogue sequence when the player actually picks up a mod.
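A sketch of that getter, assuming a hypothetical base-class API (the GetSequence override and payload struct below are illustrative):

```cpp
UDialogueSequenceBase* UModPickupDialogueGetter::GetSequence(const FDialogueEventPayload& Payload)
{
    // The pickup and drop events share a gameplay tag; a dropped mod
    // arrives as nullptr, so returning nothing suppresses the dialogue.
    if (Payload.EquippedMod == nullptr)
    {
        return nullptr;
    }
    return PickupSequence;
}
```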
Having multiple layers for dependency injection proved useful. We ended up creating many different types of DialogueSequenceGetters and DialogueTargets.
Additionally, having a single interface for interacting with the dialogue system made adding new functionality - such as playing dialogue from blueprints and listening to when dialogue is interrupted - easy.
In Ivorfall, every so often, a dialogue sequence is played where the enemy nearest to the player emits one of a random set of lines of dialogue. This became a problem when we introduced a melee thug - a standard enemy who only uses melee attacks - late in development, as some of the thug lines reference owning a firearm, which melee thugs don't have. We needed to remove those lines as options when the chosen speaker was a melee thug rather than a standard thug.
Unreal already implements UDialogueWaves and UDialogueVoices to handle this scenario; however, Ivorfall's dialogue system was built on USoundBase. To support choosing a sensible line of dialogue without performing a major refactor of the dialogue system, I implemented a custom DialogueSequence that looks at the result of finding the nearest enemy and decides between two sets of lines of dialogue to play (one with references to firearms, and one without). This was sufficient for our project, but came with drawbacks. First, the custom dialogue sequence had to know about the types of enemies in the game, as opposed to the enemies choosing lines of dialogue themselves; if we added a third type of enemy, we would need to modify this custom dialogue sequence again. Second, it was fixed to dialogue sequences with only one line of dialogue. It would be possible to extend this custom dialogue sequence to any length, but doing so would involve a lot of tedium (notably for lines where the enemy types don't have different responses).
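A sketch of the core of that custom sequence, with assumed method and member names; note how the enemy-type check is baked into the sequence itself, which is the first drawback above:

```cpp
USoundBase* UNearestEnemyDialogueSequence::ChooseLine(AActor* Speaker)
{
    // The sequence, not the enemy, decides which pool applies -
    // adding a third enemy type means editing this class again.
    const TArray<USoundBase*>& Pool =
        Speaker->IsA<AMeleeThug>() ? LinesWithoutFirearmReferences
                                   : LinesWithFirearmReferences;

    if (Pool.Num() == 0)
    {
        return nullptr;
    }
    return Pool[FMath::RandRange(0, Pool.Num() - 1)];
}
```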
In retrospect, using UDialogueWaves and UDialogueVoices would not have expanded the generalizability of the system, but it would have made some use-cases easier to implement.
Another problem with this implementation of the dialogue system is that there are three different ways of responding to a game event with a line of dialogue. The first is to create a UDialogueSequence from UDialogueSequenceData. The second is to instantiate a UDialogueSequenceGetter to choose a sequence of dialogue at runtime. The third is to instantiate a custom dialogue sequence that inherits from UDialogueSequenceBase. Showing all of these options in-editor was problematic. In theory, since all three of these options reference a single object, we could have used a TUnion for this task; however, Unreal does not support showing TUnions in the editor. As a result, I fell back on a struct with pointers to all three candidate objects and an enum deciding which one is used.
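In rough shape (exact names approximate), that fallback struct looked something like this:

```cpp
UENUM()
enum class EDialogueResponseType : uint8
{
    SequenceData,
    SequenceGetter,
    CustomSequence
};

USTRUCT()
struct FDialogueResponse
{
    GENERATED_BODY()

    UPROPERTY(EditAnywhere)
    EDialogueResponseType Type = EDialogueResponseType::SequenceData;

    // Only the pointer matching Type is ever read; the other two are
    // silently ignored, which is exactly the editing pitfall described below.
    UPROPERTY(EditAnywhere)
    TObjectPtr<UDialogueSequenceData> SequenceData;

    UPROPERTY(EditAnywhere)
    TObjectPtr<UDialogueSequenceGetter> SequenceGetter;

    UPROPERTY(EditAnywhere)
    TObjectPtr<UDialogueSequenceBase> CustomSequence;
};
```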
This is not an ideal solution, because a designer could fill out one of the pointers, thinking that pointer would be used, without updating the enum, resulting in unexpected behaviour. Though we didn't have the time for it, a custom editor script that only displays the pointer selected by the enum would have been helpful.
(Figure: a response to a game event using a UDialogueSequenceGetter. Three references to objects are shown in-editor, but only one is ever used at a time; the Type field determines which.)
Finally, the UDialogueEmitterComponent (the component that plays a line of dialogue on an actor) used a reference to an AudioComponent to play lines of dialogue. Though this had benefits - such as adding a pitch adjustment to melee thugs to make their voices deeper - it also could have been an entry point for interference, e.g. playing a sound effect through the AudioComponent and unintentionally interrupting a line of dialogue.
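For illustration, the emitter's playback path might have looked like this (member names assumed; the UAudioComponent calls themselves are standard Unreal API):

```cpp
void UDialogueEmitterComponent::PlayLine(USoundBase* Line)
{
    AudioComponent->SetSound(Line);
    // e.g. a multiplier below 1.0 deepens melee thug voices.
    AudioComponent->SetPitchMultiplier(VoicePitchMultiplier);
    AudioComponent->Play();
}
```

An audio component owned exclusively by the emitter - rather than one shared with other sound effects - would have closed off that entry point.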