An element of real-time games that is sometimes underestimated is the responsiveness and feel of the controls. Action game developers know the need for responsive controls and will sacrifice fidelity for greater responsiveness.
That's not always an easy thing to do. A lot of the work required to achieve this is in code and systems, but there are a couple of things related to animated movement that can prove useful.
Input devices buffer their inputs until the receiving software can flush the buffer and react to them, and they update at much higher frequencies than the software that consumes them.
What is in a game developer's control is how the engine and game process and react to those inputs. Software can target different update rates: for consoles I've generally had to target 30Hz, while a PC version has a variable update rate that can reach much higher rates and, on certain hardware, go much lower.
The key to understanding the latency that is under the game's control is to add logging to each system (input, the action translation layer, state machines and so on) until the information is delivered to a presentation system like animation or special FX.
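Something as simple as a timestamped trace marker dropped into each of those systems goes a long way, since you can then diff the timestamps per input to see where the frames go. This is just an illustrative macro, not a real engine API:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical trace marker: place one in each system an input flows through
// (input read, action translation, state machines, animation, fx) and compare
// the timestamps for a given input id to measure per-stage latency.
#define TRACE_INPUT_STAGE(stage, inputId)                                     \
    std::printf("[input %d] %s at %lld us\n", (inputId), (stage),             \
                static_cast<long long>(                                       \
                    std::chrono::duration_cast<std::chrono::microseconds>(    \
                        std::chrono::steady_clock::now().time_since_epoch())  \
                        .count()))
```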
Input devices that send button inputs are easy to translate to the game: they can simply raise events when the state of the button changes. The order in which these events are raised is important, as dependent game systems need to stop and start reactions in the correct order.
Gamepad analogue sticks are a bit more complicated, as you have to compensate for the flaws in the design of these sticks in software.
This means you need to map the unfiltered input data in a translation layer. This layer generally needs two dead zones, where the stick input is translated either to 0 (in an inner range) or to 1 (in an outer range).
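A minimal sketch of such a translation, with made-up dead zone values, could look like this:

```cpp
#include <cmath>

// Maps a raw stick magnitude into [0, 1] using an inner dead zone (noise and
// drift near the center snap to 0) and an outer dead zone (worn sticks that
// never quite reach full deflection snap to 1). The thresholds are placeholders.
float MapStickMagnitude(float raw, float innerDeadZone = 0.15f,
                        float outerDeadZone = 0.95f)
{
    const float magnitude = std::fabs(raw);
    if (magnitude <= innerDeadZone)
        return 0.0f;                       // inside inner dead zone -> no input
    if (magnitude >= outerDeadZone)
        return 1.0f;                       // inside outer dead zone -> full input
    // Remap the usable band between the two dead zones back to [0, 1].
    return (magnitude - innerDeadZone) / (outerDeadZone - innerDeadZone);
}
```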
In games with a variable frame rate that can reach update rates higher than 30Hz, the stick might still be accelerating at the moment the game samples it. For animations we don't want to react to a half-stick input while the stick is still moving, so instead of reacting at an intermediate value we can delay the reaction until the stick is no longer accelerating, and filter the result over time. This ensures a stable result.
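A sketch of that idea, with made-up names and thresholds:

```cpp
#include <cmath>

// Don't commit to an animation reaction while the stick magnitude is still
// changing quickly; once it settles, low-pass filter the value so the result
// is stable. Settle threshold and smoothing rate are illustrative.
struct StickFilter
{
    float lastMagnitude = 0.0f;
    float filtered = 0.0f;

    // Returns true when the stick has settled and 'filtered' can be trusted.
    bool Update(float magnitude, float deltaTime,
                float settleThreshold = 0.5f,   // change per second
                float smoothing = 10.0f)
    {
        const float rate = (magnitude - lastMagnitude) / deltaTime;
        lastMagnitude = magnitude;

        // Still accelerating: keep waiting instead of reacting to a half press.
        if (std::fabs(rate) > settleThreshold)
            return false;

        // Simple exponential smoothing toward the settled value.
        const float alpha = 1.0f - std::exp(-smoothing * deltaTime);
        filtered += (magnitude - filtered) * alpha;
        return true;
    }
};
```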
Using the stick input directly as movement speed is also something you might want to avoid. Different games have different goals for their movement controls, but you generally don't want to send the raw stick magnitude to the character physics as the target speed; you want to translate it first.
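One possible translation, mapping the stick magnitude onto a few gaits rather than using it as a raw speed fraction (the speeds and thresholds are placeholders):

```cpp
// Translate stick magnitude into a target speed via a small set of gaits
// instead of feeding the magnitude straight into the physics as a fraction
// of max speed. All values below are illustrative.
float StickMagnitudeToTargetSpeed(float magnitude)
{
    constexpr float WalkSpeed   = 180.0f;  // cm/s
    constexpr float JogSpeed    = 400.0f;
    constexpr float SprintSpeed = 600.0f;

    if (magnitude < 0.01f) return 0.0f;        // standing
    if (magnitude < 0.5f)  return WalkSpeed;   // light tilt -> walk
    if (magnitude < 0.95f) return JogSpeed;    // most of the range -> jog
    return SprintSpeed;                        // fully pushed -> sprint
}
```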
In engines where systems are implemented as separate components, the order in which those components are updated is key to a responsive character update. Some would say that the component model of certain engines is a source of a lot of problems, and while I agree, we will always need to solve the update order problem.
The way to achieve the correct update order lies in the basics of how you engineer the character systems. Simplified, the order of updates can look something like this:
Input -> Action -> Combat, Physics State Machines, etc. -> Animation Pre-Physics Update -> Physics Update -> Animation Post-Physics Update.
It becomes more complicated when characters need to interact with each other, and when you need higher performance you want to split the workload of updating many characters across multiple threads. The Action layer here is a system I've found useful for checking whether new actions should be blocked by actions that are already running. The AI, Combat and Physics state machines generally run right after that to execute what has been requested.
In Unreal you can achieve this controlled update order by setting tick prerequisites between components. In other engines something similar is usually available to set up dependencies between the different systems.
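As a sketch of what that can look like in Unreal, using AddTickPrerequisiteComponent; the component names are placeholders for whatever your action and state machine components are actually called:

```cpp
// Sketch of expressing the update order with tick prerequisites in Unreal.
// The components are placeholders; AddTickPrerequisiteComponent is the engine
// call that guarantees one component ticks after another.
void AMyCharacter::SetupUpdateOrder()
{
    // Action translation waits for buffered input, the combat state machine
    // waits for actions, and the skeletal mesh (animation) waits for the
    // state machine.
    ActionComponent->AddTickPrerequisiteComponent(InputBufferComponent);
    CombatStateComponent->AddTickPrerequisiteComponent(ActionComponent);
    GetMesh()->AddTickPrerequisiteComponent(CombatStateComponent);
}
```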
All the latency work wouldn't do much unless the player could effect change on the character during movement. The player wants control and needs to be able to change their mind mid-movement.
The solution that we've moved towards with the player characters is a model that mixes animation-driven movement with physics-driven movement. The animations are used to inform the simulation of how to move.
We precalculate curves for the movement and rotation speed of the root joint in our animations, and we have a physics simulation for our characters that can take the current value of those curves as limits on our acceleration and rotation.
We also make these limits active only during certain parts of the animations, specifically the acceleration, deceleration or rotation phases. This way, full movement control is handed back to the player earlier.
This approach gives us the solid foot placement of animation-driven movement and the responsive controls of physics-based movement.
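A rough sketch of how those curve values might be applied, assuming the curves have already been sampled for the current animation time and a flag tells us whether a limit window is active; the names and fallback numbers are illustrative:

```cpp
#include <algorithm>

// Physics still integrates speed and yaw, but while an acceleration,
// deceleration or rotation window is active in the current animation, the
// precalculated root-joint curves cap how quickly they may change.
// Angle wrapping is ignored for brevity.
void UpdateCharacterMovement(float deltaTime,
                             float desiredSpeed, float desiredYaw,
                             float& currentSpeed, float& currentYaw,
                             bool limitWindowActive,
                             float curveMaxAccel, float curveMaxYawSpeed)
{
    // Outside the marked windows the player gets full control authority.
    const float maxAccel    = limitWindowActive ? curveMaxAccel    : 2000.0f;
    const float maxYawSpeed = limitWindowActive ? curveMaxYawSpeed : 720.0f;

    // Clamp how far speed and facing may change this frame.
    currentSpeed += std::clamp(desiredSpeed - currentSpeed,
                               -maxAccel * deltaTime, maxAccel * deltaTime);
    currentYaw   += std::clamp(desiredYaw - currentYaw,
                               -maxYawSpeed * deltaTime, maxYawSpeed * deltaTime);
}
```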
However, even with such a system there are still cases where you need to use fully animation-driven movement.
For actions interrupting other actions, a common approach is to set up windows in the animation where the action can be interrupted. This can easily be done in a way that is inefficient and then needs to be reimplemented for every system.
An approach that I like and that is sufficiently detailed is to use markup Tags in these animation windows. That way the action that is trying to run can simply check whether the Tag it is looking for is currently active. Similarly, you can use the Tag system to lock out specific actions by checking that certain Tags are not active.
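A minimal sketch of the idea; the tag names and the dodge example are made up, and a real implementation would likely use the engine's own tag types rather than strings:

```cpp
#include <string>
#include <unordered_set>

// The animation marks up windows with tags, and an action that wants to start
// asks whether its tag is active (or whether a blocking tag is absent).
struct ActiveAnimTags
{
    std::unordered_set<std::string> tags;   // filled in by the animation layer

    bool Has(const std::string& tag) const { return tags.count(tag) != 0; }
};

bool CanStartDodge(const ActiveAnimTags& active)
{
    // The current animation exposes a "CanInterrupt" window...
    if (!active.Has("CanInterrupt"))
        return false;
    // ...and some states lock specific actions out entirely.
    if (active.Has("BlockDodge"))
        return false;
    return true;
}
```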
One feature that is also necessary for responsive control is buffering of actions: just like the input device buffers button presses, we want to buffer actions if the player presses the button too early. The buffer windows are generally quite short, and if the player activates a different action before the currently buffered action has started, you want to switch to the last action chosen.
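A one-slot buffer along these lines is enough to illustrate it; the window length is a placeholder:

```cpp
#include <optional>
#include <string>

// An early press is remembered for a short window, and a newer press simply
// overwrites the older one, so the last action the player chose is the one
// that fires.
struct ActionBuffer
{
    std::optional<std::string> pending;   // buffered action, if any
    float timeLeft = 0.0f;                // remaining buffer window

    void Buffer(const std::string& action, float window = 0.2f)
    {
        pending  = action;                // newer press replaces the old one
        timeLeft = window;
    }

    // Called each frame; returns the action to start once one is allowed.
    std::optional<std::string> Update(float deltaTime, bool canStartNewAction)
    {
        if (!pending)
            return std::nullopt;
        timeLeft -= deltaTime;
        if (timeLeft <= 0.0f) { pending.reset(); return std::nullopt; }
        if (!canStartNewAction)
            return std::nullopt;
        auto action = pending;
        pending.reset();
        return action;
    }
};
```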
I also personally like to selectively add one small feature: allowing 'aftertouch', meaning the player can press a button to activate an action just outside the window where it was allowed, specifically for cases like jumping right after you've started falling. The reason is that there is rendering and display latency on top of the game engine's own, so when the player pressed the button it may have looked like the perfect time to them.
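A small sketch of that grace window for the jump case; the window length is a placeholder:

```cpp
// Track how long ago the character left the ground and still accept a jump
// press within a short grace window, compensating for render/display latency.
struct JumpGrace
{
    float timeSinceLeftGround = 1000.0f;   // large value = long airborne

    void Update(float deltaTime, bool isGrounded)
    {
        timeSinceLeftGround = isGrounded ? 0.0f
                                         : timeSinceLeftGround + deltaTime;
    }

    bool CanJump(float graceWindow = 0.1f) const
    {
        return timeSinceLeftGround <= graceWindow;
    }
};
```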
There are approaches where actions are defined in data as timings by game designers and where the animation layer is simply a visual layer on top of the game.
This separates the animation from the gameplay logic, and that split makes data flow in one direction. It reduces complexity and makes the game perform exactly as the designers intend. It is a design that suits a stats-and-mechanics-based game better than some of the games I've worked on.
It can also make the game more reactive, because the animation system then simply has to reach a certain event by a certain time, and the more animation data you give the system, the more gracefully it can reach that event.
However, I feel that this kind of approach does not work for many games where the actions are what define the timings. The actions require special logic and annotation, elements that I think won't change any time soon in character-action-type games.
Look at the UFC GDC presentation for an example of how you would approach that other solution.