One big challenge in game animation is how to transition from one animation to another. Do you create more transition animations and add them to your state machine? Do you make your blend a second long? Do you add sync point markers to every animation that could possibly be blended together? Do you implement a joint-velocity-based blend that simulates towards the target joint velocity? Do you implement motion matching and build large motion capture databases?
There is one technique that isn't actually too complicated or costly, and you can use it right away to replace sync point markers, which can have markup problems: pose matching.
Pose matching is a matter of checking two animation poses for similarity and calculating a matching cost. If you apply this calculation across an animation as a search, you can find the best pose in that animation and change the start time of the animation you are blending to accordingly. If you base the pose matching on the character's actual output pose from the last frame, you can find the closest match to your currently blended pose.
The way we do the pose matching calculation is by computing a position difference and a velocity difference on a few joints, with a weighting that prioritizes the position difference. It is possible to also compare rotational velocity to improve the match, and to add more joints to the comparison when you need better contextual pose matching.
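As a rough sketch of that cost calculation, here is a standalone C++ version. The `JointSnapshot` struct, the joint count, and the weight values are all illustrative assumptions, not the actual implementation or Unreal types; the idea is just a weighted sum of squared position and velocity differences, with position weighted higher.

```cpp
#include <array>
#include <cstddef>
#include <cassert>

// Hypothetical minimal joint state in component space; in an engine
// this would come from the evaluated pose of the character.
struct JointSnapshot {
    std::array<float, 3> Position;
    std::array<float, 3> Velocity;
};

// Squared distance between two 3-vectors.
static float SquaredDist(const std::array<float, 3>& A,
                         const std::array<float, 3>& B) {
    float Sum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float D = A[i] - B[i];
        Sum += D * D;
    }
    return Sum;
}

// Pose-match cost over a fixed set of joints: a weighted sum of
// position and velocity differences. Lower cost means a better
// match. The default weights are made up for illustration; the
// point is that position counts more than velocity.
template <std::size_t N>
float PoseMatchCost(const std::array<JointSnapshot, N>& Current,
                    const std::array<JointSnapshot, N>& Candidate,
                    float PositionWeight = 1.0f,
                    float VelocityWeight = 0.25f) {
    float Cost = 0.0f;
    for (std::size_t i = 0; i < N; ++i) {
        Cost += PositionWeight *
                SquaredDist(Current[i].Position, Candidate[i].Position);
        Cost += VelocityWeight *
                SquaredDist(Current[i].Velocity, Candidate[i].Velocity);
    }
    return Cost;
}
```

Rotational velocity or extra joints would just be more terms in the same sum, each with its own weight.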
In animation engines there are certain operations that run on animations as they are being imported. They'll correct the root node to face the right direction, apply retargeting, and run animation compression, to the regret of every animator out there. These routines were hard-coded in earlier animation engines, as the developers assumed that was all they would ever want to do with animations in preprocessing.
Last year Epic implemented a feature they call Animation Modifiers for Unreal Engine. This is a system that runs on animations at editor time. You can change the code so that it also runs at reimport time, so whenever an animator changes an animation the modifiers are reprocessed.
Animation Modifiers are basically blueprints that run on animation sequences, and they can call any code you want that operates on the raw animation data. You can create animation metadata, generate sync points, precalculate curves for joint velocity and rotational velocity, add anim notifies, and even write new raw data tracks to joints (if you add that code to Unreal).
The way we’ve implemented pose matching is by adding an animation modifier that creates pose match animation meta data on selected animation sequences.
This animation modifier steps frame by frame through the animation and calculates the component space transform of a few joints along with their velocity. For our use case we use the left and right foot joints and the hips, which essentially means we are trying to find the phase of an animation; this generally works for locomotion.
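The frame-by-frame precompute could look roughly like this standalone sketch. The `SampleJoint` callback is a stand-in for however the engine evaluates a joint's component-space position at a time; everything here (names, the backward finite difference for velocity) is an assumption for illustration, not the actual modifier code.

```cpp
#include <array>
#include <functional>
#include <vector>
#include <cassert>

using Vec3 = std::array<float, 3>;

// One entry of precomputed pose-match metadata: component-space
// position and velocity for each tracked joint (e.g. left foot,
// right foot, hips) at one frame of the animation.
struct PoseFrame {
    std::vector<Vec3> Positions;
    std::vector<Vec3> Velocities;
};

// Stand-in for the engine call that evaluates a joint's
// component-space position at a given time.
using JointSampler = std::function<Vec3(int JointIndex, float Time)>;

// Walk the animation frame by frame and build the metadata table.
std::vector<PoseFrame> BuildPoseMatchData(const JointSampler& SampleJoint,
                                          int NumJoints,
                                          int NumFrames,
                                          float FrameTime) {
    std::vector<PoseFrame> Frames(NumFrames);
    for (int f = 0; f < NumFrames; ++f) {
        float Time = f * FrameTime;
        PoseFrame& Frame = Frames[f];
        for (int j = 0; j < NumJoints; ++j) {
            Vec3 Pos = SampleJoint(j, Time);
            // Backward finite difference; frame 0 gets zero velocity.
            Vec3 Vel{0.0f, 0.0f, 0.0f};
            if (f > 0) {
                const Vec3& Prev = Frames[f - 1].Positions[j];
                for (int k = 0; k < 3; ++k)
                    Vel[k] = (Pos[k] - Prev[k]) / FrameTime;
            }
            Frame.Positions.push_back(Pos);
            Frame.Velocities.push_back(Vel);
        }
    }
    return Frames;
}
```

In Unreal the result would be stored as animation metadata on the sequence, ready for the runtime search.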
In our animation sequence and blend space we have a function that can optionally do a pose match when an animation is started. It searches through the animation sequence for the best pose match and sets the start time to it. In a blend space it first finds the animation with the highest weight and calculates the start time for that animation; syncing then ensures that the rest of the animations in the blend space move to the correct time.
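The start-time search itself is a plain argmin over the precomputed frames. A minimal sketch, assuming each frame's metadata has been flattened into a feature vector (joint positions and velocities concatenated, a made-up representation for this example):

```cpp
#include <cstddef>
#include <limits>
#include <vector>
#include <cassert>

// A flat feature vector per frame (e.g. tracked joint positions and
// velocities concatenated); the layout is illustrative only.
using PoseFeature = std::vector<float>;

// Find the frame whose precomputed features are closest to the
// character's current blended pose and return it as a start time.
// This search runs once, when the animation starts playing.
float FindBestStartTime(const PoseFeature& CurrentPose,
                        const std::vector<PoseFeature>& FrameFeatures,
                        float FrameTime) {
    std::size_t BestFrame = 0;
    float BestCost = std::numeric_limits<float>::max();
    for (std::size_t f = 0; f < FrameFeatures.size(); ++f) {
        float Cost = 0.0f;
        for (std::size_t i = 0; i < CurrentPose.size(); ++i) {
            float D = FrameFeatures[f][i] - CurrentPose[i];
            Cost += D * D;
        }
        if (Cost < BestCost) {
            BestCost = Cost;
            BestFrame = f;
        }
    }
    return static_cast<float>(BestFrame) * FrameTime;
}
```

For a blend space, you would run this once on the highest-weighted sample and let syncing carry the other samples along.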
The simplest application of pose matching is to use it to pick the start time of a looping animation. A looping animation can start at any point, since it will just keep looping, and a looping locomotion animation is the perfect case for pose matching.
With pose matching you can author start animations in every direction, allow your animators to end them with the feet in different poses, and rely on the pose match to find the best pose in the locomotion animation.
For transitions out of gameplay animations like dodges, rolls, attacks, jumps and so on, pose matching will automatically find the best pose into locomotion. Since responsiveness and feel are important, you can let the designers tweak those branch-out windows and rely on the pose matching to make whatever they do look good (assuming they branch while the character is upright).
An additional use case for pose matching is transitions into scripted moments. In the locomotion loop case above you search the entire animation; for a scripted moment you search only the entrance part of the animation and skip to the point where the character matches.
The second step beyond this is to take a list of possible animations, search all of them for the best matching pose, and play that animation. This can be used for entrance, attack, get-up animations and so on.
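Selecting from a list is the same search with one more loop: every frame of every candidate competes on cost. A sketch under the same assumptions as before (flat per-frame feature vectors, illustrative names):

```cpp
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>
#include <cassert>

using PoseFeature = std::vector<float>;
// Precomputed per-frame features for one candidate animation.
using AnimFeatures = std::vector<PoseFeature>;

// Search every frame of every candidate animation and return the
// (animation index, frame index) pair with the lowest pose cost.
std::pair<int, int> FindBestAnimAndFrame(
        const PoseFeature& CurrentPose,
        const std::vector<AnimFeatures>& Candidates) {
    int BestAnim = -1;
    int BestFrame = -1;
    float BestCost = std::numeric_limits<float>::max();
    for (std::size_t a = 0; a < Candidates.size(); ++a) {
        for (std::size_t f = 0; f < Candidates[a].size(); ++f) {
            float Cost = 0.0f;
            for (std::size_t i = 0; i < CurrentPose.size(); ++i) {
                float D = Candidates[a][f][i] - CurrentPose[i];
                Cost += D * D;
            }
            if (Cost < BestCost) {
                BestCost = Cost;
                BestAnim = static_cast<int>(a);
                BestFrame = static_cast<int>(f);
            }
        }
    }
    return {BestAnim, BestFrame};
}
```

At this point the technique starts to resemble a small, hand-curated motion matching search, just over a short list of authored animations instead of a mocap database.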
If you are making a melee combat game, you can extend the match so that it checks the positions of the hands as well. That way a left swing can transition into a right swing that is selected automatically, without anyone having to set up a specific sequence of attacks.
If you want to do even better than just selecting from a list of possible animations, you can author a few different tumbling poses and, once you activate a ragdoll, keep searching for the best pose as the character tumbles. This way the ragdoll is constantly driven toward a good shape, dynamically.