There are more actions the player character needs to perform in the world than moving, looking around, pulling levers and shooting things.
To sell the character and his actions you need to look at each one and see how it can be better connected with the world and the other characters.
A quite common action in action games is picking things up. Some games want the player to do this more than others, and the implementations vary: in some the character functions as a magnet that just sweeps things up automatically, in others you choose individual items but no animation is triggered.
If you want the character to be believable, and you want actions to mean something in that they take time and you have to think about which action to perform, you need to show and use the action of picking things up.
In earlier games from this studio this was handled with full body animations that would align with the object and stop the character from moving. It looks pretty good but is quite restrictive.
We want to do better and allow the player character to move while picking things up, but how do you make that look good?
First of all we want to create animation coverage, and we have implemented a system where we can query the various systems of our game for the best choice. In this selection system you can query a table for the best animation given the parameters sent in.
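As a minimal sketch, such a table query can be expressed as a nearest-match search over the parameters each entry was authored for. The entry names, parameters and cost weights below are illustrative placeholders, not our actual data or API:

```python
from dataclasses import dataclass

@dataclass
class AnimEntry:
    name: str
    pickup_height: float   # metres above the character's root (assumed parameter)
    pickup_angle: float    # degrees, signed left/right of forward (assumed parameter)

# Hypothetical coverage table of authored pickup animations.
TABLE = [
    AnimEntry("pickup_low_center",  0.1,   0.0),
    AnimEntry("pickup_waist_left",  0.9, -45.0),
    AnimEntry("pickup_waist_right", 0.9,  45.0),
    AnimEntry("pickup_high_center", 1.6,   0.0),
]

def query_best(height: float, angle: float) -> AnimEntry:
    """Return the entry whose authored parameters best match the request."""
    def cost(e: AnimEntry) -> float:
        # Weight height mismatch more than angle; tuning values are placeholders.
        return 2.0 * abs(e.pickup_height - height) + abs(e.pickup_angle - angle) / 90.0
    return min(TABLE, key=cost)
```

A query for an item at waist height slightly to the right, `query_best(1.0, 30.0)`, would pick `pickup_waist_right` as the closest authored coverage.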
However, that coverage will never be perfect, so you want to correct the chosen animation slightly. For that we are using procedural IK for the hand that is picking the object up: the hand is guided towards the item we want to pick up, and we added a weight curve to the animation so that the IK weight increases towards the time of pickup and then eases out.
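A minimal sketch of such a weight curve, assuming linear ease in/out around a normalized pickup time (the curve shape and window lengths are placeholders; a real setup would likely use an authored curve asset):

```python
def ik_weight(t: float, pickup_time: float, ease_in: float, ease_out: float) -> float:
    """IK weight for the reaching hand: ramps up towards the pickup moment,
    hits 1.0 at the contact frame, then eases back out.
    All times are normalized animation time in [0, 1]."""
    if t < pickup_time:
        return max(0.0, min(1.0, 1.0 - (pickup_time - t) / ease_in))
    return max(0.0, 1.0 - (t - pickup_time) / ease_out)

def apply_hand_ik(anim_pos, target_pos, w):
    """Blend the animated hand position towards the item by the IK weight."""
    return tuple(a + (b - a) * w for a, b in zip(anim_pos, target_pos))
```

At the contact frame the hand is fully on the IK target; far from it, the authored animation plays untouched.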
Another common problem in games is making sure the player character can hit targets on stairs and other slopes. Generally you'll have the animators create a combo set of attack animations for the player character and enemies. If you then ask them to create slope variations so that you can use a blend space for the attack angle, that will triple their workload.
However, there are tricks you can do, and this is something I know has been used successfully in other games: a procedural modification of the character's spine so that he can bend it depending on the angle to the target.
Since our use case is that we don't fully sync with the target, our modification is based on getting information from the attack animation and doing a raycast to get the height of the world geometry at the point of impact. We can know this ahead of time because we are using an alignment joint, and that joint is set at the position where we want the target to be for a perfect impact.
In Unreal this can be implemented using their Spline IK node.
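The correction above can be sketched roughly like this, assuming the raycast has returned the ground height under the alignment joint and that the pitch offset is split evenly over the spine chain (the even split and joint count are simplifying assumptions; in Unreal the resulting angles could feed a Spline IK or per-joint rotation):

```python
import math

def spine_bend_angles(horizontal_dist: float,
                      authored_height: float,
                      ground_height: float,
                      n_spine_joints: int = 3):
    """Pitch offset (degrees) needed so the attack lands at the actual
    ground height instead of the authored impact height, distributed
    evenly over the spine joints."""
    delta = ground_height - authored_height
    total_pitch = math.degrees(math.atan2(delta, horizontal_dist))
    return [total_pitch / n_spine_joints] * n_spine_joints
```

For a target one metre higher at one metre distance, the spine as a whole bends 45 degrees, each joint taking its share, so no single joint looks broken.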
In action games it is important to give feedback to the player, and I think it is one of the key elements in getting the feel of a game right. You need to show the player when his attacks are successful, and there are many elements to this.
Rumble, hit reaction animations, visual effects, sound effects, camera shake. You also want to change the attacker's animation in some way.
One common approach to giving the player feedback in his animations is a hit-stop technique, where you pause the character's animation at the impact time for a short duration.
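The core of a hit-stop is tiny: on impact, stop advancing animation time for a few frames. A minimal sketch (the duration value is a placeholder; real games tune it per attack and often scale time instead of zeroing it):

```python
class HitStop:
    """Freeze animation time advancement for a short duration after impact."""

    def __init__(self):
        self.remaining = 0.0

    def trigger(self, duration: float = 0.08):
        # Called at the moment of impact.
        self.remaining = duration

    def scaled_dt(self, dt: float) -> float:
        # Feed the returned value to the animation update instead of raw dt.
        if self.remaining > 0.0:
            self.remaining -= dt
            return 0.0   # animation does not advance this frame
        return dt
```

Usage: each frame the animation system advances by `hit_stop.scaled_dt(dt)`, so everything downstream (events, blends) freezes consistently during the stop.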
Some games use synced animations that already know whether you are going to damage the other character, so they have animated the impact and can get a great result while sacrificing some freedom in the controls.
Other games use branching in the animations: they have a swing-miss and a swing-hit, chosen at the time of impact.
If you want to keep freedom in the controls and be a bit more realistic, while not creating a lot of additional branching content for the attacks, there is also a technique God of War uses: they determine the impact position and use it to lock the position of the character's arm for a short duration while the animation keeps playing. This gives the player the feeling of an impact.
We are using the God of War technique with a slight variation: we keep tracking the arm during the impact, and we can do two things with that. We can either stop the arm fully for a short duration and then blend it out along the recorded path, or we can slow down the movement of the arm to simulate slicing through something that resists.
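A sketch of the slow-down variant, assuming the hand positions during the impact window have been recorded as a uniformly sampled path (the representation and slow factor are placeholders; the full-stop variant is the same idea with the factor at zero, then blending out along the path):

```python
def slowed_arm_position(path, t: float, slow_factor: float):
    """Sample the recorded arm path at a slowed-down time to simulate
    slicing through resistance. `path` is a list of positions sampled
    uniformly over normalized time [0, 1]; slow_factor < 1 slows the arm."""
    u = min(1.0, t * slow_factor)
    x = u * (len(path) - 1)
    i = int(x)
    if i >= len(path) - 1:
        return path[-1]
    f = x - i
    a, b = path[i], path[i + 1]
    # Linear interpolation between the two recorded samples.
    return tuple(p + (q - p) * f for p, q in zip(a, b))
```

At `slow_factor = 0.5` the arm covers half the recorded path in the same time, which reads as the blade dragging through the target.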
To achieve an accurate impact point we have precalculated the path the arm moves along in the attack animations. From that we can sample at a higher rate and get the in-between-frame position of our attack shape. We interpolate between frames at runtime, so we can increase the sample rate depending on framerate etc.
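The in-between sampling can be sketched like this, assuming the precalculated path stores one attack-shape position per animation keyframe and linear interpolation is good enough between them (both assumptions; a real implementation might interpolate full transforms):

```python
def sample_sweep(path_keys, frame_a: int, frame_b: int, n_samples: int):
    """Interpolate the precalculated attack-shape path between two
    animation keyframes, producing in-between positions so the impact
    sweep stays accurate at any framerate. More samples per frame can
    be requested when the frame delta is large."""
    a, b = path_keys[frame_a], path_keys[frame_b]
    out = []
    for k in range(n_samples + 1):
        f = k / n_samples
        out.append(tuple(p + (q - p) * f for p, q in zip(a, b)))
    return out
```

Sweeping a collision shape through these intermediate positions catches impacts that would tunnel between two rendered frames.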
In certain games there is a need to hold objects with two hands, whether weapons, boulders or other objects. This is a problem that requires a combination of solutions to work cheaply and well.
First of all, a solution I've used for quite some time to tackle precision issues that come from compression and from layered and additive blending. If you have an object attached to the right hand, the left hand is also holding that object, and the animations have been authored for this case, the animations are generally correct, but due to imprecision in blending the left hand will drift slightly away.
My solution to this problem has been to create IK target joints that are children of one hand but inherit the position of the other hand. This way the left hand knows where it is supposed to be in relation to the right hand. Since compression etc. can alter the result in each joint chain, and since the object is attached to, for example, the right hand, we can then use a simple two-bone IK solver to correct the position of the left hand to the position the right arm chain says it should be.
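As a minimal sketch (the transform representation and function names are mine, not any engine's): the IK target joint effectively stores the left hand's offset in right-hand space, so composing that offset with the right hand's world transform each frame recovers a drift-free target for a standard two-bone IK on the left arm:

```python
def transform_point(rot, trans, p):
    """Apply a world transform (3x3 row-major rotation + translation) to a point."""
    return tuple(sum(rot[i][j] * p[j] for j in range(3)) + trans[i]
                 for i in range(3))

def left_hand_ik_target(right_rot, right_pos, left_offset_in_right_space):
    """Where the left hand *should* be, per the authored animation, before
    compression and blending imprecision pushed it away. A two-bone IK
    solver then pulls the left hand to this position."""
    return transform_point(right_rot, right_pos, left_offset_in_right_space)
```

The key point is that the offset is authored data, so the correction converges on the animator's intent rather than on some arbitrary pose.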
Secondly, there is a limit to the number of animations that can be created for holding different objects, and each object is not built the same way, so you will need to create sockets/locators on the objects that the hands will be corrected towards. It is important that this is calculated as a local relative transform so that it can be applied at pose evaluation time, with no one-frame latency. And since it is relative, we can combine the correction with this offset to get a good, stable position relative to the other arm.
Also, since each object is different, it can be necessary to change the hand pose depending on the object. This is simply a layered blend that alters the pose of the hands on top of whatever full body and layered animations we are already playing. It is also important to blend this out whenever a custom animation authored for the object animates the hands specifically.
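That layered blend can be sketched as a per-joint lerp of the hand joints' local rotations towards an object-specific grip pose (Euler tuples here purely for brevity; a real engine would slerp quaternions, and the joint names are illustrative):

```python
def blend_hand_pose(base_pose, grip_pose, w: float):
    """Layered blend: move each hand joint's local rotation towards the
    object-specific grip pose by weight w, on top of whatever full body
    animation is already playing. base_pose/grip_pose map joint name to
    a rotation tuple."""
    return {joint: tuple(b + (g - b) * w
                         for b, g in zip(base_pose[joint], grip_pose[joint]))
            for joint in base_pose}
```

Setting `w` to 0 whenever a custom object animation takes over gives the blend-out mentioned above.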
Finally, a problem that comes up with regards to precision and interpolation artefacts: the attach joint of certain objects held with two hands needs to be more stable. The fix is to create an attach joint connected to something that actually is more stable than the hands (as in, not rotating wildly during animations), for example by making that joint a child of the spine. That way you have more stable relative positions for the locators, and the relative IK joints for the hands would then also be relative to the spine.
There are systems that can help with pickup animations, such as motion matching with tags for when the pickup occurs. You'd still want IK to correct the hand during the reach for the pickup event, with an ease-out window, but motion matching would let you drive towards the correct pose the closer you get to the target. The Last of Us seemed to use this in its gameplay trailer from E3 this year.
Holding differently sized objects will probably always need some relative IK. While full body IK can be enticing here, to let the character modify more of his body to reach, there is a conflict with the animators: they want their pose to be maintained. Just as with head tracking, we don't want to alter the pose too much with our procedural solutions.