When you’re developing movement solutions for games, it becomes clear pretty quickly that to get the best results for the player character and for NPCs you will need to develop custom solutions.
Players want responsive characters, so that takes priority: you need to handle new information rapidly, and you need to make those changes look smooth and reactive.
Developers want NPCs to show anticipation of what they are about to do, and to show that anticipation you have to plan ahead.
In the earliest games I worked on I didn’t specifically work on animation but rather focused on gameplay; teams were smaller back then, so you didn’t have the luxury to specialize quite as much.
The way we solved AI was by simulating NPCs and then outputting a state in the same way as the player, setting stance, movement speed and direction directly. The pathfinding solutions were nav-point or gate based. Nav points evolved into nav meshes, and sometimes we even built custom grid-based solutions that stayed up to date with dynamic traffic and the like. The gates were useful for racing games, as the NPC just needed to go from gate to gate.
These solutions generally moved an NPC via steering and direct control. There wasn’t much planning involved beyond generating nice curves to reach those points or gates.
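To make that concrete, here is a minimal arrive-style steering sketch toward a nav point or gate; the types, names and tuning values are illustrative, not code from any of those games.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  sub(Vec2 a, Vec2 b)      { return { a.x - b.x, a.y - b.y }; }
static float len(Vec2 v)              { return std::sqrt(v.x * v.x + v.y * v.y); }
static Vec2  scale(Vec2 v, float s)   { return { v.x * s, v.y * s }; }

struct Agent {
    Vec2  position;
    Vec2  velocity;
    float maxSpeed;
    float maxAccel;
};

// Steer toward the next nav point / gate, slowing down inside arriveRadius so the
// resulting path curves smoothly instead of snapping from point to point.
void SteerToward(Agent& agent, Vec2 target, float arriveRadius, float dt)
{
    Vec2  toTarget = sub(target, agent.position);
    float distance = len(toTarget);
    if (distance < 1e-4f)
        return;

    float desiredSpeed = agent.maxSpeed;
    if (distance < arriveRadius)
        desiredSpeed *= distance / arriveRadius;   // ease in when close

    Vec2 desiredVelocity = scale(toTarget, desiredSpeed / distance);
    Vec2 steering        = sub(desiredVelocity, agent.velocity);

    float steerLen = len(steering);
    if (steerLen > agent.maxAccel)
        steering = scale(steering, agent.maxAccel / steerLen);   // clamp acceleration

    agent.velocity.x += steering.x * dt;
    agent.velocity.y += steering.y * dt;
    agent.position.x += agent.velocity.x * dt;
    agent.position.y += agent.velocity.y * dt;
}
```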
We explored animation planning at 2K and used it for soldiers running down a battlefield. The battlefield was annotated with custom nav points, each with an associated action. The actions could be take cover, mantle an obstacle, jump through a window or open a door. It was inspired by games like Call of Duty, and it was important to present the feeling of combat.
The key to solving these was animation states that gave you the information of how far the animation moved the character, so you knew the entry and exit positions.
Then, as the soldier traversed his nav points, if he reached one with a custom annotation he would perform that action and end up at its exit position. The planning could then start from the exit position ahead of time, since we knew the character would end up there.
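A rough sketch of that idea, with hypothetical names and data layout rather than the shipped structures: the annotation stores the action plus the animation’s known entry and exit positions, so the planner can continue the path from the exit before the action has finished playing.

```cpp
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

enum class NavAction { None, TakeCover, Mantle, JumpThroughWindow, OpenDoor };

struct NavPointAnnotation {
    NavAction action;
    Vec3 entryPosition;   // where the action animation expects the character to start
    Vec3 exitPosition;    // where the animation leaves the character when it finishes
};

struct NavPoint {
    Vec3 position;
    std::optional<NavPointAnnotation> annotation;
};

// The position the planner should treat as "current" once this nav point is handled:
// the animation's exit position for annotated points, the point itself otherwise.
Vec3 PlanningPositionAfter(const NavPoint& point)
{
    if (point.annotation)
        return point.annotation->exitPosition;
    return point.position;
}

// The next stretch of path can be planned while the action animation is still playing,
// because the exit position is known in advance.
std::vector<Vec3> ContinuePathFrom(const NavPoint& current, const std::vector<NavPoint>& remaining)
{
    std::vector<Vec3> path;
    path.push_back(PlanningPositionAfter(current));
    for (const NavPoint& p : remaining)
        path.push_back(p.position);
    return path;
}
```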
For Mafia 3 we set up a dynamic system with rules and conditions to trigger certain animation states. It had knowledge about distance to target, current speed, wanted speed, current direction and wanted direction, phase, current stance and wanted stance.
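As an illustration, a rule/condition setup in that spirit might look something like this; the state names, fields and thresholds are hypothetical, not the actual Mafia 3 data.

```cpp
#include <cmath>
#include <functional>
#include <string>
#include <vector>

enum class Stance { Stand, Crouch };

struct MovementContext {
    float distanceToTarget;
    float currentSpeed, wantedSpeed;
    float currentDirection, wantedDirection;   // radians
    float phase;                               // normalized locomotion cycle phase
    Stance currentStance, wantedStance;
};

struct Rule {
    std::string animationState;
    std::function<bool(const MovementContext&)> condition;
};

// First matching rule wins; otherwise stay in regular locomotion.
std::string SelectAnimationState(const MovementContext& ctx, const std::vector<Rule>& rules)
{
    for (const Rule& rule : rules)
        if (rule.condition(ctx))
            return rule.animationState;
    return "Locomotion";
}

// Example rules: stop when close and slowing down, plant-and-turn on large direction
// changes, transition when the wanted stance differs from the current one.
std::vector<Rule> BuildExampleRules()
{
    return {
        { "StopToIdle",   [](const MovementContext& c) {
              return c.distanceToTarget < 0.5f && c.wantedSpeed < 0.1f; } },
        { "SharpTurn",    [](const MovementContext& c) {
              return std::fabs(c.wantedDirection - c.currentDirection) > 2.0f
                  && c.currentSpeed > 1.0f; } },
        { "StanceChange", [](const MovementContext& c) {
              return c.currentStance != c.wantedStance; } },
    };
}
```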
With short-term planning you are only evaluating a few meters forward, or a second or so into the future. The NPCs can respond to changes fairly rapidly, and they can show anticipation and reach exact positions nicely.
Generally an NPC doesn’t need to hit most nav points perfectly; it’s enough that it gets within a radius of the point. But when it does need to reach a particular position exactly, we used alignment.
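One common way to implement that kind of alignment is to spread the remaining error over the time left in the animation. A minimal sketch, assuming the predicted end position is recomputed every frame as root motion plays back:

```cpp
struct Vec3 { float x, y, z; };

// Correction to add on top of this frame's root-motion delta so the character lands
// exactly on targetPosition when the animation finishes.
Vec3 AlignmentCorrection(const Vec3& predictedEndPosition,
                         const Vec3& targetPosition,
                         float timeRemaining,
                         float dt)
{
    if (timeRemaining <= dt)
        timeRemaining = dt;                 // absorb whatever error is left on the final frame

    float fraction = dt / timeRemaining;    // portion of the remaining error to fix this frame
    return { (targetPosition.x - predictedEndPosition.x) * fraction,
             (targetPosition.y - predictedEndPosition.y) * fraction,
             (targetPosition.z - predictedEndPosition.z) * fraction };
}
```

Because the error is recomputed each frame, the per-frame correction stays small and the character converges on the exact target instead of visibly popping at the end.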
There were solutions that would allow the entire path to be planned out, and I looked into and even prototyped these between projects. The ones I viewed favourably were based on motion graphs and motion fields; they weren’t the best fit for a player character at the time, as they had very high latency.
Essentially these solutions take a database of a lot of mocap animation data and generate connections between clips based on full-body pose matching. A query into this database produces a list of animation sequences to play, which are seamlessly blended together.
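A rough sketch of the pose-matching step, assuming a simple joint-position distance; real systems also weight joints, include velocities and align the root before comparing.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Pose {
    std::vector<Vec3> jointPositions;   // same joint order for every pose
};

// Summed joint-position distance between two poses.
float PoseDistance(const Pose& a, const Pose& b)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.jointPositions.size(); ++i) {
        float dx = a.jointPositions[i].x - b.jointPositions[i].x;
        float dy = a.jointPositions[i].y - b.jointPositions[i].y;
        float dz = a.jointPositions[i].z - b.jointPositions[i].z;
        sum += dx * dx + dy * dy + dz * dz;
    }
    return std::sqrt(sum);
}

struct Edge { std::size_t fromFrame, toFrame; };

// Every frame pair below the threshold becomes a possible transition in the motion graph.
std::vector<Edge> BuildTransitions(const std::vector<Pose>& frames, float threshold)
{
    std::vector<Edge> edges;
    for (std::size_t i = 0; i < frames.size(); ++i)
        for (std::size_t j = 0; j < frames.size(); ++j)
            if (i != j && PoseDistance(frames[i], frames[j]) < threshold)
                edges.push_back({ i, j });
    return edges;
}
```

The all-pairs comparison is the part that grows quadratically with the amount of animation data, which is why the precomputation gets slow as the database grows.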
I thought this was very clever, but I hadn’t seen it in a game and wanted to explore it myself to try to understand why. The problems I ran into were performance costs: it took time to produce the plan, and the more animation data you had, the longer the planning took. On top of that there was an offline process that calculated pose difference maps between animations, and that wasn’t the fastest process either.
It’s obvious where I’m going with this. I believe motion matching, or a similar technique, is the way to go until we find something even better: you can achieve reactive NPCs with fluid transitions, and they will be able to show anticipation throughout their whole movement, similar to a longer-term plan.
Motion matching, however, is not the fastest method for playing back animations. You can optimize the search with a kd-tree, run it at specific intervals rather than every frame, and balance those intervals so that not every NPC searches at the same time.
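A sketch of the staggering part, with the kd-tree query itself stubbed out; the names and the interval scheme are assumptions, not a specific engine’s API.

```cpp
#include <cstdint>

struct MatchResult { int clipIndex; float clipTime; };

// Placeholder for the actual nearest-neighbour query against the animation database
// (in practice a kd-tree over pose and trajectory features).
MatchResult SearchAnimationDatabase(/* query features */)
{
    return { 0, 0.0f };
}

// Offset each NPC by its id so the expensive searches are spread across frames
// instead of all landing on the same one.
bool ShouldSearchThisFrame(std::uint32_t npcId, std::uint64_t frameIndex, std::uint32_t searchInterval)
{
    return (frameIndex + npcId) % searchInterval == 0;
}

void UpdateNpcAnimation(std::uint32_t npcId, std::uint64_t frameIndex, std::uint32_t searchInterval)
{
    if (ShouldSearchThisFrame(npcId, frameIndex, searchInterval)) {
        MatchResult match = SearchAnimationDatabase();
        // ...switch or blend to the matched clip at the matched time...
        (void)match;
    }
    // Otherwise keep playing the current clip and advance it normally.
}
```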
Even so, I wouldn’t expect to see a crowd of NPCs use MM any time soon.