Title: Learning Crowd Motion Dynamics with Crowds.
Brief Description:
I am currently working on a project to improve the navigation quality of multi-agent virtual crowds using deep reinforcement learning. In a multi-agent environment, each agent's current decision affects the state of every nearby agent, and can even influence agents that enter the scene later. Ensuring navigation that is both collision-free and smooth is therefore a challenging task. We address it with an agent-based reinforcement learning approach that uses an improved reward function over existing works in the literature to produce smoother agent navigation. The agents are trained to exhibit human-like behavior in a range of complex crowd-simulation scenarios (e.g., Hallway, Crossway, Corridor, Circle, Obstacle). This work has direct applications in gaming, animation, robotics, and emergency response (e.g., firefighters or police navigating crowded scenes during emergency calls). Below we list the key contributions of our work:
We design a crowd-dynamics framework that combines reinforcement learning with position-based dynamics, propelling agents to move in a human-inspired fashion by observing agent acceleration and spacing.
In addition, our method exposes multiple control parameters (e.g., the weights of the policy's reward terms). Tuning these parameters by hand for a multi-agent simulation is difficult, since what users perceive as realistic navigational behavior varies from person to person. We therefore propose a crowd-sourced Bayesian framework that searches for the optimal parameters, and hence the "best" policy.
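The smoothness-aware reward shaping mentioned above can be sketched as follows. This is a minimal, hypothetical per-step reward combining progress toward the goal, a penalty for overlapping with nearby agents, and a penalty on acceleration; the function name, weights, and agent radius are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def step_reward(pos, prev_pos, vel, prev_vel, goal, neighbors,
                w_goal=1.0, w_coll=2.0, w_smooth=0.5, agent_radius=0.3):
    """Hypothetical per-step reward for one agent.

    Combines goal progress, a collision penalty against nearby agents
    (modeled as discs), and a smoothness penalty on acceleration.
    """
    # Progress: reduction in distance to the goal since the last step.
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)

    # Collision: penalize overlap with any neighbor's disc.
    collision = 0.0
    for n_pos in neighbors:
        overlap = 2 * agent_radius - np.linalg.norm(pos - n_pos)
        if overlap > 0:
            collision += overlap

    # Smoothness: penalize large velocity changes (acceleration proxy).
    accel = np.linalg.norm(vel - prev_vel)

    return w_goal * progress - w_coll * collision - w_smooth * accel
```

The weights `w_goal`, `w_coll`, and `w_smooth` here play the role of the tunable control parameters whose hand-tuning difficulty motivates the crowd-sourced Bayesian search.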
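A minimal sketch of the Bayesian parameter search, assuming the crowd-sourced feedback is summarized as a scalar realism score per weight vector. The Gaussian-process regression and expected-improvement loop below are a standard textbook formulation, not the project's actual code; `score_fn` is a hypothetical stand-in for the expensive train-then-rate step.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior mean/std at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf(Xs, Xs)) - (v * v).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(score_fn, bounds, n_init=5, n_iter=15, seed=0):
    """Find reward weights maximizing a (crowd-sourced) realism score."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([score_fn(x) for x in X])
    for _ in range(n_iter):
        # Pick the candidate with the highest expected improvement.
        cand = rng.uniform(lo, hi, size=(256, len(bounds)))
        mu, sd = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sd, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, score_fn(x_next))
    return X[np.argmax(y)], y.max()
```

In the full pipeline, each call to `score_fn` would train a policy with the candidate reward weights, render the resulting simulation, and aggregate crowd-sourced ratings of its realism; the loop then concentrates evaluations on the most promising weight settings.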
Research Areas: Reinforcement Learning, Crowd Simulation, Collision Avoidance.
Languages and Tools Used to Implement the Project: Python, C++, OpenGL, OpenAI Gym, Stable Baselines, Boost C++, TensorFlow.
Demo Video: