Current Research Project

Title: Understanding Pedestrian Dynamics for Seamless Human-Robot Interaction (NSF 1825709)

Fahad's current research focuses on developing a pedestrian model that can be used both to simulate pedestrian motion and by a robot to navigate through a crowd in a socially compliant manner. Humans and mobile robots increasingly cohabit the same environments, such as malls, airports, and train stations. This has led to a growing number of studies on human-robot interaction (HRI). One important topic in these studies is the development of robot navigation algorithms that are socially compliant with humans navigating the same space. Current models, such as the social force model, result in “freezing” behavior when pedestrian density is high. The main reason for this unrealistic behavior is that these models cannot capture complex human decision-making. We propose to develop a model that captures this complexity and can therefore exhibit complex, human-like navigation behaviors.

In this project we propose to learn human navigation behaviors using inverse reinforcement learning (IRL). As expert demonstrations, we use a large open dataset of pedestrian trajectories collected in an uncontrolled environment (a shopping mall). Human navigation behaviors are captured by a nonlinear reward function approximated with a deep neural network (DNN). We propose to use both handcrafted features and end-to-end learning to learn pedestrian navigation behaviors, and we intend to evaluate the developed model against state-of-the-art algorithms in this domain. We further plan to extend this work with a model that uses the robot's exteroceptive sensors directly to model pedestrian behavior, which would enable robots to navigate in a socially compliant manner, and we propose to test both methods in real-world experiments. The proposed method can be used to simulate human motion for applications such as crowd planning and management, emergency evacuation, socially compliant robot navigation, and crowd simulation in indoor environments.
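A minimal sketch of the reward-learning idea described above, assuming a maximum-entropy-style IRL objective over handcrafted state features. The feature choices, network size, and learning rate here are illustrative, not the project's actual configuration:

```python
import numpy as np

# Toy deep-IRL sketch: a small neural network maps handcrafted pedestrian
# features (e.g. speed, goal distance, neighbor distance) to a scalar reward.
# A maximum-entropy-style update raises the reward on expert-visited states
# and lowers it on states visited by the current policy.

rng = np.random.default_rng(0)

def init_reward_net(n_features, n_hidden=16):
    return {"W1": rng.normal(0, 0.1, (n_features, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.1, (n_hidden, 1)),
            "b2": np.zeros(1)}

def reward(net, feats):
    """Forward pass: (N, n_features) feature matrix -> (N,) rewards."""
    h = np.tanh(feats @ net["W1"] + net["b1"])
    return (h @ net["W2"] + net["b2"]).squeeze(-1)

def maxent_irl_step(net, expert_feats, policy_feats, lr=1e-2):
    """Gradient ascent on mean(reward(expert)) - mean(reward(policy))."""
    for batch, sign in ((expert_feats, 1.0), (policy_feats, -1.0)):
        h = np.tanh(batch @ net["W1"] + net["b1"])
        g_out = np.full((len(batch), 1), sign / len(batch))
        g_h = (g_out @ net["W2"].T) * (1.0 - h**2)   # backprop through tanh
        net["W2"] += lr * h.T @ g_out
        net["b2"] += lr * g_out.sum(0)
        net["W1"] += lr * batch.T @ g_h
        net["b1"] += lr * g_h.sum(0)
```

In the full method the policy samples come from re-solving the forward RL problem under the current reward; here they would simply be supplied as feature arrays.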


• Fahad, M., Elmzaghi, M., Yang, G., Guo, Y. “Learning Socially Compliant Navigation From Human Trajectories in Crowded Spaces,” IEEE International Conference on Robotics and Automation (ICRA), (Submitted), (2019).

• Fahad, M., Chen, Z., Guo, Y. “Learning How Pedestrians Navigate: A Deep Inverse Reinforcement Learning Approach,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 819-826, (2018).

Previous Research Projects

I have previously worked on the projects below, listed chronologically.

Dynamic Robot Guides for Emergency Evacuations (NSF 1527016)

The behavior control of pedestrian crowd motion has received considerable research interest due to the increasing demand for effective pedestrian flow regulation and evacuation in public areas. In the absence of crowd regulation by passive or active methods, crowd disorders can arise as pedestrians gradually aggregate. Existing pedestrian regulation methods have primarily focused on the optimal architectural design and spatial placement of facilities, based on prior knowledge of pedestrian self-organization in collective motion. However, the optimal design of stationary facilities is not reconfigurable in real time once deployed, and thus cannot adapt to changing pedestrian flow conditions. Recently, human-robot interaction (HRI) has received remarkable attention for social robots that are expected to make decisions when interacting with humans, and there is growing interest in how social robots can be deployed in place of stationary facilities to affect the collective motion of pedestrians. We study a new robot-assisted pedestrian flow regulation approach that exploits passive HRI: pedestrian flows are implicitly controlled through the dynamic interaction between pedestrians and robots deployed in the flow. We propose learning-based algorithms for optimal robot motion control that generate robot actions from observation data alone (data-driven), without requiring a model of pedestrian dynamics. The performance of the proposed algorithm also relies on the speed, accuracy, and reliability of online extraction of pedestrian motion quantities, which is challenging for human motion tracking systems. Leveraging rapid progress in computer vision and deep learning, we use an end-to-end robot motion planning approach that takes as input raw camera images of pedestrian flows, with a convolutional neural network extracting pedestrian motion features online.
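A minimal sketch of the data-driven control idea, using tabular Q-learning in place of the deep network: the CNN front end is replaced here by an already-discretized crowd-density state, and the environment, state/action sets, and reward are toy stand-ins, not the project's actual setup:

```python
import numpy as np

# Toy stand-in for robot-assisted pedestrian regulation: states are
# discretized crowd-density levels, actions are robot speed commands,
# and the reward is the pedestrian outflow observed after the action.

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 5, 3

def toy_outflow(state, action):
    """Hypothetical environment: a slow robot (action 1) regulates flow
    best at high densities, a fast robot (action 2) at low densities."""
    best = 1 if state >= 3 else 2
    return 1.0 if action == best else 0.0

def train(episodes=4000, alpha=0.1, eps=0.3):
    """Epsilon-greedy Q-learning over single-step episodes, so the
    update target is simply the observed outflow."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = int(rng.integers(N_STATES))
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        Q[s, a] += alpha * (toy_outflow(s, a) - Q[s, a])
    return Q
```

After training, the greedy policy `np.argmax(Q, axis=1)` gives a robot action per density level; the actual project replaces the table with a deep network fed by CNN features extracted from camera images.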


• Wan, Z., Jiang, C., Fahad, M., Ni, Z., Guo, Y., He, H. “Robot-Assisted Pedestrian Regulation Based on Deep Reinforcement Learning,” IEEE Transactions on Cybernetics, pp. 1-14, (2018).

Distributed Heterogeneous Ocean Robots for Detecting and Monitoring Oil Plumes (NSF 1218155)

Pollution plume monitoring using autonomous mobile robots is important due to the adverse effects of pollution plumes on the environment and the associated monetary losses. The Deepwater Horizon oil spill disaster in the Gulf of Mexico highlighted the need for autonomous systems that can track the outer boundary of a plume. We address several challenges in this project. We conducted a series of field experiments to study the fine-scale structure of marine plumes, performing static and dynamic surveys to characterize the plume source concentration profile, the cross-section profile, and the plume front profile. We then proposed a model that captures these fine-scale structures using the advection-diffusion equation with a time-varying plume source and a time-varying flow field. Based on this advection-diffusion dispersion model, we designed a control law to track dynamic concentration level curves; single-robot and multi-robot versions of the algorithm were extensively tested in simulation and emulation. We also developed a gradient and divergence estimation method that enables this control law using concentration-only measurements. Finally, we field-tested the control law by tracking concentration level curves in a plume generated with Rhodamine dye as a pollution surrogate in a near-shore marine environment, using an unmanned surface vessel equipped with fluorometer sensors. The field results evaluate the performance of the controller, and the complexities of real-world marine experiments are discussed in the resulting publications.
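The plume model above can be illustrated with a minimal finite-difference sketch of 2-D advection-diffusion. The grid size, coefficients, and the constant source and flow used here are illustrative; the actual model uses a time-varying source and flow field:

```python
import numpy as np

# One explicit Euler step of the advection-diffusion equation
#   dc/dt = -u * dc/dx - v * dc/dy + D * laplacian(c) + q
# on a periodic grid; c is concentration, (u, v) the flow, q the source.

def step(c, u, v, D, q, dx, dt):
    cx = (np.roll(c, -1, axis=1) - np.roll(c, 1, axis=1)) / (2 * dx)
    cy = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2 * dx)
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (-u * cx - v * cy + D * lap + q)

# Example: a continuous point source in a uniform +x current produces a
# plume elongated downstream, whose level curves a vessel could track.
c = np.zeros((64, 64))
src = np.zeros((64, 64))
src[32, 32] = 1.0
for _ in range(100):
    c = step(c, u=1.0, v=0.0, D=0.5, q=src, dx=1.0, dt=0.1)
```

The explicit scheme is only stable for small enough `dt` (here the diffusion number `D*dt/dx**2 = 0.05` is well below the 0.25 limit); the field experiments then provide the real concentration measurements this simulation stands in for.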


• Fahad, M., Guo, Y., & Bingham, B. “Simulating Fine-Scale Marine Pollution Plumes for Autonomous Robotic Environmental Monitoring,” Frontiers in Robotics and AI, vol. 5, p. 52, (2018).

• Wang, J., Guo, Y., Fahad, M., & Bingham, B. “Dynamic Plume Tracking by Cooperative Robots,” IEEE/ASME Transactions on Mechatronics, vol. 24, pp. 609-620, (2019).

• Fahad, M., Guo, Y., Kranosky, K., Fitzpatrick, L., Sanabria, F., & Bingham, B. “Ocean Plume Tracking with Unmanned Surface Vessels: Algorithms and Experiments,” World Congress on Intelligent Control and Automation (WCICA), pp. 1-6, (2018).

• Fahad, M., Guo, Y., Kranosky, K., Fitzpatrick, L., Sanabria, F., & Bingham, B. “Robotic Experiments to Evaluate Ocean Plume Characteristics and Structure,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6098-6104, (2017).

• Fahad, M., Guo, Y., Kranosky, K., Fitzpatrick, L., Sanabria, F., & Bingham, B. “Evaluation of Ocean Plume Characteristics using Unmanned Surface Vessels,” MTS/IEEE OCEANS conference, pp. 1-7, (2017).

• Fahad, M., Saul, N., Guo, Y., & Bingham, B. “Robotic Simulation of Dynamic Plume Tracking by Unmanned Surface Vessels,” IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, pp. 2654-2659, (2015).

Robot-Assisted Smartphone Localization for Human Indoor Tracking

With the increasing use of smart devices in daily life, one important application is localization. While outdoor environments are well served by advanced positioning systems such as the Global Positioning System (GPS), indoor positioning remains an open need. As intelligent mobile service robots are introduced into human environments and operate alongside people, location-aware human-robot interaction is in great demand. Smartphone-based localization using prevalent WiFi-signature-map techniques has been extensively studied; these systems are accurate but require the deployment of cost-intensive sensing infrastructure. We propose to use a mobile robot capable of self-localization to help localize persons carrying smartphones indoors. Using a mobile robot in this way means the indoor localization system can be built without costly infrastructure such as WiFi access points. The proposed cooperative localization scheme uses a mobile robot and smartphones to track moving persons in indoor environments. In contrast to systems that use stationary anchor nodes in wireless sensor networks, the flexibility and scalability of our system are enhanced by exploiting the mobility of the robot. The robot also has adequate computational power to run the proposed Kalman filter-based method in real time, so no central data processing unit is required. Furthermore, the ranging subsystem does not need any wireless sensor network to be present in the environment, which is especially appealing where WiFi infrastructure is unavailable or undesirable to rely on, such as in search-and-rescue and disaster recovery missions.
This work also provides a methodology for robot-smartphone collaboration and contributes to the next generation of mobile computing techniques that integrate robots and other mobile devices.
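A minimal sketch of the Kalman-filter idea, written here as an extended Kalman filter that tracks a person's planar position and velocity from robot-to-smartphone range measurements. The constant-velocity model and all noise parameters are illustrative, and the robot's own pose is assumed known from self-localization:

```python
import numpy as np

def ekf_step(x, P, robot_xy, z, dt=0.5, q=0.05, r=0.3):
    """One predict/update cycle.
    x: person state [px, py, vx, vy]; P: state covariance;
    robot_xy: known robot position; z: measured robot-to-phone range."""
    # Predict with a constant-velocity motion model.
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # Update with the (nonlinear) range measurement, linearized at x.
    d = np.hypot(x[0] - robot_xy[0], x[1] - robot_xy[1])
    H = np.array([[(x[0] - robot_xy[0]) / d, (x[1] - robot_xy[1]) / d, 0.0, 0.0]])
    S = H @ P @ H.T + r**2
    K = P @ H.T / S                 # Kalman gain
    x = x + (K * (z - d)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Range-only tracking from a single beacon is observable only because the robot moves between measurements, which is exactly the flexibility contrasted above with stationary anchor nodes.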


• Jiang, C., Fahad, M., Guo, Y., & Chen, Y. “Robot-Assisted Smartphone Localization for Human Indoor Tracking,” Robotics and Autonomous Systems, vol. 106, pp. 82-94, (2018).

• Jiang, C., Fahad, M., Guo, Y., Yang, J., & Chen, Y. “Robot-Assisted Human Indoor Localization Using the Kinect Sensor and Smartphones,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4083-4089, (2014).