Eye-gaze-based driving behavior modeling and analysis

Simulation of driver's eye-gaze on a 3D map

I studied 3D LiDAR-based localization for automated driving until 2017 (link). In 2018, Prof. Takatsugu Hirayama, an expert in eye-gaze analysis, joined our research group. We discussed new research topics and decided to develop a novel robot that estimates the driver's eye-gaze on a 3D map.

The bottom left figure shows the developed robotic wheelchair. The robot is equipped with a 3D LiDAR and an IMU and can accurately estimate its own pose. A motion-capture sensor is mounted at the front of the robot, facing the driver. The driver wears Tobii glasses that measure the driver's eye-gaze, and the motion capture tracks the glasses via markers attached to them (see the bottom middle figure). We calibrated all of the devices and were thus able to simulate the driver's 3D eye-gaze on the map, as shown in the bottom right figure.
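To illustrate the idea behind the calibration chain, the gaze direction measured in the glasses frame can be mapped onto the 3D map by composing the robot pose from LiDAR-based localization, the fixed robot-to-motion-capture mounting transform, and the glasses pose tracked by the motion capture. The Python sketch below is a minimal, hypothetical example of this composition; all frame names, transform values, and the yaw-only rotations are illustrative assumptions, not the actual implementation.

import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (yaw only, for brevity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example values; the real transforms would come from LiDAR/IMU
# localization, extrinsic calibration, and the motion-capture tracker.
T_map_robot = transform(rot_z(0.3), [10.0, 5.0, 0.0])      # robot pose on the 3D map
T_robot_mocap = transform(np.eye(3), [0.8, 0.0, 1.0])      # fixed mounting calibration
T_mocap_glasses = transform(rot_z(-0.1), [0.5, 0.0, 0.2])  # tracked glasses pose

# Gaze direction measured by the glasses, expressed in the glasses frame.
gaze_dir_glasses = np.array([1.0, 0.05, -0.1])
gaze_dir_glasses /= np.linalg.norm(gaze_dir_glasses)

# Compose the chain: map <- robot <- motion capture <- glasses.
T_map_glasses = T_map_robot @ T_robot_mocap @ T_mocap_glasses

# Gaze ray in the map frame: origin at the glasses, direction rotated only.
gaze_origin_map = T_map_glasses[:3, 3]
gaze_dir_map = T_map_glasses[:3, :3] @ gaze_dir_glasses
print(gaze_origin_map, gaze_dir_map)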

I analyzed the relationship between driving behavior and eye-gaze around occluded areas. This work proposed a safety criteria analysis for negotiating blind corners in personal mobility vehicles.
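One simple way to relate the simulated gaze to occluded areas, sketched below under loose assumptions, is to march the map-frame gaze ray through a voxelized 3D map until it hits an occupied cell; the hit point can then be tested against the blind-corner region. The voxel-set representation, the fixed-step marching, and all names here are hypothetical, not taken from the paper.

import numpy as np

def first_hit(origin, direction, occupied_voxels, voxel_size=0.2, max_range=30.0):
    """March a ray through a voxel set and return the first occupied voxel hit.

    occupied_voxels: set of integer (i, j, k) voxel indices built from the 3D map.
    Returns the hit point in map coordinates, or None if nothing is hit.
    """
    direction = direction / np.linalg.norm(direction)
    step = voxel_size * 0.5  # simple fixed-step marching, not an exact DDA traversal
    for d in np.arange(0.0, max_range, step):
        p = origin + d * direction
        idx = tuple(np.floor(p / voxel_size).astype(int))
        if idx in occupied_voxels:
            return p
    return None

# Toy map: a wall of occupied voxels at x = 5 m (index 25 at voxel_size 0.2).
wall = {(25, j, k) for j in range(-50, 50) for k in range(0, 15)}
hit = first_hit(np.array([0.0, 0.0, 1.2]), np.array([1.0, 0.0, 0.0]), wall)
print(hit)  # the gaze ray first hits the wall near x = 5 m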

Driving behavior modeling with eye-gaze

I also worked on driving behavior modeling using the driver's eye-gaze measurement and 3D ego-vehicle localization. As can be seen in the right figure, measuring the driver's eye-gaze is important for predicting driving maneuvers. If maneuvers can be precisely predicted, an intelligent driving system can assist the driver effectively.

I applied six types of hidden Markov models (HMMs), as shown at the bottom. AHMM, IOHMM, and AIOHMM stand for autoregressive HMM, input-output HMM, and autoregressive input-output HMM, respectively. The 3D localization and eye-gaze measurement give us x and y, respectively. I compared their modeling performance and suggested that AIOHMM2 can appropriately extract average driving actions when it models the driving behavior well.
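For reference, the conditional dependency structures that distinguish these model families can be written as below, using the standard definitions (z_t is the hidden state, x_t the input, and y_t the output at time t); the numbered variants such as AIOHMM2 differ in exactly how the inputs condition the transitions and emissions, and the details are given in [3].

% z_t: hidden state, x_t: input, y_t: output at time t.
\begin{align}
\text{HMM:}    &\quad p(z_t \mid z_{t-1}),\; p(y_t \mid z_t) \\
\text{AHMM:}   &\quad p(z_t \mid z_{t-1}),\; p(y_t \mid z_t, y_{t-1}) \\
\text{IOHMM:}  &\quad p(z_t \mid z_{t-1}, x_t),\; p(y_t \mid z_t, x_t) \\
\text{AIOHMM:} &\quad p(z_t \mid z_{t-1}, x_t),\; p(y_t \mid z_t, x_t, y_{t-1})
\end{align}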

Publications

The eye-gaze simulation on a 3D map is presented in [1], which analyzed safety criteria for negotiating blind corners in personal mobility vehicles. Yamato Maekawa extended this work and presented it in [2]. The driving behavior modeling with the HMMs is presented in [3].

[1] Naoki Akai, Takatsugu Hirayama, Luis Yoichi Morales, and Hiroshi Murase. "Safety criteria analysis for negotiating blind corners in personal mobility vehicles based on driver's attention simulation on 3D map," In Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 2367-2374, 2019. (ResearchGate)

[2] Yamato Maekawa, Naoki Akai, Takatsugu Hirayama, Luis Yoichi Morales, Daisuke Deguchi, Yasutomo Kawanishi, Ichiro Ide, and Hiroshi Murase. "An analysis of how driver experience affects eye-gaze behavior for robotic wheelchair operation," In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Workshop on Egocentric Perception, Interaction and Computing (EPIC), 2019. (ResearchGate)

[3] Naoki Akai, Takatsugu Hirayama, Luis Yoichi Morales, Yasuhiro Akagi, HaiLong Liu, and Hiroshi Murase. "Driving behavior modeling based on hidden Markov models with driver's eye-gaze measurement and ego-vehicle localization," In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), pp. 828-835, 2019. (ResearchGate)