Is the Leader Robot an Adequate Sensor for Posture Estimation and Ergonomic Assessment of a Human Teleoperator?


Amir Yazdani, Roya Sabbagh Novin, Andrew Merryweather, and Tucker Hermans

Utah Robotics Center, The University of Utah

Abstract:

Ergonomic assessment of human posture plays a vital role in understanding work-related safety and health. Current posture estimation approaches face occlusion challenges in teleoperation and physical human-robot interaction. We investigate whether the leader robot is an adequate sensor for posture estimation in teleoperation, and we introduce a new probabilistic approach that relies solely on the trajectory of the leader robot for generating observations.

We model the human as a redundant, partially observable dynamical system and infer the posture using a standard particle filter. We compare our approach against postures from a commercial motion capture system and against two least-squares optimization approaches for human inverse kinematics. The results reveal that the proposed approach successfully estimates human postures and ergonomic risk scores comparable to those obtained from gold-standard motion capture.
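To make the pipeline concrete, here is a minimal bootstrap particle filter for this estimation problem. It is a sketch only: `forward_kinematics` is a hypothetical stand-in for the human kinematic chain in the paper's model, the random-walk motion model and noise scales are assumptions, and the joint-limit check stands in for the posture-validity test described in the contributions below.

```python
import numpy as np

N_PARTICLES = 500
N_JOINTS = 10  # assumed dimensionality of the human kinematic model


def forward_kinematics(q):
    """Hypothetical stand-in: map joint angles q to the 3-D position the hand
    imposes on the leader robot's stylus (the real chain runs torso-to-hand)."""
    return 0.1 * np.array([np.cos(q).sum(), np.sin(q).sum(), q.sum()])


def is_valid_posture(q, lo=-np.pi, hi=np.pi):
    """Posture-validity check: reject joint configurations outside limits."""
    return np.all(q >= lo) and np.all(q <= hi)


def particle_filter_step(particles, weights, stylus_obs,
                         motion_std=0.02, obs_std=0.01):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: random-walk motion model over joint angles (an assumption).
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the likelihood of the observed stylus
    # position, zeroing out kinematically invalid postures.
    for i, q in enumerate(particles):
        if not is_valid_posture(q):
            weights[i] = 0.0
            continue
        err = np.linalg.norm(forward_kinematics(q) - stylus_obs)
        weights[i] *= np.exp(-0.5 * (err / obs_std) ** 2)
    weights = weights + 1e-300  # guard against an all-zero weight vector
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:
        idx = np.random.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
        particles = particles[idx]
        weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    return particles, weights
```

At each time step the reported posture can be read off as the most probable (highest-weight) particle, which is the posture shown in the figures below.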

Main claims:

  1. We can estimate the teleoperator's posture solely from the trajectory of the haptic input device's stylus.

  2. The estimated posture is accurate enough for ergonomic analysis.

Contributions:

  • We formulate human posture estimation as inference in a partially observable dynamical system whose only observation is the leader robot's stylus trajectory.

  • We solve this partially observable estimation problem using a particle filter that enforces posture validity, and we compare the results with deterministic least-squares solvers.

  • We provide a systematic RULA (Rapid Upper Limb Assessment) analysis and compare RULA scores computed from our estimated postures with those computed from postures estimated by a MoCap system.

  • We compare three different methods for human body segment length estimation:

    1. manual measurement of segment length on subjects;

    2. measuring the subject's height and using an ANSUR II anthropometric model to calculate the remaining segment lengths (see the scaling sketch after this list);

    3. circle point analysis (CPA) using data collected from each subject during calibration motion routines.
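As a concrete illustration of method 2, the sketch below scales segment lengths as fixed fractions of stature. The ratios shown are the classic Drillis and Contini anthropometric proportions, used here purely as stand-ins; the actual method uses regression models from the ANSUR II survey, whose coefficients are not reproduced here.

```python
# Illustrative proportional scaling of body-segment lengths from stature.
# Ratios are the classic Drillis & Contini proportions, standing in for the
# ANSUR II regression models used in the paper.
SEGMENT_RATIOS = {
    "upper_arm": 0.186,
    "forearm": 0.146,
    "hand": 0.108,
}


def segment_lengths_from_height(height_m):
    """Estimate segment lengths (meters) as fixed fractions of body height."""
    return {name: ratio * height_m for name, ratio in SEGMENT_RATIOS.items()}


# e.g. for a 1.75 m subject: upper_arm ~ 0.33 m, forearm ~ 0.26 m, hand ~ 0.19 m
```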

Qualitative results for posture estimation:

Skeletons reconstructed from our approach (red) and from motion capture (green) are overlaid on the ground-truth posture from the video stream.

Fig. (1) Video-overlaid skeletons show the posture (most probable particle) estimated from our proposed approach (red) and motion capture (green).

Segment length estimation results:

CPA estimated the segment lengths with smaller deviation from the lengths provided by the motion capture system than the other two methods.

Fig. (2) Deviation of segment lengths from the motion capture lengths.
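Circle point analysis recovers a joint center by fitting a sphere to the path a distal marker traces while the joint rotates during a calibration routine; the segment length is then the distance between consecutive joint centers. Below is a minimal algebraic (Kasa) least-squares sphere fit as a sketch of that core step; the variable names and two-motion protocol in the comments are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np


def fit_sphere(points):
    """Algebraic (Kasa) least-squares sphere fit.
    points: (N, 3) array of marker positions; returns (center, radius)."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = p[:3]
    radius = np.sqrt(p[3] + center @ center)
    return center, radius


# Segment length as the distance between two fitted joint centers, e.g.:
#   shoulder_c, _ = fit_sphere(elbow_marker_during_shoulder_rotation)
#   elbow_c, _ = fit_sphere(wrist_marker_during_elbow_rotation)
#   upper_arm_length = np.linalg.norm(elbow_c - shoulder_c)
```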

Quantitative results for posture estimation:

Overall, our approach generally agrees with motion capture, with a median posture deviation below 0.09 rad (about 5°) and an upper quartile below 0.25 rad (less than 15°), as shown in Fig. (3), despite the observation coming solely from the stylus trajectory with no extra sensors. The results show that the accuracy of our approach is sufficient for continuous monitoring of human posture.

Fig. (3) Deviation of the posture estimated by the proposed approach from the motion capture posture for all trials.

The results in Fig. (4) show that the three approaches for estimating posture solely from the leader robot do not differ significantly in their deviation from the MoCap postures.

Fig. (4) Deviation of the postures estimated by the three approaches from the motion capture postures for all trials.

Risk assessment results using RULA:

Our proposed posture estimation approach successfully raised an alert in every case where the RULA score exceeded 2, which occurred in all 32 trials. The experiment yielded the same interpretation of the RULA score in 28 trials (87.5%) and the exact same RULA score in 16 trials (50%). Considering per-time-step RULA scores (rather than the maximum value for a task), our approach estimates the same score as motion capture with a median accuracy above 85% for tasks 1 and 2, and above 77% for tasks 3 and 4.
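For context on the alert threshold above, the sketch below encodes the standard RULA grand-score action levels (from McAtamney and Corlett's original formulation) together with the score > 2 alerting rule used in our experiments.

```python
def rula_action_level(score):
    """Map a RULA grand score (1-7) to its standard action-level description."""
    if score <= 2:
        return "acceptable posture (if not maintained or repeated for long periods)"
    if score <= 4:
        return "further investigation needed; changes may be required"
    if score <= 6:
        return "investigation and changes required soon"
    return "investigation and changes required immediately"


def needs_alert(score):
    """Alerting rule: flag any posture beyond the acceptable band."""
    return score > 2
```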

Fig. (5) Comparison of the maximum RULA score per task computed from our estimated posture and from motion capture, for all subjects and trials.