Virtual reality fitness and health care applications require accurate, real-time pose estimation for interactive features. Yet they suffer either from a limited angle of view when handset devices such as smartphones and VR headsets capture the human pose, or from limited input interfaces when using distant imaging/computing devices such as Kinect. Our goal is to marry these two into an interactive metaverse system with human pose estimation. This paper describes the design and implementation of Yoroke, a distributed system designed specifically for human pose estimation in interactive metaverse systems. It consists of a remote imaging device that estimates the human pose and a handset device that implements a multi-user interactive metaverse system. We have implemented and deployed Yoroke on embedded platforms and evaluated its effectiveness in delivering accurate, real-time pose estimation for a multi-user interactive metaverse platform.
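The split described above (a remote device estimating pose, a handset consuming it) implies a per-frame pose message on the wire. The abstract does not specify Yoroke's protocol, so the following is only an illustrative sketch of what such a message might look like; the field names and JSON encoding are assumptions, not the authors' actual format.

```python
import json
import time

def encode_pose_message(user_id, keypoints):
    """Serialize one estimated pose (list of (joint_name, x, y, z)
    keypoints) into a timestamped JSON message the handset can consume.
    Hypothetical wire format for illustration only."""
    return json.dumps({
        "user": user_id,
        "t": time.time(),  # capture timestamp for latency accounting
        "pose": [{"j": name, "p": [x, y, z]} for name, x, y, z in keypoints],
    })

def decode_pose_message(raw):
    """Inverse of encode_pose_message: returns (user_id, {joint: (x, y, z)})."""
    msg = json.loads(raw)
    return msg["user"], {kp["j"]: tuple(kp["p"]) for kp in msg["pose"]}
```

In a multi-user setting, the `user` field lets the handset route each pose to the correct avatar.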
Teleoperation Control Framework for Enhancing Operator Telepresence through Real-Time Virtual Environment Integration
In this study, we explore enhancing a remote robotic control framework for hazardous environments, such as nuclear power plants and outer space, using Virtual Reality (VR) devices and a physical robot platform. The framework allows operators to intuitively change their viewpoint and issue pose commands aligned with their head pose. We conducted experiments comparing task performance with and without viewpoint-based pose command calculation. Results indicate that incorporating the operator's viewpoint significantly reduces task execution time and smooths the robot's trajectory. The proposed framework underscores the potential of VR-assisted control for improving efficiency and safety in remote robotics operations.
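The core idea of viewpoint-based pose command calculation is to re-express each command in the frame the operator is currently looking from, so that "forward" always means the viewing direction. The abstract does not give the authors' exact formulation; a minimal sketch, assuming only a yaw rotation of the head, could look like this:

```python
import numpy as np

def rotz(yaw):
    """Rotation matrix about the vertical (z) axis for a yaw angle in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def viewpoint_aligned_command(cmd_in_head_frame, head_yaw):
    """Rotate a translational command issued in the operator's head
    frame into the robot's base frame, so the command tracks the
    operator's current viewpoint. Illustrative only: a full framework
    would use the complete head orientation (roll/pitch/yaw), not
    just yaw as assumed here."""
    return rotz(head_yaw) @ np.asarray(cmd_in_head_frame, dtype=float)
```

For example, a "forward" command while the operator looks 90° to the left maps to a leftward motion in the robot's base frame.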
A Framework for Scanning and Detecting 3D Objects using 2D LiDAR for Collaborative Robots
The primary aim of this research is to develop an advanced safety system for collaborative robots, enhancing their ability to work safely and efficiently alongside humans in various industrial environments. To achieve this, we are developing a 360-degree LiDAR scanning system that rotates around the robot's base axis at approximately two rotations per second to detect people within a 5-meter range. This system will dynamically control the robot's movements based on object proximity, with a target detection precision of 30 mm or less. We combine a 2D LiDAR with an external rotation mechanism to acquire and visualize 3D point cloud data, which is then processed using ROS-based object recognition for practical application. Our project also focuses on cultivating professionals skilled in ROS middleware for sensor communication and data processing, as well as developing expertise in sensor selection, artificial intelligence, and computer vision for human and object recognition. Ultimately, we aim to demonstrate the safe application of this technology across various fields, promoting the advancement and widespread adoption of task-specific collaborative robots in industry.
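The 2D-LiDAR-plus-rotation approach amounts to tagging each planar scan with the current angle of the external rotation mechanism and lifting the polar readings into 3D. The exact mounting geometry is not given in the abstract; the sketch below assumes a scanner whose scan plane is vertical and is swept about the base z-axis, with scan parameters shaped like a ROS `LaserScan` message.

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, mount_yaw):
    """Lift one 2D scan into 3D points, given the yaw of the external
    rotation mechanism at capture time.

    ranges          : list of range readings in meters
    angle_min       : angle of the first beam in the scan plane [rad]
    angle_increment : angular step between beams [rad]
    mount_yaw       : rotation of the scan plane about the base z-axis [rad]

    Assumed geometry (illustrative): the scan plane is vertical, so a
    beam at angle phi hits (r*cos(phi), 0, r*sin(phi)) in the scanner
    frame before the yaw rotation is applied.
    """
    cy, sy = math.cos(mount_yaw), math.sin(mount_yaw)
    points = []
    for i, r in enumerate(ranges):
        if not (0.0 < r < float("inf")):
            continue  # drop invalid / out-of-range returns
        phi = angle_min + i * angle_increment
        x_plane = r * math.cos(phi)  # horizontal offset within the scan plane
        z = r * math.sin(phi)        # height above the base
        points.append((x_plane * cy, x_plane * sy, z))
    return points
```

At two rotations per second, accumulating the points from all scans within one half-rotation already yields a full 360-degree cloud, since opposite beams of each planar scan cover complementary directions.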