UMN Visual Computing & AI Seminar

To subscribe to VCAI, please join us here.

Upcoming Talks

10/24/2018 Wed 3-4pm @ Shepherd Lab (Drone lab)

Speaker I: Courtney Hutton

Title: Individualized Calibration of Rotation Gain Thresholds for Redirected Walking

Abstract:

Redirected walking allows the exploration of large virtual environments within a limited physical space. To achieve this, redirected walking algorithms must maximize the applied rotation gains while keeping them imperceptible to the user. Previous research has established population averages for redirection thresholds, including rotation gains. However, these averages do not account for individual variation in tolerance of and susceptibility to redirection. This paper investigates methodologies designed to quickly and accurately calculate rotation gain thresholds for an individual user. The new method is straightforward to implement, requires a minimal amount of space, and takes only a few minutes to estimate a user's personal threshold for rotation gains. Results from a user study confirm the wide variability in detection thresholds across individuals and indicate that the method of parameter estimation through sequential testing (PEST) is viable for efficiently calibrating individual thresholds.
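PEST, mentioned in the abstract, is an adaptive staircase procedure from psychophysics: the stimulus (here, the rotation gain) is moved toward the threshold, and the step size shrinks on response reversals. The following is a minimal Python sketch under simplified rules; the starting gain, step sizes, trial budget, and the ask_user callback are illustrative assumptions, not the study's actual protocol.

    # Minimal PEST-style staircase for estimating one user's rotation gain
    # detection threshold. This is an illustrative simplification, not the
    # study's exact procedure; all constants below are assumptions.
    def estimate_threshold(ask_user, start_gain=1.3, start_step=0.1,
                           min_step=0.005, max_trials=50):
        """ask_user(gain) presents one trial at the given rotation gain and
        returns True if the user reported detecting the manipulation."""
        gain, step = start_gain, start_step
        last = None
        for _ in range(max_trials):
            if step <= min_step:                     # step small enough: converged
                break
            detected = ask_user(gain)
            if last is not None and detected != last:
                step /= 2.0                          # response reversal: halve the step
            gain += -step if detected else step      # move toward the threshold
            last = detected
        return gain

A rotation gain of 1.0 applies no redirection, so the staircase starts above it and converges toward the gain at which the user can no longer reliably detect the manipulation.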

Bio: Courtney is a third-year Ph.D. student studying with Dr. Evan Suma Rosenberg in the Illusioneering Lab. Her research interests include user interfaces and multi-user interaction in augmented reality.

Speaker II: Jerald Thomas

Title: Leveraging Robotics Techniques for Redirected Walking

Abstract:

Redirected walking in virtual reality has been shown to be an effective technique for allowing natural locomotion in a virtual environment that is larger than the tracked physical environment. In this talk I will give a brief overview of redirected walking and how it works, and identify two significant limitations of redirected walking in its current state. Following this, I will introduce the conceptual design for an algorithm, inspired by techniques from the field of coordinated multi-robotics, that improves upon the current state of redirected walking by addressing these limitations.

Bio: Jerald Thomas is a fourth-year Ph.D. student studying under the supervision of Dr. Evan Suma Rosenberg. His research interests include natural locomotion and multi-user interaction in virtual reality.

10/31/2018 Wed 3-4pm @ Shepherd Lab (Drone lab)

Speaker: Michael Fulton

Title: Robot Communication Via Motion: Closing the Underwater Human-Robot Interaction Loop

Abstract:

Communication with underwater robots is key to enabling their use as partners for divers in the completion of underwater tasks. While some research has addressed methods of human-to-robot communication, very little has addressed the problem of robot-to-human communication, leaving the underwater human-robot interaction loop open and one-way. We propose the use of motion as a method of robot-to-human communication, closing the interaction loop underwater. We evaluate the effectiveness of this method in a small user study, finding it sufficiently robust to warrant further exploration and laying the groundwork for future work implementing motion as a communication method for non-humanoid robots in all domains.

Bio: Michael Fulton is a second-year Ph.D. student studying under Dr. Junaed Sattar. His research interests include human-robot interaction, underwater robotics, and applications of field robotics.

Past Talks

10/17/2018 Wed 3-4pm @ Shepherd Lab (Drone lab)

Special Invited Speaker: Prof. Brad Holschuh (Design, Housing, and Apparel)

Title: Garment-based Wearable Technology: Principles, Applications, and Challenges

Abstract:

In this talk I will highlight recent advancements in garment-integrated wearable technology. Integrating technology systems (e.g., computation, sensing, and actuation) into garments is appealing for wearability and usability reasons, but doing so introduces unique functional and manufacturing challenges that may not be present for stand-alone hardgoods (e.g., wristbands or smart watches). I will discuss best practices for garment-based sensing and actuation, describe potential applications of such technology being pursued in the UMN Wearable Technology Lab (WTL), and outline existing research challenges in this domain.

10/03/2018 Wed 3-4pm @ 4-204C

Speaker: John Harwell

Title: Quantitative Measurement of Scalability and Emergence in Swarm Robotics

Abstract:

Recent work in the swarm robotics literature suffers from (1) a lack of precise measures of swarm scalability with which different algorithms can be equitably compared, and (2) a lack of quantitative measurements of the level of emergent behavior/self-organization, despite many correct observations of its presence. We present a problem- and domain-agnostic measurement methodology that partially addresses these gaps. We demonstrate the applicability of our proposed measures by comparing memoryless, state-based, and more complex task allocation controllers in the context of an object gathering task. Results show that the optimal foraging approach for large problems/swarm sizes can be predicted with high accuracy using the proposed measures in scenarios with low swarm densities, and that the optimal approach differs between constant- and variable-density scenarios, suggesting that swarm density plays a large role in determining asymptotic performance, in addition to the well-understood role of swarm size.

Bio: John is a third-year Ph.D. student under the direction of Maria Gini. His research interests include swarm robotics, swarm intelligence, and stochastic modeling.

09/26/2018 Wed 3-4pm @ 4-204C

Speaker: Jiawei Mo

Title: Direct Stereo Visual Odometry: A Stereo Visual Odometry without Stereo Matching

Abstract:

We propose a stereo visual odometry method that is independent of stereo matching, with the goal of accurate camera pose estimation in scenes of repetitive high-frequency textures. We call it DSVO (Direct Stereo Visual Odometry); it is fast and more accurate than state-of-the-art stereo-matching-based methods. DSVO operates directly on pixel intensities, without any explicit feature matching. It applies a semi-direct monocular visual odometry running on one camera of the stereo pair, tracking the camera pose and mapping the environment simultaneously; the other camera is used to optimize the scale of the monocular visual odometry. We tested DSVO in different scenes to evaluate its performance and accuracy.
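The scale-recovery step described above lends itself to a compact illustration. The Python/NumPy sketch below shows one way to score candidate scales photometrically: scale the monocular points, transform them through the known stereo extrinsics, and compare reprojected intensities in the second image. Everything here is an assumption for illustration (the grid search, the shared intrinsics K, the extrinsics T_lr, and all names); DSVO itself presumably solves for the scale with an iterative optimizer rather than a grid search.

    import numpy as np

    def photometric_scale(points_left, intens_left, right_img, K, T_lr,
                          scales=np.linspace(0.5, 2.0, 151)):
        """Grid-search the monocular-VO scale against the right camera.

        points_left: Nx3 points in the left-camera frame (scale ambiguous)
        intens_left: N intensities sampled where the points project in the left image
        right_img:   right grayscale image as a float array (H x W)
        K:           3x3 pinhole intrinsics (assumed shared by both cameras)
        T_lr:        4x4 left-to-right extrinsics (the known stereo baseline)
        """
        best_s, best_err = 1.0, np.inf
        for s in scales:
            p = s * points_left                          # apply candidate scale
            p = (T_lr[:3, :3] @ p.T).T + T_lr[:3, 3]     # into the right-camera frame
            front = p[:, 2] > 1e-6                       # keep points in front of the camera
            uv = (K @ p[front].T).T
            uv = uv[:, :2] / uv[:, 2:3]                  # pinhole projection
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            ok = (u >= 0) & (u < right_img.shape[1]) & (v >= 0) & (v < right_img.shape[0])
            if ok.sum() < 10:                            # too few visible points: skip scale
                continue
            err = np.mean((right_img[v[ok], u[ok]] - intens_left[front][ok]) ** 2)
            if err < best_err:                           # lower photometric error is better
                best_s, best_err = s, err
        return best_s

The correct scale places the points so their reprojections in the right image land on pixels with matching brightness, which is why the photometric error is minimized there.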

Bio: Jiawei Mo is a second-year Ph.D. student working in the Interactive Robotics & Vision Lab, supervised by Professor Junaed Sattar. His research interests include computer vision, 3D geometry, and underwater robot state estimation.

Contact: hspark@umn.edu