UMN Visual Computing & AI Seminar

To subscribe to VCAI, please join us here.

12/12/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Nick Heller

Title: Deep Learning in Medical Image Analysis: Opportunities and Challenges

Abstract:

Deep Learning (DL) holds great potential for improving medical care and advancing our understanding of physiology and disease. However, the development of DL-based clinical tools is stymied by practical challenges such as the scarcity of relevant data, and is complicated by theoretical challenges such as feature space heterogeneity. In this talk, I present our work on preoperative analysis of renal cell carcinoma as a case study for how we approach some of these challenges. In particular, I discuss the issue of label quality in semantic segmentation, and present a semi-supervised approach to training from a collection of specialized datasets of the same modality.

Bio: Nick Heller is a second-year PhD student under Dr. Nikolaos Papanikolopoulos. His interests include machine learning, computer vision, and knowledge discovery in databases.


11/28/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Shi Chen

Title: Towards Efficient Deep Neural Networks

Abstract:

Deep Neural Networks (DNNs) have achieved great success on a variety of visual tasks, including image recognition, object detection, and semantic segmentation. However, these improvements in model performance have been accompanied by a significant increase in computational overhead. The intensive computational demands of DNNs make it difficult to run them directly on devices with limited resources, prohibiting their wide deployment in industry. In this talk, we will discuss state-of-the-art techniques for improving the efficiency and reducing the computational overhead of DNNs, including our recent work on layer-wise parameter pruning.
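To give a flavor of the general idea behind layer-wise parameter pruning, here is a minimal sketch of generic magnitude-based pruning on one layer's weight matrix; this illustrates the family of techniques, not necessarily the specific method presented in the talk:

```python
def prune_layer(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights
    in one layer (given as a list of rows). Generic magnitude pruning."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(sparsity * len(flat))          # number of weights to remove
    if k == 0:
        return [row[:] for row in weights]
    threshold = flat[k - 1]                # k-th smallest magnitude
    return [[w if abs(w) > threshold else 0.0 for w in row]
            for row in weights]

# With 50% sparsity, the three smallest-magnitude weights are zeroed.
layer = [[0.9, -0.05, 0.3], [-0.01, 0.7, 0.02]]
pruned = prune_layer(layer, sparsity=0.5)  # [[0.9, 0.0, 0.3], [0.0, 0.7, 0.0]]
```

In practice the pruned network is typically fine-tuned afterwards to recover accuracy, and the per-layer sparsity levels themselves are a design choice.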

Bio: Shi Chen is a first-year PhD student studying under Dr. Catherine Qi Zhao. His research interests lie in multi-modal fusion, human vision, and efficient computing.


11/21/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Selim Engin

Title: Establishing Connectivity in Mobile Robot Networks

Abstract:

Consider a group of robots with bounded communication range operating in a large open area. One of the robots has a piece of information which has to be propagated to all other robots. What strategy should the robots pursue to disseminate the information to the rest of the robots as quickly as possible?

This talk explores offline and online algorithms with provable guarantees for doing so efficiently. After presenting the algorithms, I will show their deployment on two different multi-robot systems.

Bio: Selim Engin is a third-year Ph.D. student working with Prof. Volkan Isler. His main research interests are geometric optimization for robotics and computer vision.

11/14/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Stephen J. Guy

Title: Modern Challenges in Motion Planning for Autonomous Robots

Abstract:

Motion planning forms a central part of modern autonomous robotics systems by providing the “intelligence” that allows robots to determine how to move through the environment and safely achieve their goals. The past two decades have seen tremendous progress in new motion planning algorithms based on techniques such as search trees, POMDP solvers, belief-space planning, and reinforcement learning, which can plan robot motion in challenging environments. However, most of these techniques work best in structured environments where the robot is the only active entity and where all of the environment's state is readily known. In practice, robots are increasingly needed in unstructured environments, where they may need to operate in the presence of other robots (and people), often under conditions of high uncertainty. In this talk, I will cover some of my recent work addressing these challenges, including strategies to improve autonomous robot navigation in uncertain, dynamic environments and recent advances in planning techniques for multiple agents in shared spaces. I will also present new methods to measure and improve the quality of predictive simulations of human motion, and discuss how these improved simulations can lead to better trajectories for robots moving in shared environments with people.

Bio: Stephen J. Guy is an associate professor in the Department of Computer Science and Engineering at the University of Minnesota. His research focuses on the development of artificial intelligence for use in computer simulations (e.g., crowd simulation and intelligent virtual characters) and autonomous robotics (e.g., collision avoidance and path planning under uncertainty). Stephen’s work has had a wide influence in games, VR, and real-time graphics industries: his work on motion planning has been licensed by Relic Entertainment, EA, and other digital entertainment companies; he has been a speaker in the AI Summit at GDC, the leading conference in the games development industry; and he currently serves as the vice chair of the board of directors for Glitch, a non-profit focused on empowering and diversifying the talent pool of game developers and creators. Prior to joining Minnesota, he received his Ph.D. in Computer Science in 2012 from the University of North Carolina - Chapel Hill with support from fellowships from Google, Intel, and the UNCF, and his B.S. in Computer Engineering with honors from the University of Virginia in 2006.

11/07/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Ruben D'Sa

Title: Design of a Transformable Unmanned Aerial Vehicle

Abstract:

Across a number of applications and domains, the selection of a UAV platform is typically one of compromise. Most UAVs can be classified into the categories of fixed-wing and quad-rotor platforms. Although fixed-wing platforms offer high endurance and range, they lack the maneuverability and hovering capabilities of a quad-rotor platform, which can be critical to satisfying mission objectives. Due to the physics of traditional propulsion strategies, both types of platforms suffer from limited flight time. My research as a Ph.D. student is centered on the modeling, design, control, and information-gathering capabilities of a solar-powered, transformable unmanned aerial vehicle. Flight endurance of the platform is enhanced through the use of solar power, allowing the platform to remain in flight all day, transforming to and from quad-rotor and fixed-wing configurations as needed for specific applications. The Transformer Solar UAV provides significant advantages over alternative air- and ground-based robotic systems: it can observe from an information-rich vantage point while in the fixed-wing state, and transform into the quad-rotor state for close-up sensing and observation. The use of the Transformer Solar UAV will be focused on distributed sensing and data collection to support agricultural and environmental monitoring.

Bio: Ruben D'Sa is a fifth-year Ph.D. student working with Prof. Nikolaos Papanikolopoulos. His main interests include aerial vehicle design, power electronics, system modeling, and path-planning.

10/31/2018 Wed 3-4pm @ Shepherd Drone Lab

Speaker: Michael Fulton

Title: Robot Communication Via Motion: Closing the Underwater Human-Robot Interaction Loop

Abstract:

Communication with underwater robots is key to enabling their use as partners for divers in the completion of underwater tasks. While some research has addressed methods of human-to-robot communication, very little has addressed the problem of robot-to-human communication, leaving the underwater human-robot interaction loop open and one-way. We propose the use of motion as a method of robot-to-human communication, closing the interaction loop underwater. We evaluate the effectiveness of this method in a small user study, finding it sufficiently robust to warrant further exploration, and lay the groundwork for future work implementing motion as a method of communication for non-humanoid robots in all domains.

Bio: Michael Fulton is a second-year PhD student studying under Dr. Junaed Sattar. His research interests include human-robot interaction, underwater robotics, and applications of field robotics.

10/24/2018 Wed 3-4pm @ Shepherd Lab (Drone lab)

Speaker I: Courtney Hutton

Title: Individualized Calibration of Rotation Gain Thresholds for Redirected Walking

Abstract:

Redirected walking allows the exploration of large virtual environments within a limited physical space. To achieve this, redirected walking algorithms must maximize the rotation gains applied while remaining imperceptible to the user. Previous research has established population averages for redirection thresholds, including rotation gains. However, these averages do not account for individual variation in tolerance of and susceptibility to redirection. This paper investigates methodologies designed to quickly and accurately calculate rotation gain thresholds for an individual user. The new method is straightforward to implement, requires a minimal amount of space, and takes only a few minutes to estimate a user's personal threshold for rotation gains. Results from a user study confirm wide variability in detection thresholds and indicate that the method of parameter estimation through sequential testing (PEST) is viable for efficiently calibrating individual thresholds.
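The flavor of a PEST-style calibration can be conveyed with a much-simplified adaptive staircase: present a rotation gain, ask whether the user noticed it, and halve the step size whenever the response reverses. Real PEST uses additional rules (e.g., a sequential likelihood-ratio criterion and step-doubling), and the starting values below are hypothetical, not the paper's:

```python
def estimate_threshold(detects, start_gain=1.3, step=0.1, min_step=0.0125):
    """Simplified PEST-style staircase. `detects(gain)` returns True if the
    user notices the applied rotation gain. The gain is lowered after a
    'noticed' response and raised after an 'unnoticed' one; the step size is
    halved at each response reversal until it falls below `min_step`."""
    gain = start_gain
    last = None
    while step >= min_step:
        noticed = detects(gain)
        if last is not None and noticed != last:
            step /= 2.0                      # reversal: refine the step
        gain += -step if noticed else step   # move toward the threshold
        last = noticed
    return gain

# Simulated user whose true detection threshold is a gain of 1.2.
estimate = estimate_threshold(lambda g: g > 1.2)
```

With a deterministic simulated user, the staircase converges to within the final step size of the true threshold; with a real (noisy) observer, each trial's response would come from the user study apparatus instead of a lambda.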

Bio: Courtney is a third-year Ph.D. student studying with Dr. Evan Suma Rosenberg in the Illusioneering Lab. Her research interests include user interfaces and multi-user interaction in augmented reality.

Speaker II: Jerald Thomas

Title: Leveraging Robotics Techniques for Redirected Walking

Abstract:

Redirected walking in virtual reality has been shown to be an effective technique for allowing natural locomotion in a virtual environment that is larger than the tracked physical environment. In this talk I will give a brief overview of redirected walking and how it works, as well as identify two large limitations of redirected walking in its current state. Following this I will introduce the conceptual design for an algorithm, inspired by techniques in the field of coordinated multi-robotics, that improves upon the current state of redirected walking by addressing these limitations.

Bio: Jerald Thomas is a fourth-year PhD student studying under the supervision of Dr. Evan Suma Rosenberg. His research interests include natural locomotion and multi-user interactions in virtual reality.

10/17/2018 Wed 3-4pm @ Shepherd Lab (Drone lab)

Special Invited Speaker: Prof. Brad Holschuh (Design, Housing, and Apparel)

Title: Garment-based Wearable Technology: Principles, Applications, and Challenges

Abstract:

In this talk I will highlight recent advancements in garment-integrated wearable technology. Garment integration of technology systems (e.g., computation, sensing, and actuation technologies) is appealing for wearability and usability reasons, but it imparts unique functional and manufacturing challenges that may not be present for stand-alone hardgoods (e.g., wristbands or smart watches). I will discuss best practices for garment-based sensing and actuation, potential applications of such technology being pursued in the UMN Wearable Technology Lab (WTL), and existing research challenges in this domain.

10/03/2018 Wed 3-4pm @ 4-204C

Speaker: John Harwell

Title: Quantitative Measurement of Scalability and Emergence in Swarm Robotics

Abstract:

Recent work on swarms in the literature suffers from (1) a lack of precise measures of swarm scalability with which different algorithms can be equitably compared, and (2) a lack of quantitative measurements of the level of emergent behavior/self-organization, despite many correct observations of its presence. We present a problem- and domain-agnostic measurement methodology that partially addresses these gaps. We demonstrate the applicability of the proposed measures by comparing memory-less, state-based, and more complex task-allocation controllers in the context of an object-gathering task. Results show that the optimal foraging approach for large problems/swarm sizes can be predicted with high accuracy using the proposed measures in scenarios with low swarm densities, and that the optimal approach differs between constant- and variable-density scenarios, suggesting that swarm density plays a large role in determining asymptotic performance, in addition to the well-understood role of swarm size.

Bio: John is a third-year PhD student under the direction of Maria Gini. His research interests include swarm robotics, swarm intelligence, and stochastic modeling.

09/26/2018 Wed 3-4pm @ 4-204C

Speaker: Jiawei Mo

Title: Direct Stereo Visual Odometry: A Stereo Visual Odometry without Stereo Matching

Abstract:

We propose a stereo visual odometry that is independent of stereo matching, with the goal of accurate camera pose estimation in scenes with repetitive high-frequency textures. We call it DSVO (Direct Stereo Visual Odometry); it is fast and more accurate than state-of-the-art stereo-matching-based methods. DSVO operates directly on pixel intensities, without any explicit feature matching. It runs a semi-direct monocular visual odometry on one camera of the stereo pair, tracking the camera pose and mapping the environment simultaneously; the other camera is used to optimize the scale of the monocular visual odometry. We tested DSVO in different scenes to evaluate its performance and accuracy.
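The scale-optimization idea can be illustrated with a toy sketch (this is an assumed simplification for intuition, not the actual DSVO formulation): if the monocular pipeline yields depths of scene points that are correct only up to an unknown scale, and the second camera's known baseline yields metric depths of the same points, the least-squares scale factor has a closed form:

```python
def estimate_scale(mono_depths, metric_depths):
    """Closed-form least-squares scale s minimizing sum_i (s*m_i - d_i)^2,
    where m_i are up-to-scale monocular depths and d_i are metric depths
    of the same points: s = <m, d> / <m, m>."""
    num = sum(m * d for m, d in zip(mono_depths, metric_depths))
    den = sum(m * m for m in mono_depths)
    return num / den

# Monocular depths off by a factor of 2 relative to metric depths.
s = estimate_scale([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # s == 2.0
```

In a real system the metric depths would be noisy and the estimate would typically be refined jointly with the pose within the odometry's optimization, rather than computed point-wise like this.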

Bio: Jiawei Mo is a second-year Ph.D. student in the Interactive Robotics & Vision Lab, supervised by Professor Junaed Sattar. His research interests include computer vision, 3D geometry, and underwater robot state estimation.

Contact: hspark@umn.edu