UMN Visual Computing & AI Seminar

To subscribe to VCAI, please join us here.

04/24/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Catherine Qi Zhao

Title: Learning complex markers from large-scale behavioral data

Abstract:

We develop computational and experimental methods to predict human behaviors and diagnose people with neuropsychiatric disorders. In this talk, I will discuss our recent innovations in data and models. As an example, I will present findings that decipher the attentional signature of autism. I will then demonstrate our deep learning models that learn semantic attributes from complex natural scenes, leading to breakthrough performance in attention prediction and in identifying people with autism.

Bio: Catherine Zhao is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. Her main research interests include computer vision, machine learning, cognitive neuroscience, and mental disorders.

04/17/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: George Brown

Title: Accurate Dissipative Forces in Optimization Integrators

Abstract:

Dissipative forces are ubiquitous in the natural world. Frictional contact in solids and granular materials, air resistance, viscosity, and internal damping in deformable bodies are just a few examples. In physics-based animation, a long-standing goal has been to produce visually plausible representations of these phenomena.

We propose a method for accurately simulating dissipative forces in deformable bodies when using optimization-based integrators. We represent such forces using dissipation functions, which may be nonlinear in both positions and velocities, enabling us to model a range of dissipative effects. To improve accuracy and minimize artificial damping, we provide an optimization-based version of the second-order accurate BDF2 integrator, and propose a general method for incorporating dissipative forces with second-order accuracy into such an optimization. Finally, we present a method for modifying arbitrary dissipation functions to conserve angular momentum (exactly in the theoretical case, approximately after time discretization).
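The optimization view of time integration described above can be illustrated with a minimal sketch. This is not the speaker's code: it is a 1D mass-spring example under assumed notation, where each BDF2 step minimizes an inertia term plus the elastic energy, and velocities follow the standard BDF2 difference formula.

```python
import numpy as np
from scipy.optimize import minimize

m, k, h = 1.0, 25.0, 0.01        # mass, spring stiffness, time step (assumed values)

def elastic_energy(x):
    return 0.5 * k * x**2         # simple linear spring about the origin

def step_bdf2(x_n, x_nm1, v_n, v_nm1):
    # BDF2 "history" prediction from the two previous states
    v_tilde = (4.0 * v_n - v_nm1) / 3.0
    x_tilde = (4.0 * x_n - x_nm1) / 3.0 + (2.0 * h / 3.0) * v_tilde
    # Incremental potential: inertia term (scaled by the BDF2 effective
    # step 2h/3) plus elastic energy; its minimizer is the BDF2 step
    obj = lambda x: 0.5 * m * (x - x_tilde)**2 / (2.0 * h / 3.0)**2 + elastic_energy(x)
    x_next = minimize(lambda z: obj(z[0]), [x_n]).x[0]
    # BDF2 velocity update from the three most recent positions
    v_next = (3.0 * x_next - 4.0 * x_n + x_nm1) / (2.0 * h)
    return x_next, v_next

# Duplicate the initial state as history (in practice one bootstraps with BDF1)
x, v = 1.0, 0.0
x_prev, v_prev = x, v
for _ in range(100):
    x_prev, v_prev, (x, v) = x, v, step_bdf2(x, x_prev, v, v_prev)
```

With no dissipation function added, the second-order scheme keeps the oscillator's total energy nearly constant, which is the "minimize artificial damping" property the abstract refers to.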

Bio: George is a third-year PhD student advised by Prof. Rahul Narain. His research area is physics-based animation, with a focus on optimization-based methods for fast and efficient numerical simulation.

04/10/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Jie Li

Title: An Implicit Contact and Friction Solver for Adaptive Cloth Simulation

Abstract:

Cloth dynamics plays an important role in the visual appearance of moving characters in the movie industry. Properly accounting for contact and friction is of utmost importance to avoid penetrations and to capture the typical folding and stick-slip behaviors due to dry friction. We introduce, for the first time, an exact implicit contact and friction model into cloth simulation, and improve the model in both efficiency and capability. We extend the base algorithm to handle self-contacts and layered contacts. Since this friction model is vertex-based, we apply an adaptive refinement method so that all contacts can be moved to the nearest vertices with negligible error. In addition, we achieve sticky cloth behavior by adding an adhesion force to the model with only a trivial modification. Our method is both accurate and robust, enabling more challenging and realistic demos than previous methods at similar computational cost.

Bio: Jie is a third-year PhD student advised by Rahul Narain. His research interests are physics-based animation and computational optimization. He is currently working on contact and frictional behaviors in cloth simulation.

04/03/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Jae Shin Yoon

Title: 3D Semantic Trajectory Reconstruction from 3D Pixel Continuum

Abstract:

A 3D trajectory representation of human interactions is a viable computational model that measures microscopic actions at high spatial resolution without prior scene assumptions. Unfortunately, the representation lacks semantics, which fundamentally prevents computational behavioral analysis. It is important to know not only where a 3D point is but also what it means and how it is associated with other points. In this talk, I will first present how to reconstruct 3D trajectories using a multi-camera system that emulates the 3D pixel continuum. Given 3D trajectories, I will mainly focus on how to optimally associate 2D semantics with the 3D trajectories. Lastly, I will briefly introduce my ongoing project on dense dynamic reconstruction using a multi-camera system.
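As background for the reconstruction step, the core operation of recovering a 3D point from multiple calibrated cameras can be sketched with the direct linear transform (DLT). This is a generic sketch with hypothetical toy cameras, not the speaker's pipeline:

```python
import numpy as np

# Each camera with 3x4 projection matrix P observing pixel (x, y) gives
# two linear constraints on the homogeneous 3D point X:
#   x * (P[2] @ X) - (P[0] @ X) = 0
#   y * (P[2] @ X) - (P[1] @ X) = 0

def triangulate(projections, points_2d):
    A = []
    for P, (x, y) in zip(projections, points_2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    # Homogeneous least-squares solution: last right singular vector of A
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras (identity intrinsics, hypothetical poses) observing one point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
X_est = triangulate([P1, P2], obs)
```

Applying this per tracked point over time yields 3D trajectories; the talk's contribution then concerns attaching 2D semantics to those trajectories.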

Bio: Jae Shin is a first-year Ph.D. student advised by Prof. Hyun Soo Park. Currently, he is working on 3D trajectory reconstruction and its applications using multi-camera systems. His research interests include computer vision, 3D vision, and machine learning. For more information about him: cs.umn.edu/~jsyoon

03/27/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Zhihang Deng

Title:

Abstract:

Providing natural 3D interaction using bare hands with haptic feedback is difficult but in demand for many VR applications such as architecture design, medical training, and education. Passive haptics provides a compelling haptic experience by using physical props to which virtual objects are registered. Using a self-avatar and passive haptics together can further enhance VR's immersive experience, but this is hard to apply due to the difficulties of acquiring a hand model, registering virtual hands to the real hands, and building analogous physical props for passive haptics. In this talk, I will discuss how we use pseudo-haptics, an illusion caused by an inconsistency between visual and haptic cues, to create an illusion of object length. We also study how different types of self-avatar hand representations influence the length illusion in VR.

Bio: Zhihang is a Ph.D. student working on human perception in virtual reality with Professor Victoria Interrante. Currently, he is working on providing haptic feedback using passive haptics and pseudo-haptics. Specifically, he is working on mapping simple physical props to virtual objects of various shapes in VR and studying the influence of different self-avatar hand representations on pseudo-haptics in VR. He is also interested in computer vision and computer graphics.

03/06/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Junaed Sattar

Title: The vagaries of robot sea trials: from Matlab to Madness

Abstract:

Field robotics is all about deploying robotic systems in natural, and often hostile, conditions to evaluate their performance in realistic settings. In the case of our Interactive Robotics and Vision Lab, it involves deploying autonomous underwater robots in open-water environments -- open seas and lakes. This talk will try to give some insights into the journey from the drawing board to the dive board, with a focus on highlighting the process of conceiving algorithms for underwater robotics, specifically for visual perception, learning, human-robot interaction, and navigation, to field testing the entire system.

Bio: I'm an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, and a MnDRIVE faculty member. I am the founding director of the Interactive Robotics and Vision Lab, where we investigate problems in field robotics, robot vision, human-robot communication, assisted driving, and applied (deep) machine learning, not to mention developing rugged robotic systems. My graduate degrees are from McGill University in Canada, and I have a B.S. in engineering from the Bangladesh University of Engineering and Technology. Before coming to the UoM, I worked as a post-doctoral fellow at the University of British Columbia, where I worked on service and assistive robotics, and at Clarkson University in upstate New York as an assistant professor. Find me at junaedsattar.org, and the IRV Lab at irvlab.cs.umn.edu or @irvlab on Twitter.

02/20/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Bobby Davis

Title: Motion Planning Under Uncertainty

Abstract:

Predictive planning approaches have revolutionized the ability of robots to move through complex environments with dynamic obstacles. This talk will cover our recent work on robotic motion planning under varying sources of uncertainty. Specifically, we'll focus on how knowledge of how uncertainty evolves can improve local planning in two different contexts. The first context will discuss how uncertainty can be used to guide and improve information acquisition for UAVs and other robots in human environments. The second context will discuss how we can exploit structure in the uncertainty of the predicted future states of other agents to create safer and faster trajectories.

Bio: Bobby is a fifth-year Ph.D. student working with Stephen J. Guy. His research focuses on robot motion planning, especially when there is uncertainty in the robot state or environment.

02/13/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Richard Linares (Aerospace Engineering and Mechanics Department)

Title: Hamiltonian Monte Carlo Approach to Bayesian Inversion, Nonlinear Filtering, and Optimal Control

Abstract:

This presentation investigates the application of Hamiltonian Monte Carlo (HMC) samplers for solving Bayesian inversion and nonlinear filtering problems that arise in many engineering applications. The HMC approach improves over Metropolis-Hastings based algorithms by reducing the correlation between successive sampled states through a Hamiltonian dynamical system evolution. Furthermore, the HMC sampler can be formulated as a dynamical system which is ergodic with respect to the target density. Using the theory of HMC samplers, the nonlinear filtering problem for continuous dynamics and discrete measurements is formulated using two stochastic differential equations: one for the dynamics of the system and one for the measurement update. Furthermore, the filtering problem can be stated as the solution of two versions of the Fokker-Planck-Kolmogorov equation (FPKE), one in physical time and one using an auxiliary time variable. Therefore, any existing method for solving the uncertainty propagation problem for nonlinear stochastic systems can now also be used to perform the measurement update. The proposed approach is shown to overcome the degeneracy phenomenon associated with particle filters. The use of stochastic optimal control theory to improve the performance of such a method is discussed. A few simple examples are presented to highlight the application of this theory to practical problems.
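A minimal illustration of the HMC mechanics described above (a generic sketch targeting a 1D standard normal, not the talk's filtering formulation): momenta are resampled, a leapfrog integration of the Hamiltonian dynamics proposes a distant state, and a Metropolis accept/reject step keeps the chain ergodic with respect to the target density.

```python
import numpy as np

def U(q):       return 0.5 * q**2   # negative log density of a standard normal
def grad_U(q):  return q

def hmc_step(q, rng, eps=0.1, L=20):
    p = rng.standard_normal()       # resample auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog: half momentum step, L position steps, half momentum step
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject on the change in the Hamiltonian
    dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
    return q_new if np.log(rng.uniform()) < -dH else q

rng = np.random.default_rng(0)
q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q, rng)
    samples.append(q)
```

Because the leapfrog proposal follows the Hamiltonian flow, successive samples are far less correlated than a random-walk Metropolis-Hastings chain with comparable acceptance rate.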

Bio: Professor Linares's research interests are state and parameter estimation, uncertainty quantification theory, and information fusion for space situational awareness. Dr. Linares joined the faculty of the University of Minnesota's Department of Aerospace Engineering and Mechanics as an assistant professor in 2015 after a short tenure as a research associate at the US Naval Observatory. Prior to moving to Washington, D.C., he held a Director's Postdoctoral Fellowship at Los Alamos National Laboratory. He has co-authored over 45 conference and journal papers in areas related to space situational awareness, reinforcement learning, artificial intelligence, spacecraft systems, and information fusion. Dr. Linares is the recipient of the AFOSR Young Investigator Research Program Award.

02/06/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Michael Tetzlaff

Title: IBRelight: Image-Based 3D Rendering of Color Appearance for Cultural Heritage

Abstract:

Image-based rendering and relighting have been extensively studied by computer graphics researchers, but until now have not been made accessible to the cultural heritage community. IBRelight is a new 3D rendering tool that addresses this disparity. It leverages existing photogrammetry techniques, applied to camera-mounted flash photographs, to estimate the camera poses and the geometry of the object. IBRelight then reprojects and blends the photographs in a way that emulates the desired new lighting configuration, which may consist of point lights, an environment map, or both. The interface for specifying the camera and lighting is designed to be user-friendly and is based on other modern 3D applications.

Bio: Michael is a fifth-year Ph.D. student working with Gary Meyer in 3D computer graphics, with a focus on image-based rendering and relighting for cultural heritage applications.

01/30/2018 Tue 2-3pm @ KHKH 4-192A

Speaker: Jung Who Nam

Title: Worlds-in-Wedges: Combining WIM and Portal VR Techniques to Support Comparative Scientific Visualization

Abstract:

We present Worlds-in-Wedges, a virtual reality visualization and 3D user interface technique for making simultaneous visual comparisons of multiple worlds (e.g., spatial 3D datasets, historical reconstructions, other data-driven 3D scenes). Comparison is a crucial task for data analysis, but it is not well understood how to facilitate comparative analysis for datasets best displayed in VR, where the tradition is to become immersed in a single world at a time. Our solution is to construct a visualization where it is possible to be in multiple worlds at once while also maintaining an understanding of how these worlds relate and being able to interact with each world. This is accomplished via a three-level visualization. The first level, worlds-in-context, visualizes the relationship between the different worlds (e.g., a map for worlds that are related in space, a timeline for worlds that are related in time). The second level, worlds-in-miniature, is a multi-instance version of the classic World-in-Miniature VR interface and visualizes the user’s position and orientation within each world. The third level, worlds-in-wedges, divides the virtual space surrounding the user into wedge-shaped volumes, where each wedge acts as a volumetric portal, displaying a portion of one world. This visualization is tightly integrated with a bimanual 3D user interface to control the processes of creating new wedges, adjusting their relative size, navigating through one or multiple worlds, and querying data. The new techniques are demonstrated and evaluated via an application to comparing plots from the US Forest Service’s Forest Inventory and Analysis dataset. An evaluation together with collaborating domain scientists suggests the technique usefully complements traditional analyses of these data and shows promise for use both by scientists and as a public-facing storytelling tool.

Bio: Jung Who is a third-year Ph.D. student working with Daniel F. Keefe. His research focuses on interactive data visualization in virtual reality and augmented reality.

Contact: hspark@umn.edu