UMN Visual Computing & AI Seminar

To subscribe to VCAI, please join us here.

12/12/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Rahul Narain

Title: Towards Accurate Dissipation in Physics-Based Animation

Abstract:

Computer animation has traditionally not paid a great deal of attention to dissipative forces, probably because the use of unconditionally stable numerical methods (necessary to prevent the animator or player from ever blowing up the simulation) already introduces too much artificial dissipation into the system. However, as the field evolves towards new applications outside entertainment and the methods in use become more accurate, it is becoming necessary to model dissipative forces such as friction with the same level of fidelity.

In this talk, I will share two projects our group has been working on in this direction. First, we are working to accurately compute frictional interactions of cloth with solid objects and with other cloth sheets, by combining our adaptive remeshing strategy with an efficient solver for frictional contact constraints. Second, we are extending the optimization-based formulation of simulation (which underlies our ADMM-based method and other popular techniques) to support general nonlinear dissipative forces without losing speed and accuracy, thus allowing fast, realistic animation of arbitrary soft materials.
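
For context, here is the standard variational form of implicit time integration that the phrase "optimization-based formulation" refers to, in simplified notation of my own (a sketch, not taken from the talk):

    x_{n+1} = \arg\min_x \; \frac{1}{2\Delta t^2} \| x - \tilde{x} \|_M^2 + E(x),
    \qquad \tilde{x} = x_n + \Delta t \, v_n + \Delta t^2 M^{-1} f_{\mathrm{ext}},

where M is the mass matrix and E is the elastic potential energy. Because a dissipative force such as friction depends on velocity and does not arise from any potential E(x), supporting general nonlinear dissipation inside this framework is the nontrivial extension described above.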

12/05/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Hyun Soo Park

Title: 3D Pixel Continuum

Abstract:

Cameras are now deeply integrated into our daily lives (e.g., Amazon Cloud Cam and Nest Cam), and we are quickly approaching the 3D pixel continuum: a state in which every 3D point in our space is observed as multiple-view pixels by a network of ubiquitous cameras. Such cameras open up a unique opportunity to quantitatively and continuously analyze our detailed interactions with scenes, objects, and people, which will facilitate behavioral monitoring for the elderly, human-robot collaboration, and social telepresence. In this talk, I will introduce our multi-camera system built in Shepherd Laboratory, consisting of 69 synchronized HD cameras, which emulates the 3D pixel continuum. This is still ongoing work, but I would like to share our direction and efforts toward 3D behavioral modeling.

Specifically, I will address the following questions:

  • Why do we need many cameras?
  • What are the hardware challenges in building the 3D pixel continuum?
  • How accurately can we measure our behaviors?
  • How can existing vision-based systems benefit from the 3D pixel continuum?
  • What will be the future of the continuum?

11/28/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Cheng Peng

Title: View Selection for Optimal Reconstruction

Abstract:

My work focuses on 3D reconstruction using cameras and laser sensors. In orchard environments, a 3D model of the orchard helps growers manage their crops more efficiently by letting them evaluate important plant traits such as yield, tree height, and volume.

My talk focuses on two parts.

First, I will describe using both LIDAR and camera sensors to reconstruct the orchard. Using the idea of superpixels, we are able to find accurate associations between image features (SIFT) and LIDAR points, and thus reconstruct the orchard accurately.

Second, aside from the side views of the orchard, we have started looking at the orchard from the top. Although structure from motion has been well studied, most algorithms rely on heuristics to maintain high-quality results. The top view presents a cleaner geometry, which enables us to approach the reconstruction problem from a geometric point of view. To improve reconstruction quality, we formulate the problem around a more direct evaluation: by modeling feature uncertainty as a cone, we can express the uncertainty of any target as a metric distance, so minimizing that metric distance directly improves reconstruction quality.

We present a theoretical analysis and an algorithm for determining the best subset of views with which to reconstruct the desired region optimally, where optimality is defined as the minimum metric error between each reconstructed point and its true location.
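
To convey the flavor of such a criterion, here is a toy sketch of cone-based uncertainty and greedy view selection. The information-matrix approximation, the greedy strategy, and every name below are my own assumptions for illustration, not the speaker's algorithm; the only idea carried over is that a view at distance d with angular uncertainty sigma pins the target down to roughly d*tan(sigma) in the transverse directions.

    import numpy as np

    def view_information(cam_center, target, sigma):
        """Information (inverse covariance) one view contributes about
        `target` under a cone model: angular uncertainty sigma yields a
        transverse metric uncertainty of about depth * tan(sigma)."""
        ray = target - cam_center
        depth = np.linalg.norm(ray)
        r = ray / depth
        s = depth * np.tan(sigma)  # transverse std. dev. at the target
        # A bearing constrains only the plane perpendicular to the ray.
        return (np.eye(3) - np.outer(r, r)) / s**2

    def worst_case_error(info):
        """Largest std. dev. of the combined estimate: inverse square root
        of the smallest eigenvalue of the total information matrix."""
        lam_min = np.linalg.eigvalsh(info)[0]
        return np.inf if lam_min <= 1e-12 else 1.0 / np.sqrt(lam_min)

    def greedy_view_selection(cam_centers, target, sigma, k):
        """Greedily pick k views minimizing the worst-case metric error."""
        chosen, info = [], np.zeros((3, 3))
        candidates = list(range(len(cam_centers)))
        for _ in range(k):
            best = min(candidates, key=lambda i: worst_case_error(
                info + view_information(cam_centers[i], target, sigma)))
            info = info + view_information(cam_centers[best], target, sigma)
            chosen.append(best)
            candidates.remove(best)
        return chosen, worst_case_error(info)

    # Example: choose 3 of 6 hypothetical top views over an orchard row.
    cams = np.array([[x, 0.0, 10.0] for x in np.linspace(-5.0, 5.0, 6)])
    print(greedy_view_selection(cams, np.array([0.0, 2.0, 0.0]),
                                np.radians(0.2), 3))

Note that greedy selection is only a heuristic here; the algorithm in the talk comes with an optimality guarantee that this sketch does not attempt to reproduce.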

Bio: Cheng is a PhD student advised by Prof. Volkan Isler. His research is mainly on real-time 3D reconstruction and mapping, with a current focus on geometric representations of agricultural environments.

11/21/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Zahra Forootaninia

Title: Uncertainty Models for TTC-Based Collision Avoidance

Abstract: Navigating multiple agents through an environment without collisions is one of the challenges in robot development. Agents have to detect and avoid collisions with several moving characters locally, without knowledge of the entire environment. Local collision avoidance models therefore play an important role in multi-agent navigation and planning. In this work, we tackle the problem of uncertainty in sensing data for multi-agent navigation, building on a collision avoidance model based on time-to-collision (TTC). We propose two ways of modeling uncertainty in an agent's path: an isotropic model, which considers uncertainty in all directions, and an adversarial model, which places the uncertainty only in the direction of a head-on collision. We analyze our methods mathematically and experimentally to show that both models produce collision-free interactions between agents.
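
For background, TTC itself has a closed form for disc-shaped agents: it is the smallest nonnegative root of ||p + t v|| = R, where p is the relative position, v the relative velocity, and R the sum of the two radii. A minimal sketch of that computation (my own illustration; the talk's contribution is the uncertainty modeling layered on top of it):

    import math

    def time_to_collision(p, v, R):
        """Smallest nonnegative root of ||p + t*v||^2 = R^2 for two disc
        agents with relative position p, relative velocity v, and combined
        radius R. Returns math.inf if no future collision occurs."""
        a = v[0]**2 + v[1]**2
        b = 2.0 * (p[0]*v[0] + p[1]*v[1])
        c = p[0]**2 + p[1]**2 - R**2
        if c < 0.0:
            return 0.0                  # the agents already overlap
        disc = b*b - 4.0*a*c
        if a == 0.0 or disc < 0.0:
            return math.inf             # paths never come within R
        t = (-b - math.sqrt(disc)) / (2.0*a)
        return t if t >= 0.0 else math.inf

    # Two agents 10 m apart, closing head-on at 2 m/s, radius 0.5 m each:
    print(time_to_collision(p=(10.0, 0.0), v=(-2.0, 0.0), R=1.0))  # 4.5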

Bio: I am a Ph.D. student working with Prof. Rahul Narain on physics-based animation techniques. Currently, my focus is on fast and efficient numerical methods for animating crowds and fluids.

11/07/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Daniel Orban

Title: Interactive Visual Querying of Large Parameter Spaces: Shooting Things Accurately while Fluidly Controlling Stress Under Pressure

Abstract: Scientists love exploring the new frontiers of the universe. Engineers enjoy searching for solutions to extremely difficult problems. Unfortunately, due to the large, multidimensional, and nonlinear nature of their respective domains, it is extremely easy to get lost in the data. For example, a medical device designer might want to optimize how a device affects a patient’s specific anatomy, but there are many possible solutions and each solution’s data is large and complex, making searching and simultaneous visual comparison extremely hard. This talk asks the question, “How can we interactively explore a large, sparse parameterized space where each data instance is also large?” To work towards a solution, I discuss the combination of two recent projects: Quest and Bento Box. Quest is an application in shock physics that allows scientists to discover which experiments should be run next in order to maximize knowledge of a high-dimensional, sparse, continuous system. It uses regression and interpolation techniques on top of a large data ensemble to interactively predict results and estimate uncertainty. Quest also gives the scientist the ability to influence decisions by directly manipulating visually encoded parameters. Bento Box approaches this exploration problem from the other side, assuming each data instance is large and complex. It uses spatial and temporal sampling to visualize and compare specific user-defined features of interest. We use Bento Box to explore the complex fluid-structure interaction of cardiac leads in the right atrium of the heart. I argue that the methods used in Quest and Bento Box can aid in interactively interpolating and comparatively exploring a large number of instances where each instance is also large.
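
The abstract does not say which regression technique Quest uses. As one hedged illustration of how a sparse ensemble can support interactive prediction with uncertainty estimates, here is a minimal Gaussian-process-style interpolator in plain numpy; the RBF kernel, the hyperparameters, and all names are assumptions for the sketch:

    import numpy as np

    def rbf_kernel(A, B, length=1.0):
        """Squared-exponential kernel between rows of parameter settings."""
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    def predict(X_train, y_train, X_query, length=1.0, noise=1e-6):
        """Posterior mean and std. dev. at X_query, given ensemble runs
        (X_train: parameter settings, y_train: scalar outputs)."""
        K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
        Ks = rbf_kernel(X_query, X_train, length)
        mean = Ks @ np.linalg.solve(K, y_train)
        var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
        return mean, np.sqrt(np.maximum(var, 0.0))

    # Five ensemble runs over one parameter; query an unseen setting.
    X = np.linspace(0.0, 1.0, 5)[:, None]
    y = np.sin(2.0 * np.pi * X[:, 0])
    print(predict(X, y, np.array([[0.37]])))  # prediction, uncertainty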

Bio: Dan is a third-year PhD student advised by Prof. Dan Keefe. His research focuses on interactive large-scale data visualization and visual parameter space analysis.

10/31/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Wenbo Dong

Title: 3D Computer Vision in Orchard Environments

Abstract:

Established 3D computer vision techniques often fail to perform well in orchard-like environments. In this talk, I will introduce three examples of our work on 3D computer vision in such environments. First, I will show how to accurately calibrate a 2D laser rangefinder (LRF) with a camera so that we can build a colored 3D map of the orchard using the LRF-camera rig. Second, I will present how UAVs can estimate linear velocities using computer vision in order to navigate through an orchard at a low altitude. Last, I will briefly introduce my ongoing work on 3D reconstruction of orchard rows and extraction of semantic information such as tree trunk diameter.
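
To give a sense of what such a calibration enables, here is a sketch of the standard pinhole projection step: once the rigid transform (R, t) from the LRF frame to the camera frame has been estimated, each range point can be projected into the image to pick up a color for the 3D map. All variable names are mine, not from the talk, and the camera is assumed to be undistorted:

    import numpy as np

    def colorize_lidar(points_lrf, image, K, R, t):
        """Assign image colors to LRF points. points_lrf: (N, 3) array in
        the LRF frame; K: 3x3 intrinsics; (R, t): LRF-to-camera extrinsics
        recovered by calibration; image: (H, W, 3) uint8 array."""
        pts_cam = points_lrf @ R.T + t          # into the camera frame
        colors = np.zeros((len(pts_cam), 3), dtype=np.uint8)
        for i, (x, y, z) in enumerate(pts_cam):
            if z <= 0.0:
                continue                        # behind the camera
            u, v, w = K @ np.array([x, y, z])   # pinhole projection
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
                colors[i] = image[v, u]
        return colors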

Bio: Wenbo is a PhD student advised by Prof. Volkan Isler. His research area is mainly 3D computer vision, with a current focus on semantic reconstruction of agricultural environments.

10/24/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Matthew Overby

Title: ADMM ⊇ Projective Dynamics: Fast Simulation of Hyperelastic Models with Dynamic Constraints

Abstract:

Elastic deformation is an essential component of cinema and visual effects. Modern simulation techniques allow the creation of real and fantasy characters with a level of realism that would otherwise be impossible. We apply the alternating direction method of multipliers (ADMM) optimization algorithm to implicit time integration of elastic bodies. As ADMM is a general-purpose optimization algorithm applicable to a broad range of objective functions, it permits the use of nonlinear constitutive models and hard constraints while maintaining a high level of speed, parallelizability, and robustness. We further extend the algorithm to improve the handling of dynamically changing constraints such as skin sliding and contact.
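
To make the structure concrete, here is a simplified sketch of that ADMM splitting (my notation; the unweighted penalty ρ is a simplification for exposition, whereas the method in the talk uses a weighted variant). Implicit time integration is posed as the minimization

    \min_x \; \frac{1}{2\Delta t^2} \|x - \tilde{x}\|_M^2 + \sum_i U_i(D_i x),

where x̃ is the predicted inertial position, M the mass matrix, and each U_i an energy acting on a local selection D_i x of the state. Introducing auxiliary variables z_i = D_i x, ADMM alternates three steps:

    z_i^{k+1} = \arg\min_z \; U_i(z) + \frac{\rho}{2} \|D_i x^k - z + u_i^k\|^2   % local, per element, in parallel
    x^{k+1}   = \arg\min_x \; \frac{1}{2\Delta t^2} \|x - \tilde{x}\|_M^2
                + \sum_i \frac{\rho}{2} \|D_i x - z_i^{k+1} + u_i^k\|^2           % global, one sparse linear solve
    u_i^{k+1} = u_i^k + D_i x^{k+1} - z_i^{k+1}                                   % scaled dual update

Projective Dynamics corresponds, roughly, to the special case in which each local solve reduces to a closed-form projection onto a constraint set; general hyperelastic energies and hard constraints reuse the same global solve, hence the "⊇" in the title.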

Bio: Matt is a third-year PhD student advised by Dr. Rahul Narain. His research area is physics-based animation, with a focus on elastic deformation.

10/10/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Dan Keefe

Title: Experiential Analytics: From Large-Scale Public Art to Immersive Visualization — When to Walk Inside Your Data

Abstract:

The technical barriers to immersion are all but gone today, but the question remains: is it a good idea to immerse yourself in data? I mean quite literally, does it ever actually make sense to stand inside a large-scale virtual (or physical) data visualization and walk through your data? In this short talk, I'll reflect on several recent immersive visualization projects by my group and our collaborators, including immersive virtual reality data visualizations as well as two large-scale public art installations. In each case, the special ingredient that makes the work an example of immersive analytics is the need to not only analyze the data but also to experience it from a first-person perspective. Thus, I wonder if we should be calling immersive analytics something more like experiential analytics. How do we understand and design for this experience? How do we quantify it? When is it essential, and for whom? Are there counterexamples where it is unnecessary and slows down the analytical process? I hope this talk will inspire reflection and discussion on these topics and on the exciting future of immersive analytics, as clearly evidenced by this year's workshop.

10/03/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Shan Su

Title: Planning from First-Person View

Abstract:

First-person videos record human activities from the camera wearer’s own perspective, capturing the wearer’s subtle social and physical interactions by following her/his visual attention. In this talk, I will discuss how such first-person perception can be applied to predicting human behaviors and planning motion. First, I will present work on predicting the collective movement of basketball players from their first-person videos. We leverage two visual cues embedded in the first-person videos: the visual semantics of the spatial and social layout around the person, and the joint attention that links individuals to a group. Second, I will present work on creating a visual scene from a first-person action. We introduce a concept called ActionTunnel: a 3D virtual tunnel that encodes the wearer's visual experience while moving into the scene. This abstraction allows us to associate distinctive images with the afforded first-person action.

09/19/2017 Tue 2-3pm @ KHKH 4-192A

Speaker: Prof. Volkan Isler

Title: Robotic Data Gathering in Agricultural and Environmental Monitoring

Abstract:

In this talk, I will give an overview of our efforts to build teams of autonomous aerial, ground and/or surface vehicles for data gathering. After a general overview, I will present an algorithm for collecting bearing data and analyze its performance.

For more information about our work: http://rsn.cs.umn.edu/



Contact: hspark@umn.edu