UMN Visual Computing & AI Seminar

To subscribe to VCAI, please join us here.

4/25/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Zack Xu

Title: Pathway Analysis of De Novo Variants Associated with Osteosarcoma

Abstract:

Osteosarcoma is a primary bone cancer that most often occurs in youths between 10 and 20 years of age, and its physiology is currently poorly understood. In this study of 92 family trios, each consisting of a patient and both unaffected parents, we compared the genetic data of the child with that of the parents to investigate de novo variants associated with the disease. We developed a variant-calling approach with a lower false-positive rate and then applied pathway enrichment analysis using the GO, Reactome, and KEGG databases. We found de novo variants of the tumor suppressor gene TP53 in two trios, as well as variants of the A Disintegrin and Metalloproteinase with Thrombospondin Motifs (ADAMTS) family genes ADAMTS7, ADAMTS12, and ADAMTS13 in another two trios.
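The abstract does not specify which statistical test underlies the enrichment analysis; as a rough illustration, the sketch below assumes a standard hypergeometric over-representation test of the kind commonly run against GO, Reactome, or KEGG gene sets, with made-up counts purely for demonstration.

```python
# Illustrative over-representation test for a single pathway (hypothetical numbers).
# A hypergeometric test is one standard choice for pathway enrichment; the talk's
# exact pipeline may differ.
from scipy.stats import hypergeom

background_genes = 20000   # assumed size of the background gene universe
pathway_genes = 150        # genes annotated to the pathway of interest
hit_genes = 40             # genes carrying de novo variants across the trios
overlap = 5                # hit genes that fall in the pathway

# P(X >= overlap) when drawing `hit_genes` genes at random from the background
p_value = hypergeom.sf(overlap - 1, background_genes, pathway_genes, hit_genes)
print(f"enrichment p-value: {p_value:.3g}")
```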

Bio: Zack Xu is a PhD student advised by Chad Myers. His research interests are computational approaches to cellular and molecular biology, cancer physiology, and neurological disorders. He works at HealthPartners Institute and was previously with the University of Minnesota Department of Laboratory Medicine and Pathology. He received his BEng degree in Electrical Engineering from the South China University of Technology and a master's degree in mathematics from the University of Minnesota.

4/18/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Jungseok Hong

Title: Towards the Detection and Localization of Underwater Trash by Autonomous Robotic Platforms

Abstract:

Underwater trash, particularly plastic, has already had damaging effects on the environment, but its long-term effects after disintegration are unknown and predicted to be catastrophic. Although multiple approaches have been proposed to address the problem, little research has been conducted on using autonomous robots to remove trash. In this work, we propose novel detection and localization algorithms for an AUV, working towards the removal of underwater trash. For the object detection module, we focus on detecting plastic debris and biological objects accurately and efficiently. To achieve this goal, we first curated an underwater trash dataset from a publicly available collection, then selected four state-of-the-art object detection models and evaluated multiple training methods. For the localization module, we propose and evaluate localization algorithms that fuse sensor measurements and bathymetry data with Bayesian information filters. Experimental evaluations demonstrate that the proposed algorithms can detect and localize trash effectively while running on the on-board computational platforms typically found on physical AUVs, and we identify the best-performing algorithm for each of the two components.
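The abstract mentions fusing sensor measurements and bathymetry data with Bayesian information filters; the details of those filters are not given here. As a minimal illustration of the underlying fusion step, the sketch below shows a one-dimensional Gaussian (Kalman-style) measurement update with hypothetical numbers; an information filter carries the same update in inverse-covariance form.

```python
import numpy as np

def bayes_update(mean, var, z, z_var):
    """Fuse a prior Gaussian belief (mean, var) with a measurement z of variance z_var."""
    k = var / (var + z_var)            # gain: how much to trust the measurement
    new_mean = mean + k * (z - mean)
    new_var = (1.0 - k) * var
    return new_mean, new_var

# Hypothetical example: a depth prior from bathymetry fused with a sonar reading.
prior_mean, prior_var = 12.0, 4.0      # meters, meters^2
sonar_z, sonar_var = 10.5, 1.0
post_mean, post_var = bayes_update(prior_mean, prior_var, sonar_z, sonar_var)
print(post_mean, post_var)             # posterior lies between prior and measurement
```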

Bio: Jungseok Hong is a second-year PhD student advised by Dr. Junaed Sattar. His research interests include computer vision and deep learning for perception, and their applications in underwater robotics.

4/4/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Michael Tetzlaff

Title: Recovering Reflectance using Backscattering Flash Photogrammetry

Abstract:

I will discuss methods for acquiring the reflectance of shiny surfaces from photographs in which the surface is illuminated by a camera-mounted flash. The technique can be seen as an extension of traditional photogrammetry. One use of the flash imagery is image-based relighting: rendering the 3D photogrammetry model as if it were illuminated by a set of spot lights or a high-dynamic-range photographic environment. Another approach is to fit the reflectance observations in the flash photographs to a parameter characterizing the microfacet profile of the surface. Flash photography is shown to be a promising technique for capturing the appearance of shiny materials for which traditional photogrammetry is insufficient.
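The talk's exact fitting procedure is not described in the abstract; as one common way to fit a microfacet parameter, the sketch below does a least-squares fit of a GGX normal-distribution lobe to specular intensities observed at different half-angles. The data values and the choice of GGX are illustrative assumptions, not the speaker's method.

```python
import numpy as np
from scipy.optimize import curve_fit

def ggx_lobe(cos_h, alpha, scale):
    """GGX normal distribution evaluated at the half-angle cosine, times a free scale."""
    denom = (cos_h**2 * (alpha**2 - 1.0) + 1.0) ** 2
    return scale * alpha**2 / (np.pi * denom)

# Hypothetical reflectance observations: half-angle cosines and measured intensities
# extracted from flash photographs (made-up values for illustration only).
cos_h = np.array([1.00, 0.98, 0.95, 0.90, 0.85, 0.80])
intensity = np.array([2.50, 1.90, 1.10, 0.45, 0.20, 0.10])

(alpha, scale), _ = curve_fit(ggx_lobe, cos_h, intensity,
                              p0=[0.3, 1.0],
                              bounds=([1e-3, 0.0], [1.0, np.inf]))
print(f"fitted roughness alpha ≈ {alpha:.3f}")
```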

Bio: Michael is a sixth-year Ph.D. student working with Gary Meyer in 3D computer graphics. His current research focus is on image-based rendering and relighting, with an emphasis on cultural heritage applications.

3/28/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Pravakar Roy

Title: Computer Vision for Precision Farming and Phenotyping in Specialty Farms

Abstract:

The world population is growing, and we will need to double our food production by 2030 to keep up with the current growth rate. Maximizing production while reducing labor, resources, costs, and crop loss is critical to reaching this landmark, and precision farming and phenotyping are key to doing so. Specialty crops (fruits, vegetables, flowers, etc.) are particularly well suited for precision farming and phenotyping studies because of their high value, management costs, and variability in growth. My research goal is to develop computer vision and planning algorithms that create a general infrastructure for automating these tasks. In this talk, I will present a few of the techniques I have developed. These methods can identify fruits, monitor fruit count and size over time, and extract morphological data such as tree height and canopy volume. More recently, I have been focusing on developing deep learning techniques for many of these problems. Deep learning solutions are more general and can be translated easily to different domains; however, a trained network is often only as good as its training data, and obtaining such data for specialty crops is hard. I will end my talk with an overview of my current work, which aims at alleviating the data annotation problem.

Bio: Pravakar is a Ph.D. candidate in the Department of Computer Science and Engineering, advised by Professor Volkan Isler. He works on agricultural robotics and computer vision and has designed multiple computer vision algorithms for yield estimation in apple orchards. His recent focus is on active vision and on deep learning from synthetic data. For details, please have a look at his personal website.


3/14/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Fenix Chen

Title: GIGL: A Domain Specific Language for Procedural Content Generation with Grammatical Representations

Abstract:

We introduce a domain-specific language for procedural content generation (PCG) called Grammatical Item Generation Language (GIGL). GIGL supports a compact representation of PCG with stochastic grammars in which generated objects maintain their grammatical structure. Advanced features in GIGL allow flexible customization of the stochastic generation process. GIGL is designed and implemented to interface directly with C++ so that it can be integrated into production games. We showcase the expressiveness and flexibility of GIGL on several representative problem domains in grammatical PCG, and show that the GIGL-based implementations run as fast as comparable C++ implementations while using less code.
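GIGL's own syntax is not shown in the abstract; as a minimal illustration of the general idea of grammatical PCG, the Python sketch below expands a small stochastic grammar and keeps the derivation structure of each generated item. The grammar rules and symbol names are hypothetical and are not GIGL code.

```python
import random

# A tiny stochastic grammar for item generation (hypothetical rules, not GIGL syntax).
# Each nonterminal maps to a list of (probability, expansion) pairs.
GRAMMAR = {
    "Item":     [(0.5, ["Weapon"]), (0.5, ["Potion"])],
    "Weapon":   [(0.7, ["sword", "Modifier"]), (0.3, ["bow", "Modifier"])],
    "Potion":   [(1.0, ["potion of", "Effect"])],
    "Modifier": [(0.6, ["+1"]), (0.4, ["+2"])],
    "Effect":   [(0.5, ["healing"]), (0.5, ["speed"])],
}

def expand(symbol):
    """Recursively expand a symbol, preserving the grammatical (derivation) structure."""
    if symbol not in GRAMMAR:
        return symbol                                   # terminal
    r, acc = random.random(), 0.0
    for prob, rhs in GRAMMAR[symbol]:
        acc += prob
        if r <= acc:
            return [symbol, [expand(s) for s in rhs]]
    return [symbol, [expand(s) for s in GRAMMAR[symbol][-1][1]]]

print(expand("Item"))   # e.g. ['Item', [['Weapon', ['sword', ['Modifier', ['+1']]]]]]
```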

Bio:

Tiannan "Fenix" Chen is a 5th year PhD student in computer science advised by Prof. Stephen J. Guy, and had a Master's degree in chemistry earlier. His current research focus is on AIs about games and procedural content generation, and had publications on those topics in conferences including MIG and AIIDE.

3/7/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Prof. Vicki Interrante

Title: Spatial Perception and Embodiment in Immersive Virtual Environments


2/28/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab (Room 164)

Speaker: Prof. David Crandall (Invited Speaker, Indiana University)

Title: Egocentric computer vision, for fun and for science

Abstract:

The typical datasets we use to train and test computer vision algorithms consist of millions of consumer-style photos. But this imagery is significantly different from what humans actually see as they go about their daily lives. Low-cost, lightweight wearable cameras (like GoPro) now make it possible to record people's lives from a first-person, "egocentric" perspective that approximates their actual fields of view. What new applications are possible with these devices? How can computer vision contribute to and benefit from this embodied perspective on the world? What could mining datasets of first-person imagery reveal about ourselves and about the world in general? In this talk, I'll describe recent work investigating these questions, focusing on two lines of work on egocentric imagery as examples. The first is for consumer applications, where our goal is to develop automated classifiers to help organize first-person images along several dimensions. The second is an interdisciplinary project using computer vision with wearable cameras to study parent-child interactions in order to better understand child learning. Despite their different goals, these applications share common themes of robustly recognizing image content in noisy, highly dynamic, unstructured imagery.

Bio:

David Crandall is an Associate Professor and Director of Graduate Studies in the Department of Computer Science at Indiana University. He is also a member of the programs in Informatics, Cognitive Science, and Data Science, and co-directs the Center for Algorithms and Machine Learning. He received his Ph.D. in computer science from Cornell University in 2008 and his M.S. and B.S. degrees in computer science and engineering from the Pennsylvania State University in 2001. He was a Postdoctoral Research Associate at Cornell from 2008 to 2010 and a Senior Research Scientist with Eastman Kodak Company from 2001 to 2003. He is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the IEEE Transactions on Multimedia. He has received an NSF CAREER award (2013), a Google Faculty Research Award (2014), best paper awards or nominations at CVPR, CHI, ICDL, ICCV, and WWW, and an Indiana University Trustees Teaching Award (2017), and is an IU Grant Thornton Scholar (2019).

2/21/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab

Speaker: Zahra Forootaninia

Title: Hair animation technology for animated movies

Abstract:

One of the main character effects in animated movies is realistic hair modeling and animation, and physics-based models therefore play a significant role in this area. In my talk, I will focus on one of the state-of-the-art methods for animating hair and explore one of the key issues in hair simulation: the collision problem. I will then discuss possible ways of improving collision handling. In addition, I want to share my own experience of working with visual artists and what they expect from a good solver.

Bio: I am a Ph.D. candidate under the supervision of Professor Rahul Narain. My current interest and focus are physics-based models for animating complex systems such as crowds and fluids. I worked at DreamWorks Animation as an R&D character effects intern for eight months on their in-house hair solver.

2/7/2019 Thursday 2:30-3:30pm @ Shepherd Drone Lab

Speaker: Zhixuan Yu

Title: HUMBI 1.0: HUman Multiview Behavioral Imaging Dataset

Abstract:

HUMBI 1.0 is a large corpus of high-fidelity models of behavioral signals in 3D from a diverse population, measured by a massive multi-camera system. With our novel design of a portable imaging system (consisting of 107 HD cameras), we collect human behaviors from 164 subjects across gender, ethnicity, age, and physical condition at a public venue. Using the multi-view image streams, we reconstruct high-fidelity models of five elementary parts: gaze, face, hands, body, and cloth. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, e.g., body part segmentation. This dataset is a significant departure from existing human datasets, which suffer from limited subject diversity. We hope HUMBI opens up new opportunities for the development of behavioral imaging.
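The abstract notes that the reconstructed 3D model yields geometrically consistent 2D annotations by projection into each camera. The sketch below shows the basic pinhole projection step that such annotation relies on; the camera intrinsics, extrinsics, and 3D point are made-up values, not HUMBI calibration data.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera (K, R, t)."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    uv = K @ cam                              # camera -> image plane (homogeneous)
    return (uv[:2] / uv[2]).T                 # perspective divide -> Nx2 pixel coords

# Hypothetical example: one reconstructed 3D keypoint projected into one camera view.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # intrinsics (assumed)
R, t = np.eye(3), np.array([0.0, 0.0, 3.0])                    # extrinsics (assumed)
point = np.array([[0.1, -0.2, 1.0]])                           # meters, world frame
print(project_points(point, K, R, t))                          # 2D annotation in pixels
```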

Bio: Zhixuan Yu is a second-year PhD student advised by Prof. Hyun Soo Park. He is currently working on a multiview behavioral imaging dataset. His research interests lie in 3D computer vision and robotics.

Contact: hspark@umn.edu