First Workshop on Imageomics (Imageomics-AAAI-24)

Discovering Biological Knowledge from Images using AI


Held as part of AAAI 2024

February 26, 2024, 9.00 am to 12.30 pm

Room 203, Vancouver Convention Centre – West Building | Vancouver, BC, Canada

Overview

Imageomics is an emerging field of science that uses images, ranging from microscopic images of single-celled species to videos of charismatic megafauna such as giraffes and zebras, as the source of automatically extracted biological information, specifically traits, in order to gain insights into the evolution and function of living organisms. A central goal of Imageomics is to make traits, including the morphology, physiology, behavior, and genetic make-up of organisms, computable from images by grounding AI models in existing scientific knowledge so that they produce generalizable and biologically meaningful explanations of their predictions. Building on our prior research and community-building efforts in this area as part of the NSF HDR Imageomics Institute, the field is ripe with opportunities to form foundational bridges between AI and the biological sciences, enabling computers to see what biologists cannot while addressing key challenges in AI, and creating a virtuous cycle between the two fields.

The goal of this workshop is to nurture the community of researchers working at the intersection of AI and biology and shape the vision of the nascent yet rapidly growing field of Imageomics.

Program

9.00 am to 9.30 am

Opening Remarks by Tanya Berger-Wolf

Introduction to Imageomics

Bio: Dr. Tanya Berger-Wolf is the lead PI of the NSF Harnessing the Data Revolution (HDR) Institute on Imageomics, a new field of biology aimed at extracting biological information automatically from images. She is the Director of the Translational Data Analytics Institute and a Professor of Computer Science and Engineering, Electrical and Computer Engineering, as well as Evolution, Ecology, and Organismal Biology at The Ohio State University. As a computational ecologist, her research lies at the unique intersection of computer science, wildlife biology, and the social sciences. Berger-Wolf is also a co-founder of the AI-for-wildlife-conservation software non-profit Wild Me, home of the Wildbook project, which has been chosen by UNESCO as one of the top 100 AI projects worldwide supporting the UN Sustainable Development Goals.


9.30 am to 10.00 am

Keynote Talk by Abby Stylianou

Title: Computer Vision and Deep Learning for the Discovery of Novel Genotype × Phenotype Relationships in Plants

Abstract: In this talk, Dr. Abby Stylianou (Assistant Professor of Computer Science at Saint Louis University and Fellow of the Taylor Geospatial Institute) will discuss computer vision and machine learning approaches to different above-ground phenotyping problems in the field, using both extremely expensive sensing platforms (like massive gantry systems) and low-cost sensing platforms (like an iPhone). Specific topics will include the development of deep learning pipelines to better understand the relationship between plant genotypes and their expressed phenotypes in biomass sorghum from the TERRA-REF project, and topological approaches to automatically extracting petal counts from smartphone-collected field data in the NSF-funded New Roots for Restoration Biology Integration Institute. Dr. Stylianou will additionally discuss future opportunities for machine learning-based approaches to both above- and below-ground phenotyping.
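As a toy illustration of the kind of shape-based counting the abstract alludes to, and emphatically not Dr. Stylianou's actual pipeline, the short Python sketch below counts petals as prominent peaks in the radial distance profile of a segmented flower contour. The function name, the prominence threshold, and the assumption of a clean single-flower contour are all ours.

import numpy as np
from scipy.signal import find_peaks

def count_petals(contour: np.ndarray, min_prominence: float = 0.05) -> int:
    """Estimate petal count from a flower's boundary contour.

    contour: (N, 2) array of (x, y) boundary points, e.g. the output of
    cv2.findContours on a segmented flower mask.
    """
    centroid = contour.mean(axis=0)
    # Radial distance profile: petal tips show up as local maxima.
    radii = np.linalg.norm(contour - centroid, axis=1)
    radii = radii / radii.max()  # normalize so prominence is scale-free
    # Rotate the profile to start at its global minimum, so a petal tip
    # is never split across the array's endpoints.
    radii = np.roll(radii, -int(np.argmin(radii)))
    peaks, _ = find_peaks(radii, prominence=min_prominence)
    return len(peaks)

In practice one would tune min_prominence per dataset and handle overlapping or occluded petals, which this toy version ignores.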

Bio: Dr. Abby Stylianou’s research lies at the intersection of fine grained visual categorization, deep metric learning and image retrieval, and explainable AI, with a lens on impactful real world applications in social justice and in science. In recent years, the applications Dr. Stylianou has focused on include building models for hotel-specific image retrieval in order to locate victims of sex trafficking who have been photographed in hotels, learning descriptions of plant phenomics and how they relate to underlying genetics and environmental factors, and observing how individuals interact with the world around them in outdoor webcam images to support better design of the built environment. Dr. Stylianou is additionally interested in the development of machine learning benchmarks and competitions to broaden participation in machine learning for science, and in the design of visualization and interpretability tools to better understand machine learning algorithms and make their decisions accessible to non-experts.

10.00 am to 10.30 am

Poster Lightning Talks

10.30 am to 11.35 am

Poster Session + Coffee Break

11.35 am to 11.50 am

Invited Talk by Wenhao Chai

Title: Towards Universal Animal Perception in Vision via Few-shot Learning

Abstract: Animal visual perception is an important technique for automatically monitoring animal health, understanding animal behaviors, and assisting animal-related research. However, it is challenging to design a deep learning-based perception model that can freely adapt to different animals across various perception tasks, due to the varying poses of a large diversity of animals, the scarcity of data for rare species, and the semantic inconsistency of different tasks. We introduce UniAP, a novel Universal Animal Perception model that leverages few-shot learning to enable cross-species perception among various visual tasks. Our proposed model takes support images and labels as prompt guidance for a query image. Images and labels are processed through a Transformer-based encoder and a lightweight label encoder, respectively. Then a matching module is designed for aggregating information between prompt guidance and the query image, followed by a multi-head label decoder to generate outputs for various tasks. By capitalizing on the shared visual characteristics among different animals and tasks, UniAP enables the transfer of knowledge from well-studied species to those with limited labeled data or even unseen species. We demonstrate the effectiveness of UniAP through comprehensive experiments in pose estimation, segmentation, and classification tasks on diverse animal species, showcasing its ability to generalize and adapt to new classes with minimal labeled examples. See this AAAI 2024 paper for more details: https://arxiv.org/pdf/2308.09953.pdf
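To make the architecture description concrete, here is a minimal PyTorch sketch of the few-shot pipeline the abstract names: a Transformer-based image encoder, a lightweight label encoder, a matching module fusing prompt and query, and a multi-head decoder. Every dimension, layer choice, and the cross-attention fusion rule is an illustrative assumption, not the authors' implementation; consult the linked paper for the actual UniAP design.

import torch
import torch.nn as nn

class UniAPSketch(nn.Module):
    """Illustrative skeleton of a UniAP-style few-shot pipeline.
    All sizes and layer choices are assumptions for illustration."""

    def __init__(self, dim: int = 256, n_tasks: int = 3):
        super().__init__()
        # Stand-in Transformer encoder over pre-tokenized image patches.
        self.image_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Lightweight label encoder: a small MLP over embedded label tokens.
        self.label_encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Matching module: cross-attention from query tokens to support tokens.
        self.matcher = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
        # One head per task; real heads would emit heatmaps, masks, or logits.
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_tasks)])

    def forward(self, support_imgs, support_labels, query_img, task: int):
        # Inputs are assumed pre-tokenized to (batch, tokens, dim).
        s_img = self.image_encoder(support_imgs)
        s_lab = self.label_encoder(support_labels)
        q = self.image_encoder(query_img)
        # Query tokens attend to support image tokens and read out a mix of
        # image and label information (one plausible fusion rule of many).
        fused, _ = self.matcher(query=q, key=s_img, value=s_img + s_lab)
        return self.heads[task](fused)

For instance, UniAPSketch()(torch.randn(1, 4, 256), torch.randn(1, 4, 256), torch.randn(1, 16, 256), task=0) yields a (1, 16, 256) per-token output for a hypothetical first task; a real decoder head would map these tokens to keypoints, masks, or class scores.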

Bio: Wenhao Chai is currently a graduate student at the University of Washington in the Information Processing Lab, advised by Prof. Jenq-Neng Hwang. Previously, he was an undergraduate student at Zhejiang University in the CVNext Lab, advised by Prof. Gaoang Wang. He was fortunate to intern with the Multi-modal Computing Group at Microsoft Research Asia. His research focuses primarily on embodied agents, video understanding, generative models, and human pose and motion.

11.50 am to 12.20 pm

Keynote Talk by Hoifung Poon

Title: Multimodal Generative AI: The Next Frontier in Precision Health

Abstract: The dream of precision health is to develop a data-driven, continuous learning system where new health information is instantly incorporated to optimize care delivery and accelerate biomedical discovery. The confluence of technological advances and social policies has led to the rapid digitization of multimodal, longitudinal patient journeys, such as electronic medical records (EMRs), imaging, and multiomics. In this talk, I'll present our research progress on multimodal generative AI for precision health, where we leverage real-world data to pretrain powerful multimodal patient embeddings that can serve as a digital twin for the patient. This enables us to assimilate multimodal, longitudinal information for millions of cancer patients and to apply this population-scale real-world evidence to advance precision oncology in deep partnership with real-world stakeholders.

Bio: Hoifung Poon is General Manager at Health Futures in Microsoft Research and an affiliated faculty member at the University of Washington Medical School. He leads biomedical AI research and incubation, with the overarching goal of structuring medical data to optimize care delivery and accelerate discovery for precision health. His team and collaborators are among the first to explore large language models (LLMs) and multimodal generative AI in health applications. His research has produced popular open-source foundation models such as PubMedBERT, BioGPT, BiomedCLIP, and LLaVA-Med. He has led successful research partnerships with large health providers and life science companies, creating AI systems in daily use for applications such as molecular tumor boards and clinical trial matching. He has given tutorials on these topics at top AI conferences such as ACL, AAAI, and KDD, and his prior work has been recognized with Best Paper Awards from premier AI venues such as NAACL, EMNLP, and UAI. He received his PhD in Computer Science and Engineering from the University of Washington, specializing in machine learning and NLP.

12.20 pm to 12.30 pm

Closing Remarks