Upcoming Webinars

Past Events

Dr. Shekoofeh Azizi

Google Research, Brain Team

Webinar #21 on September 19, 2023 at 10am EDT

Title: Exploring Foundation Models for Generalist Biomedical AI

Talk recording available here!


Abstract: The emergence of foundation AI models offers a significant opportunity to rethink the development of medical AI, making it more accessible, safer, and more equitable. A foundation model is a large artificial intelligence model trained on a vast quantity of data at scale, often by self-supervised learning. This process results in a model that can be adapted to a wide range of downstream tasks with little need for labeled data. These models are thus generalist models that can rapidly adapt to, and maintain performance in, new tasks and environments. In this talk, we explore the potential of foundation models in medicine and highlight some major progress towards creating generalist medical foundation models.


Bio: Dr. Shekoofeh Azizi is a senior research scientist at Google Research, Brain Team. She completed her Ph.D. at the University of British Columbia (UBC), Vancouver, Canada, in 2018. Her research concentrated on developing simple and efficient machine learning algorithms that are broadly applicable to a range of computer vision applications. Over the past few years, she has focused on developing methods to accelerate the translation of AI solutions into clinical impact. Her work has been covered in various media outlets and recognized by multiple awards, including the Governor General’s Canada Academic Gold Medal for her contributions to improving diagnostic ultrasound.



Dr. Xiang Li

Massachusetts General Hospital and Harvard Medical School

Webinar #20 on June 13, 2023 at 9am EDT (3pm CEST)

Title: Application and Development of Foundational Models in Healthcare

Talk recording available here!


Abstract: Recent advances in natural language processing (NLP) and artificial general intelligence (AGI) have led to the development of powerful large language models (LLMs) such as the GPT (Generative Pre-trained Transformer) series. These models are pre-trained on vast amounts of text data with human feedback and have demonstrated exceptional performance in a wide range of NLP tasks. However, two challenges remain in fully leveraging the power of LLMs and, more generally, large AGI models. The first is model adaptivity to specific domains, such as healthcare, where data have a highly heterogeneous distribution compared with the general domain and are difficult to access. The second is the integration of multiple data modalities: in most application scenarios, the task involves at least one data modality (e.g., images) beyond text. Here we will summarize our current line of work addressing these two challenges, including different schemes for prompt design and engineering, coarse-to-fine domain adaptation, model localization, and our ultimate goal of developing multi-modal healthcare foundational AI.


Bio: Dr. Xiang Li is an Instructor of Investigation at the Massachusetts General Hospital and Harvard Medical School. He received his bachelor’s degree from the School of Electronic Information and Electrical Engineering at Shanghai Jiaotong University and his Ph.D. degree from the Department of Computer Science at the University of Georgia, advised by Distinguished Research Professor and AIMBE Fellow Tianming Liu. After graduation, Dr. Li joined Massachusetts General Hospital and Harvard Medical School as a Research Fellow, mentored by Dr. James Thrall, Chairman Emeritus of the MGH Department of Radiology, and Dr. Quanzheng Li, director of the MGH/HMS Center for Advanced Medical Computing and Analysis. Dr. Li's research focuses on developing artificial intelligence solutions for analyzing healthcare data, especially fusion across imaging and non-imaging data, and on developing medical informatics systems for smart data management and AI deployment in the clinical workflow. He is the founding chair of the International Workshop on Multiscale Multimodal Medical Imaging. He has received multiple NIH grants for his research on multi-modal imaging fusion and clinical decision support. His work has been recognized by the MGH Thrall Innovation Grants Award (2022), the NVIDIA Global Impact Award (2018), ISBI Best Paper Awards (2011, 2013, and 2020), and the IEEE TRPMS Best Paper Award (2022).



Prof. Aasa Feragen

Technical University of Denmark

Webinar #19 on May 9, 2023 at 9am EDT (3pm CEST)

Title: Uncertainties in image segmentation and beyond

Talk recording available here!


Abstract: Quantification of image segmentation uncertainty has seen considerable attention over the past years, with particular emphasis on the aleatoric uncertainty stemming from ambiguities in the data. While there is a wealth of papers on the topic, the actual modelling objectives vary considerably, and as a consequence, the validation of segmentation uncertainty quantification also remains a partially open problem. In this talk we will take a bird's-eye view of segmentation uncertainty, discussing fundamental modelling challenges, partial solutions, open problems, and links to related topics such as generative models and explainable AI.
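For readers who want a concrete starting point, here is a minimal sketch (not from the talk) of one common way to quantify pixelwise segmentation uncertainty, assuming a set of softmax predictions sampled from an ensemble, MC dropout, or a probabilistic segmentation model. The split into aleatoric and epistemic components follows the standard predictive-entropy/mutual-information recipe.

```python
# Minimal sketch: pixelwise uncertainty from S sampled segmentations.
import torch

def pixelwise_uncertainty(prob_samples: torch.Tensor):
    """prob_samples: (S, C, H, W) softmax outputs from S stochastic passes."""
    mean_p = prob_samples.mean(dim=0)                                  # (C, H, W)
    # Predictive entropy: total uncertainty of the averaged prediction.
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=0)       # (H, W)
    # Expected per-sample entropy: a proxy for aleatoric (data) uncertainty.
    per_sample = -(prob_samples * prob_samples.clamp_min(1e-12).log()).sum(dim=1)
    aleatoric = per_sample.mean(dim=0)                                 # (H, W)
    # Mutual information (BALD): the epistemic (model) component.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Example with random "predictions" standing in for a real model:
probs = torch.softmax(torch.randn(8, 4, 64, 64), dim=1)  # 8 samples, 4 classes
total, aleatoric, epistemic = pixelwise_uncertainty(probs)
```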


Bio: Aasa Feragen is a Full Professor at the Technical University of Denmark. Her MSc and PhD in mathematics are both from the University of Helsinki, following which she held postdocs at the University of Copenhagen and the MPI for Intelligent Systems in Tübingen. Aasa's research sits at the intersection of machine learning, applied geometry and medical imaging, where Aasa takes a particular interest in the modelling of data with geometric constraints or invariants. Such data includes uncertainties and probability distributions, fairness constraints, graphs and trees, curves, surfaces, and a wealth of other examples. Aasa likes to contribute to community building and maintenance, including as program chair of MICCAI (2024), IPMI (2021), and MIDL (2019).


Prof. Feragen's Google Scholar 

Prof. Feragen's Website



Prof. Zhen Qiu

Michigan State University

Webinar #18 on March 21, 2023 at 9am EDT (2pm CET, 9pm CST)

Title: Miniaturized Microscope for Molecular Imaging

Talk recording available here!


Abstract: 

Wide-field fluorescence imaging for molecular guidance has become a promising technique for image-guided surgical navigation, but quick and intuitive microscopic inspection of fluorescent hot spots is still needed to confirm the local disease state of tissues. To address these unmet needs, we have been developing a clinically translatable, micro-system-enabled miniaturized microscope that incorporates both wide-field fluorescence imaging and high-resolution microscopic optical sectioning with advanced image processing. The imaging system will become increasingly important for precise tumor resection in oncology as more optical molecular markers are approved for human use.

Bio: 

Dr. Qiu received his bachelor's degree from Tsinghua University, Beijing, China, and his Ph.D. degree from the Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI. He completed his post-doctoral training in the Molecular Imaging Program, School of Medicine, Stanford University, CA. The Qiu Lab aims to study both cancer biology and translational medicine with custom-made, micro-system-enabled, ultra-thin in-vivo sensing/imaging tools. Supported by NSF, NIH, and DOE, his current work is mainly focused on miniaturized optical imaging system development for early cancer detection and image-guided surgical navigation, such as a wide-field-imaging-guided, micro-scanner-based confocal microendoscope; a multi-photon/SHG handheld microscope; and surface-enhanced Raman spectroscopy.

Prof. Islem Rekik

Imperial College London

Webinar #17 on January 19, 2023 at 9am EST (3pm CET)

Title: Graph Neural Networks in Network Neuroscience

Talk recording available here!


Abstract: This talk will cover ground-breaking predictive intelligence technologies that provide clinicians and neurologists with accurate, fast, and early diagnosis of neurological disorders while using minimal medical imaging data (i.e., conventional T1-w MRI) acquired at a baseline timepoint. We will explore uncharted territories and set out to solve formidable challenges for cross-dimensional generative models in the field of network neuroscience. Specifically, we will see many opportunities to expand predictive intelligence along different data-specific dimensions, including resolution/scale, domain/modality, and time. The talk will cover recent works published in prestigious venues such as IPMI 2021 and MICCAI, and in the journals Medical Image Analysis, IEEE Transactions on Medical Imaging, and Neural Networks, addressing the problem of multi-dimensional data prediction from a limited observation.
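As a concrete illustration of the underlying machinery (a generic sketch, not Prof. Rekik's models), here is one graph-convolution pass over a brain connectivity matrix, where nodes are brain regions (ROIs) and edge weights are connectivity strengths; all sizes and names are illustrative.

```python
# Minimal sketch: one GCN propagation step over a brain network.
import torch

def gcn_layer(adj: torch.Tensor, feats: torch.Tensor, weight: torch.Tensor):
    """Kipf-Welling-style propagation: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + torch.eye(adj.size(0))              # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    norm_adj = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return torch.relu(norm_adj @ feats @ weight)

n_rois, out_dim = 35, 16
connectivity = torch.rand(n_rois, n_rois)             # e.g., a brain network
connectivity = (connectivity + connectivity.T) / 2    # symmetrize
node_feats = connectivity                             # ROI profile as features
w = torch.randn(n_rois, out_dim) * 0.1
embeddings = gcn_layer(connectivity, node_feats, w)   # (35, 16) ROI embeddings
```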


Biography: Islem Rekik is the Director of the Brain And SIgnal Research and Analysis (BASIRA) laboratory (http://basira-lab.com/) and an Associate Professor at Imperial College London (Innovation Hub I-X). Together with BASIRA members, she has conducted more than 85 cutting-edge research projects cross-pollinating AI and healthcare, with a sharp focus on brain imaging and neuroscience. She is also a co-chair/organizer of more than 20 international first-class conferences/workshops/competitions (e.g., Affordable AI 2021-22, Predictive AI 2018-2022, Machine Learning in Medical Imaging 2021-22, the WILL competition 2021-22). In addition to her 130+ high-impact publications, she is a strong advocate of equity, inclusiveness, and diversity in research. She is the former president of Women in MICCAI (WiM) and the co-founder of the international RISE Network to Reinforce Inclusiveness & diverSity and Empower minority researchers in Low-Middle Income Countries (LMIC).




Prof. Yu-Ping Wang

Tulane University

Webinar #16 on December 15, 2022, at 9:00 am EST

Title: Interpretable multimodal deep learning with application to brain imaging and genomics data fusion

Talk recording available here!


Abstract:

Deep network-based data fusion models have been developed to integrate complementary information from multi-modal datasets while capturing their complex relationships. This is particularly useful in the biomedical domain, where multi-modal data such as imaging and multi-omics are ubiquitous and the integration of these heterogeneous data can lead to novel biological findings. However, deep learning models are often difficult to interpret, which creates challenges for uncovering biological mechanisms with these models. In this work, we develop an interpretable multimodal deep learning-based fusion model that performs automated disease diagnosis and result interpretation simultaneously. We name it Grad-CAM-guided convolutional collaborative learning (gCAM-CCL); it is achieved by combining intermediate feature maps with gradient-based weights in a multi-modal convolutional network. The gCAM-CCL model can generate interpretable activation maps that quantify pixel-level contributions of the input fMRI imaging features. Moreover, the estimated activation maps are class-specific, which can facilitate the identification of imaging biomarkers underlying different populations, such as age, gender, and cognitive groups. Finally, we apply and validate the gCAM-CCL model in a study of brain development with integrative analysis of multi-modal brain imaging and genomics data, demonstrating its successful application to both the classification of cognitive function groups and the discovery of underlying genetic mechanisms.
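To make the Grad-CAM ingredient concrete, here is a minimal generic Grad-CAM sketch in PyTorch (the gCAM-CCL model adds collaborative multi-modal learning on top of this idea): class-specific maps come from weighting intermediate feature maps with gradient-derived channel weights. `model` and `feat_layer` are placeholders for any CNN classifier and one of its convolutional layers.

```python
# Minimal sketch: generic Grad-CAM, not the gCAM-CCL model itself.
import torch
import torch.nn.functional as F

def grad_cam(model, feat_layer, x, target_class):
    feats, grads = [], []
    h1 = feat_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feat_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, target_class]       # scalar score for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    fmap, grad = feats[0], grads[0]                  # both (1, C, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True)    # global-average-pooled grads
    cam = F.relu((weights * fmap).sum(dim=1))        # (1, h, w), class-specific
    return cam / cam.max().clamp_min(1e-12)          # normalized to [0, 1]
```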

 

Bio:

Dr. Yu-Ping Wang received the BS degree in applied mathematics from Tianjin University, China, in 1990, and the MS degree in computational mathematics and the PhD degree in communications and electronic systems from Xi’an Jiaotong University, China, in 1993 and 1996, respectively. After his graduation, he had visiting positions at the Center for Wavelets, Approximation and Information Processing of the National University of Singapore and Washington University Medical School in St. Louis. From 2000 to 2003, he worked as a senior research engineer at Perceptive Scientific Instruments, Inc., and then Advanced Digital Imaging Research, LLC, Houston, Texas. In the fall of 2003, he returned to academia as an assistant professor of computer science and electrical engineering at the University of Missouri-Kansas City. He is currently a Professor of Biomedical Engineering, Computer Sciences, Neurosciences, and Biostatistics & Data Sciences at Tulane University. Dr. Wang’s recent effort has been bridging the gap between biomedical imaging and genomics, where he has over 300 peer-reviewed publications. Dr. Wang is a fellow of AIMBE and has served on numerous program committees and NSF and NIH review panels. He is currently an associate editor for J. Neuroscience Methods, IEEE/ACM Trans. Computational Biology and Bioinformatics (TCBB), and IEEE Trans. Medical Imaging (TMI). More about his research can be found at his website: http://www.tulane.edu/~wyp/

Prof. Kayhan Batmanghelich

University of Pittsburgh

Webinar #15 on October 21, 2022, at 9:30 am EDT

Title: Bridging between AI Models and Medical Insights: Learning, Inference, and Model Explanation

Talk recording available here!


Abstract:

The healthcare industry is arriving at a new era in which the medical communities increasingly employ computational medicine and machine learning. Despite significant progress in the modern machine learning literature, adoption of these new approaches has been slow in the biomedical and clinical research communities due to the lack of explainability and limited data. Such challenges present new opportunities to develop novel methods that address AI's unique challenges in medicine. This talk has three parts.

In the first part of the talk, I show examples of model explainability (XAI) tailored to AI applications in radiology. More specifically, I integrate ideas from causal inference into XAI (e.g., counterfactuals, mediation analysis). The second part presents examples of incorporating medical insight into self-supervised learning of imaging phenotypes. Finally, I address the issue of partial missingness (a common problem with clinical data) in imaging genetics for statistical independence tests.
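As a flavor of the counterfactual idea (a generic gradient-based formulation; the talk's causal-inference methods are more principled), one can search for a small, sparse perturbation of the input that pushes a classifier toward a target class, so that the perturbation itself highlights what the model treats as evidence.

```python
# Minimal sketch: a gradient-based counterfactual "edit" of an input image.
import torch

def counterfactual(model, x, target_class, steps=200, lr=0.05, sparsity=1e-3):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Push toward the target class while keeping the change small and sparse.
        loss = -logits[0, target_class] + sparsity * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach(), delta.detach()   # counterfactual and its edit
```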

 

Bio:

Kayhan Batmanghelich is an Assistant Professor in the Department of Biomedical Informatics and the Intelligent Systems Program, with secondary appointments in the Electrical and Computer Engineering and Computer Science Departments at the University of Pittsburgh. He received his Ph.D. from the University of Pennsylvania (UPenn) under the supervision of Prof. Ben Taskar and Prof. Christos Davatzikos. He spent three years as a postdoc in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, working with Prof. Polina Golland. His research is at the intersection of medical vision, machine learning, and bioinformatics. His group develops machine learning methods that address the interesting challenges of AI in medicine, such as explainability, learning with limited and weak data, and integrating medical image data with other biomedical data modalities. His research is supported by awards from the NIH and NSF and by industry-sponsored projects.

Prof. Dinggang Shen

ShanghaiTech University

Webinar #14 on July 12, 2022 at 9am EDT

Title: Deep Learning-based Medical Image Reconstruction

Talk recording is available here!


Abstract:

This talk will introduce deep learning methods we have developed for fast MR acquisition, low-dose CT reconstruction, and low-cost, low-dose PET acquisition. The implementation of these techniques in scanners for real clinical applications will be demonstrated, and comparisons with other state-of-the-art acquisition methods will be discussed.
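A minimal sketch of one widely used building block in this space (an assumed formulation, not necessarily the talk's systems): unrolled fast-MRI reconstruction, where a CNN refines the current image estimate and a data-consistency step re-imposes the acquired k-space samples. Here `cnn` is any network mapping a 2-channel (real/imaginary) image to a 2-channel image, and `mask` is a boolean undersampling pattern.

```python
# Minimal sketch: CNN refinement + k-space data consistency for fast MRI.
import torch

def data_consistency(img, kspace_meas, mask):
    """Keep the network's k-space prediction only where nothing was acquired."""
    k_pred = torch.fft.fft2(img)
    k_dc = torch.where(mask, kspace_meas, k_pred)   # mask: bool (H, W)
    return torch.fft.ifft2(k_dc)

def recon_step(cnn, img, kspace_meas, mask):
    """One unrolled iteration: CNN refinement, then data consistency."""
    x = torch.view_as_real(img).permute(2, 0, 1).unsqueeze(0)   # (1, 2, H, W)
    x = cnn(x).squeeze(0).permute(1, 2, 0).contiguous()         # (H, W, 2)
    refined = torch.view_as_complex(x)
    return data_consistency(refined, kspace_meas, mask)

# A full method unrolls several such steps and trains `cnn` end-to-end
# against fully sampled reference images.
```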

 

Bio:

Dinggang Shen is a Professor and the Founding Dean of the School of Biomedical Engineering, ShanghaiTech University, Shanghai, China, and also Co-CEO of United Imaging Intelligence (UII), Shanghai. He is a Fellow of IEEE, The American Institute for Medical and Biological Engineering (AIMBE), The International Association for Pattern Recognition (IAPR), and The Medical Image Computing and Computer Assisted Intervention (MICCAI) Society. He was the Jeffrey Houpt Distinguished Investigator and a tenured Full Professor at The University of North Carolina at Chapel Hill (UNC-CH), Chapel Hill, NC, USA, directing The Center of Image Analysis and Informatics, The Image Display, Enhancement, and Analysis (IDEA) Lab, and The Medical Image Analysis Core. His research interests include medical image analysis, machine learning, deep learning, and computer vision. He has published more than 1500 peer-reviewed papers in international journals and conference proceedings, with an H-index of 122 and over 60K citations. He serves as Editor-in-Chief of Frontiers in Radiology, as well as an associate editor (or editorial board member) for eight international journals. He served on the Board of Directors of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society from 2012 to 2015 and was General Chair of MICCAI 2019.

Prof. Jong Chul Ye

Korea Advanced Inst. of Science & Technology (KAIST)

Webinar #13 on May 17, 2022 at 10am EDT

Title: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction 

Talk recording is available here!


Abstract:

Diffusion models have recently attracted significant interest within the community owing to their strong performance as generative models, and their application to inverse problems has demonstrated state-of-the-art performance. Unfortunately, diffusion models have a critical downside: they are inherently slow to sample from, needing a few thousand iteration steps to generate images from pure Gaussian noise. In this work, we show that starting from Gaussian noise is unnecessary. Instead, starting from a single forward diffusion of a better initialization significantly reduces the number of sampling steps in reverse conditional diffusion. This phenomenon is formally explained by the contraction theory of stochastic difference equations of the kind our conditional diffusion strategy produces: alternating applications of reverse diffusion followed by a non-expansive data-consistency step. The new sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals new insight into how existing feed-forward neural network approaches for inverse problems can be synergistically combined with diffusion models. Experimental results on super-resolution, image inpainting, and compressed-sensing MRI demonstrate that our method can achieve state-of-the-art reconstruction performance at significantly reduced sampling steps.
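A schematic sketch of the CCDF idea as stated in the abstract (the DDIM-style update, flattened signals, and a linear measurement matrix `A` are simplifying assumptions): forward-diffuse a good initial estimate to an intermediate level t0 << T in one closed-form step, then run only t0 reverse steps, each followed by a non-expansive data-consistency step.

```python
# Schematic sketch of Come-Closer-Diffuse-Faster sampling.
import torch

def ccdf_sample(eps_net, x_init, y, A, alphas_bar, t0, lam=1.0):
    """x_init: initial estimate (flattened); y: measurements; A: (M, D) matrix;
    alphas_bar: cumulative noise schedule; t0: intermediate start level."""
    # One-shot forward diffusion of the initializer to level t0 << T.
    a_bar = alphas_bar[t0]
    x = a_bar.sqrt() * x_init + (1 - a_bar).sqrt() * torch.randn_like(x_init)
    for t in range(t0, 0, -1):
        eps = eps_net(x, t)                                    # predicted noise
        a_t, a_prev = alphas_bar[t], alphas_bar[t - 1]
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()     # denoised guess
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps  # DDIM-style step
        # Non-expansive data-consistency step toward y = A x.
        x = x - lam * A.T @ (A @ x - y)
    return x
```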

 

Bio:

Jong Chul Ye is a Professor at the Kim Jaechul Graduate School of Artificial Intelligence (AI) and an Adjunct Professor in the Dept. of Bio/Brain Engineering and the Dept. of Mathematical Sciences at the Korea Advanced Institute of Science and Technology (KAIST), Korea. He received the B.Sc. and M.Sc. degrees from Seoul National University, Korea, and the Ph.D. from Purdue University, West Lafayette. Before joining KAIST, he worked at Philips Research and GE Global Research in New York. He has served as an associate editor of IEEE Trans. on Image Processing and as an editorial board member for Magnetic Resonance in Medicine. He is currently an associate editor for IEEE Trans. on Medical Imaging and a Senior Editor of IEEE Signal Processing Magazine. He is an IEEE Fellow, was Chair of the IEEE SPS Computational Imaging TC, and was an IEEE EMBS Distinguished Lecturer. He was General Co-chair (with Mathews Jacob) of the IEEE Symp. on Biomedical Imaging (ISBI) 2020. His research interest is in machine learning for biomedical imaging and computer vision.

Dr. Holger Roth

NVIDIA

Webinar #12 on April 29, 2022 at 10am EDT

Title: Advanced Techniques for Collaborative Development of AI Models for Medical Imaging 

Talk recording is available here!


Abstract: 

The COVID-19 pandemic has emphasized the need for large-scale collaborations by the clinical and scientific communities to tackle global healthcare challenges. However, regulatory constraints around data sharing and patient privacy might hinder access to genuinely representative patient populations on a global scale. Federated learning (FL) is a technology allowing us to work around such constraints while keeping patient privacy in mind. This talk will show how FL was used to predict clinical outcomes in patients with COVID-19 while allowing collaborators to retain governance over their data (Nature Medicine 2021). Furthermore, I will introduce several recent advances in FL, including quantifying potential data leakage, automated machine learning (AutoML) and neural architecture search (NAS), and personalization that can allow us to build more accurate and robust AI models.
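For readers new to FL, here is a minimal federated-averaging (FedAvg) sketch of the basic mechanism that lets sites collaborate without sharing patient data; production systems such as the one behind the COVID-19 study add secure aggregation, privacy accounting, and more.

```python
# Minimal FedAvg sketch: only model weights cross site boundaries.
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """One site: train a copy of the global model on local (private) data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg_round(global_model, site_loaders):
    """Server: aggregate site updates weighted by local dataset size."""
    updates = [local_update(global_model, dl) for dl in site_loaders]
    total = sum(n for _, n in updates)
    avg = {k: sum(sd[k].float() * (n / total) for sd, n in updates)
           for k in updates[0][0]}
    global_model.load_state_dict(avg)
    return global_model
```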


Bio:

Holger Roth is a Sr. Applied Research Scientist at NVIDIA, focusing on deep learning for medical imaging. He has been working closely with clinicians and academics over the past several years to develop deep learning-based medical image computing and computer-aided detection models for radiological applications. He is an Associate Editor for IEEE Transactions on Medical Imaging and holds a Ph.D. from University College London, UK. In 2018, he was awarded the MICCAI Young Scientist Publication Impact Award.

Prof. Luping Zhou

The University of Sydney

Webinar #11 on March 22, 2022 at 7am EDT (10pm Sydney Time)

Title: Explore Correlated Image-Text Features for Automated Radiographical Report Generation

Talk recording is available here!


Abstract:

Automated radiographical report generation is a challenging task, as it requires generating paragraphs that describe fine-grained visual differences between cases, especially those between the diseased and the healthy. Existing image captioning methods commonly target generic images and lack mechanisms to deal with this challenge. As a result, they tend to generate rigid reports that repeat frequently appearing phrases describing the common content of images while suppressing the less frequent but more informative disease-related words. In this talk, I will introduce our efforts at exploring fine-grained interactions of image and text features for radiographical report generation, and demonstrate their success on large benchmarks.
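A minimal sketch of the kind of fine-grained image-text interaction the abstract refers to (generic cross-attention, not the talk's specific architecture): each text token attends over image region features, so generated report words can be grounded in localized visual evidence.

```python
# Minimal sketch: text tokens cross-attending to image region features.
import torch
import torch.nn.functional as F

def cross_attention(text_h, img_feats, wq, wk, wv):
    """text_h: (T, d) token states; img_feats: (R, d) region features."""
    q, k, v = text_h @ wq, img_feats @ wk, img_feats @ wv
    attn = F.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)   # (T, R) weights
    return attn @ v, attn   # attended visual context per word, plus weights

d, T, R = 256, 12, 49       # embedding size, #words, #image regions
ctx, weights = cross_attention(torch.randn(T, d), torch.randn(R, d),
                               *(torch.randn(d, d) * d ** -0.5 for _ in range(3)))
```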

 

Bio:

Dr. Luping Zhou is an Associate Professor in the School of Electrical and Information Engineering, The University of Sydney. She obtained her PhD from the Australian National University and received her post-doctoral training at the University of North Carolina at Chapel Hill. Dr. Zhou works at the interface of medical image analysis, machine learning, and computer vision, and has published 100+ research papers in these fields. Her current research focuses on medical image analysis with statistical graphical models and deep learning, as well as general visual recognition problems. She was a recipient of the prestigious ARC (Australian Research Council) DECRA award (Discovery Early Career Researcher Award). Dr. Zhou is an Associate Editor of the journals IEEE Trans. on Medical Imaging (TMI) and Pattern Recognition. She is a Senior Member of IEEE.

Prof. Qi Dou

The Chinese University of Hong Kong

Webinar #10 on January 14, 2022 at 9am EST (10pm Hong Kong, Beijing)

Title: Image-based Robotic Surgery Intelligence 

Talk recording is available here!


Abstract:

With rapid advancements in medicine and engineering technologies, the operating room has evolved into a highly complex environment in which surgeons take advantage of computers, endoscopes, and robots to perform procedures with more precision and smaller incisions. Intelligence, with the cognitive assistance, smart data analytics, and automation it affords, is envisaged to be a core fuel transforming next-generation robotic surgery in numerous known or unknown exciting ways. In this talk, I will present ideas, methodologies, and applications of image-based robotic surgery intelligence from three perspectives: AI-enabled surgical situation awareness to improve surgical procedures, AI-powered large-scale data analysis to enhance surgical education, and AI-driven multi-sensory perception to achieve surgical subtask automation. To tackle these challenging topics, a wide variety of cutting-edge vision and learning techniques will be covered, including transformers, weakly-supervised learning, meta-learning for model generalization, unsupervised video retrieval, stereo depth estimation, 3D scene reconstruction, reinforcement learning, and augmented reality. With the limited view provided in this talk, I look forward to generating discussions, inspirations, and interesting possibilities on the role of AI in the future of robotic surgery.

 

Bio:

Dr. Qi Dou is an Assistant Professor with the Department of Computer Science and Engineering at The Chinese University of Hong Kong. She is also an Associate Member of the T Stone Robotics Institute and the Multi-Scale Medical Robotics Center at CUHK. Her research focuses on synergistic innovations across medical image analysis, machine learning, surgical data science, and medical robotics, with an impact on supporting demanding clinical workflows and improving patient care. Dr. Dou has won the IEEE-EMBS TBME Best Paper Award 2nd Place (2021) and the IEEE ICRA Best Paper Award in Medical Robotics (2021), was a MICCAI Young Scientist Publication Impact Award Finalist (2021), and won the MICCAI-Medical Image Analysis Best Paper Award (2017). Dr. Dou serves as an associate editor for the Journal of Machine Learning for Biomedical Imaging and Computer Assisted Surgery, and as program co-chair of MICCAI 2022 and MIDL 2022. She also serves as an organizing committee member for medical imaging workshops at NeurIPS, CVPR, ICCV, and ICML.

Prof. Stefanie Speidel

National Center for Tumor Diseases (NCT) Dresden

Webinar #9 on December 7, 2021 at 11am EST (5pm CET)

Title: AI-assisted surgery – perspectives and challenges 

Talk recording is available here!


Abstract:

In this talk, I’ll present our recent research on AI-assisted surgery with a specific focus on the analysis of intraoperative video data. The goal is to bridge the gap between data science, sensors, and robotics to enhance the collaboration between surgeons and cyber-physical systems, and to democratize surgical skills by quantifying surgical experience and making it accessible to machines. Several examples of optimizing the therapy of the individual patient by turning available data into useful information are given. The talk will focus on soft-tissue registration and workflow analysis for context-aware assistance, as well as sensor-based surgical training and data generation for machine learning applications. Finally, remaining challenges and strategies to overcome them are discussed.

 

Bio:

Dr. Stefanie Speidel has been a professor for “Translational Surgical Oncology” at the National Center for Tumor Diseases (NCT) Dresden since 2017 and one of the speakers of the DFG Cluster of Excellence CeTI since 2019. She received her PhD from the Karlsruhe Institute of Technology (KIT) in 2009 and led a junior research group, “Computer-Assisted Surgery,” at KIT from 2012 to 2016. Her current research interests include image- and robot-guided surgery, soft-tissue navigation, sensor-based surgical training, and intraoperative workflow analysis based on various sensor signals in the context of the future operating room. She regularly organizes workshops and challenges, including the Endoscopic Vision Challenge@MICCAI, and has been general chair and program chair for a number of international events, including the IPCAI and MICCAI conferences.

Prof. Chao Chen

Stony Brook University

Webinar #8 on November 9, 2021 at 9am EST

Title: Topology-Informed Biomedical Image Analysis 

Talk recording is available here!


Abstract:

Thanks to decades of technology development, we are now able to visualize complex biomedical structures such as neurons, vessels, trabeculae, and breast tissues in high quality. We need innovative methods to fully exploit these structures, which encode important information about underlying biological mechanisms. In this talk, we explain how topology, i.e., connected components, handles, loops, and branches, can be seamlessly incorporated into different parts of a learning pipeline. Under the hood is a formulation of the topological computation as a differentiable operator, based on the theory of topological data analysis. This leads to a series of novel methods for segmentation, generation, and analysis of these topology-rich biomedical structures. We will also briefly mention how topological information can be used in graph neural networks and noise/attack-robust machine learning.
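To make the "differentiable topology" point concrete, here is a minimal sketch of 0-dimensional persistence of a 1-D signal with a persistence-based loss (segmentation methods in this family use 2-D cubical persistence, but the principle is identical): births and deaths are values at specific indices, so a loss built from them backpropagates to exactly those entries.

```python
# Minimal sketch: 0-dim sublevel-set persistence of a 1-D signal (union-find),
# plus a loss that suppresses all but the strongest components.
import torch

def persistence_pairs_1d(values):
    """Returns (birth_idx, death_idx) pairs; the essential class is omitted."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in order:                       # add points in increasing value
        parent[i] = i; birth[i] = i
        for j in (i - 1, i + 1):          # merge with already-active neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the component with the younger (higher) birth dies here
                    old, young = sorted((ri, rj), key=lambda r: values[birth[r]])
                    pairs.append((birth[young], i))
                    parent[young] = old
    return pairs

def topo_loss(values, keep=1):
    """Penalize total persistence of all but the `keep` strongest components."""
    pairs = persistence_pairs_1d(values.detach().tolist())
    pers = sorted((values[d] - values[b] for b, d in pairs), key=lambda p: p.item())
    spurious = pers[:max(0, len(pers) - (keep - 1))]   # essential one not paired
    return torch.stack(spurious).sum() if spurious else values.sum() * 0

x = torch.tensor([0.1, 0.8, 0.2, 0.9, 0.15], requires_grad=True)
topo_loss(x, keep=1).backward()   # gradients land on birth/death entries only
```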

 

Bio:

Dr. Chao Chen is an assistant professor at Stony Brook University. His research interest spans topological data analysis (TDA), machine learning and biomedical image analysis. He develops principled learning methods inspired by the theory from TDA, such as persistent homology and discrete Morse theory. These methods address problems in biomedical image analysis, robust machine learning, and graph neural networks from a unique topological view. His research results have been published in major machine learning, computer vision, and medical image analysis conferences. He serves as an area chair for MICCAI, AAAI, CVPR and NeurIPS.

Prof. Adrian Dalca

Harvard Medical School

Webinar #7 on October 26, 2021 at 10am EDT

Title: Unsupervised Learning of Image Correspondences in Medical Image Analysis 

Talk recording is available here


Abstract:

Image registration is fundamental to many tasks in image analysis. Classical image registration methods have undergone decades of technical development, but they are often prohibitively slow since they solve an optimization problem for each 3D image pair. In this talk, I will introduce models that leverage learning paradigms to perform deformable medical image registration more accurately and substantially faster than traditional methods, crucially enabling new research directions and applications. Building on these models, I will discuss a learning framework for constructing deformable templates, which play a fundamental role in these analyses. This learning approach to template construction can yield a new class of on-demand conditional templates, enabling new analyses. I will also present recent and ongoing models, such as modality-invariant learning-based registration methods that work on unseen test-time contrasts, and hyperparameter-agnostic learning for image registration that removes the need to train different models for different hyperparameters.
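A minimal sketch of the learning-based registration recipe (VoxelMorph-style, simplified to 2-D; names and weights are illustrative): a network predicts a displacement field, a differentiable spatial transformer warps the moving image, and training minimizes an image-similarity term plus a smoothness penalty, so no per-pair optimization is needed at test time.

```python
# Minimal sketch: differentiable warping + registration losses (2-D).
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (N, 1, H, W); flow: (N, 2, H, W) displacements in pixels."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow  # (N,2,H,W)
    # normalize to [-1, 1]; grid_sample expects (N, H, W, 2) ordered (x, y)
    grid = torch.stack((2 * grid[:, 0] / (w - 1) - 1,
                        2 * grid[:, 1] / (h - 1) - 1), dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(fixed, warped, flow, reg_weight=0.01):
    sim = F.mse_loss(warped, fixed)                    # image similarity
    # smoothness: penalize spatial gradients of the displacement field
    smooth = ((flow[..., 1:, :] - flow[..., :-1, :]) ** 2).mean() + \
             ((flow[..., 1:] - flow[..., :-1]) ** 2).mean()
    return sim + reg_weight * smooth
```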

Bio:

Adrian V. Dalca is Assistant Professor at Harvard Medical School, and research scientist at the Massachusetts Institute of Technology. He obtained his PhD from CSAIL, MIT, and his research focuses on probabilistic models and machine learning techniques to capture relationships between medical images, clinical diagnoses, and other complex medical data. His work spans medical image analysis, computer vision, machine learning and computational biology. He received his BS and MS in Computer Science from the University of Toronto.

Prof. Yuankai Huo

Vanderbilt University

Webinar #6 on September 8, 2021 at 9am EDT

Title: Scalable learning for large biomedical images 

Talk recording is available here!


Abstract:

Biomedical image analysis is ubiquitous and indispensable in biology, healthcare, pharmacology, and education. Rapid developments in data sharing and computational resources are reshaping the medical imaging research field from small-scale to large-scale (e.g., big data with thousands or more subjects). However, traditional medical image analysis techniques can be inadequate for the new challenges of big data, including algorithmic robustness, inter-subject variability, and computational resource demands. In this presentation, I will (1) present an end-to-end, large-scale lifespan brain image analysis on more than 5000 patients, (2) introduce our scalable representation learning and self-supervised learning algorithms for gigapixel pathological images, and (3) present our recent works in high-dimensional structural microscopy image analytics.

 

Bio:

Dr. Yuankai Huo is an Assistant Professor in Computer Science, Computer Engineering, and Data Science at Vanderbilt University, TN, USA. He received his B.S. degree in Electrical Engineering from Nanjing University of Posts and Telecommunications (NJUPT) in 2008 and his master's degree in Electrical Engineering from Southeast University in 2011. After graduation, he worked at Columbia University and the New York State Psychiatric Institute as a staff engineer and research officer from 2011 to 2014. He received his master's degree in Computer Science from Columbia University in 2014 and his Ph.D. degree in Electrical Engineering from Vanderbilt University in 2018. He then worked as a Research Assistant Professor at Vanderbilt University and later as a Senior Research Scientist at PAII Labs. Since 2020, he has been a faculty member of the Department of Electrical Engineering and Computer Science and the Data Science Institute at Vanderbilt University. His research interests include medical image computing, knowledge-infused machine learning, and large-scale healthcare data analytics. His research aims to facilitate data-driven healthcare and improve patient outcomes through innovations in medical image analysis as well as multi-modal data representation and learning.

Prof. Faisal Mahmood

Harvard Medical School

Webinar #5 on July 21, 2021 at 11am EDT

Title: Data-efficient and multimodal computational pathology 

Talk recording is available here!


Abstract:

Advances in digital pathology and artificial intelligence present the potential to build assistive tools for objective diagnosis, prognosis, and prediction of therapeutic response and resistance. In this talk we will discuss: 1) data-efficient methods for weakly-supervised whole-slide classification, with examples in cancer diagnosis and subtyping, allograft rejection, etc. (Nature Biomedical Engineering, 2021); 2) harnessing weakly-supervised, fast, and data-efficient WSI classification for identifying origins of cancers of unknown primary (Nature, 2021); 3) discovering integrative histology-genomic prognostic markers via interpretable multimodal deep learning (IEEE TMI, 2020); 4) deploying weakly supervised models in low-resource settings without slide scanners, network connections, computational resources, or expensive microscopes; and 5) bias and fairness in computational pathology algorithms.
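The data-efficient, weakly-supervised ingredient typically rests on attention-based multiple-instance learning; here is a generic ABMIL sketch (the systems discussed in the talk build on and extend this idea): patch embeddings from a slide are pooled with learned attention, so only a slide-level label is required for training.

```python
# Minimal sketch: attention-based MIL for whole-slide classification.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1024, hidden=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                        # (num_patches, feat_dim)
        a = torch.softmax(self.attn(patch_feats), dim=0)   # (num_patches, 1)
        slide_feat = (a * patch_feats).sum(dim=0)          # attention pooling
        return self.classifier(slide_feat), a  # slide logits + patch attention

# One slide = a "bag" of pre-extracted patch embeddings, one label per slide:
model = AttentionMIL()
logits, attention = model(torch.randn(5000, 1024))   # 5000 patches, one label
```

The attention weights double as a heatmap over patches, which is what makes this family of models interpretable at the tissue level.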

 

Bio:

Dr. Mahmood is an Assistant Professor of Pathology at Harvard Medical School and the Division of Computational Pathology at the Brigham and Women's Hospital. He is also an Associate Member of the Broad Institute of Harvard and MIT, a member of the Harvard Bioinformatics and Integrative Genomics (BIG) faculty, and a full member of the Dana-Farber / Harvard Cancer Center. His laboratory's predominant focus is pathology image analysis and morphological feature and biomarker discovery using data fusion and multimodal analysis.

Prof. Ulas Bagci

Northwestern University

Webinar #4 on June 9, 2021 at 10am EDT

Title: Trustworthy AI for Imaging-based Diagnoses 

Talk recording available here!


In this talk, I will focus on the failures of deep learning / AI algorithms and propose several approaches to increase the robustness of AI-powered medical imaging systems. Three roadmaps to such trustworthy systems will be analyzed: 1) algorithmic robustness, 2) interpretable / explainable machine learning systems, and 3) human-in-the-loop machine learning systems. For algorithmic robustness, I will introduce a success story of a deep network architecture, called capsule networks, and demonstrate its effectiveness and robustness compared to commonly used systems, increasing its trustworthiness for use in high-risk applications. For the human-in-the-loop system, I will share our unique experience developing a paradigm-shifting computer-aided diagnosis (CAD) system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. For interpretability, I will introduce our new algorithm developed to better localize the regions from which the algorithm learns; compared to commonly used Grad-CAM algorithms, we obtain superior performance when depicting the salient regions that are most informative. COVID-19 examples will be demonstrated as a recent hot topic. Lastly, I will discuss future directions that medical imaging physicians and scientists should consider when AI comes into play.

Biography:

Ulas Bagci, Ph.D., is an Associate Professor in Northwestern University's Radiology and Biomedical Engineering Departments in Chicago, and a courtesy professor at the Center for Research in Computer Vision (CRCV), Department of Computer Science, University of Central Florida (UCF). His research interests are artificial intelligence, machine learning, and their applications in biomedical and clinical imaging. Dr. Bagci has more than 230 peer-reviewed articles on these topics. Previously, he was a staff scientist and lab co-manager at the National Institutes of Health's Radiology and Imaging Sciences Department, Center for Infectious Disease Imaging. Dr. Bagci holds two NIH R01 grants (as Principal Investigator) and serves as a steering committee member of AIR (Artificial Intelligence Resource) at the NIH. He has also served as an area chair for MICCAI for several years, and he is an associate editor of top-tier journals in his field, such as IEEE Trans. on Medical Imaging, Medical Physics, and Medical Image Analysis. Prof. Bagci teaches machine learning, advanced deep learning methods, computer and robot vision, and medical imaging courses. He has several international and national recognitions, including best paper and reviewer awards.

Prof. dr. Marie Louise Groot

Vrije Universiteit Amsterdam

Webinar #3 on May 4, 2021 at 10am EDT

Title: Translation of higher harmonic generation microscopy into the clinic for tumor tissue assessment

Talk recording available here!


For patients with lung cancer, a fast and accurate diagnosis is important for optimal treatment allocation. With the current lung tissue sampling techniques, multiple biopsies are taken, which might result in prolonged procedures, patient discomfort, and an increased risk of complications. Therefore, techniques that can assess fresh lung tissue with a speed that enables ‘live’ feedback to the endoscopists while they perform the procedure are required.

Higher harmonic generation (HHG) microscopy is a promising novel imaging technique that meets these requirements. It is non-invasive and label-free, and provides 3D images with high, sub-cellular resolution within seconds. Previously, we demonstrated that HHG microscopy can generate high-quality images of freshly excised, unprocessed lung and brain tissue in less than a minute, with information content comparable to that of the gold standard, histopathology [1,2].

In our most recent study we brought a mobile HHG microscope into the hospital to image freshly excised lung biopsies. The results so far show that HHG microscopy enables real-time 3D imaging of the biopsies and reveals pathological hallmarks which are important for lung tumor diagnosis, including fibrosis, elastosis, disruption of lung architecture, increased cellularity, and the presence of immune cells. The use of this technique may reduce the number of biopsy samples needed and reduce endoscopy time.

Finally, automatic image analysis would eliminate the need for a pathologist to be present in the endoscopy suite or operating theatre. We will show our progress towards using convolutional neural networks in the assessment of brain tissue.

1. van Huizen, L. M. G., Radonic, T., van Mourik, F., Seinstra, D., Dickhoff, C., Daniels, J. M. A., Bahce, I., Annema, J. T., and Groot, M. L. (2020) Compact portable multiphoton microscopy reveals histopathological hallmarks of unprocessed lung tumor tissue in real time, Translational Biophotonics, e202000009.

2. Zhang, Z. Q., de Munck, J. C., Verburg, N., Rozemuller, A. J., Vreuls, W., Cakmak, P., van Huizen, L. M. G., Idema, S., Aronica, E., Hamer, P. C. D., Wesseling, P., and Groot, M. L. (2019) Quantitative Third Harmonic Generation Microscopy for Assessment of Glioma in Human Brain Tissue, Advanced Science 6

Prof. Daniel L. Rubin

Stanford University

Webinar #2 on April 2, 2021

Title: Scaling AI to Develop Robust Applications in Medical Imaging

Talk recording available here!


There are many exciting prospects for AI applications in medical imaging, including image enhancement, automated disease detection, diagnosis, and clinical prediction. However, several major challenges must be addressed to develop robust and clinically useful AI models. First, training robust AI models requires tremendous amounts of labeled data, and while there are abundant images in the historical clinical archives of healthcare institutions, it is difficult to label the images at large scale; we address this through deep learning methods that leverage clinical texts. Second, though federated learning is promising for accessing multi-institutional data to train AI models, variability in data across hospitals degrades the performance of AI models trained this way. Third, it is challenging to evaluate the effectiveness of AI in actual clinical practice. In this talk I will highlight some of the exciting frontiers of AI in medical imaging and the implications for data-driven medicine, focusing on (1) upstream and downstream applications for AI in medical imaging, (2) the challenge that data variability poses to federated learning and ways to overcome it, and (3) the infrastructure needed to evaluate AI in the clinical workflow and ensure it improves clinical care as expected.
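A toy sketch of the text-based weak-labeling idea mentioned above (illustrative regular-expression rules, not the actual pipeline, which uses deep NLP models): mine each study's report for a finding and emit a noisy image-level label that can be used to train at scale.

```python
# Minimal sketch: weak image labels mined from radiology report text.
import re

NEG_TEMPLATE = r"\b(?:no|without|negative for|free of)\b[^.]*\b{}\b"

def weak_label(report_text: str, finding: str):
    """Return 1 / 0 / None for present / negated / unmentioned (a noisy label)."""
    f = re.escape(finding)
    if not re.search(rf"\b{f}\b", report_text, re.I):
        return None                                   # finding never mentioned
    if re.search(NEG_TEMPLATE.format(f), report_text, re.I):
        return 0                                      # mentioned but negated
    return 1

print(weak_label("Lungs are clear. No pneumothorax or effusion.", "effusion"))  # 0
print(weak_label("Small right pleural effusion has increased.", "effusion"))    # 1
```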

Prof. Elisa Konofagou

Columbia University

Webinar #1 on March 2, 2021

Title: Electromechanical Wave Imaging for Noninvasive and Direct Mapping of Arrhythmias in 3D

Talk recording available here!


Arrhythmias refer to the disruption of the natural heart rhythm. This irregular heart rhythm can cause the heart to suddenly stop pumping blood, and arrhythmias increase the risk of heart attack, cardiac arrest, and stroke. Reliable mapping of the arrhythmic chamber stands to significantly improve the currently low treatment success rates by localizing arrhythmic foci before the procedure starts and following their progression throughout. To this end, our group has pioneered Electromechanical Wave Imaging (EWI), which characterizes the electromechanical function throughout all four cardiac chambers. The heart is an electrically driven mechanical pump that adapts its mechanical and electrical properties to compensate for the loss of normal mechanical and electrical function as a result of disease. During contraction, the electrical activation, or depolarization, wave propagates throughout all four chambers, causing mechanical deformation in the form of the electromechanical wave. This deformation is extremely rapid and completes within 15-20 ms following depolarization. Therefore, fast acquisition and precise estimation are extremely important in order to properly map and identify the transient and minute mechanical events that occur during depolarization. Activation maps are generated based on the zero crossing of the strain variation in the transition from end-diastole to systole. Our group has demonstrated that EWI yields 1) high-precision electromechanical activation maps that include transmural propagation, and 2) imaging of transient cardiac events (electromechanical strains within ~0.2-1 ms). Our studies have shown EWI capable of mapping atrial fibrillation and atrial flutter, transmural atrial pacing, and RF ablation lesions, while more recently it has been shown to be more robust than 12-lead EKG in characterizing focal arrhythmias such as Wolff-Parkinson-White (WPW) syndrome and premature ventricular contractions (PVC), as well as macro-reentrant arrhythmias in patients. In the last part of the lecture, two machine learning aspects will be described. The first entails the use of ML techniques to automate the zero-crossing estimates in the generation of the EWI activation maps using logistic regression and random forest methods. The second ML application will include EWI mapping at lower imaging framerates than used so far (<500 Hz) in order to determine what percentage of the activation maps can be reconstructed based on unsupervised training data at higher framerates. The performance of EWI can be further enhanced by ML methodologies.
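A minimal sketch of the zero-crossing rule described above (the data layout and sign conventions are assumptions, and this is the step the logistic-regression/random-forest work automates): for each pixel's strain trace, take the first sign change after end-diastole as the activation time, yielding an activation map.

```python
# Minimal sketch: activation map from per-pixel strain zero crossings.
import numpy as np

def activation_map(strain, t0=0, dt_ms=2.0):
    """strain: (H, W, T) per-pixel strain traces; returns an (H, W) map of
    activation times in ms from acquisition start; t0 marks end-diastole."""
    s = strain[..., t0:]
    # any sign change between consecutive frames counts as a zero crossing
    crossing = np.signbit(s[..., :-1]) != np.signbit(s[..., 1:])
    has_cross = crossing.any(axis=-1)
    first = crossing.argmax(axis=-1) + 1          # frame of first crossing
    return np.where(has_cross, (t0 + first) * dt_ms, np.nan)

# e.g., at a 500 Hz framerate, each frame spans dt_ms = 2.0 milliseconds:
maps = activation_map(np.random.randn(64, 64, 100), t0=10)
```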