Upcoming Webinars

Prof. Dinggang Shen

ShanghaiTech University

Webinar #14 on July 12, 2022 at 9am EDT

Title: Deep Learning-based Medical Image Reconstruction

Free registration is available!


Abstract:

This talk will introduce our developed deep learning methods for fast MR acquisition, low-dose CT reconstruction, and low-cost and low-dose PET acquisition. The implementation of these techniques in scanners for real clinical applications will be demonstrated. Also, comparisons with other state-of-the-art acquisition methods will be discussed.

Bio:

Dinggang Shen is a Professor and the Founding Dean of the School of Biomedical Engineering, ShanghaiTech University, Shanghai, China, and also a Co-CEO of United Imaging Intelligence (UII), Shanghai. He is a Fellow of the IEEE, the American Institute for Medical and Biological Engineering (AIMBE), the International Association for Pattern Recognition (IAPR), and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society. He was the Jeffrey Houpt Distinguished Investigator and a Full Professor (Tenured) at the University of North Carolina at Chapel Hill (UNC-CH), Chapel Hill, NC, USA, where he directed the Center of Image Analysis and Informatics, the Image Display, Enhancement, and Analysis (IDEA) Lab, and the Medical Image Analysis Core. His research interests include medical image analysis, machine learning, deep learning, and computer vision. He has published more than 1,500 peer-reviewed papers in international journals and conference proceedings, with an H-index of 122 and over 60K citations. He serves as Editor-in-Chief of Frontiers in Radiology, as well as an associate editor (or editorial board member) for eight international journals. He also served on the Board of Directors of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society from 2012 to 2015, and was General Chair of MICCAI 2019.

Prof. Hayit Greenspan

Icahn School of Medicine at Mount Sinai

Webinar on TBD

Title:

Free registration available soon!


Abstract:


Bio:

Co-director, Artificial Intelligence and Emerging Technologies in Medicine (AIET)

Professor, Graduate School of Biomedical Sciences

Icahn School of Medicine at Mount Sinai

Past Events

Prof. Jong Chul Ye

Korea Advanced Inst. of Science & Technology (KAIST)

Webinar #13 on May 17, 2022 at 10am EDT

Title: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction

Talk recording is available here!


Abstract:

Diffusion models have recently attained significant interest within the community owing to their strong performance as generative models. Furthermore, their application to inverse problems has demonstrated state-of-the-art performance. Unfortunately, diffusion models have a critical downside: they are inherently slow to sample from, needing a few thousand iteration steps to generate images from pure Gaussian noise. In this work, we show that starting from Gaussian noise is unnecessary. Instead, starting from a single forward diffusion with better initialization significantly reduces the number of sampling steps in reverse conditional diffusion. This phenomenon is formally explained by the contraction theory of stochastic difference equations such as our conditional diffusion strategy - the alternating application of a reverse diffusion step followed by a non-expansive data consistency step. The new sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals new insight into how existing feed-forward neural network approaches for inverse problems can be synergistically combined with diffusion models. Experimental results on super-resolution, image inpainting, and compressed sensing MRI demonstrate that our method can achieve state-of-the-art reconstruction performance at significantly reduced sampling steps.
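The alternation the abstract describes (a reverse-diffusion denoising step followed by a non-expansive data-consistency step, started from a forward-diffused cheap estimate rather than pure noise) can be sketched on a toy masked-sampling problem. This is only an illustrative sketch, not the speaker's method: the "denoiser" below is a hand-written smoother standing in for a trained network, and all names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: observe y = A x_true, where A is a random
# 50% sampling mask (a crude analogue of compressed-sensing MRI).
n = 64
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
mask = rng.random(n) < 0.5
y = np.where(mask, x_true, 0.0)

def denoise(x):
    """Stand-in for a trained reverse-diffusion denoiser: local smoothing."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def data_consistency(x):
    """Non-expansive projection onto {x : A x = y} for a sampling mask."""
    return np.where(mask, y, x)

# CCDF-style initialization: one forward diffusion of a cheap initial
# estimate (the zero-filled observation) instead of pure Gaussian noise.
x = y + 0.5 * rng.standard_normal(n)

# Short alternating loop: denoising step, then data-consistency step --
# the contractive alternation the abstract refers to.
for _ in range(10):
    x = data_consistency(denoise(x))

print(float(np.mean((x - x_true) ** 2)))  # reconstruction error
```

With this setup, a handful of alternating steps already drives the error well below that of the zero-filled observation, which is the intuition behind needing far fewer sampling steps.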

Bio:

Jong Chul Ye is a Professor at the Kim Jaechul Graduate School of Artificial Intelligence (AI) and an Adjunct Professor at the Dept. of Bio/Brain Engineering and the Dept. of Mathematical Sciences of the Korea Advanced Institute of Science and Technology (KAIST), Korea. He received his B.Sc. and M.Sc. degrees from Seoul National University, Korea, and his Ph.D. from Purdue University, West Lafayette. Before joining KAIST, he worked at Philips Research and GE Global Research in New York. He has served as an associate editor of IEEE Trans. on Image Processing and as an editorial board member for Magnetic Resonance in Medicine. He is currently an associate editor for IEEE Trans. on Medical Imaging and a Senior Editor of IEEE Signal Processing Magazine. He is an IEEE Fellow, was Chair of the IEEE SPS Computational Imaging TC, and was an IEEE EMBS Distinguished Lecturer. He was General Co-chair (with Mathews Jacob) of the IEEE Symp. on Biomedical Imaging (ISBI) 2020. His research interest is in machine learning for biomedical imaging and computer vision.

Dr. Holger Roth

NVIDIA

Webinar #12 on April 29, 2022 at 10am EDT

Title: Advanced Techniques for Collaborative Development of AI Models for Medical Imaging

Talk recording is available here!


Abstract:

The COVID-19 pandemic has emphasized the need for large-scale collaborations by the clinical and scientific communities to tackle global healthcare challenges. However, regulatory constraints around data sharing and patient privacy might hinder access to genuinely representative patient populations on a global scale. Federated learning (FL) is a technology allowing us to work around such constraints while keeping patient privacy in mind. This talk will show how FL was used to predict clinical outcomes in patients with COVID-19 while allowing collaborators to retain governance over their data (Nature Medicine 2021). Furthermore, I will introduce several recent advances in FL, including quantifying potential data leakage, automated machine learning (AutoML) and neural architecture search (NAS), and personalization that can allow us to build more accurate and robust AI models.
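The core federated-learning mechanic the abstract relies on (each collaborator trains locally and only model parameters, never data, are shared and averaged) can be sketched with plain federated averaging on a synthetic linear model. This is a minimal sketch of the general FedAvg idea, not NVIDIA's implementation; all names, sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three "hospitals" each hold private (X, y) data from the same linear
# model; only weights leave each site, never the raw data.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.standard_normal((50, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(50)
    sites.append((X, y))

def local_update(w, X, y, lr=0.1, steps=20):
    """A few local gradient-descent steps on one site's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):
    # Each site trains from the current global model; the server then
    # averages the returned weights (federated averaging).
    local_ws = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # close to true_w, without any site sharing its data
```

The privacy caveat the abstract raises still applies: shared weight updates can leak information about the training data, which is why quantifying potential data leakage matters.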


Bio:

Holger Roth is a Sr. Applied Research Scientist at NVIDIA, focusing on deep learning for medical imaging. He has been working closely with clinicians and academics over the past several years to develop deep learning-based medical image computing and computer-aided detection models for radiological applications. He is an Associate Editor for IEEE Transactions on Medical Imaging and holds a Ph.D. from University College London, UK. In 2018, he was awarded the MICCAI Young Scientist Publication Impact Award.

Prof. Luping Zhou

The University of Sydney

Webinar #11 on March 22, 2022 at 7am EST (10pm Sydney Time)

Title: Explore Correlated Image-Text Features for Automated Radiographical Report Generation

Talk recording is available here!


Abstract:

Automated radiographical report generation is a challenging task, as it requires generating paragraphs that describe fine-grained visual differences between cases, especially between the diseased and the healthy. Existing image captioning methods commonly target generic images and lack mechanisms to deal with this challenge. As a result, they tend to generate rigid reports that repeat frequently appearing phrases describing the common content of images, while suppressing the less frequent but more informative disease-related words. In this talk, I will introduce our efforts at exploring fine-grained interactions of image and text features for radiographical report generation, and demonstrate their success on large benchmarks.

Bio:

Dr. Luping Zhou is an Associate Professor in the School of Electrical and Information Engineering, The University of Sydney. She obtained her PhD from the Australian National University and received her postdoctoral training at the University of North Carolina at Chapel Hill. Dr. Zhou works at the interface of medical image analysis, machine learning, and computer vision, and has published 100+ research papers in these fields. Her current research focuses on medical image analysis with statistical graphical models and deep learning, as well as on general visual recognition problems. She was a recipient of the prestigious ARC (Australian Research Council) DECRA award (Discovery Early Career Researcher Award). Dr. Zhou is an Associate Editor of the journals IEEE Trans. on Medical Imaging (TMI) and Pattern Recognition. She is a Senior Member of the IEEE.

Prof. Qi Dou

The Chinese University of Hong Kong

Webinar #10 on January 14, 2022 at 9am EST (10pm Hong Kong, Beijing)

Title: Image-based Robotic Surgery Intelligence

Talk recording is available here!


Abstract:

With rapid advancements in medicine and engineering technologies, the operating room has evolved into a highly complex environment, where surgeons take advantage of computers, endoscopes and robots to perform procedures with more precision and smaller incisions. Intelligence, together with the cognitive assistance, smart data analytics, and automation it enables, is envisaged to be a core fuel to transform next-generation robotic surgery in numerous known or unknown exciting ways. In this talk, I will present ideas, methodologies and applications of image-based robotic surgery intelligence from three perspectives: AI-enabled surgical situation awareness to improve surgical procedures, AI-powered large-scale data analysis to enhance surgical education, and AI-driven multi-sensory perception to achieve surgical subtask automation. To tackle these challenging topics, a wide variety of cutting-edge vision and learning techniques will be covered, including transformers, weakly-supervised learning, meta-learning for model generalization, unsupervised video retrieval, stereo depth estimation, 3D scene reconstruction, reinforcement learning and augmented reality. With the limited view provided in this talk, I look forward to generating discussions, inspirations and interesting possibilities on the role of AI in the future of robotic surgery.

Bio:

Dr. Qi Dou is an Assistant Professor with the Department of Computer Science and Engineering at The Chinese University of Hong Kong. She is also an Associate Member of the T Stone Robotics Institute and the Multi-Scale Medical Robotics Center at CUHK. Her research focuses on synergistic innovations across medical image analysis, machine learning, surgical data science and medical robotics, with an impact on supporting demanding clinical workflows and improving patient care. Dr. Dou has won the IEEE-EMBS TBME Best Paper Award 2nd Place 2021, the IEEE ICRA Best Paper Award in Medical Robotics 2021, the MICCAI Young Scientist Publication Impact Award Finalist 2021, and the MICCAI-Medical Image Analysis Best Paper Award 2017. Dr. Dou serves as an associate editor for the Journal of Machine Learning for Biomedical Imaging and Computer Assisted Surgery, and as a program co-chair of MICCAI 2022 and MIDL 2022. She also serves as an organization committee member for medical imaging workshops at NeurIPS, CVPR, ICCV and ICML.

Prof. Stefanie Speidel

National Center for Tumor Diseases (NCT) Dresden

Webinar #9 on December 7, 2021 at 11am EST (5pm CET)

Title: AI-assisted surgery – perspectives and challenges

Talk recording is available here!


Abstract:

In this talk, I’ll present our recent research on AI-assisted surgery, with a specific focus on the analysis of intraoperative video data. The goal is to bridge the gap between data science, sensors and robotics to enhance the collaboration between surgeons and cyber-physical systems, and to democratize surgical skills by quantifying surgical experience and making it accessible to machines. Several examples of optimizing the therapy of the individual patient by turning the available data into useful information are given. A focus of this talk will be soft-tissue registration and workflow analysis for context-aware assistance, as well as sensor-based surgical training and data generation for machine learning applications. Finally, remaining challenges and strategies to overcome them are discussed.

Bio:

Dr. Stefanie Speidel has been a professor for “Translational Surgical Oncology” at the National Center for Tumor Diseases (NCT) Dresden since 2017 and one of the speakers of the DFG Cluster of Excellence CeTI since 2019. She received her PhD from the Karlsruhe Institute of Technology (KIT) in 2009 and led a junior research group, “Computer-Assisted Surgery,” at KIT from 2012 to 2016. Her current research interests include image- and robot-guided surgery, soft-tissue navigation, sensor-based surgical training, and intraoperative workflow analysis based on various sensor signals in the context of the future operating room. She regularly organizes workshops and challenges, including the Endoscopic Vision Challenge@MICCAI, and has been general chair and program chair for a number of international events, including the IPCAI and MICCAI conferences.

Prof. Chao Chen

Stony Brook University

Webinar #8 on November 9, 2021 at 9am EDT

Title: Topology-Informed Biomedical Image Analysis

Talk recording is available here!


Abstract:

Thanks to decades of technology development, we are now able to visualize complex biomedical structures, such as neurons, vessels, trabeculae and breast tissues, in high quality. We need innovative methods to fully exploit these structures, which encode important information about underlying biological mechanisms. In this talk, we explain how topology, i.e., connected components, handles, loops, and branches, can be seamlessly incorporated into different parts of a learning pipeline. Under the hood is a formulation of the topological computation as a differentiable operator, based on the theory of topological data analysis. This leads to a series of novel methods for segmentation, generation, and analysis of these topology-rich biomedical structures. We will also briefly mention how topological information can be used in graph neural networks and in noise/attack-robust machine learning.
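To make the abstract's notion of topological computation concrete, here is a minimal sketch (not the speaker's implementation; names are illustrative) of 0-dimensional persistent homology on a 1D signal, using the standard union-find sweep. Each local minimum births a connected component of the sublevel sets, and the "elder rule" kills the younger component at each merge. Differentiability in this line of work comes from the fact that each (birth, death) pair maps back to specific function values, so gradients can flow to them; the sketch only computes the pairs.

```python
# 0-dimensional persistence of the sublevel sets of a 1D signal: each
# local minimum births a connected component, which dies when it merges
# into an older (deeper-born) one; pairs with a large death-birth gap
# correspond to salient topological features.
def persistence_0d(f):
    order = sorted(range(len(f)), key=lambda i: f[i])
    parent = [None] * len(f)        # None = not yet entered the filtration
    birth = {}                      # component root -> its birth value

    def find(i):                    # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:                 # sweep function values from low to high
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):    # merge with already-present neighbors
            if 0 <= j < len(f) and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger (later-born) component dies here
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < f[i]:          # skip zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old
    pairs.append((min(f), float("inf")))  # the global minimum never dies
    return pairs

signal = [3.0, 1.0, 4.0, 0.0, 2.0, 5.0]
print(sorted(persistence_0d(signal)))
```

On this toy signal, the minimum at value 0.0 persists forever, while the secondary minimum born at 1.0 dies when the two basins merge at 4.0.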

Bio:

Dr. Chao Chen is an assistant professor at Stony Brook University. His research interest spans topological data analysis (TDA), machine learning and biomedical image analysis. He develops principled learning methods inspired by the theory from TDA, such as persistent homology and discrete Morse theory. These methods address problems in biomedical image analysis, robust machine learning, and graph neural networks from a unique topological view. His research results have been published in major machine learning, computer vision, and medical image analysis conferences. He serves as an area chair for MICCAI, AAAI, CVPR and NeurIPS.

Prof. Adrian Dalca

Harvard Medical School

Webinar #7 on October 26, 2021 at 10am EDT

Title: Unsupervised Learning of Image Correspondences in Medical Image Analysis

Talk recording is available here!


Abstract:

Image registration is fundamental to many tasks in image analysis. Classical image registration methods have undergone decades of technical development, but are often prohibitively slow since they solve an optimization problem for each 3D image pair. In this talk, I will introduce various models that leverage learning paradigms to perform deformable medical image registration more accurately and substantially faster than traditional methods, crucially enabling new research directions and applications. Building on these models, I will discuss a learning framework for building deformable templates, which play a fundamental role in these analyses. This learning approach to template construction can yield a new class of on-demand conditional templates, enabling new analyses. I will also present recent or ongoing models, such as modality-invariant learning-based registration methods that work on unseen test-time contrasts, and hyperparameter-agnostic learning for image registration that removes the need to train different models for different hyperparameters.

Bio:

Adrian V. Dalca is Assistant Professor at Harvard Medical School, and research scientist at the Massachusetts Institute of Technology. He obtained his PhD from CSAIL, MIT, and his research focuses on probabilistic models and machine learning techniques to capture relationships between medical images, clinical diagnoses, and other complex medical data. His work spans medical image analysis, computer vision, machine learning and computational biology. He received his BS and MS in Computer Science from the University of Toronto.

Prof. Yuankai Huo

Vanderbilt University

Webinar #6 on September 8, 2021 at 9am EDT

Title: Scalable learning for large biomedical images

Talk recording is available here! (Please contact the speaker directly for slides.)


Abstract:

Biomedical image analysis is ubiquitous and indispensable in biology, healthcare, pharmacology, and education. Rapid developments in data sharing and computational resources are reshaping the medical imaging research field from small-scale to large-scale studies (e.g., big data with thousands or more subjects). However, traditional medical image analysis techniques can be inadequate for the new challenges of big data, including algorithmic robustness, inter-subject variability, and limited computational resources. In this presentation, I will (1) present an end-to-end large-scale lifespan brain image analysis on more than 5000 patients, (2) introduce our scalable representation learning and self-supervised learning algorithms for gigapixel pathological images, and (3) present our recent work in high-dimensional structural microscopy image analytics.

Bio:

Dr. Yuankai Huo is an Assistant Professor in Computer Science, Computer Engineering, and Data Science at Vanderbilt University, TN, USA. He received his B.S. degree in Electrical Engineering from Nanjing University of Posts and Telecommunications (NJUPT) in 2008, and a master's degree in Electrical Engineering from Southeast University in 2011. After graduation, he worked at Columbia University and the New York State Psychiatric Institute as a staff engineer and research officer from 2011 to 2014. He received a master's degree in Computer Science from Columbia University in 2014, and a Ph.D. degree in Electrical Engineering from Vanderbilt University in 2018. He then worked as a Research Assistant Professor at Vanderbilt University and, later, as a Senior Research Scientist at PAII Labs. Since 2020, he has been a faculty member at the Department of Electrical Engineering and Computer Science and the Data Science Institute, Vanderbilt University. His research interests include medical image computing, knowledge-infused machine learning, and large-scale healthcare data analytics. His research aims to facilitate data-driven healthcare and improve patient outcomes through innovations in medical image analysis as well as multi-modal data representation and learning.

Prof. Faisal Mahmood

Harvard Medical School

Webinar #5 on July 21, 2021 at 11am EDT

Title: Data-efficient and multimodal computational pathology

Talk recording is available here!


Abstract:

Advances in digital pathology and artificial intelligence have presented the potential to build assistive tools for objective diagnosis, prognosis and therapeutic-response and resistance prediction. In this talk we will discuss: 1) Data-efficient methods for weakly-supervised whole slide classification with examples in cancer diagnosis and subtyping, allograft rejection etc. (Nature Biomedical Engineering, 2021). 2) Harnessing weakly-supervised, fast and data-efficient WSI classification for identifying origins for cancers of unknown primary (Nature, 2021). 3) Discovering integrative histology-genomic prognostic markers via interpretable multimodal deep learning (IEEE TMI, 2020). 4) Deploying weakly supervised models in low resource settings without slide scanners, network connections, computational resources and expensive microscopes. 5) Bias and fairness in computational pathology algorithms.

Bio:

Dr. Mahmood is an Assistant Professor of Pathology at Harvard Medical School and in the Division of Computational Pathology at Brigham and Women's Hospital. He is also an Associate Member of the Broad Institute of Harvard and MIT, a member of the Harvard Bioinformatics and Integrative Genomics (BIG) faculty, and a full member of the Dana-Farber/Harvard Cancer Center. His laboratory's predominant focus is pathology image analysis and morphological feature and biomarker discovery using data fusion and multimodal analysis.

Prof. Ulas Bagci

Northwestern University

Webinar #4 on June 9, 2021 at 10am EDT

Title: Trustworthy AI for Imaging-based Diagnoses

Talk recording is available here!


Abstract:

In this talk, I will focus on the failures of deep learning / AI algorithms and propose several approaches to increase the robustness of AI-powered medical imaging systems. Roadmaps to such trustworthy systems will be analyzed along three lines: 1) algorithmic robustness, 2) interpretable / explainable machine learning systems, and 3) human-in-the-loop machine learning systems. For algorithmic robustness, I will introduce a success story of a deep network architecture, called capsule networks, and demonstrate its effectiveness and robustness compared to commonly used systems, hence increasing its trustworthiness for use in high-risk applications. For the human-in-the-loop system, I will share our unique experience of developing a paradigm-shifting computer-aided diagnosis (CAD) system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. Last but not least, I will introduce our new algorithm developed to better localize the regions from which the algorithm learns. Compared to the commonly used Grad-CAM algorithm, we obtain superior performance when depicting the salient regions that are most informative. COVID-19 examples will be demonstrated as a recent hot topic. Lastly, I will discuss future directions that medical imaging physicians and scientists should consider when AI comes into play.

Bio:

Ulas Bagci, Ph.D., is an Associate Professor in Northwestern University's Radiology and Biomedical Engineering departments in Chicago, and a courtesy professor at the Center for Research in Computer Vision (CRCV), Department of Computer Science, University of Central Florida (UCF). His research interests are artificial intelligence, machine learning and their applications in biomedical and clinical imaging. Dr. Bagci has more than 230 peer-reviewed articles on these topics. Previously, he was a staff scientist and lab co-manager at the National Institutes of Health's Radiology and Imaging Sciences Department, Center for Infectious Disease Imaging. Dr. Bagci holds two NIH R01 grants (as Principal Investigator) and serves as a steering committee member of AIR (Artificial Intelligence Resource) at the NIH. Dr. Bagci has also served as an area chair for MICCAI for several years, and he is an associate editor of top-tier journals in his fields, such as IEEE Trans. on Medical Imaging, Medical Physics and Medical Image Analysis. Prof. Bagci teaches machine learning, advanced deep learning methods, computer and robot vision, and medical imaging courses. He has received several international and national recognitions, including best paper and reviewer awards.

Prof. dr. Marie Louise Groot

Vrije Universiteit Amsterdam

Webinar #3 on May 4, 2021 at 10am EDT

Title: Translation of higher harmonic generation microscopy into the clinic for tumor tissue assessment


Abstract:

For patients with lung cancer, a fast and accurate diagnosis is important for optimal treatment allocation. With current lung tissue sampling techniques, multiple biopsies are taken, which may result in prolonged procedures, patient discomfort, and an increased risk of complications. Therefore, techniques that can assess fresh lung tissue at a speed that enables ‘live’ feedback to the endoscopists while they perform the procedure are required.

Higher harmonic generation (HHG) microscopy is a promising novel imaging technique that meets these requirements. The technique is non-invasive and label-free, and provides 3D images with high, sub-cellular resolution within seconds. Previously, we demonstrated that HHG microscopy can generate high-quality images of freshly excised, unprocessed lung and brain tissue in less than a minute, with information content comparable to that of the gold standard, histopathology [1,2].

In our most recent study we brought a mobile HHG microscope into the hospital to image freshly excised lung biopsies. The results so far show that HHG microscopy enables real-time 3D imaging of the biopsies and reveals pathological hallmarks which are important for lung tumor diagnosis, including fibrosis, elastosis, disruption of lung architecture, increased cellularity, and the presence of immune cells. The use of this technique may reduce the need for the number of biopsy samples, and reduce endoscopy time.

Finally, automatic image analysis would eliminate the need for a pathologist to be present in the endoscopy suite or operating theatre. We will show our progress towards using convolutional neural networks in the assessment of brain tissue.

1. van Huizen, L. M. G., Radonic, T., van Mourik, F., Seinstra, D., Dickhoff, C., Daniels, J. M. A., Bahce, I., Annema, J. T., and Groot, M. L. (2020) Compact portable multiphoton microscopy reveals histopathological hallmarks of unprocessed lung tumor tissue in real time, Translational Biophotonics, e202000009.

2. Zhang, Z. Q., de Munck, J. C., Verburg, N., Rozemuller, A. J., Vreuls, W., Cakmak, P., van Huizen, L. M. G., Idema, S., Aronica, E., Hamer, P. C. D., Wesseling, P., and Groot, M. L. (2019) Quantitative Third Harmonic Generation Microscopy for Assessment of Glioma in Human Brain Tissue, Advanced Science, 6.

Prof. Daniel L. Rubin

Stanford University

Webinar #2 on April 2, 2021

Title: Scaling AI to Develop Robust Applications in Medical Imaging


Abstract:

There are many exciting prospects for AI applications in medical imaging, including image enhancement, automated disease detection, diagnosis, and clinical prediction. However, several major challenges must be addressed in order to develop robust and clinically useful AI models. First, training robust AI models requires tremendous amounts of labeled data, and while there are abundant images in the historical clinical archives of healthcare institutions, it is difficult to label these images at large scale, which we address through deep learning methods that leverage clinical texts. Second, though federated learning is a promising way to access multi-institutional data for training AI models, variability in data across hospitals degrades the performance of AI models trained this way. Third, it is challenging to evaluate the effectiveness of AI in actual clinical practice. In this talk I will highlight some of the exciting frontiers of AI in medical imaging and the implications for data-driven medicine, focusing on (1) upstream and downstream applications for AI in medical imaging, (2) the challenge that data variability poses to federated learning and ways to overcome it, and (3) the infrastructure needed to evaluate AI in the clinical workflow and ensure it improves clinical care as expected.

Prof. Elisa Konofagou

Columbia University

Webinar #1 on March 2, 2021

Title: Electromechanical Wave Imaging for Noninvasive and Direct Mapping of Arrhythmias in 3D


Abstract:

Arrhythmias refer to disruptions of the natural heart rhythm. An irregular heart rhythm can cause the heart to suddenly stop pumping blood, and arrhythmias increase the risk of heart attack, cardiac arrest and stroke. Reliable mapping of the arrhythmic chamber stands to significantly improve the currently low treatment success rates by localizing arrhythmic foci before the procedure starts and following progression throughout. To this end, our group has pioneered Electromechanical Wave Imaging (EWI), which characterizes the electromechanical function throughout all four cardiac chambers. The heart is an electrically driven mechanical pump that adapts its mechanical and electrical properties to compensate for the loss of normal mechanical and electrical function as a result of disease. During contraction, the electrical activation, or depolarization, wave propagates throughout all four chambers, causing mechanical deformation in the form of the electromechanical wave. This deformation is extremely rapid and completes within 15-20 ms following depolarization. Therefore, fast acquisition and precise estimation are extremely important in order to properly map and identify the transient and minute mechanical events that occur during depolarization. Activation maps are generated based on the zero crossing of the strain variation in the transition from end-diastole to systole. Our group has demonstrated that EWI yields 1) high-precision electromechanical activation maps that include transmural propagation, and 2) imaging of transient cardiac events (electromechanical strains within ~0.2-1 ms). Our studies have shown EWI capable of mapping atrial fibrillation and atrial flutter, transmural atrial pacing, and RF ablation lesions, while more recently it has been shown to be more robust than 12-lead EKG in characterizing focal arrhythmias, such as Wolff-Parkinson-White (WPW) syndrome and premature ventricular contractions (PVC), as well as macro-reentrant arrhythmias in patients.
In the last part of the lecture, two machine learning aspects will be described. The first entails the use of ML techniques to automate the zero-crossing estimates in the generation of the EWI activation maps, using Logistic Regression and Random Forest methods. The second ML application will consider EWI mapping at lower imaging framerates (<500 Hz) than those used so far, in order to determine what percentage of the activation maps can be reconstructed based on unsupervised training data at higher framerates. The performance of EWI can be further enhanced by ML methodologies.