Overview and Objective


Along with the rapid evolution of artificial intelligence (AI), deep/machine learning, and big data in healthcare, medical AI research now goes beyond methodological/algorithmic development. As of Feb. 2023, the FDA has authorized about 220 medical AI software products, and many new research questions are emerging in the practical and applied aspects of medical AI, such as translational studies, clinical evaluation, real-world use cases of AI systems, and ethical, legal, and social issues (ELSI). Clinicians are playing an increasingly important role at the frontiers of applied AI through collaboration with AI experts, data scientists, informatics officers, and the industry workforce.


Practical applications of medical AI bring new challenges and opportunities. Now is the time to strengthen the connections between the emerging translational and applied aspects of AI and classic methodological/algorithmic research. The AMAI workshop aims to engage medical AI practitioners and bring a stronger application flavor (clinical practice, evaluation, human-AI collaboration, new technical strategies, trustworthiness, etc.) to augment research and development on the application aspects of medical AI, on top of pure technical research.


The goal of AMAI is to create a forum that brings together researchers, clinicians, data scientists, domain experts, AI practitioners, industry, and students to investigate and discuss various aspects of medical AI applications. AMAI will 1) introduce emerging medical AI research topics and novel methodology geared towards applications, 2) showcase the evaluation, translation, use cases, successes, and ELSI considerations of AI in healthcare, 3) develop multi-disciplinary collaborations and academic-industry partnerships, and 4) provide educational, networking, and career opportunities for attendees, including clinicians, scientists, trainees, and students.


AMAI 2023 will be composed of invited talks, contributed paper/abstract presentations, and expert panel discussions. Submissions fall into two tracks: full papers and abstracts. Among all the accepted full papers and abstracts, the workshop will give a Best Student Paper award, a Best Workshop Paper award, and a Best Abstract award, each with a certificate.


The first AMAI workshop was held on September 18, 2022 in Singapore as a Satellite Event of MICCAI 2022 and was a great success.

Call for Submissions


As medical AI is a multi-disciplinary subject, AMAI calls for submissions on a wide range of research topics, such as, but not limited to, those listed below. AMAI is agnostic to medical data modalities and encourages submissions using imaging or non-imaging data.


There are two submission tracks:


Track 1: Full papers: Submissions must be new work. All submissions will be reviewed by at least two experts with relevant experience. Accepted papers will be assigned oral or poster presentations primarily based on merit. Each paper is allowed a maximum of 8 pages (including text, figures, and tables) for scientific content and up to 2 additional pages for references. Submissions should be formatted in Lecture Notes in Computer Science (LNCS) style (please use the Springer LaTeX or Word templates) and anonymized for double-blind review. Supplemental materials are not allowed. For accepted papers, the corresponding/senior author will need to complete and sign a Consent-to-Publish form on behalf of all the authors. For papers invited for publication in the partnering journals, the authors will be asked to convert the accepted papers to align with the journals' format requirements.


Note: Within the last section (i.e., Discussion or Conclusion) of your paper, you are required to include a separate paragraph at the end of the section that briefly describes the prospect of application of your work. Please use the format below (note that the phrase "Prospect of application" must be in bold font):

Prospect of application: Use a maximum of 60 words to describe the prospect and the envisioned contexts, scenarios, or circumstances for the potential application/deployment of your work.

 

Track 2: Abstracts: Submissions may be new work or recently published/accepted papers (including posted preprints). All submissions will be reviewed in terms of scientific merit, relevance to the workshop, and significance to the field. Accepted abstracts will primarily be assigned poster presentations. Submissions are allowed a maximum of 1 page (including figures/tables, if any), following the formats specified in this template: AMAI Abstract Template. Abstract submissions do not need to be anonymized. Accepted abstracts will be made publicly accessible on this website.


Submissions for both tracks should be made via the CMT system: https://cmt3.research.microsoft.com/AMAI2023



Camera-ready Submission Instructions

Full papers: Please follow the MICCAI 2023 main conference's general guidelines (if applicable) for camera-ready submissions: https://conferences.miccai.org/2023/en/CAMERA-READY-GUIDELINES.html. Paper length: a maximum of 8.5 pages (including text, figures, and tables) for scientific content and up to 2 additional pages for references (consistent with the MICCAI main conference). Supplemental materials are not allowed.


The License to Publish form needs to be signed by the corresponding/senior author on behalf of all the authors. The corresponding author signing the copyright form should match the corresponding author marked on the paper. The Conference Name (i.e., AMAI 2023) and the Volume Editors' Names are already entered on the first page, where you need to fill in the title of your paper and the names of all authors and corresponding authors. This form must be signed in wet ink; digital signatures are not acceptable. Please scan your signed form, save it as a PDF file (file name format: AMAI2023_License-to-Publish_PaperID.pdf), and upload it to the CMT system.


The corresponding author must be available to carry out a final proof check of the typeset paper before it is published in the LNCS proceedings, and will be given a 72-hour time slot to do so. The corresponding author should be clearly marked as such in the header of the paper and is also the one who signs the License to Publish form on behalf of all of the authors. Please note that the corresponding author cannot be changed after the camera-ready submission deadline. We encourage the inclusion of all authors' email addresses in the header, but at the very least, the email address of the corresponding author should be present.


For papers invited for publication in the partnering journals, the authors will be asked to convert the accepted papers to align with the format requirements of the journal. This will be a separate process and details will be communicated individually with the authors.


Abstracts: The final version of an accepted abstract should be formatted strictly following the AMAI Abstract Template, with a maximum length of 1 page (including everything). Both a Word source document (.docx) and a corresponding PDF document (.pdf) are required as final files to submit to the CMT system. File names should be formatted as AMAI2023_Abstract_submissionID.docx and AMAI2023_Abstract_submissionID.pdf. Supplemental materials are not allowed. No copyright form needs to be signed for accepted abstracts. The final versions of the accepted abstracts will be made publicly accessible on this website.


Please use the same CMT system link (see above) to submit the camera-ready papers or final abstracts.



Presentation Instructions

1) The workshop will start at 8am on Sunday, Oct. 8, 2023 (Vancouver, Canada local time) in Meeting Room 15, Vancouver Convention Center East Building, Level 1.

2) Presentations are assumed to be in person, except for presenters who have individually arranged virtual attendance due to critical needs.

3) Your presentation mode (oral or poster) was indicated in your acceptance email.

4) Oral presentations will be a total of 11-12 minutes per paper (including 1-2 minutes for Q&A). There are no specific formatting requirements for the presentation slides. On the morning of Oct. 8, presenters are asked to arrive at the workshop room a bit early to copy their presentation files to the computer before the workshop starts. Virtual presenters will present by sharing their screen in the Zoom room (entry will be available shortly before the workshop starts, and you will need to log into the virtual conference platform using your registration).

5) For poster presentations, please prepare your physical posters following the MICCAI main conference formats, size, and requirements (see this link: https://conferences.miccai.org/2023/en/INFORMATION-FOR-MAIN-CONFERENCE-PRESENTERS.html). Posters will remain up throughout the entire workshop hours on Oct. 8 (and possibly longer before they need to be removed), so please display your posters early, before the workshop starts. There will be a dedicated poster session in the workshop agenda for authors to present their posters, but authors can and are encouraged to present/interact at any time while the posters are on display. The poster sessions for Satellite Events will be at Ground Level Exhibition B-C, where the coffee break and lunches will be served. There will be labels on the poster boards with the acronyms of each Satellite Event. Please find an empty board labeled for our workshop (i.e., AMAI) to use for your poster (no specific numbers will be assigned to specific posters).

6) To facilitate discussion, for the full papers accepted as poster presentations and for all accepted abstracts, we ask the authors to upload their posters (in PDF format; maximum file size 5 MB) to the CMT system (use the Camera Ready submission link to upload a separate single file of your poster in PDF format) by Oct. 1 (11:59pm PST). These digital posters will be shared on this website for access during the workshop and afterwards. The final versions of all accepted abstracts will also be posted on this website prior to the workshop.

7) Virtual Attendance: At this link, https://conferences.miccai.org/2023/en/default.asp, you will see a Virtual Platform button at the top right corner for logging into the online conference system. You will need your conference registration information to log in. The satellite event organizers, as well as participants, will access their Zoom room using the ConFLUX platform, so they will need to log into ConFLUX from the computer in the meeting room. Specifically, five minutes before the event begins, a button will appear on the event's page on ConFLUX that anyone can press to open Zoom and connect to the appropriate room. As a reminder, virtual presenters/participants must be registered for the appropriate day of satellite events in order to see their event on ConFLUX and thus access their Zoom room.

Publishing Plan and Awards


By default, accepted full papers will be published by Springer Nature as part of the MICCAI Satellite Events joint LNCS proceedings. Depending on their quality and topics, a number of accepted full papers may be selected for publication as individual articles or as a Special Issue in the Journal of Digital Imaging (JDI), the official journal of the Society for Imaging Informatics in Medicine (SIIM). Authors will be asked whether they would like their accepted papers to be considered for potential publication in JDI. The publishing of the selected papers will follow JDI's editorial and review processes, and more details will be announced here in due course. Papers to be published in JDI will not be published in the Springer Nature LNCS proceedings.

Accepted abstracts will not be formally published by publishers (neither in the MICCAI Satellite Events joint LNCS proceedings nor in the partnering journals); they will be made publicly accessible on this website.

Among all the accepted full papers and abstracts, AMAI will give a Best Student Paper award, a Best Workshop Paper award, and a Best Abstract award, each with a certificate.

Important Dates


Workshop time: October 8 morning, 2023


Submissions open: April 10, 2023

Submissions close: 11:59pm, Pacific Time, July 5, 2023 (extended from June 27, 2023)

Notification of acceptance: July 27, 2023 (originally July 18, 2023)

Camera-ready submission due: 11:59pm, Pacific Time, August 10, 2023 (extended from August 3, 2023)


Program Committee

(In alphabetical order)

Agenda


Location: Meeting Room 15, Vancouver Convention Center East Building Level 1


Date: Oct. 8 2023 Sunday morning (Vancouver Canada local time)


** Paper presenters please report to the organizers before the start of the workshop and copy your presentation slides to the onsite computer.

** Virtual/online conference link, https://conferences.miccai.org/2023/en/default.asp  (top right corner, Virtual Platform button)

 

8:00-8:05am: Introductory remarks: AMAI Organizers

 

8:05-8:30: Keynote talk

Charles E. Kahn Jr., MD, Professor, University of Pennsylvania, Editor, Radiology: Artificial Intelligence

  Title: “AI applications in radiology: Successes, challenges, and the road ahead”

 

8:30-9:30: Full Paper Oral Presentation Session I: 5 papers

  (11-12 mins/paper, including a ~10-min presentation and ~1-2 min Q&A)

 

Investigating the Impact of Image Quality on Endoscopic AI Model Performance

Tim J.M. Jaspers, Tim G.W. Boers, Carolus H.J. Kusters, Martijn R. Jong, Jelmer B. Jukema, Albert J. de Groof, Jacques J. Bergman, Peter H.N. de With, and Fons van der Sommen

 

Single-cell Spatial Analysis of Histopathology Images for Survival Prediction via Graph Attention Network

Zhe Li, Yuming Jiang, Leon Liu, Yong Xia, and Ruijiang Li

 

Video-based gait analysis for assessing Alzheimer’s Disease and Dementia with Lewy Bodies

Diwei Wang, Chaima Zouaoui, Jinhyeok Jang, Hassen Drira, and Hyewon Seo

 

Enhancing Clinical Support for Breast Cancer with Deep Learning Models using Synthetic Correlated Diffusion Imaging

Chi-en Amy Tai, Hayden Gunraj, Nedim Hodzic, Nic Flanagan, Ali Sabri, and Alexander Wong

 

Image-Based 3D Reconstruction of Cleft Lip And Palate Using a Learned Shape Prior

Lasse Lingens, Baran Gözcü, Till Schnabel, Yoriko Lill, Benito K. Benitez, Prasad Nalabothu, Andreas A. Mueller, Markus Gross, and Barbara Solenthaler

 

9:30-10:30 Poster presentation (8 full papers and 12 abstracts) and coffee break

The poster sessions for Satellite Events will be at Ground Level Exhibition B-C, where the coffee break and lunches will be served (see the poster presentation instructions above).

MICCAI coffee break time: 10-10:30am

 

10:30-10:45: Invited talk

Behrouz Shabestari, PhD, Director, National Technology Centers Program, National Institute of Biomedical Imaging and Bioengineering (NIBIB)

Title: “NIH funding opportunities for AI and imaging bioinformatics research and translation”

 

10:45-11:30: Full Paper Oral Presentation Session II: 4 papers

 (11-12 mins /paper; including presentation and Q&A)

 

CNNs vs. Transformers: Performance and Robustness in Endoscopic Image Analysis

Carolus H.J. Kusters, Tim G.W. Boers, Tim J.M. Jaspers, Jelmer B. Jukema, Martijn R. Jong, Kiki N. Fockens, Albert J. de Groof, Jacques J. Bergman, Fons van der Sommen, and Peter H.N. de With

 

Ultrafast Labeling for Multiplexed Immunobiomarkers from Label-free Fluorescent Images       

Zixia Zhou, Yuming Jiang, Ruijiang Li, Lei Xing

 

Accessible Otitis Media Screening with a Deep Learning-Powered Mobile Otoscope   

Omkar Kovvali and Lakshmi Sritan Motati

 

Enhancing Cardiac MRI Segmentation via Classifier-Guided Two-Stage Network and All-Slice Information Fusion Transformer

Zihao Chen, Xiao Chen, Yikang Liu, Eric Z. Chen, Terrence Chen, and Shanhui Sun

 

11:30-11:45: Award announcements, attendee feedback/discussion, and closing remarks.

 

11:45: Adjourn


Accepted Papers and Abstracts

Full paper track (oral presentation): 9 papers

See the papers listed in the Agenda above.


Full paper track (poster presentation): 8 papers  [Link to posters]


Clinical Trial Histology Image based End-to-End Biomarker Expression Levels Prediction and Visualization using Constrained GANs 

Wei Zhao, Bozhao Qi, Yichen Li, Roger Trullo, Elham Attieh, Anne-Laure Bauchet, Qi Tang, and Etienne Pochet

 

More Than Meets the Eye: Physicians’ Visual Attention in the Operating Room

Sapir Gershov, Fadi Mahameed, Aeyal Raz, and Shlomi Laufer

 

Ensembling voxel-based and box-based model predictions for robust lesion detection

Noëlie Debs, Alexandre Routier, Clément Abi Nader, Arnaud Marcoux, Alexandre Bône, and Marc-Michel Rohé

 

Advancing Abdominal Organ and PDAC Segmentation Accuracy with Task-Specific Interactive Models     

Sanne E. Okel, Christiaan G.A. Viviers, Mark Ramaekers, Terese A.E. Hellström, Nick Tasios, Dimitrios Mavroeidis, Jon Pluyter, Igor Jacobs, Misha Luyer, Peter H.N. de With, and Fons van der Sommen

 

Anatomical Location-Guided Deep Learning-Based Genetic Cluster Identification of Pheochromocytomas and Paragangliomas From CT Images     

Bikash Santra, Abhishek Jha, Pritam Mukherjee, Mayank Patel, Karel Pacak, and Ronald M. Summers

 

Breaking Down the Hierarchy: A New Approach to Leukemia Classification

Ibraheem Hamdi, Hosam El-Gendy, Ahmed Sharshar, Mohamed Saeed, Muhammad Ridzuan, Shahrukh K. Hashmi, Naveed Syed, Imran Mirza, Shakir Hussain, Amira Mahmoud Abdalla, and Mohammad Yaqub

 

Enhancing Cardiac MRI Segmentation via Classifier-Guided Two-Stage Network and All-Slice Information Fusion Transformer 

Zihao Chen, Xiao Chen, Yikang Liu, Eric Z. Chen, Terrence Chen, and Shanhui Sun

 

Feature Selection for Malapposition Detection in Intravascular Ultrasound - A Comparative Study 

Satyananda Kashyap, Neerav Karani, Alexander Shang, Niharika D'Souza, Neel Dey, Lay Jain, Ray Wang, Hatice Akakin, Qian Li, Wenguang Li, Corydon Carlson, Polina Golland, and Tanveer Syeda-Mahmood


Abstract track (poster presentation): 12 abstracts   [Link to abstracts and posters]

 

Self-supervised Learning for Quantitative Biomarker Discovery in Cancer Imaging

Suraj Pai, Dennis Bontempi, Vasco Prudente, Ibrahim Hadzic, Mateo Sokač, Tafadzwa L. Chaunzwa, Simon Bernatz, Ahmed Hosny, Raymond H Mak, Nicolai J Birkbak, Hugo JWL Aerts

 

Liver 3D segmentation and volume measurement using follow-up data of living donors who underwent hepatectomy

Sae Byeol Mun, Young Jae Kim, Won Suk Lee, Kwang Gi Kim

 

Prediction of Response to Neoadjuvant Chemotherapy in Breast Cancer with DCE-MRI Images using 3D CNN Model

Jinsu Lee, Soo-Yeon Kim, Young-Gon Kim

 

Vertebral Bone Metastasis Detection on Dual Energy CT using YOLO v8 Object Detection Model

Yu-Ching Chan, Chin-Hua Yang, Meng-En Lian, Hui-Yu Tsai

 

TVnet: a deep-learning approach for enhanced right ventricular function analysis through tricuspid valve motion tracking

Ricardo A. Gonzales, Jérôme Lamy, Katharine E. Thomas, Kit Yiu, Qiang Zhang, Mayooran Shanmuganathan, Einar Heiberg, Vanessa M. Ferreira, Stefan K. Piechnik, Dana C. Peters

 

Exploring Out-of-Distribution Detection and Predictive Uncertainty for Segmentation Failure Detection: Comparative Analysis and Application Implications

Maximilian Zenk, David Zimmerer, Fabian Isensee, Paul F. Jäger, Klaus Maier-Hein

 

Prediction of severe exacerbation in COPD patients using multimodal machine learning models

Javid Abderezaei, Qazaleh Mirsharif, Claudia Irionde, Alexandre Coimbra


Enabling Real-World Federated AI Applications with Kaapana: Bridging the Gap from Simulated to Real-world Solutions 

Markus Bujotzek, Klaus Kades, Jonas Scherer, Maximilian Zenk, Stefan Denner, Ünal Akünal, Philipp Schader, Klaus Maier-Hein

Lighter: A Configuration-driven Framework for Streamlined, Transparent Deep Learning

Ibrahim Hadzic, Suraj Pai, Keno Bressem, Hugo JWL Aerts

 

Automated Quantification of Fat in Liver Donors Using a Cascade-based Convolutional Neural Network in Whole Slide Images

Youngbin Ahn, Youmin Shin, Choyeon Hong, Binna Yu, Kyungbun Lee, and Young-Gon Kim

 

Genomap: reconfiguration of tabular genomics data into images enables deep data exploration

Md Tauhidul Islam, Lei Xing

 

Predicting neoadjuvant chemotherapy response and high-grade serous ovarian cancer from CT images in ovarian cancer with multitask deep learning: a multicenter study

Rui Yin, Yijun Guo, Yanyan Wang, Qian Zhang, Zhaoxiang Dou, Yigeng Wang, Lisha Qi, Ying Chen, Chao Zhang, Huiyang Li, Xiqi Jian, Wenjuan Ma

 

Awards

Best Paper Award 

Image-Based 3D Reconstruction of Cleft Lip And Palate Using a Learned Shape Prior   

Lasse Lingens, Baran Gözcü, Till Schnabel, Yoriko Lill, Benito K. Benitez, Prasad Nalabothu, Andreas A. Mueller, Markus Gross, and Barbara Solenthaler

ETH Zurich, University Hospital Basel and University of Basel, University of Basel, Switzerland 


Best Student Paper Award 

Video-based gait analysis for assessing Alzheimer’s Disease and Dementia with Lewy Bodies

Diwei Wang, Chaima Zouaoui, Jinhyeok Jang, Hassen Drira, and Hyewon Seo

University of Strasbourg, France; Ecole Polytechnique de Tunisie; ETRI, South Korea


Best Student Paper Award - Honorable Mention 

Accessible Otitis Media Screening with a Deep Learning-Powered Mobile Otoscope 

Omkar Kovvali and Lakshmi Sritan Motati

Thomas Jefferson High School for Science and Technology, VA, USA


Best Abstract Award 

Automated Quantification of Fat in Liver Donors Using a Cascade-based Convolutional Neural Network in Whole Slide Images

Youngbin Ahn, Youmin Shin, Choyeon Hong, Binna Yu, Kyungbun Lee, and Young-Gon Kim

Seoul National University Hospital, South Korea

Organizers and Sponsors


 

Sponsors

Pittsburgh Center for Artificial Intelligence Innovation in Medical Imaging

Previous Workshops