The Fourth Workshop on
Applications of Medical AI (AMAI)
September 23, 2025, Daejeon, Republic of Korea
As a satellite event of MICCAI 2025
With the rapid evolution of artificial intelligence (AI), deep learning, and big data in healthcare, medical AI research now goes beyond methodological/algorithmic development. The FDA has authorized numerous medical AI software products, and many new research questions are emerging in the translational and applied aspects of medical AI, such as translational studies, clinical evaluation, and real-world use cases of AI systems. Clinicians are playing an increasingly strong role at the frontiers of applied AI through collaboration with AI experts, data scientists, clinical staff, informatics officers, and the industry workforce.
Practical applications of medical AI bring new challenges and opportunities. Now is an opportune time to strengthen the connections between these emerging translational and applied aspects of AI and classic methodological/algorithmic research. The AMAI workshop aims to engage medical AI practitioners and bring a stronger application focus (clinical translation, evaluation, human-AI collaboration, new technical strategies, trustworthiness, etc.) to augment research and development on the application aspects of medical AI, on top of pure technical research.
The goal of AMAI is to create a forum that brings together researchers, clinicians, data scientists, domain experts, AI practitioners, industry, and students to investigate and discuss various aspects of applications of medical AI. AMAI will 1) introduce emerging medical AI research topics and novel methodology toward applications, 2) showcase the evaluation, translation, use cases, successes, and ELSI considerations of AI in healthcare, 3) develop multi-disciplinary collaborations and academic-industry partnerships, and 4) provide educational, networking, and career opportunities for attendees including clinicians, scientists, trainees, and students.
AMAI 2025 will be composed of invited talks, contributed paper/abstract presentations, and expert panel discussions. Submissions include two tracks: full papers and abstracts. Among all the accepted full papers and abstracts, the workshop will give a Best Student Paper award, a Best Workshop Paper award, and a Best Abstract award, all with certificates.
The first AMAI workshop was held on September 18, 2022 in Singapore, as a Satellite Event of MICCAI 2022, and was a great success.
The second AMAI workshop was held on October 8, 2023 in Vancouver, Canada, as a Satellite Event of MICCAI 2023, continuing that success.
The third AMAI workshop was held on October 6, 2024 in Marrakesh, Morocco, as a Satellite Event of MICCAI 2024, continuing the success further.
AMAI calls for submissions from multiple aspects of research topics, such as, but not limited to, those listed below. AMAI is agnostic to medical data modalities and encourages submissions using imaging and/or non-imaging data.
Clinical and Translational AI/ML Applications: Focused on specific diseases or medical contexts.
Evaluation and Validation: Testing medical AI/ML in simulated or real-world settings (prevention, screening, risk assessment, diagnosis, prognosis, treatment, etc.).
Data Innovation: Curation of medical data, including generative AI and synthetic data applications.
Advanced AI Models: Development and use of multi-modal AI models, foundation models, large language models, and vision-language models in medical domains.
Novel Approaches: Strategies, methodologies, tools, and software aimed at practical and clinical applications of AI/ML.
Human-AI Collaboration: Observer studies, human-AI interactions, synergistic integration of AI/ML with human/medical intelligence, and AI model uncertainty quantification.
Case Studies: Successful use cases, challenges, opportunities, lessons learned, and future prospects for medical AI/ML.
Trust and Usability: Strategies and opinions on usability, explainability, trustworthiness, safety, regulations, acceptance, limitations, bias, fairness, disparities, and ethical, legal, and social issues (ELSI).
Lifecycle Management: Post-market evaluation, performance monitoring, and continuous learning of medical AI systems in practice.
Stakeholder Perspectives: Exploring the acceptance of medical AI/ML among clinicians, healthcare providers, patients, and society at large.
Interdisciplinary Collaboration: Promoting collaboration between data scientists, clinicians, and domain experts.
Submissions may be in two tracks:
Track 1: Full papers: Submissions must be new work. All submissions will be reviewed by at least two experts with relevant experience. Accepted papers will be assigned oral or poster presentations primarily based on merit. Each paper is allowed a maximum of 8 pages (including text, figures, and tables) for scientific content and up to 2 additional pages for references. Submissions should be formatted in Lecture Notes in Computer Science (LNCS) style (please use the Springer LaTeX or Word templates) and anonymized for double-blind review. Supplemental materials are not allowed. For accepted papers, the corresponding/senior author will need to complete and sign a Consent-to-Publish form on behalf of all the authors. For papers invited for publication in the partnering journals, the authors will be asked to convert their accepted papers to align with the format requirements of the journals.
Note: Within the last section (i.e., Discussion or Conclusion) of your paper, you are required to include a separate paragraph at the end of the section that briefly describes the prospect of application of your work. Please follow the format below (note that the phrase "Prospect of application" must be in bold font):
Prospect of application: Use a maximum of 60 words to describe the prospect and the envisioned contexts, scenarios, or circumstances for the potential application/deployment of your work.
Track 2: Abstracts: Submissions may be new work or recently published/accepted papers (including posted preprints). All submissions will be reviewed for scientific merit, relevance to the workshop, and significance to the field. Accepted abstracts will primarily be assigned poster presentations. Submissions are allowed a maximum of 1 page (including figures/tables, if any), following the format specified in this template: AMAI Abstract Template. Abstract submissions do not need to be anonymized. Accepted abstracts will be made publicly accessible on this website.
Submissions for both tracks should be made via the CMT system: https://cmt3.research.microsoft.com/amai2025
Camera-ready Submission Instructions
Full papers: Please follow the MICCAI 2025 main conference's general guidelines (where applicable) for camera-ready submissions: https://conferences.miccai.org/2025/en/CAMERA-READY-SUBMISSION-GUIDELINES.html. Note the new requirement on Disclosure of Interests for 2025. Paper length: a maximum of 8.5 pages (including text, figures, and tables) for scientific content and up to 2 additional pages for references (consistent with the MICCAI main conference). Both the source files (Word or LaTeX) and the PDF file must be uploaded to the CMT system (format: AMAI2025_PaperID.FileExtension, where PaperID is the two-digit numeric ID of your paper). Supplemental materials are not allowed.
The License to Publish form needs to be signed by the corresponding/senior author on behalf of all the authors. The corresponding author signing the form should match the corresponding author marked on the paper. The conference name (i.e., AMAI 2025) and the volume editors' names are already entered on the first page; you need to fill in the title, the names of all authors, and the corresponding author(s) of your paper. This form must be signed in wet ink; digital signatures are not acceptable. Please scan your signed form, save it as a PDF file (file name format: AMAI2025_License-to-Publish_PaperID.pdf, where PaperID is the two-digit numeric ID of your paper), and upload it to the CMT system.
The corresponding author must be available to carry out a final proof check of the typeset paper before publishing in the LNCS proceedings. He or she will be given a 72-hour time-slot to do so. If the corresponding author does not respond within the timeslot given, the paper is automatically considered approved. The publisher will not accommodate any late correction requests. The corresponding author should be clearly marked as such in the header of the paper. He or she is also the one who signs the license-to-publish form on behalf of all of the authors. Please note that the corresponding author cannot be changed after the camera ready submission deadline. We encourage the inclusion of all of the authors’ email addresses in the header, but at the very least, the email address of the corresponding author should be present.
Abstracts: The final version of an accepted abstract should be formatted strictly following the AMAI Abstract Template. The maximum length of an abstract is 1 page (including everything). Both a Word source document (.docx) and a corresponding PDF document (.pdf) are required as final files to submit to the CMT system. File names should be formatted as AMAI2025_Abstract_submissionID.docx and AMAI2025_Abstract_submissionID.pdf. Supplemental materials are not allowed. No copyright form needs to be signed for accepted abstracts. The final versions of the accepted abstracts will be made publicly accessible on this website.
Please use the same CMT system link (see above) to submit the camera-ready papers or final abstracts.
Presentation Instructions
The workshop will take place 13:30-18:00 on Sep. 23, 2025 in room DCC2-3F-302.
Presentations will be in person; no virtual or hybrid presentations will be allowed unless pre-approved.
The presentation mode (oral or poster) of your paper/abstract is shown in the portal of the CMT system.
Oral presentations will be a total of 10 minutes per paper (Q&A included). There are no specific formatting requirements for the presentation slides. Presenters are asked to report to the workshop room 30 minutes before the start of the workshop to copy their presentation files to the computer.
For poster presentations, please prepare your physical posters following the MICCAI main conference poster formats. Posters will stay up throughout the full day (not just the workshop hours) on Sep. 23; please display your poster before the workshop starts, as early as you can. There will be a dedicated poster session in the workshop agenda for authors to present their posters, but authors can, and are encouraged to, present and interact at any time while the posters are on display. There will be labels on the poster boards; please find an empty board labeled for our workshop (i.e., AMAI) for your poster (no specific numbers will be assigned to specific posters).
By default, accepted full papers will be published in a Springer Nature LNCS proceedings volume as part of the MICCAI Satellite Events. Depending on quality and topic, a number of accepted full papers may be recommended to a partnering journal (these papers will need to go through an additional editorial and peer-review process run by the journal, and authors will be asked whether they would like their accepted papers to be considered for recommendation; more details will be announced here in due course). Papers published in the partnering journal will not be published in the Springer Nature LNCS proceedings.
Accepted abstracts will not be formally published by publishers (neither in the MICCAI Satellite Event LNCS proceedings nor in the partnering journals); they will be made publicly accessible on this website.
Among all the accepted full papers and abstracts, AMAI will give a Best Student Paper award, a Best Workshop Paper award, and a Best Abstract Award, all with electronic certificates.
Important Dates
Submissions open: April 15, 2025
Submissions close (extended): 11:59pm, Pacific Time, June 30, 2025
Notification of acceptance (extended): July 22, 2025
Camera-ready submission due (extended): 11:59pm, Pacific Time, Aug 5, 2025 (firm deadline)
Workshop: 13:30-18:00, Sep. 23, 2025 (Location/meeting room: DCC2-3F-302)
Program Committee
(In alphabetical order)
Mohd Anwar, PhD, National Institute of Biomedical Imaging and Bioengineering (NIBIB), USA
Dooman Arefan, PhD, University of Pittsburgh, USA
Sixian Chan, PhD, Zhejiang University of Technology, China
Niketa Chotai, MD, RadLink Imaging Centre and National University of Singapore, Singapore
Dania Daye, MD, PhD, University of Wisconsin, USA
Susanne Gaube, PhD, University College London, UK
Degan Hao, PhD, Morgan Stanley, USA
Douglas Hartman, MD, University of Cincinnati Medical Center, USA
Michail Klontzas, MD, University of Crete, Greece
Zhicheng Jiao, PhD, Brown University, USA
Fabian Laqu, MD, University Hospital Würzburg, Germany
Anh Le, PhD, Cedars-Sinai Medical Center, USA
Ines Prata Machado, PhD, University of Cambridge, UK
Masahiro Oda, PhD, Nagoya University, Japan
Mireia Crispin Ortuzar, PhD, University of Cambridge and Cancer Research, UK
Chang Min Park, MD, PhD, Seoul National University Hospital, South Korea
Matthew Pease, MD, Indiana University, USA
Nicholas Petrick, PhD, U.S. Food and Drug Administration, USA
Bhanu Prakash K.N. PhD, National University of Singapore, Singapore
Parisa Rashidi, PhD, University of Florida, USA
Zaid Siddiqui, MD, Baylor College of Medicine, USA
Tao Tan, PhD, Macao Polytechnic University, Macau
Zhiyong (Sean) Xie, PhD, Xellar Biosystems, USA
Qi Yang, PhD, Genentech, Inc., USA
Xiaofeng Yang, PhD, Emory University & Georgia Institute of Technology, USA
Yudong Zhang, MD, PhD, First Affiliated Hospital, Nanjing Medical University, China
Jian Zheng, PhD, Suzhou Institute of Biomedical Engineering and Technology of the Chinese Academy of Sciences, China
Location: DCC2-3F-302
Time: 13:30-18:00, Sep. 23, 2025
** Presenters: please report to the organizers before the start of the workshop and copy your presentation slides to the onsite computer.
13:30: Introductory remarks: AMAI Organizers
13:35-13:55: Keynote talk
Xiaofeng Yang, PhD
Paul W. Doetsch Professor in Cancer Research
Emory University and Georgia Institute of Technology
Title: Application of Generative AI in Radiation Oncology
13:55-14:45: Oral Presentation Session I: 5 papers
(10 mins/paper, including presentation and Q&A)
#11 Evaluating Foundation Models with Pathological Concept Learning for Kidney Cancer
Shangqi Gao, Sihan Wang, Yibo Gao, Boming Wang, Xiahai Zhuang, Anne Warren, Grant Stewart, James Jones, and Mireia Crispin-Ortuzar
#45 Evaluating Generative Models for Open-Ended Medical Diagnosis in Realistic Clinical Scenarios [Abstract]
Kyungmin Jeon, Gihun Cho, Dabin Min, Jiyoung Lee, Donguk Kim, Chang Min Park
#26 MIRAGE: Retrieval and Generation of Multimodal Images and Texts for Medical Education
Miguel Díaz Benito, Cecilia Diana-Albelda, Álvaro García Martín, Jesús Bescós, Marcos Escudero Viñolo, Juan Carlos SanMiguel
#47 Evaluating Large Language Models for Automated Clinical Abstraction in Pulmonary Embolism Registries: Performance Across Model Sizes, Versions, and Parameters
Mahmoud Alwakeel, Emory Buck, Jonathan G. Martin, Imran Aslam, Sudarshan Rajagopal, Jian Pei, Mihai V. Podgoreanu, Christopher J. Lindsell, An-Kwok Ian Wong
#50 Improving Fracture Risk Prediction via Deep Learning on DXA Report Images [Abstract]
Yisak Kim, Chang Min Park, Sung Hye Kong
14:45-15:45: Poster presentations and coffee break (all attendees will walk to the poster area for discussion; please return to the workshop room on time to continue the remaining agenda)
15:45-16:25: Oral Presentation Session II: 4 papers
(10 mins/paper, including presentation and Q&A)
#14 Joint Task Network for Integrating Cognitive Scores and Image Feature in AD Diagnosis
Yanteng Zhang, Songheng Li, Yi Wu, Chuanyi Zhang, Congyu Zou, and Vince Calhoun
#40 Assessment of Systemic Health Using Retinal Age Gap: Development of a Predictive Model and Clinical Applicability [Abstract]
Boa Jang, Richul Oh, Tae-Hoon Lee, Chang Ki Yoon, Hyuk Jin Choi, Kunho Bae, Young-Gon Kim
#15 Interpretable Rheumatoid Arthritis Scoring via Anatomy-aware Multiple Instance Learning
Zhiyan Bo, Laura C. Coates, and Bartłomiej W. Papież
#69 Towards Automatic Diagnosis of Paediatric Obstructive Sleep Apnoea-Hypopnoea Syndrome using Facial Features
Sara García-de-Villa, Navid Rabbani, Nicolas Saroul, Alexandre Laville and Adrien Bartoli
16:25-17:10: Student Panel
Translational Research to Advance Medical AI Application
17:10-17:50: Oral Presentation Session III: 4 papers
(10 mins/paper, including presentation and Q&A)
#16 PanDx: AI-assisted Early Detection of Pancreatic Ductal Adenocarcinoma on Contrast-enhanced CT
Han Liu, Riqiang Gao, Eileen Krieg, and Sasa Grbic
#4 Sequential Organ Motion Prediction via Autoregressive Modeling
Yuxiang Lai, Jike Zhong, Vanessa Su, and Xiaofeng Yang
#33 HU-based Foreground Masking for 3D Medical Masked Image Modeling
Jin Lee, Vu Dang, Gwang-Hyun Yu, Anh Le, Zahid Rahman, Jin-Ho Jang, Heonzoo Lee, Kun-Yung Kim, Jin-Sul Kim, and Jin-Young Kim
#55 A modular deep-learning pipeline for automated aorta characterization on CT
Loris Giordano, Jakub Ceranka, Selene De Sutter, Kaoru Tanaka, Gert Van Gompel, Tom Lenaerts, and Jef Vandemeulebroucke
17:50-18:00: Award announcement and closing remarks
18:00: Adjourn
Boa Jang, PhD Student
Bioengineering at Seoul National University and Seoul National University Hospital, Republic of Korea. Research interests: Deep learning for clinical applications, particularly retinal image–based diagnostic tools to improve accuracy and support clinical decision-making.
Zhengbo Zhou, PhD Student
Intelligent Systems Program at the University of Pittsburgh, USA. Research interests: Longitudinal spatio-temporal learning frameworks and vision–language models for medical image analysis, with a particular emphasis on breast cancer risk prediction.
Dmitrii Seletkov, PhD Student
Institutes of Radiology and Artificial Intelligence and Informatics in Medicine at the Technical University of Munich, Germany. Research interests: Risk assessment of chronic diseases, time-to-event prediction, and in-context learning, with a strong emphasis on translational medicine.
Shandong Wu, PhD, Associate Professor, University of Pittsburgh, USA; Email: wus3@upmc.edu
Behrouz Shabestari, PhD, Director, National Technology Centers Program, National Institute of Biomedical Imaging and Bioengineering (NIBIB), USA; Email: behrouz.shabestari@nih.gov
Lei Xing, PhD, Jacob Haimson & Sarah S. Donaldson Professor, Stanford University, USA; Email: lei@stanford.edu
Sponsors: Pittsburgh Center for Artificial Intelligence Innovation in Medical Imaging
Full papers (37)
4 Sequential Organ Motion Prediction via Autoregressive Modeling
Yuxiang Lai, Jike Zhong, Vanessa Su, and Xiaofeng Yang
7 Seeing More with Less: Video Capsule Endoscopy with Multi-Task Learning
Julia Werner, Oliver Bause, Julius Oexle, Maxime Le Floch, Franz Brinkmann, Jochen Hampe, and Oliver Bringmann
8 TUBA: AI-Assisted Nasogastric Tube Placement Assessment System
GwiSeong Moon, Kyoung Min Moon, Inseo Park, Kanghee Lee, Doohee Lee, Woo Jin Kim, Yoon Kim, and Hyun-Soo Choi
11 Evaluating Foundation Models with Pathological Concept Learning for Kidney Cancer
Shangqi Gao, Sihan Wang, Yibo Gao, Boming Wang, Xiahai Zhuang, Anne Warren, Grant Stewart, James Jones, and Mireia Crispin-Ortuzar
14 Joint Task Network for Integrating Cognitive Scores and Image Feature in AD Diagnosis
Yanteng Zhang, Songheng Li, Yi Wu, Chuanyi Zhang, Congyu Zou, and Vince Calhoun
15 Interpretable Rheumatoid Arthritis Scoring via Anatomy-aware Multiple Instance Learning
Zhiyan Bo, Laura C. Coates, and Bartłomiej W. Papież
16 PanDx: AI-assisted Early Detection of Pancreatic Ductal Adenocarcinoma on Contrast-enhanced CT
Han Liu, Riqiang Gao, Eileen Krieg, and Sasa Grbic
18 MVMIL: Multi-view Multiple Instance Learning for Whole Slide Image Classification of Bladder Cancer
Shen Liu, Yihuang Hu, Weiping Lin, Ying Huang, Jun Hou, Baptiste Magnier, Liansheng Wang
19 Multi-stage Multi-resolution Fusion for Accurate and Efficient Whole Slide Image Segmentation in Colorectal Cancer
Shen Liu, Weiping Lin, Wentai Hou, Yuanzheng Lou, Baptiste Magnier, Yanqing Ding, Liansheng Wang
24 Transformer-Based Instance Detection in 3D Medical Images
Luka Skrlj, Samuel Kadoury, and Tomaž Vrtovec
26 MIRAGE: Retrieval and Generation of Multimodal Images and Texts for Medical Education
Miguel Díaz Benito, Cecilia Diana-Albelda, Álvaro García Martín, Jesús Bescós, Marcos Escudero Viñolo, Juan Carlos SanMiguel
29 Multitask Deep Learning Model for Liver Segmentation and Lesion Classification from Multisequence MRI
Dongdong Gu, Yuzhong Chen, Xuejian Li, Xi Ouyang, Zhong Xue, and Dinggang Shen
33 HU-based Foreground Masking for 3D Medical Masked Image Modeling
Jin Lee, Vu Dang, Gwang-Hyun Yu, Anh Le, Zahid Rahman, Jin-Ho Jang, Heonzoo Lee, Kun-Yung Kim, Jin-Sul Kim, and Jin-Young Kim
34 FLOw-Loss: A Hybrid Loss for Centerline-Aware Segmentation in XCA
Miriam Gutiérrez Fernández, Laura Valeria Perez-Herrera, Nerea Arrarte Terreros, and Karen López-Linares Román
37 XTag-CLIP: Robust and Reliable Thyroid Scar Analysis with Limited Data via Cross-Attention
Eunju Lee, SeungHoon Lee, YoungBin Kim, and Jong-hyuk Ahn
38 Text2Organ: Text-Driven Multimodal Organ Segmentation for CT scans
JinGyo Jeong, Younghyun Park, Sejung Yang
41 Towards Field-Ready AI-based Malaria Diagnosis: A Continual Learning Approach
Louise Guillon, Soheib Biga, Yendoube E. Kantchire, Mouhamadou Lamine Sane, Grégoire Pasquier, Kossi Yakpa, Stéphane E. Sossou, Marc Thellier, Laurent Bonnardot, Laurence Lachaud, Renaud Piarroux, and Ameyo M. Dorkenoo
42 Privacy-Centric Seizure Diagnosis via Relation-Aware Fusion of Minimally-Invasive Modalities
Talha Ilyas, Deval Mehta, Shobi Sivathamboo, Ilma Wijaya, Rob Steele, Hugh Simpson, Lyn Millist, Terence O’Brien, Patrick Kwan, and Zongyuan Ge
44 Whole-body Representation Learning For Competing Preclinical Disease Risk Assessment
Dmitrii Seletkov, Sophie Starck, Ayhan Can Erdur, Yundi Zhang, Daniel Rueckert, and Rickmer Braren
46 Multimodal Sheaf-based Network for Glioblastoma Molecular Subtype Prediction
Shekhnaz Idrissova and Islem Rekik
47 Evaluating Large Language Models for Automated Clinical Abstraction in Pulmonary Embolism Registries: Performance Across Model Sizes, Versions, and Parameters
Mahmoud Alwakeel, Emory Buck, Jonathan G. Martin, Imran Aslam, Sudarshan Rajagopal, Jian Pei, Mihai V. Podgoreanu, Christopher J. Lindsell, An-Kwok Ian Wong
49 MFG Sampling: Solving Inverse Problems in Multi-Level High-Frequency Guidance via Diffusion Models
Jungwoo Bae and Jitae Shin
51 Vision-Language Sliding Cross Attention for Text-guided Pneumonia Segmentation
Fei Yao, Xiang Zhang, Xinang Jiang, Yi Xiao, Li Fan, S. Kevin Zhou, and Shiyuan Liu
53 Flexible Multimodal Neuroimaging Fusion for Alzheimer’s Disease Progression Prediction
Benjamin Burns, Yuan Xue, Douglas W. Scharre, and Xia Ning
54 Dynamic Robot-Assisted Surgery with Hierarchical Class-Incremental Semantic Segmentation
Julia Hindel, Ema Mekic, Enamundram Naga Karthik, Rohit Mohan, Daniele Cattaneo, Maria Kalweit, and Abhinav Valada
55 A modular deep-learning pipeline for automated aorta characterization on CT
Loris Giordano, Jakub Ceranka, Selene De Sutter, Kaoru Tanaka, Gert Van Gompel, Tom Lenaerts, and Jef Vandemeulebroucke
57 Echo-Path: Pathology-Conditioned Echo Video Generation
Kabir Hamzah Muhammad, Marawan Elbatel, Yi Qin, and Xiaomeng Li
58 Domain-Specific Pretraining and Fine-Tuning with Contrastive Learning for Fluorescence Microscopic Image Segmentation
Yunheng Wu, Liangyi Wang, Zhongyang Lu, Masahiro Oda, Yuichiro Hayashi, Shuntaro Kawamura, Takanori Takebe, and Kensaku Mori
59 Automatic Segmentation of Lower-Limb Arteries on CTA for Pre-surgical Planning of Peripheral Artery Disease
Lisa Guzzi, Maria A. Zuluaga, Fabien Lareyre, Gilles Di Lorenzo, Sébastien Goffart, Andrea Chierici, Juliette Raffort, and Hervé Delingette
61 Feature-space Kernel Prediction Network for Denoising of Low-dose Brain CT
Jiwoo Song, Jaeseok Jang, Soohwa Song, Dong Hoon Shin, and Dohyun Kim
67 Disentanglement of Biological and Technical Factors via Latent Space Rotation in Clinical Imaging Improves Disease Pattern Discovery
Jeanny Pan, Philipp Seeböck, Christoph Fürböck, Svitlana Pochepnia, Jennifer Straub, Lucian Beer, Helmut Prosch, Georg Langs
69 Towards Automatic Diagnosis of Paediatric Obstructive Sleep Apnoea-Hypopnoea Syndrome using Facial Features
Sara García-de-Villa, Navid Rabbani, Nicolas Saroul, Alexandre Laville and Adrien Bartoli
71 Benchmarking MRISegmenter++ for Splenomegaly: A Comprehensive Comparative Study
Jianfei Liu, Langston Locke, Pritam Mukherjee, Tejas Sudharshan Mathai, Yan Zhuang, Brandon Khoury, Lauren Eckhardt, Christina T. Kozycki, and Ronald M. Summers
74 3D CT-Based Coronary Calcium Assessment: A Feature-Driven Machine Learning Framework
Ayman Abaid, Gianpiero Guidone, Sara Alsubai, Foziyah Alquahtani, Talha Iqbal, Ruth Sharif, Hesham Elzomor, Emiliano Bianchini, Naeif Almagal, Michael G. Madden, Faisal Sharif, and Ihsan Ullah
80 MoERad: Mixture of Experts for Radiology Report Generation from Chest X-ray Images
Sriram Gnana Sambanthan and Monika Sharma
81 Tabular Data-enhanced Multi-modal Alignment and Synthesis for Alzheimer’s Disease Diagnosis
Weilin Zhou, Yuxiao Liu, Yuanwang Zhang, Kaicong Sun, Fan Li, Shilun Zhao, Yuanbo Wang, and Dinggang Shen
83 Bias-Resilient Feature Learning for Robust Domain Adaptation in Mammography
Degan Hao, Dooman Arefan, Jun Luo, Margarita L. Zuley, Na Du, Shandong Wu
Abstracts (14)
12 Artificial Intelligence-Based Models for Automated Bone Age Assessment from Posteroanterior Wrist X-Rays: A Systematic Review
Isidro Miguel Martín Pérez, Sofia Bourhim, Sebastián Eustaquio Martín Pérez
17 Deep Learning-Assisted Embryo Selection from Static Blastocyst Images to Improve Live Birth Outcomes in IVF
Xuan Lam Bui, Thi My Trang Luong, Hoang Bach Dat Le, Nguyen Quoc Khanh Le
23 Edge-Optimized Cascaded Deep Learning for Automated Malaria Diagnosis: Enhancing Sensitivity and Efficiency on Resource-Constrained Devices
Hyunghun Cho, Adam Balint, Creto Kanyemba
25 Exploring the Applicability of Visual Question Answering for Image Finding Generation and Lesion Classification: A Case Study on Lung Nodule CT Images
Maiko Nagao, Kaito Urata, Atsushi Teramoto, Kazuyoshi Imaizumi, Masashi Kondo, Hiroshi Fujita
27 Automated iRECIST scoring in CT using graph neural networks with Bayesian constraints for patients with metastatic renal cell carcinoma
R.C.J. Kraaijveld, B. de Keizer, B.B.M. Suelmann, F.J. Wessels, M.A.A. Ragusi, K.G.A. Gilhuijs
28 Radiology-guided Generation of Lung Nodule CT Images Using Latent Diffusion Models
Kaito Urata, Maiko Nagao, Atsushi Teramoto, Kazuyoshi Imaizumi, Masashi Kondo, Hiroshi Fujita
30 Multimodal Large Language Models Show Potential for Colorectal Polyp Analysis: A Many-Shot Prompting Feasibility Study
Seunghyun Jang, Minji Oh, Byeong Gwan Kim, Jongpil Lim, Chang Min Park, Sihyun Kim
31 EchoMamba: A State Space Model for Acute Myocardial Infarction Detection in Echocardiographic Imaging
Junya Eguchi, Atsushi Teramoto, Keiko Sugimoto, Akira Yamada, Kazuhiro Nakamura
39 A Lightweight and Clinically-Aware Metric for Automated Chest X-ray Report Evaluation
Gihun Cho, Seunghyun Jang, Hanbin Ko, Inhyeok Baek, Chang Min Park
40 Assessment of Systemic Health Using Retinal Age Gap: Development of a Predictive Model and Clinical Applicability
Boa Jang, Richul Oh, Tae-Hoon Lee, Chang Ki Yoon, Hyuk Jin Choi, Kunho Bae, Young-Gon Kim
43 UltraFlwr - An Efficient Federated Surgical Object Detection Framework on the Edge
Yang Li, Soumya Snigdha Kundu, Maxence Boels, Toktam Mahmoodi, Sebastien Ourselin, Tom Vercauteren, Prokar Dasgupta, Jonathan Shapey, and Alejandro Granados
45 Evaluating Generative Models for Open-Ended Medical Diagnosis in Realistic Clinical Scenarios
Kyungmin Jeon, Gihun Cho, Dabin Min, Jiyoung Lee, Donguk Kim, Chang Min Park
50 Improving Fracture Risk Prediction via Deep Learning on DXA Report Images
Yisak Kim, Chang Min Park, Sung Hye Kong
77 Generative Data-Augmented CAD for Ampullary Lesions
Jangho Kwon, Kihwan Choi
The Microsoft CMT service was used for managing the peer-review process for this conference. This service was provided free of charge by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.