This project has been archived and is currently under review.
Paper: https://arxiv.org/pdf/2105.11333.pdf
Abstract
Recently, a number of studies have demonstrated impressive performance on diverse vision-language multimodal tasks, such as image captioning and visual question answering, by extending the self-attention-based Transformer architecture with multimodal pre-training objectives. Despite its huge potential, vision-language multimodal pre-training in the medical domain has only recently received attention, and so far has only been shown to improve the diagnosis accuracy of vision-language pre-trained models. In this work, we explore a broad set of multimodal representation learning tasks in the medical domain, specifically using radiology images and their unstructured reports. We propose a new model that adopts a Transformer-based architecture combined with a novel multimodal attention masking scheme to maximize generalization performance for both vision-language understanding tasks (e.g., diagnosis classification) and vision-language generation tasks (e.g., radiology report generation). By rigorously evaluating the proposed model on four downstream tasks with three radiographic image-text datasets (MIMIC-CXR, Open-I, and VQA-RAD), we empirically demonstrate the superior downstream task performance and generality of our model against various baselines, including task-specific architectures. In addition, we qualitatively analyze our model by showing retrieved image-report pairs, attention map visualizations, and generated reports. Our proposed multimodal pre-training model can flexibly adapt to multiple downstream tasks of vision-language understanding and generation with a novel self-attention scheme. We believe that our approach can provide the basis for a wide range of vision-language multimodal applications in the medical domain.
Code: github.com/SuperSupermoon/MedViLL
Keywords
Healthcare, Medical, Multimodal Learning, Representation Learning, Self-Supervised Learning, Vision-and-Language
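
To give a rough intuition for the multimodal attention masking idea mentioned in the abstract, below is a minimal, illustrative sketch and not the exact scheme implemented in this repository. The function name `build_multimodal_attention_mask` and the specific pattern shown (bidirectional attention among image tokens, causal attention among report tokens) are assumptions chosen for illustration; see the paper and the code here for the actual formulation.

```python
# Minimal sketch of a joint vision-language self-attention mask (illustrative,
# not the authors' exact scheme). Image tokens attend bidirectionally to every
# image token; text tokens attend to all image tokens and causally
# (left-to-right) to text tokens. True = attention allowed.
import torch


def build_multimodal_attention_mask(num_image_tokens: int,
                                    num_text_tokens: int) -> torch.Tensor:
    """Return an (L, L) boolean mask with rows as queries and columns as keys."""
    L = num_image_tokens + num_text_tokens
    mask = torch.zeros(L, L, dtype=torch.bool)

    # Image region: full bidirectional attention among image tokens.
    mask[:num_image_tokens, :num_image_tokens] = True

    # Text queries may look at every image token (cross-modal grounding) ...
    mask[num_image_tokens:, :num_image_tokens] = True

    # ... but only at text tokens at or before their own position (causal),
    # which supports left-to-right report generation with the same Transformer.
    causal = torch.ones(num_text_tokens, num_text_tokens).tril().bool()
    mask[num_image_tokens:, num_image_tokens:] = causal
    return mask


if __name__ == "__main__":
    # Tiny example: 3 image tokens followed by 4 report tokens.
    print(build_multimodal_attention_mask(3, 4).int())
```

The appeal of a single mask of this kind is that one Transformer can serve both understanding-style objectives (full context over the image) and generation-style objectives (left-to-right text) without changing the architecture.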