Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning

Pengfei Guo, Puyang Wang, Jinyuan Zhou, Shanshan Jiang, Vishal M. Patel

School of Medicine, Johns Hopkins University

Whiting School of Engineering, Johns Hopkins University

Abstract

Fast and accurate reconstruction of magnetic resonance (MR) images from under-sampled data is important in many clinical applications. In recent years, deep learning-based methods have been shown to produce superior performance on MR image reconstruction. However, these methods require large amounts of data, which are difficult to collect and share due to the high cost of acquisition and medical data privacy regulations. To overcome this challenge, we propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients’ privacy. However, the generalizability of models trained in the FL setting can still be suboptimal due to domain shift, which results from data being collected at multiple institutions with different sensors, disease types, and acquisition protocols. To address this challenge, we propose a cross-site modeling method for MR image reconstruction in which the intermediate latent features learned at the different source sites are aligned with the distribution of the latent features at the target site. Extensive experiments are conducted to provide various insights about FL for MR image reconstruction. Experimental results demonstrate that the proposed framework is a promising direction for utilizing multi-institutional data to achieve improved MR image reconstruction without compromising patients’ privacy.
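As background for the under-sampled setting described above, the sketch below simulates a zero-filled reconstruction: fully sampled k-space is retrospectively masked and transformed back to image space, which is the typical degraded input to a learned reconstruction network. The masking pattern, acceleration factor, and function names are illustrative assumptions, not the exact acquisition pipeline used in the paper.

```python
# Minimal sketch of producing a zero-filled reconstruction from
# under-sampled k-space. Mask pattern and acceleration are illustrative only.
import numpy as np

def zero_filled_recon(image, accel=4, center_fraction=0.08, seed=0):
    """Simulate under-sampled acquisition and return the zero-filled image."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))          # fully sampled k-space
    h, w = kspace.shape
    mask = rng.random(w) < (1.0 / accel)                   # random phase-encode lines
    center = int(center_fraction * w)
    mask[w // 2 - center // 2 : w // 2 + center // 2] = True  # keep low frequencies
    undersampled = kspace * mask[None, :]                  # zero out unsampled lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

# Example: a random "phantom" stands in for an MR slice.
zf = zero_filled_recon(np.random.rand(256, 256))
```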

The code is available at this repository

arXiv preprint

Top row: (a) ground truth, (b) zero-filled images, and (c) reconstructed images from the fastMRI, HPKS, IXI, and BraTS datasets, from left to right. Bottom row: t-SNE plots of the distribution of latent features (d) without cross-site modeling and (e) with the proposed cross-site modeling. In each plot, green, blue, yellow, and red dots represent data from the fastMRI, HPKS, IXI, and BraTS datasets, respectively.

An overview of the proposed FL-MR framework. Through several rounds of communication between the data centers and the server, a collaboratively trained global model can be obtained in a privacy-preserving manner.
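A minimal sketch of one such communication round is given below, assuming a FedAvg-style update: each site trains a local copy of the global reconstruction model on its own data, and only the resulting weights are returned to the server and averaged. The `site_loaders`, `local_update`, and `global_model` names are placeholders, not the released implementation.

```python
# Minimal sketch of one FL-MR communication round (FedAvg-style averaging).
# `global_model`, `site_loaders`, and `local_update` are placeholder names.
import copy
import torch

def federated_round(global_model, site_loaders, local_update):
    """Broadcast the global weights, train locally at each site, then average."""
    local_states = []
    for loader in site_loaders:                       # each site keeps its own data
        local_model = copy.deepcopy(global_model)     # start from the global weights
        local_update(local_model, loader)             # a few local epochs on-site
        local_states.append(local_model.state_dict())
    averaged = {
        k: torch.stack([s[k].float() for s in local_states]).mean(0)
        for k in local_states[0]
    }
    global_model.load_state_dict(averaged)            # only weights leave each site
    return global_model
```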


An overview of the proposed FL-MR framework with cross-site modeling in a source site.
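The sketch below illustrates the cross-site modeling idea in a simplified form: a domain discriminator is trained to separate source-site latent features from target-site latent features, and the source encoder is updated adversarially so that its latent distribution aligns with the target's. The `encoder`, `discriminator`, optimizers, output shapes, and loss weighting are placeholder assumptions rather than the released implementation.

```python
# Minimal sketch of adversarial latent-feature alignment at a source site.
# `encoder` and `discriminator` are placeholder modules; the discriminator
# is assumed to output one logit per sample.
import torch
import torch.nn.functional as F

def alignment_step(encoder, discriminator, src_batch, tgt_latent,
                   enc_opt, disc_opt, adv_weight=0.01):
    """One adversarial step aligning source latents with the target distribution."""
    src_latent = encoder(src_batch)

    # 1) Discriminator: label source latents as 0 and target latents as 1.
    disc_opt.zero_grad()
    d_loss = F.binary_cross_entropy_with_logits(
        discriminator(src_latent.detach()), torch.zeros(src_batch.size(0), 1)
    ) + F.binary_cross_entropy_with_logits(
        discriminator(tgt_latent.detach()), torch.ones(tgt_latent.size(0), 1)
    )
    d_loss.backward()
    disc_opt.step()

    # 2) Encoder: update so that source latents are classified as target-like.
    enc_opt.zero_grad()
    g_loss = adv_weight * F.binary_cross_entropy_with_logits(
        discriminator(src_latent), torch.ones(src_batch.size(0), 1)
    )
    g_loss.backward()
    enc_opt.step()
    return d_loss.item(), g_loss.item()
```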


Schematic of the different training strategies in (a) Scenario 1 and (b) Scenario 2. Note that for FL-MRCM, the source sites are the institutions that provide training data and the target site is the institution that provides testing data.
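For concreteness, the snippet below sketches the split the caption describes: source sites supply training data and a held-out target site supplies testing data. The `train_fn`/`eval_fn` callables and the loop structure are illustrative placeholders, not the paper's exact evaluation protocol.

```python
# Minimal sketch of a source/target split over the four datasets named above.
# `train_fn` and `eval_fn` are hypothetical helpers.
sites = ["fastMRI", "HPKS", "IXI", "BraTS"]

def evaluate_on_target(train_fn, eval_fn, target):
    """Train on all sites except `target`, then evaluate on `target`."""
    sources = [s for s in sites if s != target]
    model = train_fn(sources)              # e.g. FL training across the source sites
    return eval_fn(model, target)

# Example: hold each dataset out as the target in turn.
# results = {t: evaluate_on_target(train_fn, eval_fn, t) for t in sites}
```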


Qualitative results of different methods corresponding to Scenario 1. Results of T1-weighted images on (a) fastMRI, (b) HPKS, (c) IXI, and (d) BraTS; results of T2-weighted images on (e) fastMRI, (f) HPKS, (g) IXI, and (h) BraTS. The second row of each sub-figure shows the absolute difference between the reconstructed images and the ground truth.


Contact

Contact pguo4@jhu.edu for more information about this project.