X-ray imaging is one of the most common imaging modalities in modern medicine. It is a quick and painless procedure used for a wide variety of diagnostic purposes. During an X-ray, electromagnetic waves are sent through the patient and absorbed at rates determined by the physical properties of the tissues they pass through (their radiodensity). This projects the radiodensity of the patient's volume onto the exposed photographic film, creating a 2D image.
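For reference, this attenuation process is usually described by the Beer-Lambert law; the formulation below is the standard textbook model rather than anything specific to our project:

$$I = I_0 \exp\!\left(-\int_{\text{ray}} \mu(s)\,\mathrm{d}s\right)$$

where $I_0$ is the intensity of the incident beam, $\mu(s)$ is the linear attenuation coefficient of the tissue at position $s$ along the ray, and $I$ is the intensity reaching the film. Denser tissues (higher $\mu$) absorb more of the beam, which is why structures such as bone appear bright on the resulting image.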
Computed tomography (CT) imaging, on the other hand, uses a rotating X-ray source and detector to produce a series of cross-sectional images of the body at a defined slice interval. The resulting images can be stacked together to produce a 3D volume of radiodensities.
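As an illustration of how such a volume can be assembled in practice, the sketch below stacks a directory of DICOM slices into a single array of Hounsfield units. It assumes the pydicom library and a hypothetical layout with one scan per folder, and is not the exact loading code used in our pipeline.

```python
import glob
import numpy as np
import pydicom


def load_ct_volume(series_dir):
    """Read all DICOM slices in a directory and stack them into a 3D array
    of Hounsfield units, ordered along the patient's z-axis."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{series_dir}/*.dcm")]
    # Sort by the z-coordinate of ImagePositionPatient so slices stack in anatomical order.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Convert stored pixel values to Hounsfield units using the DICOM rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept  # shape: (num_slices, rows, cols)
```

In our setting, each study directory from the dataset described below would play the role of `series_dir`.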
CT scans can provide detailed three-dimensional information about the human body, eliminating the problem of overlapping structures in X-rays. However, this extra dimensionality comes at both a monetary and a safety cost: CT imaging is more expensive and exposes the patient to roughly an order of magnitude more radiation than X-ray imaging. For these reasons, X-rays are preferred to CTs in cases where the extra dimensionality does not provide a large enough clinical benefit to offset the costs.
When radiologists interpret X-rays, they use their internal knowledge of 3D human anatomy to guide their interpretation. They are, in essence, combining image inputs with a priori knowledge of human anatomy to formulate their diagnosis. Intuition suggests that providing a model with some form of 3D understanding of a 2D X-ray film (for example, via the embeddings from a 3D reconstruction of that film) could improve accuracy on standard machine learning image tasks on 2D X-rays, such as recognition and segmentation. While testing this hypothesis is outside the scope of our project, it is a potential avenue for future research.
It is also possible that an "accurate enough" 3D reconstruction from X-rays could be used instead of a CT in certain clinical scenarios, which would allow patients to avoid unnecessary exposure to CT-sourced radiation.
To the best of our knowledge, no literature exists on 2D-to-3D reconstruction from chest X-rays to chest CT volumes. However, several existing works perform 2D-to-3D reconstruction in somewhat analogous domains.
Since we are interested in 2D-to-3D reconstruction, our training dataset, consisting of 3D CT scans, also needs corresponding 2D X-ray inputs.
We have decided to generate a synthetic X-ray for each 3D CT scan in order to ensure matched 2D-3D pairs in our dataset (a minimal sketch of such a projection appears at the end of this section). The following resources informed this decision:
The dataset used in this project is the Lung Image Database Consortium image collection (LIDC-IDRI), available at The Cancer Imaging Archive (TCIA).
The collection contains thoracic CT scans from a total of 1010 patients.
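To make the synthetic X-ray generation mentioned above concrete, the following sketch projects a CT volume of Hounsfield units into an X-ray-like 2D image using a simple parallel-beam model. The Hounsfield-to-attenuation conversion and the contrast scaling constant are illustrative assumptions rather than the exact parameters used in our pipeline; `load_ct_volume` refers to the helper sketched earlier.

```python
import numpy as np


def synthetic_xray(volume_hu, axis=1):
    """Project a CT volume of Hounsfield units into a 2D X-ray-like image by
    integrating attenuation along one axis (a simple parallel-beam model)."""
    # Approximate linear attenuation relative to water from Hounsfield units.
    mu = np.clip(volume_hu / 1000.0 + 1.0, 0.0, None)
    # Line integral of attenuation along the chosen ray direction,
    # followed by the Beer-Lambert exponential to mimic detected intensity.
    path_integral = mu.sum(axis=axis)
    intensity = np.exp(-path_integral * 0.02)  # 0.02: arbitrary scaling chosen for contrast
    # Invert and normalize so dense structures (e.g. bone) appear bright, as on film.
    return 1.0 - (intensity - intensity.min()) / (np.ptp(intensity) + 1e-8)


# Example pairing of a 3D scan with its synthetic 2D input:
# volume = load_ct_volume("path/to/one/lidc_series")
# xray = synthetic_xray(volume)
```

For a typical axial chest CT in standard orientation, projecting along `axis=1` yields a rough frontal view that can be paired with the original volume during training.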