[1] C. Zhang, Y. Li, and G. Chen, “Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS),” Medical Physics, vol. 48, no. 10, pp. 5765–5781, Oct. 2021, doi: 10.1002/mp.15183.
[2] H. Kudo, T. Suzuki, and E. A. Rashed, “Image reconstruction for sparse-view CT and interior CT—introduction to compressed sensing and differentiated backprojection,” Quantitative Imaging in Medicine and Surgery, vol. 3, no. 3, 2013.
[3] W. Xia, W. Cong, and G. Wang, “Patch-Based Denoising Diffusion Probabilistic Model for Sparse-View CT Reconstruction,” arXiv preprint arXiv:2211.10388, 2022. [Online]. Available: http://arxiv.org/abs/2211.10388
[4] J. Bai, Y. Liu, and H. Yang, “Sparse-View CT Reconstruction Based on a Hybrid Domain Model with Multi-Level Wavelet Transform,” Sensors, vol. 22, no. 9, p. 3228, Apr. 2022, doi: 10.3390/s22093228.
[5] Z. Qu, X. Yan, J. Pan, and P. Chen, “Sparse View CT Image Reconstruction Based on Total Variation and Wavelet Frame Regularization,” IEEE Access, vol. 8, pp. 57400–57413, 2020, doi: 10.1109/ACCESS.2020.2982229.
[6] Z. Fu, H. W. Tseng, S. Vedantham, A. Karellas, and A. Bilgin, “A residual dense network assisted sparse view reconstruction for breast computed tomography,” Sci Rep, vol. 10, no. 1, p. 21111, Dec. 2020, doi: 10.1038/s41598-020-77923-0.
[7] Z. Zhang, X. Liang, X. Dong, Y. Xie, and G. Cao, “A Sparse-View CT Reconstruction Method Based on Combination of DenseNet and Deconvolution,” IEEE Trans. Med. Imaging, vol. 37, no. 6, pp. 1407–1417, Jun. 2018, doi: 10.1109/TMI.2018.2823338.
[8] W. Xia, Z. Yang, Q. Zhou, Z. Lu, Z. Wang, and Y. Zhang, “A Transformer-Based Iterative Reconstruction Model for Sparse-View CT Reconstruction,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, vol. 13436, L. Wang, Q. Dou, P. T. Fletcher, S. Speidel, and S. Li, Eds. Cham: Springer Nature Switzerland, 2022, pp. 790–800. doi: 10.1007/978-3-031-16446-0_75.
[9] B. Zhou, X. Chen, S. K. Zhou, J. S. Duncan, and C. Liu, “DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography,” Medical Image Analysis, vol. 75, p. 102289, Jan. 2022, doi: 10.1016/j.media.2021.102289.
[10] Y. Han and J. C. Ye, “Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT,” arXiv preprint arXiv:1708.08333, 2018. [Online]. Available: http://arxiv.org/abs/1708.08333
[11] H. Chen, Y. Zhang, M. K. Kalra, F. Lin, Y. Chen, P. Liao, et al., “Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN),” IEEE Trans. Med. Imaging, vol. 36, no. 12, pp. 2524–2535, Dec. 2017, doi: 10.1109/TMI.2017.2715284.
[12] Y. Li, X. Tie, K. Li, et al., “A quality-checked and physics-constrained deep learning method to estimate material basis images from single-kV contrast-enhanced chest CT scans,” Medical Physics, pp. 1–21, 2023, doi: 10.1002/mp.16352.
[13] J. Schlemper, O. Oktay, M. Schaap, et al., “Attention gated networks: Learning to leverage salient regions in medical images,” arXiv preprint arXiv:1808.08114, 2018. [Online]. Available: https://arxiv.org/abs/1808.08114
[14] V. Kearney, B. P. Ziemer, A. Perry, T. Wang, J. W. Chan, L. Ma, O. Morin, S. S. Yom, and T. D. Solberg, “Attention-Aware Discrimination for MR-to-CT Image Translation Using Cycle-Consistent Generative Adversarial Networks,” Radiology: Artificial Intelligence, vol. 2, no. 2, p. e190027, Mar. 2020, doi: 10.1148/ryai.2020190027.
[15] M. Ronchetti, “TorchRadon: Fast Differentiable Routines for Computed Tomography,” arXiv preprint arXiv:2009.14788, 2020. [Online]. Available: http://arxiv.org/abs/2009.14788