With the rise of quantitative and qualitative information in healthcare, hospitals and medical professionals have come to rely increasingly on data science, specifically machine learning (ML), a form of artificial intelligence (AI). ML encompasses a collection of methods that use existing data and accumulated experience to continuously improve a program's performance. Techniques such as transfer learning, regression, and deep learning allow machines to find patterns in data, much as the brain does with sensory information. One important application of ML in healthcare is medical imaging.
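To make the idea of "learning from existing data" concrete, here is a minimal sketch of one of the methods named above, regression. The numbers are invented for illustration and do not come from any study cited here: a linear model is fit to past observations, and the fitted model then generalizes to inputs it has never seen.

```python
import numpy as np

# Hypothetical records: patient age vs. measured blood pressure.
ages = np.array([30, 40, 50, 60, 70], dtype=float)
bp   = np.array([118, 124, 131, 137, 144], dtype=float)

# Ordinary least squares: solve for the slope and intercept that
# best explain the existing data.
A = np.vstack([ages, np.ones_like(ages)]).T
slope, intercept = np.linalg.lstsq(A, bp, rcond=None)[0]

# The fitted model now predicts for an unseen input (age 45).
predicted = slope * 45 + intercept
print(f"predicted blood pressure at age 45: {predicted:.1f}")
```

Deep learning follows the same learn-from-data principle, but with far more parameters and nonlinear layers, which is what lets it handle images rather than single numbers.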
Figure 1: Physicians interpret computer-generated images to diagnose and treat specific conditions in patients.
Radiologists use different kinds of images, such as x-ray, magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) scans, to diagnose and treat human diseases (see Figure 1). The analysis of these images is called medical imaging, a process by which scientists “develop methods, tools and pipelines to fully utilize imaging data,” according to the Berkeley Institute for Data Science. These “methods, tools and pipelines” may include complex algorithms and neural networks, the major components of deep learning, which is the most important ML method for analyzing medical imaging data. Deep learning can be used to isolate specific structures in an MRI, generate an MR dataset from a CT dataset, or even estimate a patient’s mortality risk from chest x-rays.
In one study published in IEEE Transactions on Medical Imaging in May 2016, a group of scientists used a convolutional neural network (CNN)-based segmentation method to automatically separate brain tumors from MR images. Another study used a recurrent fully convolutional network (RFCN) to automatically segment cardiac MRIs (see Figure 2). Manual segmentation, by contrast, takes far longer and yields less precise measurements; the longer wait times and less accurate tumor boundaries ultimately reduce the quality of treatment that radiologists and fellow physicians can provide.
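The core operation inside a CNN can be sketched in a few lines. This toy example is not the network from either study: a single hand-set convolutional filter slides over a synthetic "MR slice" and its response is thresholded into a binary mask, mimicking the pattern-detection step that a real CNN learns automatically from thousands of labeled scans.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) with a square kernel."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

# Synthetic 8x8 "slice": a bright 3x3 region stands in for a tumor.
slice_ = np.zeros((8, 8))
slice_[2:5, 2:5] = 1.0

# A 3x3 averaging filter responds strongly where bright tissue clusters.
kernel = np.full((3, 3), 1.0 / 9.0)
response = conv2d(slice_, kernel)

# Thresholding the response yields a binary segmentation mask.
mask = response > 0.9
print(mask.astype(int))
```

A real segmentation network stacks many such filters, learns their weights from data, and outputs a per-pixel label instead of a single threshold, but the sliding-window principle is the same.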
Figure 2: An RFCN segmenting structures in cardiac MR images
Aside from segmentation, neural networks and deep learning algorithms can generate entirely new images of the same patient and anatomy in a different modality (MR, CT, etc.). Scientists in Korea used a generative adversarial network (GAN) to synthesize 2D brain MRIs from 2D brain CT images. The results were only published in May 2019, so more research must be done before the technique can enter clinical practice; however, the potential gains from such innovations are too great to ignore. In cases where MR scans are too costly or MR scanners are unavailable, generating an MR image from a CT image would be a boon for hospitals and radiologists.
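The adversarial idea behind a GAN can be illustrated with a deliberately tiny toy, which is not the Korean group's model: a "generator" maps a CT-like input value to an MR-like value, while a "discriminator" scores how real a value looks. Each training step pushes the two against each other, so the generated values drift toward the real MR distribution. All values and distributions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D "images": real MR intensities cluster near 2.0.
real_mr = rng.normal(2.0, 0.1, size=200)
ct_in   = rng.normal(0.0, 1.0, size=200)   # stands in for CT input

g_w, g_b = 0.0, 0.0   # generator: mr_hat = g_w * ct + g_b
d_w, d_b = 1.0, 0.0   # discriminator: p(real) = sigmoid(d_w * x + d_b)
lr = 0.05

for _ in range(500):
    fake = g_w * ct_in + g_b

    # Discriminator step: raise the score on real MR, lower it on fakes
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real = sigmoid(d_w * real_mr + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - d_real) * real_mr - d_fake * fake)
    d_b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: move fakes toward whatever the discriminator
    # currently calls "real" (gradient ascent on log D(fake)).
    d_fake = sigmoid(d_w * fake + d_b)
    grad_out = (1 - d_fake) * d_w
    g_w += lr * np.mean(grad_out * ct_in)
    g_b += lr * np.mean(grad_out)

# The generated values should have drifted toward the real cluster.
print(f"mean generated intensity: {np.mean(g_w * ct_in + g_b):.2f}")
```

A real CT-to-MR GAN replaces these two scalar models with deep convolutional networks operating on whole images, but the tug-of-war between generator and discriminator is the same mechanism.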
Klaus Mueller, PhD, is a computer science professor at Stony Brook University. He has a great deal of experience in the field: among other achievements, he received his PhD from Ohio State University, worked as a senior scientist at Brookhaven National Laboratory, served as the founding chair of SUNY Korea’s Computer Science Department, and was inducted into the US National Academy of Inventors. He graciously offered to provide some insight into medical imaging and his research:
Q: What fields of computer science do you teach, study, and research?
A: Visual analytics, explainable AI, medical imaging
Q: Could you give some insight into the integration of AI/Machine learning into medicine, specifically radiology? Can you provide an example of something you previously researched that relates to this?
A: Image processing has long been a part of imaging and radiology. There is a package, NIH Image, that’s been around for decades. In fact, the 3D visualization technique of volume rendering was initially invented to display data obtained by 3D CT scanners as a computer graphics image. Ari Kaufman, a professor in the SBU computer science department, was one of the pioneers and wrote several groundbreaking books. My PhD advisor, Roni Yagel, was a student of Prof. Kaufman, and volume rendering was the topic of my own PhD dissertation.
Image processing has always borrowed techniques from computer vision, and the whole field is now dominated by deep learning. This is mainly because the best image processor is the human brain, so anything that gets closer to it is valuable. The goal is to find features automatically so humans don’t have to do it manually. When I was a student, I worked in a lab that employed dozens of students to outline structures in medical images: stained histology slides of coronary arteries, used to show the impact of smoking. I wrote interactive software that helped make this faster. I tried to automate it, but it was too hard with conventional techniques. Now, with deep learning, you can train a network to do this, just as we trained student workers back then. It is still not as accurate as humans, but it gets better all the time.
You can find similar examples in radiology: finding lung tumors in chest x-rays, or finding COVID-19-related lesions and distinguishing them from other lesions, such as pneumonia, using deep learning networks. It takes just a few hundred training images and is much more reliable than PCR tests.
Q: There is a notion that radiologists are the "data scientists of medicine." Do you believe there is any merit to this, or do you believe that both fields of study are mutually exclusive? Why?
A: I am not sure I’d go that far, unless you exclude any genomics- or proteomics-based diagnostics. There is the big field of *omics that is part of bioinformatics. If you exclude this, then yes: imaging generates a lot of data, more than one can look at, and you need machines to help you.
Q: What is one example of an advancement or breakthrough in medical imaging that can be credited to deep learning and possibly other machine learning methods?
A: Low-dose (low data) CT has made progress through machine learning; I wrote a paper about that with colleagues. You can check it out. Essentially, it can help you “see through the noise,” because low-dose CT images are very low quality. Experienced doctors can still see through the noise, but ML helps clean it up and make the best out of the limited data. Eventually it will see more than an experienced doctor, as has been the case in other ML-assisted computer vision tasks, like face and object recognition and image classification.
Q: What does some of your current research entail; what are some of the current questions and problems you are solving related to medical imaging? Where do you see Medical informatics going in the future?
A: I work on creating datasets that can challenge ML algorithms to test their robustness, but that are still plausible and would not challenge humans. These same datasets can also be used to train deep neural networks and so make them more robust.
Q: What is the scope of imaging science beyond the medical field?
A: There is also deep space imaging, imaging for security, industrial imaging for quality assurance, and material science. A significant field is high-energy imaging for battery research, mostly to accelerate the development of electric vehicles with longer ranges. This is a worldwide race. I work with national labs to visualize these multi-channel datasets. You can see some of the results here. But even consumer photography has powerful ML algorithms to make the best out of your picture and help you manipulate it.
Q: What is your advice to the young men and women who want to study AI and imaging science in the future?
A: AI, machine learning, and data science are real growth areas worth studying; each occupies a special segment. I would also study HCI (human-computer interaction) and visualization, because you can never fully eliminate the human, and you need a way to help humans interact with the robot. The robot helps the human and vice versa.
The rapid integration of AI into medical imaging reflects the broader trend toward automation and efficiency in healthcare and other industries. In the field of radiology specifically, ML and deep learning have contributed to major breakthroughs. As Dr. Mueller emphasized in one of his answers, AI is not here to replace healthcare workers but to aid them. The relationship between medicine and technology is a perfect example of multiple disciplines intertwining to reveal new horizons of scientific advancement and patient care.
Bibliography
Jin, Cheng-Bin, et al. “Deep CT to MR Synthesis Using Paired and Unpaired Data.” Sensors, vol. 19, no. 10, 2361, 22 May 2019, doi:10.3390/s19102361.
Pereira, S., et al. “Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.” IEEE Transactions on Medical Imaging, vol. 35, no. 5, May 2016, pp. 1240-1251, doi:10.1109/TMI.2016.2538465.
Poudel, Rudra P. K., et al. “Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation.” arXiv, abs/1608.03974, 2016.
Fu, Geng-Shen, et al. “Machine Learning for Medical Imaging.” Journal of Healthcare Engineering, Hindawi, 28 Apr. 2019, www.hindawi.com/journals/jhe/2019/9874591/.
“Ascent of Machine Learning in Medicine.” Nature Materials, Nature Publishing Group, 18 Apr. 2019, www.nature.com/articles/s41563-019-0360-1.
Reardon, Sara. “Rise of Robot Radiologists.” Scientific American, 1 Feb. 2020, www.scientificamerican.com/article/rise-of-robot-radiologists/.
Davidson, Leah. “Data Science in Healthcare: How It Improves Care.” Springboard Blog, 17 Apr. 2019, www.springboard.com/blog/data-science-in-healthcare/.
Mueller, Klaus. Interview. Conducted by Ayman Lone, 1 September 2020.
Mueller, Klaus. Klaus Mueller's Homepage, 2019, www3.cs.stonybrook.edu/~mueller/.