Cancer is among the most common diseases in society, affecting and killing large numbers of people each year. In fact, the National Center for Health Statistics projected that in 2019 over 1.7 million new cases and over 600 thousand deaths would occur in the United States alone due to cancer. Furthermore, those of lower socioeconomic standing, residents of developing countries, and those without proper access to healthcare face higher mortality rates. As a result, millions of dollars are spent on cancer research and treatment each year.
A key hallmark of cancer that prevents researchers from finding cures is the sheer amount of heterogeneity that exists between populations. Even within a single tumor, there can be a high degree of intratumor heterogeneity, which encompasses both spatial and temporal components. Spatial heterogeneity is defined as the uneven distribution of genetically diverse tumor subpopulations within a disease site, while temporal heterogeneity is defined as the dynamic change in the genetic diversity of a tumor over time. As a result of these forms of heterogeneity, tumor populations can develop resistance to various drugs. Furthermore, as tumors accumulate genetic alterations, genetically distinct subclonal populations can co-exist, resulting in intratumor genetic heterogeneity (ITH). ITH is a prognostic marker in multiple cancers and is a significant obstacle to cancer treatment. Evidence also suggests that certain kinds of therapeutics may themselves contribute to cancer heterogeneity.
In order to study cancer in vitro, researchers have conventionally utilized cell culture in a 2D environment. Doing so has provided incalculable benefits and has helped to advance the current state of cancer research. However, 2D cell culture is often criticized as unrepresentative of the anatomy and physiology of an organism. For example, cells exhibit varied sensitivity to drugs under different culturing environments, so 3D culturing conditions present an advantage for new drug discovery. To address these limitations, 3D cell culture using collagen as a substrate has been developed. 3D cell culture offers many distinct advantages over 2D cell culture in tumor analysis, including more realistic systems and more complex scaffolds and matrices.
Advances in imaging have enabled researchers to analyze temporal and spatial changes in living systems, allowing for dynamic measurement across a range of spatial scales from single molecules to whole organisms. Moreover, the advent of high-throughput, automated microscopy has dramatically increased the production of large-scale, image-based data. The demand to analyze complex, large-scale data has given rise to machine learning and deep learning, sophisticated and efficient data analysis techniques that can perform a variety of tasks including image classification, cell segmentation, and object tracking. Machine learning is a branch of artificial intelligence that gives machines (i.e. computers) the capability to learn without being explicitly programmed. Deep learning is a subset of machine learning that structures algorithms in layers to perform more complex tasks, creating an artificial neural network. These data analysis techniques have revolutionized image analysis and enabled many discoveries.
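The layered structure mentioned above can be illustrated with a minimal sketch: each layer applies a learned linear map followed by a nonlinearity, and stacking layers lets the network represent more complex decision rules than a single linear classifier. The weights, sizes, and function names below are illustrative assumptions, not part of any specific analysis pipeline.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: a common layer nonlinearity
    return np.maximum(0, x)

def forward(image_vector, w1, b1, w2, b2):
    """Forward pass of a tiny two-layer neural network that maps a
    flattened image to one score per class."""
    hidden = relu(image_vector @ w1 + b1)  # first (hidden) layer
    logits = hidden @ w2 + b2              # output layer
    return logits

rng = np.random.default_rng(0)
x = rng.random(16)                      # a flattened 4x4 "image"
w1, b1 = rng.random((16, 8)), np.zeros(8)
w2, b2 = rng.random((8, 2)), np.zeros(2)
scores = forward(x, w1, b1, w2, b2)     # one score per class
```

In practice the weights are learned from labeled data rather than drawn at random, and deep networks for image tasks use many such layers, often convolutional ones.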
Results from machine learning and image segmentation could be applied to the NIS-Elements software for automatic labeling. There are several ways to achieve this. First, by applying machine learning or OpenCV methods to the images, it is straightforward to find the coordinates of points of interest (POI). These coordinates can then be converted to pixel coordinates and passed to the NIS-Elements software for further processing. The exact implementation language remains undecided (it could be Python or C++), but the essential first step is obtaining the coordinates: a program will be designed to take coordinates as input and automatically select that region. Alternatively, image segmentation or machine learning could be connected directly to NIS-Elements through a purpose-built program, so that the points of interest are displayed automatically.
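The coordinate-extraction step described above can be sketched as follows. This is a minimal illustration using thresholding and connected-component labeling to locate bright regions and report their pixel centroids; the function name and threshold value are assumptions, and a real pipeline would substitute the actual segmentation output before handing coordinates to NIS-Elements.

```python
import numpy as np
from scipy import ndimage

def find_poi_centroids(image, threshold):
    """Threshold an image and return the (row, col) pixel centroid of
    each connected bright region (point of interest)."""
    mask = image > threshold
    labeled, n_regions = ndimage.label(mask)  # label connected regions
    # center_of_mass returns one (row, col) tuple per labeled region
    return ndimage.center_of_mass(mask, labeled, range(1, n_regions + 1))

# Synthetic 10x10 "image" with two bright single-pixel spots
img = np.zeros((10, 10))
img[2, 3] = 1.0
img[7, 8] = 1.0
centroids = find_poi_centroids(img, 0.5)
# centroids -> [(2.0, 3.0), (7.0, 8.0)]
```

The returned pixel coordinates could then be mapped into the stage or image coordinate system that the acquisition software expects.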
An interesting application of segmentation and classification was developed using deep learning and convolutional neural networks, and both the segmentation and classification results achieved high accuracy scores. Another segmentation application was developed with optimized boundary detection by directly analyzing the cell images.
Imaging is an important tool for understanding the morphology and underlying biological principles of cells. Many imaging methods exist, with bright-field imaging and fluorescence imaging being the most prominent.
Bright-field imaging uses white light directly as the illumination source and is the simplest way to observe samples. For example, one study used white-light microscopy to observe the live human cornea, providing meaningful information about the morphology, proliferation rate, and health of the tissue.
Fluorescence microscopy, on the other hand, uses a fluorescent burner as the light source. Fluorescent small molecules and fluorescent proteins, upon excitation by light of a specific wavelength, emit fluorescence at another wavelength. By manipulating the excitation and emission wavelengths of the microscope, much information can be obtained about the sample. For example, it has been reported that genetically encoded Fluorescence Resonance Energy Transfer (FRET) biosensors can provide spatiotemporal information about enzymatic activity, epigenetic regulation, and mechanobiological events in live cells.
Combining both techniques can provide valuable information in studying cancer heterogeneity. In one study, researchers first observed heterogeneity of cancer cells using bright field imaging, then laser-tagged specific cancer cells in order to perform genomic analysis on those cells and understand the underlying principles. Through this understanding, they proposed an image-guided genomics technique termed spatiotemporal genomic and cellular analysis (SaGA) that allows for precise selection and amplification of living and rare cells.
When imaging cells in 3D collagen gels, one common approach is z-stack imaging. In this method, by adjusting the focus of the microscope, multiple images sharing the same lateral (x, y) field of view are taken at different focal heights (z). The images are then stacked together for further analysis.
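One common way to combine the slices of a z-stack for downstream analysis is a maximum intensity projection, which keeps the brightest value at each (x, y) pixel across all z slices. The sketch below is a minimal illustration of that idea; the function name and toy frames are assumptions for demonstration.

```python
import numpy as np

def max_intensity_projection(z_stack):
    """Collapse a z-stack (a sequence of same-sized 2D frames taken at
    different focal heights) into a single 2D image by taking, at each
    pixel, the maximum value across all z slices."""
    stack = np.stack(z_stack, axis=0)  # shape: (z, rows, cols)
    return stack.max(axis=0)

# Three 2x2 frames taken at different focal heights
frames = [np.array([[0, 1], [2, 0]]),
          np.array([[3, 0], [0, 0]]),
          np.array([[0, 0], [0, 4]])]
mip = max_intensity_projection(frames)
# mip -> [[3, 1], [2, 4]]
```

Other projections (mean, sum) follow the same pattern by swapping the reduction along the z axis.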
The main issue with the current method of image acquisition is the substantial manual labor and extensive training required. For example, in the aforementioned SaGA study, researchers had to manually find the leader and follower cells to photo-convert and separate them using fluorescence-activated cell sorting (FACS). Other researchers have developed various ways to automate this process. Selinummi et al. present a method to automatically detect cell population outlines directly from bright-field images, which illustrates the convenience of automated image acquisition and analysis.