Download Fiji from https://fiji.sc/
The program is fully contained within its folder. No installer is needed.
Start Fiji. Go to Help -> Update and select Manage Update Sites. For this exercise we will need Connected Component Analysis and the Hough Transform, so select "Biomedgroup" and "UCB Vision Sciences". Update the program, then close Fiji and start it again.
By the way, ImageJ is programmed in Java and uses multithreading in some of its core functions, so it is often faster than similar routines in MATLAB. It does not use the GPU, however. There are commercial packages for microscope image analysis with hardware optimization that can run 10-100 times faster on certain tasks.
To open your image: File -> Open. If you have large or time-series microscopy files, you may want to use Plugins -> Bio-Formats to open them instead.
JPG is not a scientific image standard unless you can guarantee that lossless compression was used to save the image. Use TIFF or one of the newer large-image file standards. With a cell phone we don't have a choice of image compression.
Image -> Adjust -> Brightness - Contrast
In ImageJ this only adjusts the displayed values, not the actual measured values in your image.
Often you want to subtract the background, or correct for the fact that images are less bright in the corners than in the center. There are three methods to consider. If you captured the image under good lighting conditions you likely don't need any of them. Such corrections are not considered fraudulent as long as they are intended to remove imperfections of the imaging system.
For scientific imaging, you should collect a negative control (an image with the light source off) as well as a positive control, which is an image of a white uniform target. The negative control is your background and should be subtracted from your image. In a microscope, the white uniform target would be the image through a glass slide without a specimen. The positive control is used to color balance, but more importantly to compute a smooth intensity curve through the image. The curve is normalized to 1.0 at its maximum, and every collected image is divided by it. This corrects for the non-uniform intensity response of your system. ImageJ has Process -> Image Calculator to accomplish the subtraction and division. Once you divide, your image will be represented in floating-point numbers and will likely consume four times more memory.
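The subtract-then-divide step can be sketched in a few lines. This is not ImageJ code; it is a minimal numpy sketch, and the array names (`dark`, `white`, `image`) and the toy illumination falloff are assumptions for illustration.

```python
import numpy as np

def flat_field_correct(image, dark, white):
    """Flat-field correction: subtract the dark frame (negative control),
    then divide by the illumination profile derived from the white frame
    (positive control), normalized so its maximum is 1.0."""
    flat = white.astype(np.float64) - dark           # illumination profile
    flat = flat / flat.max()                         # 1.0 at brightest point
    return (image.astype(np.float64) - dark) / flat  # result is floating point

# Toy data: a uniform 100-count scene under corner-dim (vignetted) lighting.
illum = np.linspace(0.5, 1.0, 16).reshape(4, 4)  # hypothetical falloff
dark = np.full((4, 4), 10.0)                     # constant dark offset
white = dark + 200.0 * illum                     # white target as recorded
image = dark + 100.0 * illum                     # scene as recorded
corrected = flat_field_correct(image, dark, white)  # ~100 everywhere
```

After correction the vignetting is gone: every pixel recovers the same scene value even though the recorded counts differed by a factor of two between corner and center.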
We likely don't have positive and negative controls, as the imaging subsystem in your cell phone does not allow keeping the imaging parameters constant. We cannot measure a dark image, and we don't know whether a correction for the non-uniform response of the imaging system has already been applied.
Process -> Subtract Background computes a low-pass (smoothed) version of the image and subtracts it from the original. You can use this if you don't have a measured background. Play with the settings, which essentially determine how much the image is smoothed before it is subtracted. Also make the correct choice for a light or dark background.
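ImageJ's Subtract Background uses a rolling-ball algorithm; the sketch below substitutes a simple separable mean filter as the low-pass step, which illustrates the same smooth-then-subtract idea. It is a numpy-only sketch with invented function names, assuming a dark background.

```python
import numpy as np

def _windowed_mean(a, k, axis):
    """Running mean of window length k along one axis, via cumulative sums."""
    cs = np.cumsum(a, axis=axis)
    cs = np.insert(cs, 0, 0.0, axis=axis)
    if axis == 0:
        return (cs[k:, :] - cs[:-k, :]) / k
    return (cs[:, k:] - cs[:, :-k]) / k

def subtract_background(img, radius):
    """Smooth the image with a (2*radius+1)^2 mean filter (edge padding)
    and subtract the smoothed version; clip at zero (dark background)."""
    img = img.astype(np.float64)
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    low = _windowed_mean(_windowed_mean(padded, k, 0), k, 1)
    return np.clip(img - low, 0.0, None)
```

As with the rolling-ball radius, choose the smoothing radius larger than the objects you want to keep, so only the slowly varying background is removed.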
Local contrast enhancement computes the intensity histogram over subsets of the image and then interpolates, for each point in the image, an adjustment that stretches the histogram. The maximum slope sets the amount of stretch allowed. The technique is similar in spirit to the HDR mode of a camera, where images at different exposure times are captured and the bright areas are composed from the short exposures and the dark areas from the long exposures, making the whole image appear more dynamic.
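The global version of this histogram stretching (plain histogram equalization) is short enough to sketch; the local method described above applies the same mapping per tile and interpolates between tiles, with the slope limit capping the stretch. A numpy sketch for an 8-bit image (assumes at least two gray levels are present):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit image: map each gray level
    through the normalized cumulative histogram so the output levels are
    spread over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]  # first occupied bin
    lut = (cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)[img]

# A low-contrast image occupying only levels 100-101 is stretched to 0-255.
img = np.tile(np.array([100, 101], dtype=np.uint8), (4, 8))
out = equalize_hist(img)
```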
In order to identify an object you need to define its outline. This is done either by selecting a color or intensity, or by finding edges or contours. There are also convolution approaches where objects are detected directly through trained filters. One of them is the face detection algorithm based on Haar cascades (https://towardsdatascience.com/computer-vision-detecting-objects-using-haar-cascade-classifier-4585472829a9); a more general approach is the convolutional neural network (https://en.wikipedia.org/wiki/Convolutional_neural_network), which is used to identify animals, vehicles and other objects (https://www.ted.com/talks/joseph_redmon_how_computers_learn_to_recognize_objects_instantly). These techniques are not used in this example.
For best color thresholding, the image needs to be transformed from RGB to HSV color space. That is because color is encoded as Hue, which remains the same regardless of whether the image was taken under bright or dark conditions.
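Python's standard-library `colorsys` module can demonstrate why hue is robust to brightness: the same surface color recorded at two brightness levels keeps its H while V changes. The RGB triplets are made-up example values; `colorsys` expects floats in [0, 1].

```python
import colorsys

def rgb8_to_hsv(r, g, b):
    """Convert 8-bit RGB values to HSV (colorsys works on floats in [0, 1])."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# The same red object photographed bright and dark (hypothetical values):
bright = rgb8_to_hsv(200, 40, 40)
dark = rgb8_to_hsv(100, 20, 20)
# Hue (index 0) is identical for both; only V (index 2) differs.
```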
By the way: if you want to classify whether an area of an image belongs to one object or another based on its color, you can measure each object under several conditions and compute the average of its three values H, S, V. You can then predict whether an object is present by computing the Euclidean distance between an area's pixel H, S, V values and those known averages. If you have several objects, the shortest distance likely identifies the object you see.
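The nearest-mean idea can be sketched in pure Python. The per-class reference H, S, V means below are made-up values for illustration; note also that hue is circular (it wraps at 1.0), so for hues near the wrap an angular distance would be more appropriate than the plain Euclidean distance used here.

```python
import math

# Hypothetical per-class mean (H, S, V) values, measured beforehand:
references = {
    "blue_dye":   (0.60, 0.70, 0.80),
    "background": (0.10, 0.15, 0.95),
}

def classify(hsv):
    """Assign the class whose mean H, S, V is closest in Euclidean distance."""
    def dist(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(hsv, ref)))
    return min(references, key=lambda name: dist(references[name]))
```

For example, a pixel with HSV (0.58, 0.65, 0.75) lies much closer to the "blue_dye" mean than to "background" and is classified accordingly.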
For color thresholding in ImageJ you can use Image -> Adjust -> Color Threshold. You need to select the correct background. You can have the program calculate suggested thresholds; in our case, Entropy works well. Regardless, you should still tune the range of values by hand using the sliders. When the sliders are at the minimum and maximum, the whole image is selected.
Once you have a good threshold you can convert the image to black and white: Process -> Binary -> Make Binary. We will take care of the outliers in the next chapter.
After an image has been thresholded, the binary image will need to be processed. The binary image usually contains areas that are not of interest. The thresholded areas might also have holes or protrusions that are only connected by the corner of two pixels.
In a binary image, connected areas can be identified using connected component analysis. The analysis requires a definition of adjacency, i.e. how many neighbors a pixel has: with 4-connectivity, pixels are connected only through shared edges, while with 8-connectivity, pixels touching only at a corner also count as connected.
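A minimal pure-Python labeling routine (breadth-first flood fill over a 0/1 grid; not how ImageJ implements it internally) shows how the connectivity choice changes the result:

```python
from collections import deque

def label(binary, connectivity=8):
    """Label connected foreground regions of a 2-D 0/1 grid.
    Returns (label grid, number of components)."""
    h, w = len(binary), len(binary[0])
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:  # 4-connectivity: edge-adjacent neighbors only
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1              # start a new component
                labels[y][x] = current
                q = deque([(y, x)])
                while q:                  # flood-fill the component
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```

A diagonal line of pixels is one component under 8-connectivity but falls apart into single pixels under 4-connectivity.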
Holes can be removed by filling them: Process -> Binary -> Fill Holes.
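Hole filling can be implemented by flood-filling the background from the image border: any background pixel not reachable from the border lies inside an object and becomes foreground. A pure-Python sketch (0/1 nested lists, 4-connected background), not ImageJ's actual implementation:

```python
from collections import deque

def fill_holes(binary):
    """Set to 1 every background region that is not connected to the
    image border (4-connectivity for the background)."""
    h, w = len(binary), len(binary[0])
    outside = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):                       # seed from border background
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and not binary[y][x]:
                outside[y][x] = True
                q.append((y, x))
    while q:                                 # flood-fill reachable background
        cy, cx = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = cy + dy, cx + dx
            if (0 <= ny < h and 0 <= nx < w
                    and not binary[ny][nx] and not outside[ny][nx]):
                outside[ny][nx] = True
                q.append((ny, nx))
    return [[1 if binary[y][x] or not outside[y][x] else 0
             for x in range(w)] for y in range(h)]
```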
Those binary functions modify the shape of binary object outlines, and repeating the steps is not uncommon. The math behind these functions is mathematical morphology, which operates on sets of pixels.
Connected component analysis is available through Plugins -> Shape Filter. The operations that identify connected areas and compute their size, length, width and main axes are fast. We will use the Shape Filter to remove areas on the image boundary and areas that are too small to be a fluid channel.
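The two filtering rules we use (drop regions touching the image boundary, drop regions below a minimum area) are easy to express on a labeled image. A plain-Python sketch; the input format (a 2-D list of integer labels, 0 = background) and the function name are assumptions for illustration, not the Shape Filter's actual API:

```python
def shape_filter(labels, min_area):
    """Keep only labeled regions that do not touch the image border and
    whose pixel count is at least min_area; others become background."""
    h, w = len(labels), len(labels[0])
    area, touches_border = {}, set()
    for y in range(h):
        for x in range(w):
            lab = labels[y][x]
            if lab:
                area[lab] = area.get(lab, 0) + 1
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border.add(lab)
    keep = {lab for lab, a in area.items()
            if a >= min_area and lab not in touches_border}
    return [[v if v in keep else 0 for v in row] for row in labels]
```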
Contour analysis is another technique, which fits polygons and other parametric shapes around a connected component. Such techniques are used to identify symbols or objects with certain shape properties. Because they require fitting shapes to a connected area, they take a little longer to compute.
In medical imaging, circles (nucleus, pupil), fibers (tubules, connective tissue) and vessels are of great interest for analysis.
Binary thresholded image on the left; connected-component analyzed and filtered image on the right.
There are many techniques to enhance features of an image for later thresholding.
An edge detector computes gradients: wherever the intensity changes over a short distance, the change is compared with the average change in the neighborhood, and the location of the largest change is marked. This gives a good outline of, for example, a piece of white paper or a dark spot.
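A Sobel gradient magnitude is the simplest concrete version of this idea; the Canny detector adds smoothing, non-maximum suppression and hysteresis thresholding on top of such a gradient. A numpy-only sketch (the explicit 3x3 loop stands in for a convolution routine):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels with edge padding.
    Large values mark rapid intensity change, i.e. candidate edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):          # accumulate the 3x3 weighted sums
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)     # magnitude of the gradient vector
```

On a vertical step edge the magnitude is zero in the flat regions and peaks in the columns adjacent to the step.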
One such filter is well explained here: https://en.wikipedia.org/wiki/Canny_edge_detector
There are techniques to enhance lines, circles, plates (in 3D) and tubes (in 3D).
The Hough transform finds circular objects in your image, even when a circle is incomplete. The ImageJ implementation provides the center and radius as well as a quality measure indicating the likelihood of a circular object. We will apply Plugins -> UCB Vision Sciences -> Hough Circle Transform to an image that has already been thresholded. You can also apply the transform to a grayscale image, but the computation will take 10-50 times as long.
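The voting idea behind the circle transform can be sketched in a few lines of numpy. This is a deliberately crude, slow illustration of the accumulator, not the plugin's implementation: each edge pixel votes for every candidate center lying at distance r from it, and the accumulator maximum gives the best (radius, center) combination.

```python
import numpy as np

def hough_circle(edges, radii):
    """Minimal Hough circle transform on a binary edge image.
    Returns (radius, center_y, center_x) of the strongest circle."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    for ri, r in enumerate(radii):
        for y, x in zip(ys, xs):
            # candidate centers at distance r from this edge pixel
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)  # accumulate votes
    ri, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return radii[ri], cy, cx
```

Because incomplete arcs still cast many votes for the true center, the transform tolerates broken or partially occluded circles.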
The Frangi filter is a special case of Hessian structure analysis, which combines 3D gradient computations and eigenvector analysis to approximate ellipsoidal structures. The same filter can be extended into a blobness detector. You can experiment with the Frangi filter in Process -> Filters -> Frangi Vesselness.
Detecting curvilinear structures can be accomplished with a ridge detector (https://imagej.net/Ridge_Detection). A curvilinear structure can be defined as an object with a positive and a negative gradient perpendicular to the ridge and a minimal gradient along the ridge. You will need to adjust the filter for the ridge width you want to detect: Plugins -> Ridge Detection. Mathematically, a valley and a ridge are the same.
Skeletonization is a thinning algorithm that works in 2D and 3D. Images need to be binary: Process -> Binary -> Skeletonize. The advantage of skeletonization is that it produces connected lines with branching points, while ridges are not necessarily continuous.
Hough transform on the left; edge detection on the color image and color thresholding on the right.
We will use Analyze -> Skeleton to measure the length of the center line of our channels. If the skeleton includes branch points, you will need to add up the relevant branches.
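The length measurement itself follows a simple convention: with a pixel spacing of 1, orthogonal neighbor steps contribute 1 and diagonal steps contribute sqrt(2). The sketch below sums the distances between all pairs of 8-adjacent skeleton pixels; it assumes a branch-free skeleton and would over-count around branch points, where extra adjacencies exist (the actual Analyze Skeleton plugin handles branches properly).

```python
import math

def skeleton_length(skel):
    """Approximate length of a binary skeleton (2-D 0/1 nested lists) by
    summing distances between 8-adjacent skeleton pixels.  Each pair is
    counted once by only checking 'forward' neighbor offsets."""
    pts = {(y, x) for y, row in enumerate(skel)
           for x, v in enumerate(row) if v}
    total = 0.0
    for y, x in pts:
        for dy, dx in ((0, 1), (1, -1), (1, 0), (1, 1)):
            if (y + dy, x + dx) in pts:
                total += math.hypot(dy, dx)  # 1 or sqrt(2)
    return total
```

A horizontal run of three pixels measures 2.0, while a three-pixel diagonal measures 2*sqrt(2), reflecting the longer physical path.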
ImageJ can record measurements and store outlines as well as regions of interest.
Furthermore, the analysis can be recorded in a macro and played back on different images.
Skeletonized channels (left). Unwanted skeleton sections were removed with the paintbrush before Skeleton Analysis was performed (right).