We shall cover two coding environments: MATLAB and OpenCV-Python.
It is important to be able to handle images and videos in MATLAB. For videos, I recommend installing ffmpeg and extracting the video frames to a folder before you start coding. Having the frames extracted to a folder often helps in debugging because you do not have to work with the entire video every time.
The bash command to extract the video frames (say, from a .mp4 video) with ffmpeg is very simple:
$ ffmpeg -i input_video.mp4 -qscale:v 2 frames_dir/frames%04d.jpg
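If the video is long, you can also subsample it while extracting. A minimal sketch (the rate of 10 frames per second is just an example) that first creates the output folder and then uses ffmpeg's fps filter:
$ mkdir -p frames_dir
$ ffmpeg -i input_video.mp4 -vf fps=10 -qscale:v 2 frames_dir/frames%04d.jpg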
To help you play with this module I have enclosed a short video of stentors swimming in an aquatic medium. This video was captured with the microscope invented by Thomas Zimmerman of IBM Research, Almaden.
If you have a MATLAB license, please use it; otherwise you can install Octave on your machine. In the MATLAB/Octave environment you can read an image from disk and write one back with the following two commands:
im = imread('image_filename.jpg'); % reading an image
imwrite(im, 'image_filename.jpg'); % writing an image
What we are interested in here is not dealing with a single image or a few images (ImageJ/Fiji would be a better choice for that), but rather several hundreds or thousands of images, or perhaps the frames coming from a video. In the following we shall learn how to work with a video. Let us say we have placed all the video frames inside the folder 'frames_dir'. We write the following code to display the frames one after another, just like a video playing:
folderpath = './frames_dir';
now_allframes = dir(fullfile(folderpath, '*.jpg'));
now_N = length(now_allframes);
for now_n = 1 : now_N % loop over all frames
    % read each frame
    now_im = imread(fullfile(folderpath, now_allframes(now_n).name));
    % display the frame
    figure(1), imagesc(now_im), colormap(gray), axis image, axis on;
    % show the frame number in the title
    title(['Frame # : ' num2str(now_n) '/' num2str(now_N)]);
    % pause for some time
    pause(0.05);
end
You can also perform some image processing on each frame, for example blurring it with a Gaussian kernel, and display the result beside the original frame, as shown below. Saving each processed frame under a different filename works the same way; a short sketch follows the code.
clc, clear all, close all;
folderpath = 'frames_dir';
now_allframes = dir(fullfile(folderpath, '*.jpg'));
now_N = length(now_allframes);
sgm = 51;
blur_kernel = fspecial('gaussian', [sgm, sgm], sgm);
for now_n = 1 : now_N % loop over all frames
    % read each frame
    now_im = imread(fullfile(folderpath, now_allframes(now_n).name));
    % blur the image frame
    blur_frame = imfilter(now_im, blur_kernel);
    % display the blurred frame beside the original frame
    figure(1), subplot(1, 2, 1), imagesc(now_im), colormap(gray), axis image, axis off;
    figure(1), subplot(1, 2, 2), imagesc(blur_frame), colormap(gray), axis image, axis off;
    % pause for some time
    pause(0.05);
end
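If you also want to keep the processed frames on disk, you can write each blurred frame to a separate output folder with imwrite. A minimal sketch, reusing folderpath, now_allframes, now_N and blur_kernel from the script above (the output folder name 'blurred_dir' is just an example):
outpath = './blurred_dir';
if ~exist(outpath, 'dir'), mkdir(outpath); end
for now_n = 1 : now_N % loop over all frames
    now_im = imread(fullfile(folderpath, now_allframes(now_n).name));
    blur_frame = imfilter(now_im, blur_kernel);
    % save the blurred frame under a new filename in the output folder
    imwrite(blur_frame, fullfile(outpath, ['blur_' now_allframes(now_n).name]));
end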
The following instructions have been tested on Mac/Linux (Ubuntu) systems.
You should start by installing the Anaconda 3 scientific Python distribution from here. For Mac, download the shell script by clicking 64-Bit Command Line Installer (435 MB). For Linux, the shell script is the default installer. Open a terminal and execute the downloaded shell script with
$ bash Anaconda3-2019.07-MacOSX-x86_64.sh
Keep on hitting the enter key to accept all default configurations.
Create a virtual environment and install the following packages:
$ conda create --name bioimage
$ conda activate bioimage
$ which python # check the Python path; it should point to your Anaconda3 installation
$ conda install -c anaconda jupyter
$ conda install -c conda-forge jupyterlab
$ conda install ipykernel
$ python -m ipykernel install --user --name bioimage --display-name "bioimage"
$ conda install pip
$ pip install opencv-contrib-python
$ conda install -c conda-forge matplotlib
# From the terminal cd to your project directory
# Open jupyter lab in this directory
$ jupyter lab # this should open jupyter lab in your browser
In the Jupyter Lab IDE you should see the bioimage kernel you created. Click on it and a new notebook will open up. You are now ready to write code.
In the following we shall write a small OpenCV-Python script to read frames from the video stentors.mp4. We shall read the 50th, 100th, and 150th frames and display them.
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
cap = cv2.VideoCapture('stentors.mp4')
count = 0
while cap.isOpened() and count < 150:
    ret, frame = cap.read()
    if not ret:  # stop when the video ends or a frame cannot be read
        break
    count += 1
    if count == 50 or count == 100 or count == 150:
        # OpenCV returns frames in BGR order; convert to RGB for matplotlib
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        plt.imshow(frame)
        plt.show()
cap.release()
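Instead of reading every frame up to the one you want, you can also ask OpenCV to jump directly to a given frame index via cv2.CAP_PROP_POS_FRAMES. A minimal sketch under the same assumptions as above (note that seek accuracy can depend on the video codec):
import cv2
from matplotlib import pyplot as plt
cap = cv2.VideoCapture('stentors.mp4')
for idx in (50, 100, 150):
    # position the capture just before the desired frame (indices are 0-based)
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx - 1)
    ret, frame = cap.read()
    if ret:
        plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        plt.show()
cap.release()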
If for some reason you are not comfortable with running a local installation on your machine, there is an alternative route: run your code on Google Colab (completely free). All you need is a Google account (a Gmail ID). Go to Google Colab and spin up a notebook.
All the necessary Python packages (such as OpenCV's cv2) are already installed, so you do not need to install them separately. Please follow my demonstration in class, or ping me on Slack if you need help getting started.
To be able to work on your own images/videos, you first need to upload the data to your Google Drive. You will not have direct access to your Google Drive files inside Google Colab, because Colab runs your notebook on a separate virtual machine. To fetch your files from Google Drive you need to follow this tutorial and copy them into your Colab environment.
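One common way to do this is to mount your Drive inside the Colab virtual machine with the google.colab helper and then copy the files you need. A minimal sketch (the file path below is only an example; adjust it to where your data lives on Drive):
from google.colab import drive
# mount your Google Drive under /content/drive (Colab will ask you to authorize access)
drive.mount('/content/drive')
# example: copy a video from Drive into the Colab working directory
!cp "/content/drive/My Drive/stentors.mp4" .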