Feel free to experiment with multiple videos for motion tracking. Additional datasets are available at https://motchallenge.net/data/MOT15/ (or optionally find your own videos, e.g. on YouTube).
For the assignment and your experiments, pick at least 2 different videos with a static camera and multiple objects present in the scene, moving along different paths, crossing or moving close to each other.
Use background subtraction methods to properly segment the moving objects from the background. Use one of the videos with a static camera.
Use the following approaches (see the sketch after this list):
Accumulated weighted image
Mixture of Gaussians (MOG2)
Also implement your own accumulator matrix based on your own idea (the math can be simple: moving average, median, ...)
If your idea requires some preprocessing of each frame, you are welcome to experiment; just be sure to document each step in your final submission.
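A minimal sketch of the first two approaches is shown below. The video path "input.mp4", the learning rate of 0.02 and the difference threshold of 25 are illustrative assumptions, not prescribed values; the running average maintained for the first approach is also one simple example of a custom accumulator.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("input.mp4");                // illustrative path
    if (!cap.isOpened()) return -1;

    cv::Mat frame, gray, accum;                       // accum holds the running background estimate
    auto mog2 = cv::createBackgroundSubtractorMOG2(); // MOG2 background model
    cv::Mat fgMaskAccum, fgMaskMog;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);

        // 1) Accumulated weighted image: running average of past frames.
        if (accum.empty()) gray.copyTo(accum);
        cv::accumulateWeighted(gray, accum, 0.02);    // alpha = 0.02, chosen arbitrarily here

        // Foreground = pixels that differ enough from the accumulated background.
        cv::Mat background, diff;
        accum.convertTo(background, CV_8U);
        cv::absdiff(gray, accum, diff);
        diff.convertTo(diff, CV_8U);
        cv::threshold(diff, fgMaskAccum, 25, 255, cv::THRESH_BINARY);

        // 2) Mixture of Gaussians (MOG2).
        mog2->apply(frame, fgMaskMog);

        cv::imshow("background (accumulated)", background);
        cv::imshow("foreground (accumulated)", fgMaskAccum);
        cv::imshow("foreground (MOG2)", fgMaskMog);
        if (cv::waitKey(30) == 27) break;             // Esc to quit
    }
    return 0;
}
```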
At the end, visualize the background image and, for a given frame, also the foreground moving objects of interest.
Visually compare the approaches.
Visualize trajectories of moving objects.
Optional task: Identify each object with a bounding box and count the objects (for example, when they cross a drawn line).
Use the following functions: cv::goodFeaturesToTrack, cv::calcOpticalFlowPyrLK
Consider the following issues and try to solve them; document how you proceeded to solve each one (a sketch addressing them follows this list):
Objects do not need to be present in the scene from the start, and can arrive later -> How do you detect those objects and start tracking them?
Depending on the feature detection method, feature points can be duplicated, and tracking the same points repeatedly is computationally expensive -> How do you make sure not to detect or track duplicate features?
Features might no longer be good for tracking (they can be stationary, or the object can leave the scene) -> How do you pick which features are worth tracking?
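Below is a minimal sketch of one possible way to address these issues with goodFeaturesToTrack and calcOpticalFlowPyrLK: features are re-detected periodically so late-arriving objects are picked up, new detections closer than a pixel radius to already-tracked points are discarded as duplicates, and points that are lost or barely move are pruned. The video path, re-detection interval and thresholds are illustrative assumptions; passing a mask to goodFeaturesToTrack is an alternative way to avoid duplicate detections.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("input.mp4");        // illustrative path
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> points;
    cv::Mat trajectories;                     // canvas where point paths accumulate
    int frameIdx = 0;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (trajectories.empty()) trajectories = cv::Mat::zeros(frame.size(), frame.type());

        // Re-detect periodically so objects entering the scene later get features too.
        if (points.empty() || frameIdx % 30 == 0) {
            std::vector<cv::Point2f> fresh;
            cv::goodFeaturesToTrack(gray, fresh, 200, 0.01, 10); // minDistance limits near-duplicates
            for (const auto& p : fresh) {
                bool duplicate = false;
                for (const auto& q : points)                     // skip points already being tracked
                    if (cv::norm(p - q) < 10) { duplicate = true; break; }
                if (!duplicate) points.push_back(p);
            }
        }

        if (!prevGray.empty() && !points.empty()) {
            std::vector<cv::Point2f> next;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, points, next, status, err);

            std::vector<cv::Point2f> kept;
            for (size_t i = 0; i < next.size(); ++i) {
                if (!status[i]) continue;                          // lost (e.g. object left the scene)
                if (cv::norm(next[i] - points[i]) < 0.5) continue; // drop (nearly) stationary points
                cv::line(trajectories, points[i], next[i], cv::Scalar(0, 255, 0), 2);
                kept.push_back(next[i]);
            }
            points = kept;
        }

        cv::Mat vis;
        cv::add(frame, trajectories, vis);     // overlay accumulated trajectories on the frame
        cv::imshow("trajectories", vis);
        if (cv::waitKey(30) == 27) break;

        prevGray = gray.clone();
        ++frameIdx;
    }
    return 0;
}
```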
Identify moving objects in the video and draw a green rectangle around each of them.
If necessary, use a downsampled video for this task for easier processing.
Use the following function: cv::calcOpticalFlowFarneback
See OpenCV's tutorial on optical flow.
Mark each moving object with a bounding box. Find a scene in your dataset where objects move close to each other or pass through each other. Can you minimize the issue of detecting these objects as one? Document your approach for distinguishing whether two objects are colliding while still being able to draw two separate bounding boxes (or one bounding box in a different color) that separate them.
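One possible pipeline is sketched below under illustrative parameter choices (Farneback parameters, magnitude threshold, morphology kernel, minimum blob area): threshold the flow magnitude into a motion mask, clean it with morphology, and draw a bounding box per connected component.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("input.mp4");         // illustrative path
    cv::Mat frame, gray, prevGray;

    while (cap.read(frame)) {
        // Optionally downsample for faster processing.
        cv::resize(frame, frame, cv::Size(), 0.5, 0.5);
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (!prevGray.empty()) {
            cv::Mat flow;
            cv::calcOpticalFlowFarneback(prevGray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

            // Magnitude of the flow vectors -> binary motion mask.
            std::vector<cv::Mat> xy(2);
            cv::split(flow, xy);
            cv::Mat mag, ang, mask;
            cv::cartToPolar(xy[0], xy[1], mag, ang);
            cv::threshold(mag, mask, 1.0, 255, cv::THRESH_BINARY);
            mask.convertTo(mask, CV_8U);

            // Close small gaps so one object yields one blob where possible.
            cv::morphologyEx(mask, mask, cv::MORPH_CLOSE,
                             cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7)));

            // Bounding box per connected component = per moving object.
            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
            for (const auto& c : contours) {
                if (cv::contourArea(c) < 100) continue;   // ignore noise blobs
                cv::rectangle(frame, cv::boundingRect(c), cv::Scalar(0, 255, 0), 2);
            }
        }

        cv::imshow("moving objects", frame);
        if (cv::waitKey(30) == 27) break;
        prevGray = gray.clone();
    }
    return 0;
}
```

When two objects overlap, their blobs merge into one. One possible idea is to also look at the flow direction (the ang channel above): objects passing each other usually move in different directions, so a blob containing two dominant directions can be split, or drawn with a differently colored box to signal a collision.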
Remember to copy 'opencv_videoio_ffmpeg451_64.dll' to your output directory so that video I/O works.
calcOpticalFlowPyrLK()
calcOpticalFlowFarneback()
goodFeaturesToTrack()
createBackgroundSubtractorMOG2()
accumulateWeighted()