Summary of Final Design
To achieve a virtual reality, two main components are needed: the hardware and the software. On the hardware side, the setup was designed to be simple, static, and easily assembled and disassembled. The software side encompassed the majority of this project because of its core tasks of image processing and projection. To capture and analyze a fly's response to a given stimulus, image processing was conducted using the OpenCV library in C++ to determine the trajectory of the fly within the virtual reality and orient the virtual reality accordingly.
Final Design
Hardware:
The final hardware design was a projector setup that warps images around the sphere over approximately 270°, giving the fly enough stimulus coverage to immerse it in the virtual reality. The fly is first attached to the tether with UV glue, and the tether is then constrained by the tether holder via a clamping mechanism. The micromanipulator allows fine, precise adjustment of the tether so that the fly can be placed exactly in the middle of the camera display. To capture images, the high speed camera provides a wide field of view with a default rate of 150 frames per second. By cropping the image so that it focuses directly on the Region of Interest (ROI), the wings of the fly, the total number of pixels being processed is dramatically reduced, raising the frame rate to the current 420 fps. The camera also operates at a shutter speed of 1/16,666.67 s. The zoom lens adjusts the focus of the image and zooms in on the fly to keep the ROI small and the frame rate high. The IR backlight illuminates the fly and provides a clear contrast between it and the background; this assists the background subtraction, which in turn gives the software a better ellipse fit. Lastly, the acrylic sphere, coated with "screen goo," provides a surface onto which the projected image is displayed.
Software Performance:
To create a virtual reality, a feedback controller was used; a block diagram of it can be seen below. In this feedback loop the fly itself acts as the controller: its wing motion drives updates to the projected scene, giving the fly the sensation of flying through a real environment so that neuroscientists can accurately analyze how a fly's brain works.
Image Processing:
Background subtraction is done to isolate the moving part of an image from the static part. This is important for isolating the wings from the rest of the image so that this region of interest can be analyzed. The subtraction uses a Mixture of Gaussians (MOG) algorithm, which separates the background from the foreground (the moving object) by modeling each pixel with a mixture of Gaussian distributions over a color space such as RGB, grayscale, or light intensity. The model gives the probability that a pixel belongs to the background or the foreground, which isolates the wings from the rest of the image, as shown in the image below.
Now that the wings are isolated, they are analyzed to see how they have changed from the previous frame. To do this, the angle of each wing is found to determine the speed of the wing flap and to indicate whether a flap has occurred. This can be done using an ellipse fit, a triangular fit, or a linear fit of the wings, as shown in the image below.
To identify the fly's behavior, synchronous flapping must be assumed in order to conclusively determine a change in pitch or acceleration. Synchronous flapping means that the wings flap in sync with each other, so the relative position of the flap on each side is similar. This is important because when one wing angle is greater than the other, it remains greater throughout the flap (except at the center) and can thus be attributed to turning. Similarly, when both wings increase their relative positions together, this indicates acceleration due to a greater wing beat, or range of motion. The figure above shows how this assumption is used to conclusively state that the fly is trying to turn left, and the wing angle difference can be used to say by how much (refer to Fig. 1A in the Appendix for more information).
Projection Software:
When projecting onto a sphere with two projectors, the image must be preprocessed to account for the curvature of the projection surface and split between the two projectors. Images produced by a projector are in the standard Cartesian coordinate system, so transforming the image to a spherical coordinate system beforehand is crucial to meet the functional requirements. Taking these factors into account, the figure below shows the steps through which an image must be processed before it can be projected.