For our dramatic performance for the CSC 2524 final project, we chose the play "Come Good Rain" by George Seremba. This work, based on a true story, depicts the experiences of George Seremba, a man who lived in Uganda during the oppressive regimes of Milton Obote and Idi Amin and eventually left the country. When he returns to his country, he is arrested and tortured. He is eventually shot and left for dead in Uganda's Namanve forest. However, it starts to rain, which cleans his wounds, and he survives and escapes to Canada.
For the performance, we divided the play into several sections describing the different stages of George's life, from birth, through growing up, to the shooting. To integrate technology, we made use of depth cameras, sound, and projectors. A depth camera captured an actress's performance and triggered different sounds based on her movements. Furthermore, in the pivotal scene where George is shot and rain cleans his wounds, an actress lies on the floor while a projector displays a red fluid around her, which responds to her movements.
We decided to present the story as a movement piece without dialogue, a format we chose to make it easier for everyone to connect with the performance. Each section builds toward an understanding of the rain scene. To do this, we separated the performance into 5 major sections. Roxanne played George, and Büke played a nameless character representing an average person in George's situation. Büke's character represents "a gear" in Ugandan society who follows the social norms, contrasting with how George leaves. Thunder was used as a sound effect to clearly mark scene changes.
To bring technology into the performance, we used depth cameras, sound, and projectors. A Microsoft Kinect was mounted on the ceiling to get a top-down view of the scene. A projector was also mounted on the ceiling, with a mirror added so that it could project down onto approximately the same space. The Kinect was used both for sound and for aligning the projector visuals, and the projector displayed the visual fluid simulation. For software, Max and Processing were used for triggering the sounds and computing the visuals, respectively. Technology served as a way of communicating the meaning of the movement and augmenting the experience as a whole.
For sound, the Microsoft Kinect depth feed was sent to a program in Max, which converted the depth information into 3D points; the 3D space was divided into voxels. The 3D points were grouped by their corresponding voxels, and different sounds were triggered based on the amount of activity in each voxel. Furthermore, each section of the performance had its own set of sounds. The sound, volume, location in space, and response sensitivity were all tweaked to allow for a balanced interaction.
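As an illustration, here is a minimal sketch of the voxel-binning idea in Processing-style Java. The actual implementation was a Max patch, so the grid size, the activity threshold, and the triggerSound helper are all illustrative assumptions rather than the real patch:

```java
// Illustrative sketch of the voxel-activity idea (the real version was a Max patch).
// Assumes `points` holds the Kinect depth feed converted to 3D points, with
// coordinates in [0, ROOM) metres.
final int GRID = 8;                  // 8 x 8 x 8 voxels over the capture volume
final float ROOM = 4.0f;             // assumed edge length of the tracked space (m)
float[] prevCounts = new float[GRID * GRID * GRID];

void updateVoxels(PVector[] points) {
  int[] counts = new int[GRID * GRID * GRID];
  for (PVector p : points) {
    int vx = constrain((int)(p.x / ROOM * GRID), 0, GRID - 1);
    int vy = constrain((int)(p.y / ROOM * GRID), 0, GRID - 1);
    int vz = constrain((int)(p.z / ROOM * GRID), 0, GRID - 1);
    counts[(vz * GRID + vy) * GRID + vx]++;
  }
  for (int i = 0; i < counts.length; i++) {
    // Activity = change in voxel occupancy since the last frame.
    float delta = abs(counts[i] - prevCounts[i]);
    if (delta > 30) triggerSound(i);  // illustrative threshold
    prevCounts[i] = counts[i];
  }
}

void triggerSound(int voxel) {
  // Placeholder: the real version triggered a sample assigned to this voxel,
  // drawn from the sound set for the current section of the performance.
}
```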
For the visuals, a fluid simulation was built with the Processing software. Fluid simulation makes use of the Navier-Stokes equations, which govern fluid movement.
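For incompressible flow in three dimensions, with gravity acting along the z axis as the external force, they can be written component-wise as:

```latex
\begin{aligned}
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z}
  &= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\,\nabla^2 u \\
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z}
  &= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\,\nabla^2 v \\
\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z}
  &= -\frac{1}{\rho}\frac{\partial p}{\partial z} + \nu\,\nabla^2 w + g \\
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} &= 0
\end{aligned}
```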
In these equations, p is pressure, ρ is the fluid density, t is time, and ν is the kinematic viscosity. u, v, and w are the velocity components in the x, y, and z directions, respectively. g represents gravity, though another external force could take its place. The first three equations express conservation of momentum, incorporating terms for velocity, viscosity, external forces, and pressure. The last equation expresses conservation of mass: intuitively, for any small cube of volume, the amount of fluid flowing in must match the amount flowing out.
We utilized the Lily Pad library (https://github.com/weymouth/lily-pad) in Processing, which simulates fluids in 2D. This library uses the Boundary Data Immersion Method, which adjusts the fluid equations so that solid objects can be immersed in the fluid. The equations are solved over a numerical grid using semi-Lagrangian convection. Full details are given in [1] and [2]. To simulate rain artificially, at each time step we chose 10 random positions on the field and increased their pressure values by a constant amount.
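Here is a minimal sketch of the rain perturbation, assuming a plain 2D pressure array. In the actual project, the pressure values live inside Lily Pad's field objects, so the names and constants below are illustrative:

```java
// Illustrative rain perturbation (field names are placeholders; the project
// modifies Lily Pad's pressure field directly).
final int NX = 128, NY = 128;        // assumed simulation grid size
final float RAIN_BUMP = 0.5f;        // constant pressure increase per drop
float[][] pressure = new float[NX][NY];

void addRain() {
  for (int i = 0; i < 10; i++) {     // 10 random "raindrops" per time step
    int x = (int) random(NX);
    int y = (int) random(NY);
    pressure[x][y] += RAIN_BUMP;     // a local pressure spike reads as a drop
  }
}
```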
The visual we display is the pressure field, as shown in the image of the prototype below. White represents high pressure, black represents low pressure, and blue is neutral. The original plan was to project the visuals onto the floor, where an actress lies down; she should be able to move and have the fluid simulation respond interactively to her movements. For example, moving her hands should cause visual ripples around them. We also used a mask so that the fluid simulation was shown on the floor and not on her body.
Originally, we used keypoints from the Kinect to act as circular bodies in the fluid, which could move and cause ripples. However, we discovered that keypoint detection in the desired space was unreliable, so we changed approaches. Using a mask of the person to interact with the fluid was not attempted, as it would have been unreliable and noisy, and displaying the mask in the performance was also cut.
As an alternative input source, we used a Max program which does the following:

1. Capture the depth frame from the Kinect.
2. Convert it to a grid of values aligned with the fluid simulation.
3. Smooth each grid value over time by interpolating between the current input and a history of previous outputs.
4. Compute the per-cell change between consecutive smoothed frames as a measure of motion.
Step #3 is used instead of a simple difference between consecutive frames to address the noise of the Kinect. For example, depth information for a particular frame and pixel can be missing, which would produce spurious motion. Step #3 addresses this by applying a smoothing operation to the input, interpolating between the current input and a history of previous outputs. Finally, to make use of this information, we artificially increased the corresponding pressure values (similar to the rain simulation) based on a linear scaling of the values from step #4.
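The sketch below illustrates steps #3 and #4 together with the pressure injection. The smoothing itself ran in Max, so all names and constants here are placeholders:

```java
// Illustrative smoothing and injection (the real smoothing ran in Max;
// all constants and names are placeholders).
final int NX = 128, NY = 128;              // same illustrative grid as above
final float ALPHA = 0.2f;                  // weight given to the new input
final float GAIN  = 0.01f;                 // linear scaling into pressure units
float[][] pressure = new float[NX][NY];    // same illustrative pressure field
float[][] smoothed = new float[NX][NY];    // history of previous outputs

void applyMotion(float[][] depthIn) {
  for (int x = 0; x < NX; x++) {
    for (int y = 0; y < NY; y++) {
      float prev = smoothed[x][y];
      // Step #3: interpolate between the current input and the previous output.
      smoothed[x][y] = lerp(prev, depthIn[x][y], ALPHA);
      // Step #4: the per-cell change of the smoothed signal approximates motion.
      float motion = abs(smoothed[x][y] - prev);
      // Inject the motion as a scaled pressure increase, like the raindrops.
      pressure[x][y] += GAIN * motion;
    }
  }
}
```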
To make the visuals more appealing, we take the absolute value of the pressure, then linearly scale it to the range [0, 255] for the red colour channel. That way, neutral values appear black and other values appear red. The visuals were faded in and out by multiplying the image colour values by a constant that changed linearly from 0 to 1: a value of 0 yielded black, and a value of 1 gave the original colour.
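A minimal sketch of this mapping, with placeholder names:

```java
// Illustrative pressure-to-colour mapping with a fade factor.
float fade = 1.0f;   // ramps linearly from 0 (black) to 1 (full colour)

color pressureColor(float p, float maxAbs) {
  // Absolute value, then linear scaling into [0, 255] for the red channel;
  // neutral (zero) pressure therefore stays black.
  float r = constrain(abs(p) / maxAbs * 255.0f, 0.0f, 255.0f);
  return color(r * fade, 0.0f, 0.0f);
}
```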
Here is a link to the source code: https://github.com/herougo/VirtualRealityCourseProject.
For the performance, we achieved a responsive, high-quality fluid simulation. After optimizing the code, the final visuals ran at 25 frames per second. Furthermore, the sound map was fine-tuned to the scripted movement and flowed smoothly. Here are some images and video footage of the fluid simulation.
Here are some images and video footage from the performance.
We presented a dramatic performance of "Come Good Rain" by George Seremba as a movement piece, divided into sections representing the stages of George's life from birth to the aftermath of the rain scene. We chose this format to make it easier for everyone to identify with the performance. On the technology side, we used a Kinect to capture depth data and mapped sounds to different points in space. Furthermore, in the scene where George is shot, an actress lies on the floor while a projector displays a red fluid simulation around her. The fluid simulation and sound mapping were responsive and of high enough quality for the performance. The sounds helped communicate the meaning behind the movement and augment the experience as a whole, while the only technological visuals, the fluid simulation during the pivotal rain scene, further emphasized that scene's importance.
[1] Weymouth, G. D. (2015). Lily pad: Towards real-time interactive computational fluid dynamics. arXiv preprint arXiv:1510.06886.
[2] Maertens, A. P., & Weymouth, G. D. (2015). Accurate Cartesian-grid simulations of near-body flows at intermediate Reynolds numbers. Computer Methods in Applied Mechanics and Engineering, 283, 106-129.