Come Good Rain: An Interpretation Utilizing Technology

by: Henri Romel, Nick Feng, Büke Erkoç, Roxanne Converset, Boriša Bo

For our dramatic performance for the CSC 2524 final project, we chose the play “Come Good Rain” by George Seremba. Based on a true story, it depicts Seremba's experiences living in Uganda under the oppressive regimes of Milton Obote and Idi Amin, and his eventual departure from the country. When he returns, he is arrested, tortured, shot, and left for dead in Uganda's Namanve forest. However, rain begins to fall, cleaning his wounds, and he survives and escapes to Canada.

We divided the performance into several sections describing different stages of George’s life, from birth through growing up to the shooting. To integrate technology, we made use of depth cameras, sound, and projectors. A depth camera captures an actress’s performance and triggers different sounds based on her movements. Furthermore, in the pivotal scene when George is shot and rain cleans his wounds, an actress lies on the floor while a projector displays a red, rain-like fluid around her that responds to her movements.

Art Direction

We presented the piece as a movement performance without dialogue, a format chosen to make it easier for any audience to connect with the story. Each section builds toward an understanding of the rain scene. To do this, we separated the performance into five major sections. Roxanne played George, and Büke played a nameless character representing an average person in George’s situation. Büke’s character is “a gear” in Ugandan society who follows the social norms, which contrasts with George’s departure. Thunder is used as a sound effect to clearly mark scene changes.

  1. Birth: This featured George and the gear lying on their backs and moving their torsos up and down in unison. A heartbeat sound fades in and out with this movement. This represents how both characters are born the same but diverge in their paths through life.
  2. Growing up and Leaving: Both characters get up and trigger various clips of a song about growing up in an African tribe. The characters also walk through the space, triggering different clips of interviews about Idi Amin. The gear is eventually shown throwing bricks, committing and accepting violence. George is shown slowly walking away, representing leaving Uganda.
  3. Jail: Eventually, George returns and is hugged by the gear, representing his return to Uganda and arrest. George stands while the gear, acting as the torturer, uses hand motions to represent raising jail cell walls around him. Mechanical sounds play during this scene. George’s body is stiff and robotic, and the gear acts as a puppet master moving George’s body for him.
  4. Shooting: Both the gear and George are shot in a forest. Gunshot sound effects play as both characters collapse to the floor.
  5. Rain: A rain sound effect starts and a red fluid simulation fades in, projected onto George’s body. The fluid is disturbed according to movement registered by the depth camera. George moves and eventually gets up. The rain stops, and George is shown running to represent escaping Uganda.

Role of Technology

To utilize technology in the performance, we used depth cameras, sound, and projectors. A Microsoft Kinect was mounted on the ceiling to get a top-down view of the scene, and a projector was mounted on the ceiling with a mirror so that it projected down onto approximately the same space. The Kinect was used both to trigger sound and to align the projected visuals, and the projector displayed the fluid simulation. For software, Max was used for triggering the sounds and Processing for computing the visuals. Technology served to communicate the meaning of the movement and to augment the experience as a whole.

Technical Details

For sound, the Microsoft Kinect depth feed was sent to a program in Max. This program converted the depth information into 3D points, and the 3D space was divided into voxels. Points were grouped by their corresponding voxels, and different sounds were triggered based on the amount of activity in each voxel. Each section of the performance had its own set of sounds, and the sound, volume, location in space, and response sensitivity were all tweaked to allow for a balanced interaction.
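
The triggering itself was implemented as a Max patch; as an illustration only, here is a minimal Java sketch of the voxel-counting and threshold logic, where the grid size, space extent, activity threshold, and playSound stub are all hypothetical values rather than the ones used in the performance.

  import java.util.Random;

  // Minimal sketch of voxel-based sound triggering (the real version was a Max patch).
  public class VoxelSoundTrigger {
      static final int GRID = 4;            // voxels per axis (hypothetical)
      static final float SPACE = 2.0f;      // metres covered per axis (hypothetical)
      static final int THRESHOLD = 50;      // points needed to trigger a voxel (hypothetical)

      // Count how many depth points fall into each voxel this frame.
      static int[][][] countVoxels(float[][] points) {
          int[][][] counts = new int[GRID][GRID][GRID];
          for (float[] p : points) {
              int x = (int) (p[0] / SPACE * GRID);
              int y = (int) (p[1] / SPACE * GRID);
              int z = (int) (p[2] / SPACE * GRID);
              if (x >= 0 && x < GRID && y >= 0 && y < GRID && z >= 0 && z < GRID) {
                  counts[x][y][z]++;
              }
          }
          return counts;
      }

      // Stub: in the performance, each voxel mapped to a section-specific sound.
      static void playSound(int x, int y, int z) {
          System.out.printf("trigger sound for voxel (%d, %d, %d)%n", x, y, z);
      }

      public static void main(String[] args) {
          // Fake one frame of 3D points in place of the Kinect depth feed.
          Random rng = new Random(42);
          float[][] points = new float[5000][3];
          for (float[] p : points) {
              p[0] = rng.nextFloat() * SPACE;
              p[1] = rng.nextFloat() * SPACE;
              p[2] = rng.nextFloat() * SPACE;
          }
          int[][][] counts = countVoxels(points);
          for (int x = 0; x < GRID; x++)
              for (int y = 0; y < GRID; y++)
                  for (int z = 0; z < GRID; z++)
                      if (counts[x][y][z] > THRESHOLD) playSound(x, y, z);
      }
  }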

For the visuals, a fluid simulation was used, implemented in Processing. Fluid simulation makes use of the Navier-Stokes equations, which govern fluid movement.
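
For an incompressible fluid of density \rho, these take the standard form:

  \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\nabla^2 u + g_x

  \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\nabla^2 v + g_y

  \frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z} = -\frac{1}{\rho}\frac{\partial p}{\partial z} + \nu\nabla^2 w + g_z

  \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0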

In these equations, p is pressure, t is time, \rho is the fluid density, and \nu is the kinematic viscosity. u, v, and w are the velocities in the x, y, and z directions, respectively. g represents gravity, though another external force could take its place. The first three equations represent conservation of momentum, incorporating terms for velocity, viscosity, external forces, and pressure. The last equation represents conservation of mass: intuitively, for any small cube of volume, the amount of fluid flowing in must match the amount flowing out.

We utilized the Lily Pad library (https://github.com/weymouth/lily-pad) in Processing, which simulates fluids in 2D. This library uses the Boundary Data Immersion Method, which modifies the fluid equations to allow solid objects to be immersed in the fluid. The equations are solved over a numerical grid using a semi-Lagrangian convection scheme; full details are given in [1] and [2]. To artificially simulate rain, at each time step 10 random positions on the field were chosen and their pressure values artificially increased by a constant amount.
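
A minimal sketch of this rain injection, assuming a plain 2D pressure array rather than Lily Pad's internal field classes (the grid size and pressure bump below are hypothetical):

  import java.util.Random;

  // Sketch of the per-frame rain injection over a plain 2D pressure field.
  public class RainInjection {
      static final int GRID_W = 128, GRID_H = 128;  // hypothetical grid size
      static final int DROPS = 10;                  // raindrops added per time step
      static final float DROP_PRESSURE = 1.0f;      // constant pressure bump (hypothetical)

      static final Random rng = new Random();

      // Each time step, pick DROPS random cells and bump their pressure.
      static void addRain(float[][] pressure) {
          for (int i = 0; i < DROPS; i++) {
              int x = rng.nextInt(GRID_W);
              int y = rng.nextInt(GRID_H);
              pressure[x][y] += DROP_PRESSURE;
          }
      }

      public static void main(String[] args) {
          float[][] pressure = new float[GRID_W][GRID_H];
          addRain(pressure);  // called once per simulation step
      }
  }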

The visual displayed is the pressure field, as shown in the image of the prototype below. White represents high pressure, black represents low pressure, and blue is neutral. The plan was to project the visuals onto the floor where an actress lies, so that she could move and have the fluid simulation respond interactively; for example, moving a hand should cause visible ripples around it. We also used a mask so that the fluid simulation was shown on the floor and not on her body.

Originally, we used keypoints from the Kinect as circular bodies in the fluid that could move and cause ripples. However, keypoint detection in the desired space proved unreliable, so we changed approaches. Using a mask of the person to interact with the fluid was not attempted, as it would have been unreliable and noisy, and displaying the mask in the performance was also cut.

As an alternative input source, we used a Max program that does the following:

  1. Convert the depth data into a 3D point cloud.
  2. Create a voxel map (collapsed to have height dimension 1) by counting the number of points in each voxel. The area for the voxel map is chosen to correspond to the projection space.
  3. For every voxel, compute the following each frame, where filter_coeff and filter_coeff2 are two different constants between 0 and 1:
    1. output = input * filter_coeff + previous_output * (1.0 - filter_coeff)
    2. output2 = input * filter_coeff2 + previous_output2 * (1.0 - filter_coeff2)
  4. For each frame, compute max(output - output2, 0) and send this to the fluid simulation.

Step #3 is used instead of a simple difference between consecutive frames to address Kinect noise. For example, the depth value for a particular pixel can be missing in a given frame, which would add noise to a raw frame difference. Step #3 smooths the input by interpolating between the current input and a history of previous outputs; taking the difference of the fast and slow filters in step #4 then highlights recent increases in activity. Finally, to make use of this information, we artificially increased the corresponding pressure values (as with the rain simulation) based on a linear scaling of the values from step #4.
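
Here is a small Java sketch of the per-voxel filtering from steps #3 and #4, with hypothetical filter coefficients (the actual values were tuned by hand in Max):

  // Per-voxel motion detector: two exponential moving averages with different
  // time constants; their clamped difference highlights recent bursts of activity.
  public class VoxelMotionFilter {
      // Hypothetical coefficients; a larger coefficient tracks the input faster.
      static final float FILTER_COEFF = 0.5f;   // fast filter
      static final float FILTER_COEFF2 = 0.1f;  // slow filter

      float output = 0, output2 = 0;  // filter state for one voxel

      // input = point count in this voxel for the current frame.
      // Returns the motion value sent to the fluid simulation.
      float step(float input) {
          output = input * FILTER_COEFF + output * (1.0f - FILTER_COEFF);
          output2 = input * FILTER_COEFF2 + output2 * (1.0f - FILTER_COEFF2);
          return Math.max(output - output2, 0.0f);
      }

      public static void main(String[] args) {
          VoxelMotionFilter f = new VoxelMotionFilter();
          // A burst of activity produces a positive motion value that decays away.
          float[] inputs = {0, 0, 40, 40, 40, 0, 0};
          for (float in : inputs) System.out.println(f.step(in));
      }
  }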

To make the visuals more appealing, we take the absolute value of the pressure values and linearly scale them to the range [0, 255] for the red colour channel, so that neutral values are black and other values are red. The visuals were faded in and out by multiplying the image colour values by a constant that changed linearly from 0 to 1: a value of 0 yields black, and a value of 1 gives the original colour.
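
A minimal sketch of this colour mapping, where MAX_PRESSURE is a hypothetical normalization constant:

  // Map a pressure value to a faded red pixel value.
  public class PressureColour {
      static final float MAX_PRESSURE = 2.0f;  // hypothetical normalization constant

      // fade runs linearly from 0 (black) to 1 (full colour) during transitions.
      static int toRed(float pressure, float fade) {
          float magnitude = Math.min(Math.abs(pressure) / MAX_PRESSURE, 1.0f);
          return Math.round(255 * magnitude * fade);  // red channel; green and blue stay 0
      }

      public static void main(String[] args) {
          System.out.println(toRed(1.0f, 1.0f));  // bright red at full fade
          System.out.println(toRed(1.0f, 0.0f));  // black while faded out
      }
  }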

The source code is available at https://github.com/herougo/VirtualRealityCourseProject.

Technical Results

For the performance, we achieved a responsive, high-quality fluid simulation. After optimizing the code, the final visuals ran at 25 frames per second. Furthermore, the sound map was fine-tuned to the scripted movement and flowed smoothly. Here are some images and video footage of the fluid simulation.

[Video: Fluid Simulation Demo.mp4]

Performance Results

Here are some images and video footage from the performance.

[Video: Section from the Performance - 720p.MP4]

Conclusion

We presented a dramatic performance of "Come Good Rain" by George Seremba, staged as a movement piece in multiple sections representing different stages of George's life, from birth to the aftermath of the rain scene. We chose this format to make it easier for everyone to identify with the performance. For technology, we used a Kinect to capture depth data and mapped sounds to different points in space. In the scene where George is shot, an actress lies on the floor while a projector displays a red fluid simulation around her. The fluid simulation and sound mapping were responsive and of high enough quality to fit the performance. The sounds helped communicate the meaning behind the movement and augmented the experience as a whole. The only technological visuals were the fluid simulation during the pivotal rain scene, which further emphasized its importance.

References

[1] Weymouth, G. D. (2015). Lily pad: Towards real-time interactive computational fluid dynamics. arXiv preprint arXiv:1510.06886.

[2] Maertens, A. P., & Weymouth, G. D. (2015). Accurate Cartesian-grid simulations of near-body flows at intermediate Reynolds numbers. Computer Methods in Applied Mechanics and Engineering, 283, 106-129.