Community Engagement > Expo Projects
The AR Sandbox
Contributors:
Hamid Hussain
This article covers the AR Sandbox EXPO Project, including instructions for mechanical assembly and setup, talking points, and troubleshooting information.
The Augmented Reality (AR) Sandbox is an EXPO project that showcases the use of sensors and computer vision to visualize real-world data. It uses an Xbox One Kinect sensor to map the terrain of the sandbox, creating a depth map of the surface. The computer processes this depth map into a colored image, which can be a topographical map, a depiction of earth-like terrain, or a more "artsy" visualization. The colored image is then projected back onto the sand so that the projected colors align with the physical features of the sandbox.
The Sandbox is based on the original AR Sandbox created by students at UC Davis. UC Davis' sandbox used real sand rather than Kinetic Sand and ran on their own software. Our AR Sandbox originally used the same software as UC Davis, but now runs on a more modern graphics pipeline (Python with OpenCV) and an upgraded Kinect.
The UC Davis software is slightly more advanced, with nicer contour lines, terrain effects, and water simulations, but the new software is easier to use, supports touchscreen operation, and makes it easy to add new colors using specially formatted images.
Electronics Box
The AR Sandbox is stored in two boxes. One contains the sand, while the other contains the computer tray, monitor, and projector assembly.
Every part of the sandbox should stay in these boxes for ease of transport. The only exception is the two base posts which hold the computer tray.
Sand Box
The first step of assembly is to mount the base posts to the sandbox. Mounted to the long edges of the sandbox are L-brackets, two per side. Each pair has a long bolt and a wingnut. Unscrew each bolt and remove it from the brackets. Then, slot the base posts through the gap in the brackets so that the hole at the top faces inward. Replace the bolt and screw on the wingnut on each side.
The next step is to attach the computer tray. The two legs of the tray should slot into the holes at the top of the base posts. Most of the cables will stay on the computer tray, but be sure to separate out the two wrapped sets of cables and the large power cable.
Next, slide the projector assembly into the two angled T-Slots at the edge of the computer tray. The projector should just poke through the appropriately sized hole in the tray. Ensure the lens is clear and the zoom wheel is set so the lens is at its maximum protrusion.
Then, locate the wrapped cable set containing the HDMI cable, power cable, and serial cable (which looks similar to VGA). Plug the HDMI cable into the HDMI 1 port of the projector. Plug the serial cable into the port labeled "RS-232", and ensure that both screws on the connector are properly fastened. Plug in the projector power.
Next, place the display near the base of the sandbox (shown placed on a stool in the photos as an example). Then, locate the wrapped cable set containing the USB-C cable and mini HDMI cable. Plug the mini HDMI cable into the left side of the display. Plug the USB-C cable into the top port of the display.
Finally, plug the large power cable into a nearby outlet.
To start software setup, turn on the computer using the power button. Make sure that all other connections into the computer are secure, and that there are no extraneous unplugged connectors. Then, turn on the display using the button on the bottom of the right edge of the monitor.
Once the display has turned on and the computer boots into Windows, the AR Sandbox software should start automatically within a few seconds. If the software closes during demonstration, it can be re-opened using the "Start AR Sandbox" shortcut on the desktop. The software will open the control panel interface on the computer's primary monitor (which should be set to the touchscreen in Windows settings) and will separately display the full color image on the secondary monitor (monitor number 2 in settings, should be the projector) if it is connected.
The control interface should look similar to the image below.
When first presented with the control panel, tap the "ON" button at the left side to activate the projector. After the projector turns on, the software should automatically detect it and present a video feed. The "OFF" button turns the projector off, and should be pressed after a demonstration is finished. The projector should not be disconnected from power until it has fully turned off (signaled by two beeps).
The right side of the interface shows a preview of the Kinect video feed, either with cropping and min-max helpers for adjustment, or as displayed by the projector for viewing. Behavior of the preview and projection can be adjusted using the three widgets to the left of the preview.
The Basic Adjustment tab allows adjustment of the distance range the Kinect sensor detects, the amount of blur (smoothing) on the final video, which of the two feeds is shown on the preview, and which color map is used.
The distance adjustments are measured from the sensor, which can be a bit counterintuitive: "Maximum Distance" controls the lowest point of the sandbox, while "Minimum Distance" controls the highest. These values should be adjusted so that the maximum distance sits at the base of the sandbox and the minimum distance sits around halfway up the sandbox. You can place your hand or another object into the projection to fine-tune the distances, or set them differently (e.g. to bring out more detail in flatter terrain).
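The inverted distance mapping can be sketched in code. This is a hypothetical illustration, not the project's actual source; the distance values, units (millimetres), and function name are all assumptions.

```python
import numpy as np

# Depth is measured FROM the sensor, so larger values mean lower terrain.
MIN_DIST = 800   # assumed: highest sand surface the sensor should register
MAX_DIST = 1200  # assumed: base of the sandbox

def depth_to_height(depth_mm: np.ndarray) -> np.ndarray:
    """Convert raw depth (mm from sensor) to 0-255 height values."""
    norm = (MAX_DIST - depth_mm.astype(np.float32)) / (MAX_DIST - MIN_DIST)
    return (np.clip(norm, 0.0, 1.0) * 255).astype(np.uint8)

# A pixel at the sandbox base maps to 0; one at the minimum distance maps to 255.
frame = np.array([[1200, 1000, 800]], dtype=np.uint16)
print(depth_to_height(frame))  # [[  0 127 255]]
```

Anything closer than the minimum or further than the maximum is clipped, which is why objects held above the sand saturate at the "highest" color.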
The "Blur" slider controls the smoothing of the image. It is set by default around halfway, but can be changed for various scenarios. It serves as a less computationally expensive noise reduction algorithm, so turning it down can result in "graininess" in the projected image, while turning it up can result in the loss of fine details.
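A minimal sketch of the kind of cheap smoothing the Blur slider performs, using a simple box (mean) filter; the actual software's filter and kernel size are assumptions here.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Cheap k-by-k mean filter: each pixel becomes the average of its neighborhood."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

# A single noisy "spike" pixel gets averaged away into its neighborhood.
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 90
print(box_blur(noisy)[2, 2])  # 10
```

This is why lowering the blur brings back sensor "graininess": each stray noise pixel is no longer diluted across its neighbors.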
The preview choice buttons allow you to switch between the two preview types mentioned before. "No Crop/Color" makes the preview greyscale, with green and blue markers to show minimum and maximum heights, and a red border to show the crop area. "Crop + Color" shows the same image as is projected onto the sand, allowing you to see the results of the crop and show color maps.
The color map chooser shows all the files in the color map folder, and allows you to choose how different depth values from the Kinect sensor are colored in for the projection. Creating these color maps will be covered later.
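Conceptually, a color map is a 256-entry lookup table indexed by height. The sketch below is hypothetical (the gradient, names, and BGR channel order, which matches OpenCV's convention, are assumptions), but it shows the core lookup operation.

```python
import numpy as np

# Build an example 256-entry lookup table: blue at the lowest heights
# fading into red at the highest (channel order is BGR, as OpenCV uses).
cmap = np.zeros((256, 3), dtype=np.uint8)
cmap[:, 0] = 255 - np.arange(256)  # blue fades as height rises
cmap[:, 2] = np.arange(256)        # red rises with height

def colorize(height: np.ndarray, cmap: np.ndarray) -> np.ndarray:
    """Look up each 0-255 height value in the 256x3 color map."""
    return cmap[height]

heights = np.array([[0, 255]], dtype=np.uint8)
colored = colorize(heights, cmap)  # lowest pixel -> pure blue, highest -> pure red
```

Because the table has exactly 256 rows, every possible 8-bit height value maps to exactly one color, which is why the color map image format described later is exactly 256 pixels wide.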
The Cropping and Keystone tabs have similar interfaces, allowing you to transform the captured Kinect data and the projected image respectively. Both work by controlling the four corners of the crop or keystone frame. In practice, you should start with keystoning, using the sliders to align the four corners of the projection with the edges of the sandbox, or with the alignment guides if available. The "X" sliders control movement along the long edge of the box, while the "Y" sliders control movement along the short edge. If the labels are unclear, slightly adjusting a slider will reveal its effect on the image.
The process for cropping is similar, and should happen next. For cropping, direct attention to the image preview on the controls display, which should be set to "No Crop/Color" mode. Using the sliders, adjust the red outline on the preview until it matches the points that you keystoned to. If using the alignment guides, their retro-reflective tape should always appear as a bright green cross on the preview. In this case, line up the corners with the centers of the crosses.
After adjusting min-max sliders, blur, colormaps, keystoning, and crop, you should be ready to present the sandbox.
When talking about and/or presenting this EXPO project, you can mention the facts outlined here.
The AR Sandbox measures the height of terrain and sand sculptures and projects this data as a colorful image.
The AR Sandbox uses an XBOX Kinect sensor array, specifically the infrared camera, to detect depth.
This sensor array is also used for games such as Just Dance, because of its ability to track body motion.
Versions of the Kinect, and similar sensors, are used in professional industrial robotics.
The sensor measures depth by shining infrared light onto the sand; areas that appear darker to the infrared camera are further away.
The AR Sandbox is based on a similar project created by students at UC Davis.
Many other instances of this project have been made. Some viewers may have seen these at museums, for example.
Our sandbox is quite old, and used to use the same software as UC Davis.
The sandbox was reprogrammed and upgraded after being abandoned due to software issues.
The new code is simpler, and lacks features such as water simulation.
The AR Sandbox can be programmed to do many other things due to its array of sensors and visual feedback.
A Gravity Field simulation using an elastic blanket and spherical masses.
A virtual air hockey table using tracked paddles and physics simulation.
The AR Sandbox should be used with safety precautions in place.
Viewers should not look up into the projector, as it is a high-power short-throw projector.
Viewers should take care not to scatter sand outside of the box.
Viewers may use tools instead of their hands to interact with the exhibit, as the sand can stick.
Viewers may not touch the electronics, with the exception of the touch screen.
The AR Sandbox has the special ability to be easily customized through custom color maps. These color maps are specially formatted images which define how the program converts depth information into colors. A color map image is an 8-bit PNG, EXACTLY 256 pixels wide and one pixel tall. You can create these maps in most raster image editors, such as Photoshop, Photopea, Paint.NET, GIMP, or even Microsoft Paint. Below is a diagram of how a simple color map may work.
Color maps can go further than just simple gradients. Some of the current color maps use color blocks to show different height ranges as "steps", or multiple blended gradients to emulate terrain, or blots of color to emulate topographical contour lines, or even single-pixel color blocks to emulate lava using sensor noise. Some example color maps are shown below.
The Rainbow color map, using color blocking and contour line emulation.
The Terrain color map, displaying water, beach, grass, mountain, and snowy peaks.
The Magma color map, with a deep red gradient and lava at the deepest depths.
A red-white-blue color block map, for using the sand to draw various flags.
To add your own color map to the AR Sandbox project, simply take your exported PNG file and place it in the "ARTable\img\cmap" folder. The ARTable folder should be on the desktop. The changes should take effect the next time you open the program.