Tutorial (adapted from the Help screen in the Version 2.0 application)
(For a more visual tutorial, check out these two Videos.)
WHAT IS A SYNTHETIC CAMERA?
If you open the aperture on a single-lens reflex (SLR) camera to its widest setting (lowest F-number), images taken with the camera will have a shallow depth of field. Cell phones have a small aperture, hence a large depth of field. In other words, more of the scene is in focus at once.
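The depth-of-field claim can be made concrete with the standard thin-lens approximation, valid when the subject is well inside the hyperfocal distance. The formula is textbook optics; the camera numbers below are illustrative guesses, not specs from the app:

```python
def total_dof_mm(f_mm, N, coc_mm, subject_mm):
    """Approximate total depth of field: DoF ~ 2 s^2 N c / f^2
    (thin-lens approximation; s is subject distance, N the F-number,
    c the acceptable circle of confusion, f the focal length)."""
    return 2 * subject_mm**2 * N * coc_mm / f_mm**2

# Illustrative numbers for a subject 2 m away (not measured from any device):
phone = total_dof_mm(f_mm=4.0, N=2.4, coc_mm=0.002, subject_mm=2000)   # ~2.4 m
slr = total_dof_mm(f_mm=50.0, N=1.8, coc_mm=0.03, subject_mm=2000)     # ~0.17 m
```

Even though the phone's pixels demand a much smaller circle of confusion, its tiny focal length wins by a wide margin, which is why nearly everything stays in focus.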
However, if you record video while moving the phone slightly, and you add the video frames together, you can simulate the large aperture of an SLR. If you shift the frames digitally before adding them, so that one scene feature stays in the same place in all frames, the blended photograph will seem to be "focused" on that feature. This app lets you do that.
In addition to their shallow depth of field, large apertures gather more light. This is one reason SLRs take better pictures in low light than cell phones. If you add many frames from a cell phone camera together, you can match the light-gathering ability of an SLR. This app lets you do that as well.
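The shift-and-add idea in the two paragraphs above can be sketched in a few lines of numpy. This is a toy model, not the app's code: camera motion is reduced to whole-pixel translations, and `np.roll` stands in for real image warping:

```python
import numpy as np

# Toy model: a static scene viewed through a slightly moving camera.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifts = [(0, 0), (1, 2), (-2, 1), (3, -1)]          # per-frame camera motion

# Each video frame sees the scene translated by that frame's motion.
frames = [np.roll(scene, s, axis=(0, 1)) for s in shifts]

# Averaging without alignment blurs everything, like a shaky long exposure.
blurred = np.mean(frames, axis=0)

# Shifting each frame back before averaging keeps the tracked feature sharp.
aligned = [np.roll(f, (-s[0], -s[1]), axis=(0, 1)) for f, s in zip(frames, shifts)]
synthetic = np.mean(aligned, axis=0)
```

In a real scene, objects at other depths shift by different amounts than the tracked feature, so they do not line up after the correction; that residual misalignment is exactly the synthetic defocus blur.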
SynthCam lets you do other things, like see through bushes, remove moving people or cars, and blur flowing water, but let's not get ahead of ourselves.
INSTRUCTIONS FOR USE
I assume the app is in its initial configuration, with a "1" icon displayed in the second position from left on the toolbar. First, find a scene that isn't moving. Hold the phone upright and point the white square at a foreground object with some detail on it, but not a fine-grain texture. Faces work well, but newsprint is too fine.
Press Record (red button) and slowly move the phone left, right, up, and down, while keeping it aimed straight ahead. Don't tilt the camera, don't move it forward or backward, and don't move too far; an inch in each direction should suffice. The white square should "track" your object, i.e. stay fixed on it, while leaving a trail of red dots behind. Try to evenly cover the area inside the orange circle with these dots.
After 10-15 seconds, press Pause (black bars). You should see a "synthetic aperture photograph", which seems focused on your chosen object and has a shallow depth of field. The trail of dots represents your aperture function, which photographers call the "bokeh". It is displayed as an inset in one corner of the screen.
If you hate the photograph, press Reset (refresh icon at left side of toolbar) and try again. If you like it, press Save (inbox icon). It will go into the Camera Roll in your Photos app. You can also press Record and keep adding more frames to the same photograph.
To synthetically focus on an object that is not in the middle of your field of view, touch anywhere on the screen or drag the white square with a swipe gesture. This will move the square and turn it yellow. It will also cause the iPhone's camera to focus and meter on the selected point rather than the whole image. To focus on a small object without the tracker becoming confused by the background, you can shrink the square with a pinch gesture. You want everything inside the square to be the same distance from the camera. Press Reset twice to reset the location and size of the square.
Once you get the hang of it, here are some other things to try. Instead of aiming the square at a foreground object, aim at the background and see what happens to objects in the foreground. But don't aim at the sky; there are no features there the app can track. Try aiming at an object beyond some sparse bushes, then move enough to blur out the bushes. Can you see through them? Are there people walking through your scene? Keep shooting; you'll blur them out!
Another cool effect is to slightly blur moving objects. Is there a fountain, stream, or waterfall in your scene? Record it for 5 seconds with the square aimed at a nearby rock, and the water will change to a milky haze. This effect is hard to obtain even with an SLR, because a 5-second exposure will normally saturate the sensor.
Finally, if you're in a dark room, aim the square at a stationary person or object and record for 2-3 seconds. Although each video frame is noisy, the average of 40-60 frames looks pretty good. Meanwhile, the app is compensating for slight motions of the person and the inevitable shaking of your hand. To make a picture this noise-free with an SLR, you would need a tripod.
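The low-light claim follows from basic statistics: averaging N frames of independent sensor noise reduces the noise by roughly the square root of N. A small numpy simulation (with a made-up noise level of 0.1) shows the effect for the 50-or-so frames mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((32, 32), 0.5)                         # a dim, static subject
sigma = 0.1                                            # per-frame sensor noise
frames = truth + rng.normal(0.0, sigma, (50, 32, 32))  # ~50 frames of video

average = frames.mean(axis=0)
noise_single = (frames[0] - truth).std()    # close to 0.1
noise_average = (average - truth).std()     # close to 0.1 / sqrt(50), ~0.014
```

This square-root law is why 40-60 frames produce a visibly cleaner picture than any single frame, provided the frames are aligned first so the averaging doesn't also blur the subject.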
Didn't work for you? First, make sure your scene is static. If it's moving you'll get some interesting effects, but they're probably not what you want. Also make sure the lighting isn't changing during your recording. For example, if the sun becomes hidden by a cloud, tracking will almost certainly fail immediately. Also be careful of subtle lighting changes you yourself cause as you move, for example by standing between the scene and the light.
Second, make sure the white square is aimed at a trackable feature. What are good features to track? Corners are good. Edges are bad, because the square will slide along the edge. Texture is good if it's coarse, but bad if it's fine. Clutter is good if it's at a constant distance from the camera.
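These rules of thumb have a standard mathematical form: the Shi-Tomasi "good features to track" score, which is the smaller eigenvalue of the patch's gradient structure tensor. The sketch below is a generic illustration of that idea, not the app's tracker:

```python
import numpy as np

def trackability(patch):
    """Shi-Tomasi style score: the smaller eigenvalue of the gradient
    structure tensor. It is high only when the patch has strong gradients
    in two different directions at once, i.e. a corner or coarse texture."""
    gy, gx = np.gradient(patch.astype(float))
    M = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    return np.linalg.eigvalsh(M)[0]          # eigenvalues sorted ascending

flat = np.zeros((16, 16))                        # blank wall: no gradients
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0     # straight edge: one direction
corner = np.zeros((16, 16)); corner[8:, 8:] = 1.0  # corner: two directions
```

A straight edge scores zero because all its gradients point one way, which is exactly why the square "slides along the edge": motion parallel to the edge is invisible to the tracker.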
Third, make sure you don't rotate the phone during the recording. Making a synthetic aperture photograph isn't like making a panorama. You want to move the phone, not rotate it. You also want to move the phone only parallel to its screen; avoid drifting forward or backward (closer or farther from your subject). Finally, move the phone slowly; don't cross the diameter of the orange circle in less than 2 seconds. This takes practice.
If your chosen object contains stripes or other repeating elements, or it doesn't contain enough visible detail (like a blank wall), the tracker can get confused. You'll know when this happens because the white square will get left behind as you move, or it might jump around wildly. The app will notice this problem, and it will fill the square with whatever feature you were pointing at when you pressed Record. Move the camera back to this feature and tracking will resume.
Lastly, if your composition requires you to aim the camera upwards or downwards, i.e. not pointed at the horizon, that's OK. As always, move the phone parallel to its screen. In this case the (synthetic) plane of focus will be another plane, somewhere out in the scene, also parallel to the screen. Thus, if you point the camera upwards to fit a tall wall into your field of view, only part of the wall will be in focus.
Suppose you wanted the entire wall to be in focus, even though your camera was angled relative to the wall. Then what you need is a tilted focal plane. If you had a bellows-style view camera or a tilt-shift lens, you could accomplish this by tilting the lens relative to the sensor. In SynthCam you can obtain the same effect by specifying multiple focus points.
If you press on the "1" icon, it will cycle to "2", "3", "4", and back to "1". At the same time you'll notice different numbers of white focus squares appearing on the screen. The solid white square shows where the camera itself is metering and its lens is focusing. The open white reticles show other points the app can track and keep sharp. You can move or resize these points with drag and pinch gestures. You can also move all of them at once, by dragging the small circle in the center.
The idea is to move each square or reticle until it sits over a feature you want to keep sharp, then press Record and move the phone slowly as before. If any square loses tracking it will turn red; move the phone around a bit, and it will probably regain tracking. If you move the phone too far and a square wanders offscreen, everything turns red. Just move back onscreen.
Since there are multiple squares to keep track of, multi-point focusing takes some practice. The best strategy is to use the fewest squares a particular scene needs. For people or tall objects, 2 squares may suffice. For walls, 3 squares are usually enough. If the wall extends beyond the field of view in all directions, use 4 squares. Try to space the squares out, placing them near the edges of the screen if you can. And whatever you do, don't place 2 squares atop one another or 3 squares in a line; horrible things will happen.
How does multi-point focusing work? With one focus square, the app translates each video frame to keep the features in that square aligned (and therefore sharp). With 2 squares it uses a similarity transform to keep the features in both squares aligned. With 3 squares it uses an affine transformation, and with 4 squares it uses a homography. Don't know what these things are? Start with Wikipedia.
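The counting behind this is degrees of freedom: a translation has 2 parameters, a similarity 4, an affine transformation 6, and a homography 8; each tracked point supplies 2 equations, so 1, 2, 3, and 4 points respectively pin the transform down. A small numpy sketch (with made-up point coordinates, not the app's solver) shows 3 points determining an affine warp exactly:

```python
import numpy as np

# Three tracked points in the reference frame (made-up coordinates).
src = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 60.0]])

# A made-up affine warp: dst = A @ src + t, six unknown parameters in all.
A_true = np.array([[1.02, 0.05], [-0.03, 0.98]])
t_true = np.array([3.0, -2.0])
dst = src @ A_true.T + t_true

# Recover the six parameters from the three correspondences.
X = np.hstack([src, np.ones((3, 1))])        # rows of [x, y, 1]
params, *_ = np.linalg.lstsq(X, dst, rcond=None)
A_est, t_est = params[:2].T, params[2]
```

This also explains the earlier warning about placing 3 squares in a line: collinear points make the system of equations singular, so the transform is no longer uniquely determined and the alignment goes haywire.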
A bellows-style view camera or tilt-shift lens lets you turn real-world scenes into miniature models. Can you do this with SynthCam? Yes and no. In miniature-model photographs the camera is usually placed high, aimed downward at an angle, and the lens is tilted so that the focal plane passes through the ground at an oblique angle. Because the focal plane intersects the ground plane in a line, only objects lying on that line are in focus. This makes it look like you're photographing a model using a macro lens.
To obtain this effect with SynthCam, you would need to find features on this oblique, ground-penetrating plane. The scene usually contains no such features. Thus, although SynthCam can in theory create true tilt-shift effects, in practice there's no easy way to control the app to create this effect.
That said, you can fake the effect by placing 2 or 3 points along a line through the scene (for 3 points, don't place them exactly in a line), then rotating the phone slowly and continuously around its lens while making a recording. With some practice you can create photographs that make the world look like a miniature model. Technically, it's only a "fake tilt-shift" image, but it looks cool.
Pressing the gear icon on the main screen brings you to a Preferences screen. On this screen you can enable or disable display of the bokeh. You can also separately enable or disable saving the bokeh in your Camera Roll.
You can also enable or disable a collection of algorithms that "assists" tracking in some situations. Among these is a bit of code that predicts where the focusing square might go next and moves it there, and another bit of code that detects a wandering focus square and freezes it before it starts jumping around wildly. These algorithms have been tuned to help in almost all situations and hurt in none, but if the focus squares are turning red too often on a particular object, try turning the assists off.
If you're in 1-point focusing mode, yet another bit of code uses the accelerometer to correct for slight pitching motions (like nodding your head) and rolling motions (like tipping your head to the side). However, it can't correct for yawing motions (like shaking your head). If you disable these "tracking assists" your images might look worse, but you can create some cool effects. Try aiming the white square at a circular object and roll the phone (partway from portrait mode to landscape mode) while recording. Look at the Examples tab for ideas.
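The pitch-and-roll-but-not-yaw limitation follows from what an accelerometer actually measures: the direction of gravity in the phone's frame. Pitching or rolling the phone changes that direction; yawing about the vertical does not, so yaw is invisible to the sensor. A generic sketch of the geometry (the axis convention is an assumption for illustration, not the app's):

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Recover pitch and roll angles from a gravity reading, assuming a
    hypothetical axis convention (x right, y up, z out of the screen).
    Yaw leaves the gravity vector unchanged, so it is unobservable here."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

level = pitch_roll_from_gravity(0.0, 0.0, 1.0)    # gravity on one axis only
rolled = pitch_roll_from_gravity(0.0, 1.0, 1.0)   # tipped 45 degrees sideways
```

Correcting yaw would require a gyroscope (or the visual tracking itself), which is why the app leans on the image features for that axis.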
When you launch SynthCam for the first time, it defaults to VGA resolution. VGA means 640 x 480 pixels. At this resolution the app can manage 20 frames per second. Switching to HD resolution (iPhone 4 or iPod Touch 4 only) means 960 x 720 pixels, but at only 10 frames per second. You'll notice the slower frame rate, but your saved photographs will be bigger.
Finally, the act of tracking features and resampling each frame of captured video introduces a small but consistent 1-pixel blur into synthetic photographs. To reduce this blur, mild unsharp masking is applied when you hit Save. The level of sharpening is adjustable using a slider. If you tap Save while looking at live video (rather than a synthetic photograph), that image is stored as is, without sharpening.
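Unsharp masking itself is a simple operation: blur the image slightly, then add back a fraction of what the blur removed. The app's actual blur kernel and slider mapping aren't documented here, so a 3x3 box blur and an `amount` parameter stand in below as assumptions:

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    """Sharpen via out = img + amount * (img - blur(img)).
    A 3x3 box blur is an illustrative stand-in for the real kernel."""
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0   # 3x3 box blur
    return img + amount * (img - blur)

step = np.zeros((8, 8)); step[:, 4:] = 1.0   # a soft-ish edge in a test image
sharp = unsharp_mask(step)
# The edge gains overshoot on both sides, which reads as extra sharpness.
```

Note that the operation only amplifies detail the blur can distinguish; flat regions pass through unchanged, which is why a modest amount can counteract the 1-pixel resampling blur without wrecking smooth areas.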
WANT TO LEARN MORE?
I teach an undergraduate course at Stanford University on digital photography. The URL is http://graphics.stanford.edu/courses/cs178/. Under Course materials are some Flash applets about the technical aspects of photography. Click on the applet called "Depth of Field". It'll help you understand why a large aperture produces pictures with a shallow depth of field.
In my laboratory we've built many devices for making synthetic aperture photographs, including an array of 100 cameras, a camera with microlenses inside it, and a microscope whose photographs you can refocus after you take them. Start exploring at http://graphics.stanford.edu/projects/lightfield/.
Using a cell phone camera to make synthetic aperture photographs isn't a new idea. In fact, it's a common student project in my graduate computational photography course. The URL is http://graphics.stanford.edu/courses/cs448a/. We sometimes call this idea Painted Aperture, because you "paint" the aperture function in air by moving the phone.