Examples

A shallow depth of field is often used in macro photography. The photographer typically wants the foreground object to be well focused and the background to be out of focus. With SynthCam this effect is easy to achieve, as shown by the sunflower at left.

The shape of the aperture in a lens will affect the look of out-of-focus objects. Photographers call this shape the "bokeh" of the lens. Typically it's a circle, or maybe a hexagon if the physical aperture consists of six metal leaves.

In SynthCam the bokeh becomes whatever path you trace out with your iPhone during the brief recording session used to make each photograph. This path is displayed as an inset image in the lower-left corner of the photograph, as shown at left. In this case, it looks like I traced out a few wobbly ellipses. A spiral would have been better. I need more practice.
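To see why, here's a toy simulation (not the app's code; the path, parallax scale, and image size below are made up). Once the frames are aligned on the in-focus subject, a background point shifts by an amount proportional to how far the camera has moved, so averaging smears it into a scaled copy of whatever path your hand traced.

```python
import numpy as np

# Toy illustration of why the synthetic bokeh takes the shape of the traced path.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 60)
hand_x = 12 * np.cos(t) + rng.normal(0, 1, t.size)   # a wobbly ellipse of hand motion, in mm (assumed)
hand_y = 8 * np.sin(t) + rng.normal(0, 1, t.size)

H, W = 200, 200
parallax = 2.0                      # background shift in pixels per mm of motion (assumed)
accum = np.zeros((H, W))
for dx, dy in zip(hand_x, hand_y):
    frame = np.zeros((H, W))
    frame[100, 60] = 1.0            # in-focus subject: stays put after alignment
    frame[int(100 + parallax * dy), int(140 + parallax * dx)] = 1.0   # background point: shifts with the camera
    accum += frame
bokeh = accum / t.size              # the background point is now smeared along the hand's path
```

Display the resulting array and the out-of-focus point appears as a wobbly ellipse, just like the inset path.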

Here's a better example of a flower, taken 3 months later in the Stanford Cactus Garden using Version 2.0 of the app. It's HD resolution. Click on the thumbnail at left to enlarge it.

In this example, the wind was gently moving the flower. The app tracked its motion, thereby keeping the flower sharp and making the background blurry, so I didn't need to move the phone at all! For focusing I used 2 focus squares, one at the top of the flower and one at the bottom.

The previous two examples show off SynthCam's ability to create a very shallow depth of field. For many scenes, however, it suffices to blur the background only slightly.

In this example of a silver wine cooler at the museum of the Palace of the Legion of Honor in San Francisco, I placed 4 focus points on the cooler, then moved the phone a few millimeters in each direction. This slightly softened the candelabra and painting beyond, so they wouldn't compete with the main subject for attention.

Here's a comparison of two images taken using SynthCam. The first one is a straight photograph, i.e. a single frame, made by pressing Save (the Inbox icon) instead of Record.

The second image is a synthetic aperture photograph - the result of pressing Record, moving the phone up, down, left, and right over an area of about 2 inches for 10 seconds, and pressing Pause. See how the shallow depth of field seems to "spotlight" the frontmost column?

These images were captured at HD resolution. Click on the thumbnails at left to view them at full size. The columns are part of the facade of Memorial Church at Stanford University.

In this shot of Rodin's Burghers of Calais sculpture group at Stanford, there were people walking by constantly. Instead of asking them to wait, I went ahead and recorded a session. Voilà, no more people! How does this work? If you record for more than a few seconds, and the people continue walking, they become so blurred out that they effectively disappear.
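For readers who like code, here's a bare-bones sketch of the align-and-average idea, not SynthCam's actual implementation: it uses OpenCV template matching as a stand-in for the app's tracker, assumes pure translation, and takes the frame list and focusing-square rectangle as given.

```python
import cv2
import numpy as np

def synthetic_aperture(frames, roi):
    """Illustrative align-and-average, not SynthCam's code.
    frames: list of video frames; roi: focusing square (x, y, w, h) in frame 0."""
    x, y, w, h = roi
    template = frames[0][y:y+h, x:x+w]
    accum = np.zeros_like(frames[0], dtype=np.float64)
    for frame in frames:
        # Track the subject: find where the focusing-square patch moved to.
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (tx, ty) = cv2.minMaxLoc(res)
        # Shift the frame so the subject lands where it was in frame 0.
        M = np.float32([[1, 0, x - tx], [0, 1, y - ty]])
        aligned = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
        accum += aligned
    # Averaging keeps the tracked subject sharp, blurs the parallax-shifted
    # background, and fades out anything that kept moving on its own.
    return (accum / len(frames)).astype(frames[0].dtype)
```

The same average that keeps the tracked subject sharp spreads each passerby over many different positions, which is why they fade away.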

In this shot taken in a mostly dark room, the iPhone's camera is struggling to find enough photons. (It's also struggling to find the right white balance, but that's another story.)

The first shot is a single frame, and it exhibits the noise that is characteristic of a cell phone camera operating in low light.

The second shot is a 3-second recording, with the phone handheld as still as possible. The image isn't pin-sharp, because the subject couldn't hold perfectly still. But the phone tracked her pretty well, so it's sharper than a 3-second exposure with an ordinary camera would be. And it's nearly free of noise.

The difference is more dramatic in the original images, which are HD resolution. Click on the thumbnails at left to see them at full size.

Here's an even more extreme example of shooting in the dark. These pictures of Professor C. Karen Liu of Georgia Tech were taken with the room lights off and only a sliver of light coming through the door.

The first shot at left is a single frame, as in the previous example. The thumbnail looks ok, but if you click on it, you can see how noisy it really is. It's also blurry, because the iPhone automatically lengthens exposure time in low light. The second shot is a 3-second recording, during which Karen managed to hold her smile. Click on that one as well to see the improvement.

Broadly speaking, by averaging multiple frames together we are reducing two kinds of image noise - "shot noise", which arises from natural variations in the number of photons striking each pixel on the camera's sensor, and "read noise", which arises when the camera tries to count the number of electrons these photons create in each pixel. If we average 36 frames together, we reduce these sources of noise, or equivalently we increase the signal-to-noise ratio (SNR), by sqrt(36), or 6x.
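Here's a small simulation that checks the sqrt(N) claim numerically (illustrative only; the photon count and read-noise level are assumed values, not measurements of the iPhone's sensor):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # mean photons per pixel per frame (assumed)
read_noise = 5.0      # RMS read noise in electrons (assumed)
pixels, n_frames = 100_000, 36

# Each frame: Poisson shot noise plus Gaussian read noise.
frames = rng.poisson(signal, (n_frames, pixels)) + \
         rng.normal(0, read_noise, (n_frames, pixels))

snr_single = signal / frames[0].std()
snr_avg = signal / frames.mean(axis=0).std()
print(f"single-frame SNR  ~ {snr_single:.1f}")
print(f"36-frame-average SNR ~ {snr_avg:.1f}  (~{snr_avg / snr_single:.1f}x better)")
```

With these assumed numbers the single-frame SNR comes out around 9 and the 36-frame average around 54, i.e. roughly the expected 6x improvement.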

Here's one final example of using SynthCam to improve imaging in low light. The subject is Abraham Lincoln's Second Inaugural Address, as carved into an interior wall of the Lincoln Memorial in Washington, D.C.

The first shot is a single frame. It's rather dark in the memorial, so the camera used a long exposure. This created handshake blur. Despite the long exposure, the shot is noisy.

The second shot is a roughly 3-second recording using SynthCam with the camera handheld as still as possible. Thus, it is an aligned average of about 30 HD resolution video frames. Click on the images to see them at full resolution.

Apple should really implement something like this in their native camera app. For that matter, so should all camera vendors!

[If the foregoing paragraphs sound like an explanation of the night modes on recent cell phones, it's not a coincidence. SynthCam was released in 2011. Over the next 8 years I built a computational photography team at Google, initially as visiting faculty and eventually as a full-time engineer. My Google team launched HDR+ on Nexus phones and Night Sight on Pixel phones. The latter grew out of a prototype I wrote at Google called SeeInTheDark, which I presented at an ICCV 2015 workshop on extreme imaging. While SynthCam and SeeInTheDark compute a continuous average until you stop recording (so-called IIR filtering), HDR+ and Night Sight average a fixed-size burst of frames (so-called FIR filtering). For more information, see the team's papers in SIGGRAPH Asia 2016 and 2019 and this blog post on astrophotography on Pixel 4. These computational photography features have spurred Google's competitors to develop similar capabilities, including, finally, Apple in 2019. Note added 12/8/19]
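For the curious, the difference between the two filtering styles boils down to two update rules, sketched below (alignment and per-frame weighting omitted; this illustrates the terminology, not the shipped algorithms):

```python
import numpy as np

# IIR-style running average (the SynthCam / SeeInTheDark flavor): each new
# frame is folded into a single accumulator, and you stop whenever you like.
def running_average(frame_stream):
    avg, n = None, 0
    for frame in frame_stream:
        n += 1
        if avg is None:
            avg = frame.astype(np.float64)
        else:
            avg += (frame - avg) / n      # recursive update; no frame buffer needed
    return avg

# FIR-style burst average (the HDR+ / Night Sight flavor): capture a
# fixed-size burst, then combine it in one shot.
def burst_average(burst):
    return np.mean(np.stack(burst), axis=0)
```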

Although the goal when using SynthCam is usually to hold the camera as still as possible during a recording session, you can produce some cool effects if you don't.

In the first shot at left, I rolled the camera around its optical axis, i.e. from portrait mode to landscape mode and back, while keeping the focusing square centered on the girl's face. (The girl held very still because she's a photograph.)

In the second shot I rolled the camera the same way while aiming at the center of the clock face. In the third shot I dollied the phone (pulled it backwards, away from the clock) during the recording session.

My tracking algorithm usually gets confused if you roll the phone too much while recording, or move it forward or backward (towards or away from the subject). However, it can usually maintain tracking if the object inside the focusing square is circular, like the disk in the middle of the clock face. One could imagine alternative tracking algorithms designed to facilitate these sorts of effects, and maybe I'll implement a few of them in a future version of the app.

Beginning with Version 1.1 of the app, I use the accelerometer to detect accidental rolling of the phone during a recording session. If you want to reproduce these effects, select 1-point focusing on the main screen's toolbar, and turn off Tracking Assists on the Preferences screen.


Another rule worth breaking is the requirement that the scene should be stationary. What happens if it's not?

At left is a 5-second recording during which I didn't move the phone. Of course I can't hold still for 5 seconds, but SynthCam tracked the base of the fountain and kept it sharp. Meanwhile, the fountain continued flowing, so if you click on the image and examine it at full size, you'll see that the water is blurred to a silky haze.

Effects like this are normally hard to capture during the day, because a 5-second exposure will saturate the sensor of any digital camera. The traditional way to take this picture is to use an SLR and attach a neutral density filter to its lens; this will cut down the light enough to allow a 5-second exposure.
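As a rough worked example (the baseline exposure is an assumption, not a meter reading from this scene): if the fountain meters at 1/250 second at your chosen aperture and ISO, stretching the exposure to 5 seconds means admitting about 1250 times more light, which calls for roughly a 10-stop neutral density filter.

```python
import math

base_exposure = 1 / 250   # metered daylight exposure in seconds (assumed)
target_exposure = 5.0     # exposure needed to blur the water to a silky haze
factor = target_exposure / base_exposure        # about 1250x more light
nd_stops = math.log2(factor)                    # about 10.3 stops
print(f"ND filter strength needed: about {nd_stops:.1f} stops")
```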