The Limits of Night Vision

GURPS lets humans buy up to 9 levels of night vision, and routinely gives that many levels to sensors, but in reality that level of performance is very difficult to achieve, and quite unlikely in a man-portable sensor. This is not to say that you can't build a sensor that multiplies light by a factor of a million; it just won't perform like 9 levels of night vision, because of the limits of quantum mechanics.

The human pupil, fully dilated, is about 7mm in diameter, giving it an area of 3.8e-5 m^2. Human vision has a resolution of about 1 minute of arc, or 0.29mm at a distance of 1m (an area of 6.6e-8 m^2). 1 lux of ideal white light is roughly 4e-3 W/m^2, and a 550 nm photon has an energy of 3.6e-19 J. Thus, our 0.29mm object, illuminated by 1 lux and reflecting everything that hits it, reflects 7.3e+8 photons/sec. How many of those reach the eye depends on how the object scatters light, but for a diffuse reflector viewed head-on at 1m, a typical figure is about 14,000 photons/sec. The daylight-adapted human eye can distinguish a 60Hz flicker but typically integrates images over a longer timescale, so 12 fps is probably a more reasonable estimate. That works out to around 1,200 photons per pixel per frame.
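For the curious, here's that arithmetic as a short Python check. The 14,000 photons/sec figure is taken as given from the text above, since the exact fraction reaching the eye depends on the reflectance model:

    import math

    # Photon budget for a 0.29mm object at 1m under 1 lux, as computed above.
    pupil_area = math.pi * (7e-3 / 2) ** 2          # ~3.8e-5 m^2
    object_size = math.tan(math.radians(1 / 60))    # 1 arcmin at 1m: ~2.9e-4 m
    object_area = math.pi * (object_size / 2) ** 2  # ~6.6e-8 m^2
    flux = 4e-3                                     # W/m^2 per lux of ideal white
    photon_energy = 6.626e-34 * 3.0e8 / 550e-9      # ~3.6e-19 J at 550 nm

    reflected = flux * object_area / photon_energy  # ~7.3e8 photons/sec
    at_eye = 14_000                                 # photons/sec reaching the eye
    per_frame = at_eye / 12                         # ~1,200 photons/pixel/frame
    print(f"{reflected:.2e} reflected/sec, {per_frame:.0f} photons per frame")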

That sounds like enough for 256 brightness levels (8-bit depth), but it actually isn't, because photon arrivals are random (shot noise); the number of levels you can reliably distinguish is about the square root of the photon count, or 35 levels (roughly 5-bit depth). As a sample, I've taken a test image and run it through a mangler to produce what the image would look like in darkness (using the midpoint of each darkness range), then simply multiplied the brightness back up to normal light levels, as an ideal photomultiplier or CCD (no read noise) would; darkness levels 1 and 2 are omitted because they have no visible effect. 'White=' indicates the number of photons that produces a pixel value of 255.
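The mangler itself is conceptually simple; below is a minimal sketch of the idea in Python with NumPy and Pillow (the filenames are placeholders, and the real tool's details may differ). The key step is the Poisson draw, which models shot noise in an ideal, read-noise-free sensor:

    import numpy as np
    from PIL import Image

    def mangle(path, white_photons, out_path):
        """Simulate ideal low-light amplification: a pixel value of 255
        corresponds to white_photons expected photons, actual counts are
        Poisson-distributed (shot noise), then amplified back up and clipped."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        expected = img / 255.0 * white_photons  # expected photons per pixel
        counts = np.random.poisson(expected)    # photons actually arriving
        out = np.clip(counts * 255.0 / white_photons, 0, 255).astype(np.uint8)
        Image.fromarray(out).save(out_path)

    mangle("original.png", white_photons=20, out_path="darkness_7.png")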

Original Image

All subsequent images have white listed as a specific number of photons per pixel.

Darkness 3

White=10000

Differences are subtle; probably no penalty.

Darkness 4

White=2000

Still probably no penalty, though it's possible to tell something is up.

Darkness 5

White=500

At this point, the image is visibly grainy; a -1 to vision tests seems reasonable.

Darkness 6

White=100

The graininess is a bit worse.

Darkness 7

White=20

I can still tell what it is, but it looks like an old newspaper photo.

Darkness 8

White=5

If I didn't already know what it was, I don't think I'd be able to figure it out.

Darkness 9

White=1

There's something there, but I'm not sure what. Note that 2+ photons saturates the cell.

From the above images, it seems fair to allow around 4 levels of night vision from electronic amplification, but that's about it (and we're in grayscale). If you want image magnification, you're splitting the same number of photons across more pixels, and thus dividing photons per pixel by the square of the magnification; the net effect is that every level of telescopic vision negates one level of night vision (unless you're looking at a point source, in which case you still get the same number of photons in one pixel while spreading the nearby background noise over more pixels). If you want more, you're going to need to gather more photons somehow. There are several ways of doing that (see the sketch after this list):

    • Use a larger lens; 7mm is not very large, and the available photons scale as the square of the lens diameter. However, such optics can be bulky, and it's necessary to either scale up the sensor or narrow its field of view. This only works for cameras; visual telescopes and photomultipliers use the pupil as the final sensor, and thus can only gather enough extra light to negate the magnification penalty, meaning the maximum useful objective is 7mm x magnification (which is why 7x50 is a standard size for binoculars).

    • Use a longer exposure time. This is done routinely in astronomy, but it causes problems when looking at moving objects, because they blur. Even 1/12 of a second is really too long without some image-processing tricks; it's typically hard to take sharp pictures without a tripod at shutter speeds slower than 1/60 of a second (astronomy has its own tricks; adaptive optics deform the telescope's mirror to cancel out atmospheric fluctuations).

    • Capture non-visual photons. If you also collect near infrared (700-1500 nm), then for a sunlike source you roughly double the energy and triple the photon count; for an incandescent source, you increase the photon count by a factor of more than ten. This does result in false shading if an object is much more or less reflective in the near infrared, but grayscale is alien enough to normal vision anyway that people can probably get used to the grays being slightly wrong.

    • Tolerate lower resolution. The above images use the pixel size of your monitor, which is typically larger than the eye's actual maximum resolution, so the graininess is a bit more apparent than it would really be. At 0.001 lux (roughly the darkness 9 level above), human vision would be challenged to even detect an object the size of the entire image above, let alone make out internal details.
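As a rough sketch of how these trade-offs interact (back-of-the-envelope scaling using the ~1,200 photons/pixel/frame baseline from above, not a real optics model):

    def photons_per_pixel_frame(lens_mm=7.0, magnification=1.0, fps=12):
        """Photons per pixel per frame relative to the dark-adapted eye at
        1 lux: collection scales with lens area (diameter squared), while
        magnification spreads the light over magnification^2 pixels."""
        return 1200 * (lens_mm / 7.0) ** 2 / magnification ** 2 * (12 / fps)

    # 7x50 binoculars: the 50mm objective almost exactly pays for the 7x
    # magnification, which is why it's the standard night-glass size.
    print(photons_per_pixel_frame(lens_mm=50, magnification=7))  # ~1,250
    print(photons_per_pixel_frame(fps=3))  # 4x the exposure, 4x the photons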

In practice, most TL 7-8 night vision gear captures some non-visual photons but also operates below the theoretical limit, so you're probably capped at around 3 levels at TL 7 and 4 levels at TL 8.

Color Night Vision

Now for a related point: color and night vision. GURPS says that all night vision is colorblind. That's historically accurate, but it's not as if the photons actually lose their color; it's just that black and white night vision is technically easier to build. There are three reasons for this:

    • Color vision can't capture non-visual photons.

    • You don't get any additional photons; if you split the photons into three colors, you've only got 1/3 as many photons in each color.

    • The simplest way to construct a color sensor is to take three images through three different color filters and combine them electronically, but this requires filter switching, and any photons that arrive while the wrong filter is in place are lost, effectively discarding 2/3 of the photons you could be collecting. It's also possible to make a color sensor that works the way the eye or color film does, scattering a mosaic of sensors of different colors across the collecting plate, but again this loses the 2/3 of the photons that strike a sensor of the wrong color. There are theoretical ways around this (for example, a three-layer sensor: the top layer absorbs blue photons and passes the rest, the second absorbs green, and the third absorbs red), but they aren't available at TL 8.

In game terms, a PC may buy color night vision for 1 point.

If you can make it work, low photon counts have similar effects in color. Here, I'm assuming some advanced form of color night vision that captures about 60% of the photons (20% in each color channel), then applying the same processing as above ('Saturated=' indicates the number of captured photons per channel that produces a channel value of 255).
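The color version of the mangler is the same Poisson trick applied per channel; a sketch, assuming 'Saturated=' is already the post-capture count per channel:

    import numpy as np
    from PIL import Image

    def mangle_color(path, saturated_photons, out_path):
        """Like the grayscale mangler, but per RGB channel: saturated_photons
        is the expected photon count per channel (after the assumed 20%
        capture efficiency) that maps to a channel value of 255."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
        expected = img / 255.0 * saturated_photons
        counts = np.random.poisson(expected)
        out = np.clip(counts * 255.0 / saturated_photons, 0, 255).astype(np.uint8)
        Image.fromarray(out).save(out_path)

    mangle_color("original.png", saturated_photons=0.2, out_path="color_9.png")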

Original Image

All subsequent images have saturation listed as a specific number of photons of each color per pixel.

Darkness 3

Saturated=2000

Differences are subtle; probably no penalty.

Darkness 4

Saturated=500

Still probably no penalty, though it's detectable.

Darkness 5

Saturated=100

At this point, the image is visibly grainy; a -1 to vision tests seems reasonable.

Darkness 6

Saturated=20

The graininess is a bit worse.

Darkness 7

Saturated=5

I can still tell what it is, but it looks like an old newspaper photo.

Darkness 8

Saturated=1

Interestingly, this is easier to recognize than the equivalent black and white image.

Darkness 9

Saturated=0.2

Like the black and white version, it's not really practical to make out what this is. Note that each channel value here is either 0 or 255.

As you can see, losing 40% of the photons doesn't cost very much, and it significantly improves recognition at the lowest light levels. However, this source image has rather high color contrast; the benefit would likely be smaller for a low-contrast image.