Theory

Unfortunately, a little theory is enough to establish the best-case scenario for digital television.

I say unfortunately because the best-case scenario still isn't ideal.

In the world of analog television (a standard-definition CRT), the image is rendered to the screen as it is received. There may be exceptions if there's some sort of weird image processing going on internally, but for our purposes let's assume that isn't the case.

As the image is received at, let's say, 30 frames per second, the CRT draws it to the screen such that a full frame takes 1/30 of a second.

This means your 30 fps input is shown on screen more or less as it arrives, at 30 fps.
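
To put a number on that, here's a quick back-of-the-envelope sketch in Python (the 30 fps figure is just the example rate used on this page):

# Frame period of the example 30 fps signal: the time the CRT spends
# drawing one full frame before the next one arrives.
frame_rate = 30.0                # frames per second (example rate)
frame_period = 1.0 / frame_rate  # seconds per frame
print("Frame period: %.1f ms" % (frame_period * 1000))  # ~33.3 ms

Because the CRT draws the picture as the signal comes in, there's effectively no lag between the source and what's on screen.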

The jump to digital television changes that slightly.

Let's assume the signal remains unchanged and is being fed to the HDTV at a rate of 30 frames per second. The television doesn't scan the image to the screen instantly the way an analog TV would. Instead, it buffers an entire frame, runs upscaling routines to fit the image to the television's native resolution, and applies any image enhancement (noise reduction, contrast optimization, etc.) before displaying the image on screen.

Assuming that all the upscaling and image processing routines take absolutely zero time (which of course is impossible), the best a digital television can hope to achieve is lagging the input source by one frame. This is because the entire frame needs to be buffered before it can be processed and sent to the screen.

This gives us a theoretical minimum latency of 1/30 of a second (about 33 ms), no matter how good the video processor is.
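
A minimal sketch of that best case, assuming a hypothetical set that buffers exactly one frame and spends zero time on processing:

# Best-case digital pipeline: a full frame must be buffered before it
# can be processed and shown, so even with zero processing time the
# picture lags the source by one whole frame period.
frame_rate = 30.0                # frames per second (example rate)
frame_period = 1.0 / frame_rate  # seconds per frame

processing_time = 0.0            # the impossible best case: instant scaling/enhancement
buffer_delay = frame_period      # one complete frame must arrive before display can begin

minimum_lag = buffer_delay + processing_time
print("Theoretical minimum lag: %.1f ms" % (minimum_lag * 1000))  # ~33.3 ms

(At a 60 fps input the same one-frame floor would work out to about 16.7 ms; real sets add their nonzero processing time on top of this.)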

Next: Other Tests