
Analog-to-digital conversion

Download

A-to-D Conversion.vi — LabVIEW Virtual Instrument (requires LabVIEW version 2010 or higher)

A-to-D Conversion.exe — Stand-alone executable (download folder and run setup.exe to install)

Description

Analog-to-digital conversion is the process by which an analog (continuous) signal is converted to a digital (discrete) signal.  Two important parameters to consider in an A/D converter are resolution and range.  Resolution refers to the number of bits available to represent the signal, and range refers to the span of input voltages the converter will accept.  For example, an A/D converter with 12-bit resolution has 2^12 = 4096 possible digital representations of the input signal.  If the range is ±10 V (a 20 V span), then these 4096 points are spread over 20 V, resulting in a minimum step height of 20 V / 4096 = 4.88 mV.
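The step-height calculation above can be sketched in a few lines of Python (a minimal illustration, not part of the simulation itself; the function name `adc_step_size` is made up for this example):

```python
def adc_step_size(bits, v_min, v_max):
    """Return the smallest voltage step an ideal A/D converter can resolve."""
    levels = 2 ** bits               # number of digital codes available
    return (v_max - v_min) / levels  # full-scale span divided among the codes

# 12-bit converter over ±10 V: 20 V / 4096 = 4.88 mV per step
print(round(adc_step_size(12, -10.0, 10.0) * 1000, 2), "mV")
```

Doubling the resolution to 16 bits, or shrinking the range to ±1.25 V, makes the step correspondingly smaller.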
 
This simulation uses a sine wave with a user-defined amplitude as the analog signal.  The user sets the resolution and range for the A/D converter, and the resulting digital output is superimposed on the analog signal.
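The digitization the simulation performs can be mimicked with a simple quantizer: each analog sample is mapped to the nearest available digital level. This is a hedged sketch of the idea, not the simulation's actual code; the function `quantize` and its floor-to-level scheme are assumptions for illustration:

```python
import math

def quantize(samples, bits, v_min, v_max):
    """Map each analog sample to one of 2**bits discrete levels, clipping to range."""
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    out = []
    for v in samples:
        v = min(max(v, v_min), v_max)          # clip to the input range
        code = min(int((v - v_min) / step), levels - 1)
        out.append(v_min + code * step)        # reconstructed voltage
    return out

# One cycle of a 5 mV sine wave, close to the 4.88 mV step of a
# 12-bit ±10 V converter, so a heavy staircase is expected.
analog = [0.005 * math.sin(2 * math.pi * n / 64) for n in range(64)]
digital = quantize(analog, 12, -10.0, 10.0)
print(len(set(digital)))  # only a few distinct levels survive
```

Plotting `digital` over `analog` reproduces the superimposed staircase the simulation displays: a small signal spans only a handful of quantization levels.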

Front panel


Guiding questions
  1. Using an A/D converter with 12-bit resolution and an input range of ±10 V, at what analog signal amplitude do you begin to see distortion of the analog signal due to digitization?
  2. How does your answer to question 1 change if you switch to 16-bit resolution?
  3. How does your answer to question 1 change if you use 12-bit resolution but with an input range of ±1.25 V?
  4. Why does a higher resolution give a better digital representation of small analog signals?
  5. Why does a smaller input range give a better digital representation of small analog signals?