Data analysis

Experimental uncertainties and errors

It is important not to confuse the terms ‘error’ and ‘uncertainty’, which are not synonyms. Error is the difference between the measured value and the accepted value of what is being measured. Uncertainty is a quantification of the doubt associated with the measurement result. It is also important not to confuse ‘error’ with ‘mistake’.

Experimental uncertainties are inherent in the measurement process and cannot be eliminated simply by repeating the experiment no matter how carefully it is done. There are two sources of experimental uncertainties: systematic errors and random errors. Experimental uncertainties are distinct from human errors.

Human errors

Human errors include mistakes or miscalculations such as measuring a height when the depth should have been measured, or misreading the scale on a thermometer, or measuring the voltage across the wrong section of an electric circuit, or forgetting to divide the diameter by 2 before calculating the area of a circle using the formula A = πr². Human errors can be eliminated by performing the experiment again correctly the next time, and do not form part of error analysis.

Systematic errors

Systematic errors are errors that affect the accuracy of a measurement. Systematic errors cause readings to differ from the accepted value by a consistent amount each time a measurement is made, so that all the readings are shifted in one direction from the accepted value. The accuracy of measurements subject to systematic errors cannot be improved by repeating those measurements.

Common sources of systematic errors are faulty calibration of measuring instruments, poorly maintained instruments, or faulty reading of instruments by the user (for example, ‘parallax error’).

Random errors

Random errors are uncertainties that affect the precision of a measurement and are always present in measurements (except for ‘counting’ measurements). These types of uncertainties are unpredictable variations in the measurement process and result in a spread of readings.

Common sources of random errors are variations in estimating a quantity that lies between the graduations (lines) on a measuring instrument, the inability to read an instrument because the reading fluctuates during the measurement, and having to make a quick judgment of a transient event, for example, the rebound height of a ball.

The effect of random errors can be reduced by making more or repeated measurements and calculating a new mean and/or by refining the measurement method or technique.

In the VCE Physics Study Design, random errors are shown on graphs using error bars. Error bars are a graphical representation of the uncertainty of data. When determining the average and uncertainty of a set of readings, the average is the simple mean with outliers ignored, while the uncertainty should take into account the spread of readings, using one of the procedures commonly used in data analysis.

Outliers

Readings that lie a long way from other results are called outliers. Outliers should be further analysed and accounted for, rather than being automatically dismissed. Extra readings may be useful in further examining an outlier.

Quantitative analysis of uncertainties in measurement

The experimental uncertainty is the estimated amount by which a particular measurement might be inaccurate. For example, if a measured mass is 2.70 g and the uncertainty in the measurement is 0.05 g, the actual value is likely to be in the range from (2.70 - 0.05) g to (2.70 + 0.05) g, that is, between 2.65 g and 2.75 g.

Significant figures

Non-zero digits in data are always considered significant. Leading zeros are never significant, whereas trailing zeros and zeros between non-zero digits are always significant. For example, 075.0210 contains six significant figures, with the zero at the beginning not considered significant. 400 has three significant figures while 400.0 has four.

Using a significant figures approach, one can infer the claimed accuracy of a value. For example, a value recorded as 400 implies that it is closer to 400 than to 399 or 401. Similarly, a value recorded as 0.0675 implies that it is closer to 0.0675 than to 0.0674 or 0.0676.
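
These counting rules can be checked with a short script. The following is a minimal Python sketch, assuming the convention stated above (leading zeros not significant; trailing zeros and zeros between non-zero digits significant); the function name significant_figures is illustrative only.

def significant_figures(value: str) -> int:
    # Count significant figures in a numeral written as a string,
    # using the convention above: drop leading zeros, keep trailing
    # zeros and zeros between non-zero digits.
    digits = value.lstrip("+-").replace(".", "")
    return len(digits.lstrip("0"))

print(significant_figures("075.0210"))  # 6
print(significant_figures("400"))       # 3
print(significant_figures("400.0"))     # 4
print(significant_figures("0.0675"))    # 3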

Columns of data in tables should have the same number of decimal places, for example, measurements of lengths in centimetres or time intervals in seconds may yield the following data: 5.6, 9.2, 11.2 and 14.5. Significant figure rules should then be applied in subsequent data analysis.

Calculations in physics often involve numbers having different numbers of significant figures. In mathematical operations involving:

· addition and subtraction, the student should retain as many digits to the right of the decimal point as the number with the fewest digits to the right of the decimal point, for example: 386.38 + 793.354 - 0.000397 = 1179.73

· multiplication and division, the student should retain as many significant digits as the number with the fewest significant digits, for example: 326.95 × 10.2 ÷ 20.322 = 164 (a short worked sketch of both rules follows this list).
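
The following is a minimal Python sketch of the two worked examples above; the helper round_sig is illustrative only and not part of any prescribed method.

import math

def round_sig(x: float, sig: int) -> float:
    # Round x to the given number of significant figures.
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# Addition and subtraction: keep the fewest decimal places (two, from 386.38).
total = 386.38 + 793.354 - 0.000397
print(round(total, 2))       # 1179.73

# Multiplication and division: keep the fewest significant figures (three, from 10.2).
result = 326.95 * 10.2 / 20.322
print(round_sig(result, 3))  # 164.0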

Determining uncertainties in measured data

The uncertainty of a measured value is half of the smallest division on a graduated scale and half of the smallest increment shown on a digital scale; for example, the uncertainty in an individual measurement can be written as 2.5 ± 0.05 g. However, where several readings are averaged, the average should have the same number of decimal places as the uncertainty. For example, if the rebound heights of a basketball, measured to the nearest centimetre, yield the set of results 60 ± 0.5, 62 ± 0.5, 59 ± 0.5, 60 ± 0.5 and 61 ± 0.5 cm, then the average rebound height is 60.4 cm, with a maximum of 62 cm and a minimum of 59 cm. The larger difference of these two values from the mean is 62 - 60.4 = 1.6 cm, which rounds to 2 cm, so the reading becomes 60.4 ± 2 cm. Since the average has more decimal places than the uncertainty, the value recorded should be 60 ± 2 cm.
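
The rebound-height example can be reproduced with a few lines of Python. This is a sketch of the mean-and-spread calculation only; the final rounding to 60 ± 2 cm is done by inspection.

heights = [60, 62, 59, 60, 61]   # rebound heights in cm, each read to ± 0.5 cm

mean = sum(heights) / len(heights)                        # 60.4 cm
spread = max(max(heights) - mean, mean - min(heights))    # 1.6 cm

print(f"{mean:.1f} ± {spread:.1f} cm")   # 60.4 ± 1.6 cm, reported as 60 ± 2 cm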

Propagation of uncertainties

There are various ways to represent uncertainty. For VCE Physics, students should represent uncertainties as absolute uncertainties, for example x ± Δx, or as percentage uncertainties, for example, z ± Δz%. Tables of results usually include uncertainties that are represented as absolute uncertainties.

When adding or subtracting quantities, absolute uncertainties are added. When multiplying or dividing quantities, percentage uncertainties are added. When a variable is raised to a power, for example, y = xⁿ, the fractional uncertainty in y is determined using Δy/y = │nΔx/x│; that is, the percentage uncertainty in x is multiplied by n.
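
These rules can be illustrated with a short calculation. The following is a minimal Python sketch using hypothetical measurements x ± Δx and y ± Δy; the numbers are for illustration only.

x, dx = 2.5, 0.05    # hypothetical measurement x ± Δx
y, dy = 4.0, 0.1     # hypothetical measurement y ± Δy

# Sum: z = x + y, so Δz = Δx + Δy (absolute uncertainties add)
z, dz = x + y, dx + dy

# Product: w = x * y, so Δw/w = Δx/x + Δy/y (percentage uncertainties add)
w = x * y
dw = w * (dx / x + dy / y)

# Power: v = x**n, so Δv/v = |n| Δx/x
n = 3
v = x ** n
dv = v * abs(n) * dx / x

print(f"z = {z} ± {dz:.2f}")
print(f"w = {w} ± {dw:.2f}")
print(f"v = {v} ± {dv:.2f}")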

For any other mathematical treatment of variables, students may simply substitute the lowest and the highest data points to determine the range of the result.
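
A sketch of this substitute-the-extremes approach for an arbitrary function of a single measurement is shown below; math.sin is used purely as a stand-in for whatever relationship applies, and the values are hypothetical.

import math

x, dx = 2.5, 0.05                 # hypothetical measurement x ± Δx
f = math.sin                      # stand-in for any other mathematical treatment

lo, hi = f(x - dx), f(x + dx)     # substitute the lowest and highest values
best = f(x)
half_range = abs(hi - lo) / 2     # half the resulting range as the uncertainty

print(f"{best:.3f} ± {half_range:.3f}")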