**Generalized Gaussian Error Calculus**

Michael Grabe

Background

By a decision of Gauss himself, the error calculus was to refer to random errors only. Nevertheless, Gauss did discuss what he called regular or constant errors, meaning measuring errors that are constant in time and unknown in both magnitude and sign. In the end he dismissed them, arguing that it was up to experimenters to get rid of them.

Eventually, metrologists became aware that Gauss’s view was untenable: the regular or constant errors proved ineliminable and, as a consequence, had to be taken into account. Being constant in time, they had to be assessed via symmetric intervals of estimated lengths. For whatever reasons, however, they now entered the stage under the heading *unknown systematic errors*. Concurrently, experimenters had to decide how to embed them into the seemingly firmly joined error formalism outlined by Gauss.

First and foremost, let us recall that a constant error might at best be interpreted as the realization of a random variable, but definitely not as a random variable per se. Still, the worldwide accepted procedure puts those errors on the same level as random errors, i.e. treats them as if they were random too, though they doubtlessly affect measuring processes very differently from random errors.

Unfortunately, the procedure chosen implies a contradiction in terms. Quantities constant in time do not possess a probability density; yet since the chosen procedure called for one, experimenters postulated a distribution density to exist. An inescapable consequence was thereby overlooked: the expected value of a random quantity following a symmetric rectangular density vanishes. With regard to the impact of unknown systematic errors, however, this property distinctly contradicts physical reality.

If anything, a constant measuring error, by virtue of its being constant in time, induces bias.

Probably, the metrological community shied away from treating unknown systematic errors as what they physically are, namely constants, in order to salvage Gauss’s classical formalism; otherwise, far-reaching modifications of error handling and propagation would have had to be devised.

Over the past decades, metrology became more international than ever, and contradictions between measurement results stemming from different laboratories became strikingly apparent. Hence, a wide-ranging troubleshooting was high on the agenda. In the course of a now famous panel discussion, even the abolition of the method of least squares was considered [2]. The turning point came in the wake of a seminar held in February 1978 at the Physikalisch-Technische Bundesanstalt Braunschweig. As one of the lecturers, I revisited a topic raised by C. Eisenhart in the 1950s [1]. Eisenhart considered unknown systematic errors to spawn biases, a view under which the then commonly used procedures for assigning measurement uncertainties would collapse.

Nevertheless, Eisenhart’s statement did not gain official acceptance. Based on Eisenhart’s view, I tentatively formalized measurement uncertainties presupposing “nonrandomized” unknown systematic errors [3]. Naturally, the approach yielded new, say, larger measurement uncertainties. But that was not all. Rather, measurement uncertainties would now localize *the true values of the measurands*, in contrast to the common procedure according to which uncertainties were intended to aim at the expected values of the measurands, properly understood, on the basis of randomized unknown systematic errors. Evidently, the statement that uncertainties should localize the true values of the measurands opted out of the classical procedures for the evaluation of measured data. In a sense, this turned out to be a metrological revolution.

Suddenly, the demand for a generalization of the Gaussian error calculus from scratch was in the air. What is more, statisticians would have to critically review their otherwise sophisticated procedures for rating empirical data charged with unknown systematic errors.

Generalized Gaussian Error Calculus

Inasmuch as the error calculus may be based on linearization, the formalism splits into a component induced by random errors and another given by systematic errors. The two components lend themselves to separate treatment.

With respect to systematic errors, the only information available reveals them to act as time-constant disturbances contained within symmetric intervals of experimentally appraised lengths. Preferably, they should be treated according to their physical property, namely to spawn biases. Hence, as seems natural, their influence should be accounted for by worst-case estimations, i.e. applications of the triangle inequality; hardly any other recourse appears substantial.
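As a minimal sketch of such a worst-case estimate, assume a linearized model with partial derivatives *c_i* and systematic error bounds *f_s,i* (all names and numbers below are illustrative, not from the source): the triangle inequality yields the bound Σ |c_i| f_s,i on the systematic error of the result.

```python
def systematic_worst_case(partials, bounds):
    """Triangle-inequality (worst-case) bound on the systematic error
    of a linearized result: sum of |c_i| * f_s_i."""
    return sum(abs(c) * b for c, b in zip(partials, bounds))

# Illustrative model phi = x1 + 2*x2 with bounds f_s = (0.01, 0.005):
f_s_phi = systematic_worst_case([1.0, 2.0], [0.01, 0.005])
print(f_s_phi)  # 0.02
```

Because the systematic errors are constants of unknown sign, only this linear worst-case combination, rather than a quadratic one, guarantees that the interval covers every admissible configuration of the errors.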

When treating random errors, the prevalent assumption is to consider them normally distributed. However, contrary to common practice, I propose to base their analysis on complete variance-covariance matrices. That is to say, given *m* series of repeated measurements, I propose to let each series dispose of the same number *n* of repeated measurements. This approach enables experimenters to establish complete empirical variance-covariance matrices. In doing so, experimenters put themselves in a position to lean on the multidimensional normal model and hence on the distribution density of the empirical moments of second order. Surprisingly enough, just this mode of thought yields confidence intervals according to Student *for bunches of measurands* and thus solves a long-dormant problem.
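A minimal sketch of this setting, using synthetic data and assuming *m* = 3 measurands with *n* = 10 repeats each (the data, seed, and confidence level are illustrative; `numpy` and `scipy` are assumed to be available):

```python
import numpy as np
from scipy import stats

# m measurands, each observed in a series of the same length n,
# so that a complete empirical variance-covariance matrix exists.
rng = np.random.default_rng(0)
m, n = 3, 10
data = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.1, size=(n, m))

means = data.mean(axis=0)
S = np.cov(data, rowvar=False)       # complete m x m empirical covariance matrix
t = stats.t.ppf(0.975, df=n - 1)     # two-sided 95% Student factor, n-1 dof
half_widths = t * np.sqrt(np.diag(S) / n)

for mu, hw in zip(means, half_widths):
    print(f"{mu:.3f} +/- {hw:.3f}")
```

The point of equal series lengths is that `S` is then a well-defined empirical matrix for the whole bunch of measurands, so Student-type intervals can be stated for each of them from the same data set.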

Adding both uncertainty components linearly produces the overall uncertainty of the measuring process. The uncertainties so defined reveal metrologically remarkable aspects: by virtue of treating systematic errors as what they make themselves out to be, namely unknown constants to be assessed via experimentally appraised intervals, measurement uncertainties localize the true values of the measurands. Just this property is the indispensable precondition for safeguarding what is called **metrological traceability**. To stress: it is this very property which establishes the aforementioned metrological revolution. Equally important, the procedure produces the smallest uncertainties localizing the true values of the measurands “quasi-safely”.
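The linear combination itself can be sketched as follows, with `s_phi` denoting the empirical standard deviation of the result and `f_s_phi` the worst-case systematic bound (all numbers are illustrative placeholders, not values from the source):

```python
import math

def overall_uncertainty(t_factor, s_phi, n, f_s_phi):
    """Linear (not quadratic) combination of the Student-scaled random
    component and the worst-case systematic component."""
    return t_factor * s_phi / math.sqrt(n) + f_s_phi

# Illustrative numbers: t(95%, 9 dof) ~ 2.262, s_phi = 0.05, n = 10,
# worst-case systematic bound f_s_phi = 0.02
u = overall_uncertainty(t_factor=2.262, s_phi=0.05, n=10, f_s_phi=0.02)
print(round(u, 4))
```

The linear sum is what makes the interval a localization of the true value: the random part covers the scatter, while the systematic part covers the unknown constant offset in full.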

Impact of the Generalized Gaussian Error Calculus on the method of least squares:

- Aside from the defective linear system to be treated according to least squares, there is a notionally true linear system. Hence, measurement uncertainties take reference to the implied true values. This way of looking at measured quantities opens up new metrological perspectives. The approach rests on dropping the randomization of unknown systematic errors.

- The premised complete empirical variance-covariance matrix of the input data yields confidence intervals with respect to the expected values of the measurands.

- A weighting matrix in the sense of the classical Gaussian error calculus no longer exists. In principle, any weighting is admissible, as the true linear system reproduces the true values of the measurands in any case.
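The last point can be illustrated on a consistent ("notionally true") linear system: if A x_true = y_true holds exactly, the weighted least-squares estimate (AᵀWA)⁻¹AᵀW y_true equals x_true for any positive-definite weighting W (the matrix and weights below are illustrative):

```python
import numpy as np

# A consistent ("notionally true") 3x2 system: A @ x_true = y_true
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
x_true = np.array([0.5, 2.0])
y_true = A @ x_true

# Two arbitrary positive-definite (diagonal) weightings
for w in ([1.0, 1.0, 1.0], [1.0, 10.0, 100.0]):
    W = np.diag(w)
    x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_true)
    print(x_hat)  # both weightings recover x_true
```

Only the defective, error-laden system makes the choice of weighting matter for the estimate; the true system is indifferent to it, which is why no distinguished classical weighting survives.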

Ultimately, experimenters as well as statisticians might wish to reflect on the interference of unknown systematic errors with classical procedures such as *tests of hypotheses and analyses of variance*. Like it or not, the majority of them break down.

Literature

Papers to which the Generalized Gaussian Error Calculus is related or on which it is based.

[1] Eisenhart, C., The Reliability of Measured Values – Part I Fundamental Concepts; Photogrammetric Engineering **18** (1952) 543-561

Wagner, S., Zur Behandlung systematischer Fehler bei der Angabe von Meßunsicherheiten (On the treatment of systematic errors when stating measurement uncertainties); PTB-Mitteilungen **79** (1969) 343-347

[2] Bender, P.L., B.N. Taylor, E.R. Cohen, J.S. Thomas, P. Franken, C. Eisenhart, Should least squares adjustment of the fundamental constants be abolished?; NBS Special Publication 343, Washington D.C. 1971

[3] Grabe, M., Über die Fortpflanzung zufälliger und systematischer Fehler (On the propagation of random and systematic errors) in "Seminar über die Angabe der Meßunsicherheit", PTB-Mitteilungen, Braunschweig, February 20th and 21st, 1978

[4] Guide to the Expression of Uncertainty in Measurement, 1 Rue Varambé, Case Postale 56, CH 1221 Geneva 20, Switzerland

Publications of the Author

[5] -: Principles of "Metrological Statistics", Metrologia **23** (1986/87) 213-219

[6] -: On the assignment of measurement uncertainties within the method of least squares, Poster Paper, Second Int. Conf. on Precision Measurement and Fundamental Constants, Washington DC, June 8-12, 1981

[7] -: Towards a New Standard for the Assignment of Measurement Uncertainties; NCSL Workshop and Symposium, Chicago 1994, USA, Proceedings pp. 395-401

[8] -: Gedanken zur Revision der Gauß'schen Fehlerrechnung (Thoughts on a revision of the Gaussian error calculus), tm – Technisches Messen **6** (2000) 283-288

[9] -: Biased error analysis - a rigorous revision of uncertainty assignment; III International Conference on Soft Computing and Measurement, SCM'2000, June 2000, St. Petersburg, Russia

[10] -: Estimation of measurement uncertainties - an alternative to the ISO-Guide, Metrologia **38** (2001) 97-106

[11] -: Schätzen von Meßunsicherheiten in Wissenschaft und Technik (Estimation of Measurement Uncertainties in Science and Technology), LIBRI - Books on Demand, December 2000, ISBN 3-833-403-187

[12] -: On Measurement Uncertainties derived from "Metrological Statistics", Algorithms for Approximation IV, July 16-20, 2001, University of Huddersfield, UK

[13] -: Neue Formalismen zum Schätzen von Meßunsicherheiten - Ein Beitrag zum Verknüpfen und Fortpflanzen von Meßfehlern (New formalisms for estimating measurement uncertainties - a contribution to combining and propagating measurement errors), tm – Technisches Messen **3** (2002) 142-150

[14] -: An Alternative Error Model and its Impact on Traceability and Key Comparison, BIPM-NPL Workshop, October 19, 2002, Teddington, U.K.

[15] -: Unknown systematic errors and the method of least squares, Uncert 2003, St. Catherine's College, Oxford, U.K., April 9-10, 2003, PowerPoint presentation

[16] -: Neue Formalismen zum Schätzen von Meßunsicherheiten – Ausgleich nach kleinsten Quadraten (New formalisms for estimating measurement uncertainties – least squares adjustment), tm – Technisches Messen **9** (2006) 531-540

[17] -: Key Comparisons and Uncertainties for Industrial Environments (Poster Papers), CIE 2nd Expert Symposium on Measurement Uncertainties, June 11-17, 2006, Braunschweig

[18] -: Ten Theses for a New GUM (Poster Paper), PTB-BIPM Workshop on the Impact of Information Technology in Metrology, June 5-7, 2007, PTB, Berlin

Books

Measurement Uncertainties in Science and Technology, Springer 2005, First Edition

Measurement Uncertainties in Science and Technology, Springer 2014. Second Edition, ISBN 978-3-319-04888-8

Generalized Gaussian Error Calculus, Springer 2010, ISBN 978-3-642-03305-6

Grundriss der Generalisierten Gauß’schen Fehlerrechnung, Springer 2011, ISBN 978-3-642-17822-1

Truth and Traceability in Physics and Metrology, IOP Concise Physics 2018, ISBN: 978-1-64327-093-7

Later Lectures

Correlations and interdependences between measured data, CIE 2nd Expert Symposium on Measurement Uncertainties, June 11-17, 2006, Braunschweig

Überlegungen zur Revision der Gauß’schen Fehlerrechnung (Considerations on a revision of the Gaussian error calculus), Physikalisches Kolloquium der TU Braunschweig, January 9, 2007

Korrelationen in Messdaten (Correlations in measured data), Seminar für Qualitätssicherung und Messtechnik der Universität Erlangen, Erlangen, December 8, 2008

Systematic Errors and Correlations in Measured Data, Joint Committee for Guides in Metrology (at the BIPM) Sèvres, April 10, 2009