Traditional error analysis for estimating the uncertainties in experimental measurements and predictions has, over the last two decades, given way to uncertainty analysis. The current methodology is described in the GUM (Guide to the Expression of Uncertainty in Measurement) issued by the ISO. The change was motivated by the recognition that the true value of a measurement can never be known, and thus the error cannot be determined.

As promulgated, the uncertainties are expressed in statistical terms based upon sampling (Type A, classical statistics) and upon subjective reasoning (Type B, Bayesian). The approach combines these uncertainties in the usual error-analysis fashion of summing variances (now labeled the 'Law of Propagation of Uncertainty') and then uses the Student-t distribution to attach a coverage interval to the result.
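
As a reminder of what the GUM prescribes (restated here in standard notation, not quoted from the note), a measurand y = f(x_1, ..., x_N) receives a combined standard uncertainty from the law of propagation of uncertainty and an expanded uncertainty through a coverage factor:

\[
u_c^2(y) = \sum_{i=1}^{N}\left(\frac{\partial f}{\partial x_i}\right)^{2} u^2(x_i), \qquad U = k\, u_c(y),
\]

where the u(x_i) are the Type A (sampling) and Type B (subjective) standard uncertainties of the input quantities, and the coverage factor k is taken from the Student-t distribution with an effective number of degrees of freedom (for example, from the Welch-Satterthwaite formula).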

The correct procedure is to apply the Bayesian approach throughout, but many investigators, having heard only that Bayesian inference is computationally expensive and being skeptical of the idea of subjective probability, resort instead to the Law of Propagation of Uncertainties.

This short note treats the common problem of determining the uncertainty of a measurement made with a calibrated device. It compares the two methods, reminds the reader of what the GUM calls for and what it assumes, and shows that, with current computing power, there is no good reason to avoid Bayesian inference, which provides a more exact and informative characterization of measurement uncertainty.
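
To illustrate the kind of calculation involved (a minimal sketch under an assumed normal model and invented numbers, not the note's actual example), the Bayesian version of the calibrated-measurement problem can be evaluated directly on a grid and compared with the GUM-style corrected mean:

import numpy as np

# Hypothetical repeated indications of a calibrated instrument
indications = np.array([10.03, 10.05, 10.02, 10.04, 10.06])
sigma = 0.02                  # repeatability std. dev., treated as known for simplicity (assumed)
b_cal, u_cal = -0.01, 0.005   # instrument bias and its std. uncertainty from the calibration certificate (assumed)

# Assumed model: indication_j = mu + b + noise,  b ~ Normal(b_cal, u_cal),  flat prior on mu
mu_grid = np.linspace(9.9, 10.2, 1501)
b_grid = np.linspace(b_cal - 5 * u_cal, b_cal + 5 * u_cal, 201)
MU, B = np.meshgrid(mu_grid, b_grid, indexing="ij")

resid = indications[None, None, :] - (MU[..., None] + B[..., None])
loglik = -0.5 * np.sum(resid**2, axis=-1) / sigma**2       # log-likelihood of the readings
logprior = -0.5 * ((B - b_cal) / u_cal) ** 2                # log-prior on the bias
post = np.exp(loglik + logprior - (loglik + logprior).max())

# Marginal posterior for the measurand mu, summarized by its mean and standard deviation
w = post.sum(axis=1)
w /= w.sum()
post_mean = np.sum(mu_grid * w)
post_sd = np.sqrt(np.sum((mu_grid - post_mean) ** 2 * w))

# GUM-style result for comparison: corrected mean with root-sum-of-squares combined uncertainty
n = indications.size
gum_estimate = indications.mean() - b_cal
gum_u = np.sqrt(sigma**2 / n + u_cal**2)

print(f"Bayesian:  mean = {post_mean:.4f}, std. uncertainty = {post_sd:.4f}")
print(f"GUM-style: estimate = {gum_estimate:.4f}, combined u = {gum_u:.4f}")

In this simple conjugate-like setting the two summaries nearly coincide; the point of the Bayesian treatment is that the full posterior distribution is available, not just a single estimate and an interval built from an assumed Student-t shape.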
