By Denny Wong | Dec 13, 2013
‘Accuracy’ and ‘precision’ are often used interchangeably in everyday conversation. While watching a hockey game, a fan might exclaim “That was a precision shot! He hit the five-hole from the blue line!” As an engineer, you may be tempted to clarify whether that shot was really precise, or whether it was accurate, or true. While this may make you an unpopular person to watch games with, you’d be perfectly correct in noticing that this usage doesn’t match the scientific definition. In colloquial contexts, precision often implies accuracy, and vice versa.
In science and engineering these terms have distinct definitions. International standards such as ISO’s have their own definitions as well, which makes understanding the terms critical if you need to qualify an electronic product design or perform cycle tests as part of your design validation testing.
In numerical analysis, precision refers to the resolution, or number of significant digits, of a quantity. The goal of this post is to give you a good understanding of these terms so that you can use them consistently.
Let’s look at the contexts in which an engineer might use these terms.
In science and engineering, accuracy refers to the trueness of a quantity. Trueness implies the closeness of a value to the true value or to a reference standard. For example, the Celsius scale is defined by two temperatures: absolute zero and the triple point of a specially purified water called Vienna Standard Mean Ocean Water (VSMOW). One degree Celsius is fixed as 1/273.16 of the difference between these two temperatures. If the accuracy of a thermometer were 1° Celsius, it would mean that its readings were always within one degree of the true value.
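As a minimal sketch of this definition, the accuracy check for a thermometer amounts to asking whether a reading falls within the stated tolerance of a reference value. The function name and the sample readings below are hypothetical, chosen only for illustration:

```python
def within_accuracy(reading, true_value, accuracy=1.0):
    """True if a reading is within the stated accuracy of the reference value."""
    return abs(reading - true_value) <= accuracy

# Hypothetical thermometer readings compared against the 0.01 degC
# triple-point-of-water reference, for a thermometer rated at +/-1 degC
print(within_accuracy(0.8, 0.01))   # True: within one degree of the reference
print(within_accuracy(1.5, 0.01))   # False: off by more than one degree
```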
Precision is defined as the reproducibility or repeatability of a result from repeated measurements under unchanged conditions. Precision has a special meaning in statistics, where it is defined as the reciprocal of the variance; ironically, in practice it is usually reported as the standard deviation of repeated test results, which is, strictly speaking, a measure of imprecision.
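Both conventions are easy to compute from a set of repeated measurements. The readings below are hypothetical values for a bath held at a fixed temperature; the point is only to show the standard deviation (the practical report of imprecision) alongside the reciprocal-of-variance definition:

```python
import statistics

# Hypothetical repeated thermometer readings of a bath held at constant temperature
readings = [25.3, 25.1, 25.4, 25.2, 25.3, 25.1]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)                   # practical report: spread of results
stat_precision = 1 / statistics.variance(readings)   # statistical definition: 1 / variance

print(f"mean = {mean:.3f}, std dev = {stdev:.3f}, precision = {stat_precision:.1f}")
```

A smaller standard deviation (equivalently, a larger reciprocal of variance) means the instrument repeats better, regardless of whether its mean is close to the true value.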
By their scientific definitions, a measurement may be accurate without being precise, and it may be precise without being accurate. This is contrary to the colloquial use of the two terms, as the figure below illustrates:
In the ISO standard, trueness and precision are defined in the same way as the scientific definitions we just described, but accuracy requires both trueness and precision. Accuracy combines a random component and a systematic error, or bias, component: trueness is usually expressed in terms of bias or systematic error, whereas precision depends on the distribution of random errors.
Looking again at the targets, the previous descriptions of precision and trueness still hold according to ISO 5725-1, but both results would be considered low accuracy, since neither demonstrates both good trueness and good precision.
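The ISO decomposition can be made concrete with a small simulation. In the sketch below, each measurement of a known true value is corrupted by a fixed bias (the systematic component) and Gaussian noise (the random component); the function names and parameter values are hypothetical:

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0

def measure(bias, noise_sd):
    """One simulated measurement: true value + systematic bias + random error."""
    return TRUE_VALUE + bias + random.gauss(0, noise_sd)

def summarize(bias, noise_sd, n=1000):
    """Estimate trueness (mean offset) and precision (std dev) from n measurements."""
    samples = [measure(bias, noise_sd) for _ in range(n)]
    est_bias = statistics.mean(samples) - TRUE_VALUE   # trueness: systematic offset
    est_sd = statistics.stdev(samples)                 # precision: spread of random errors
    return est_bias, est_sd

# Precise but not true: tight spread, large offset -- low accuracy under ISO
print(summarize(bias=5.0, noise_sd=0.1))
# True but not precise: no offset, wide spread -- also low accuracy under ISO
print(summarize(bias=0.0, noise_sd=5.0))
```

Under the ISO definition, only a result with both a small estimated bias and a small standard deviation would be called accurate.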
In numerical analysis, accuracy is likewise the closeness to the true value, but precision is the resolution of the representation, i.e. the number of significant digits of a quantity; it does not refer to the reproducibility or randomness of a result. In practice, resolution is the smallest distinguishable change that a measurement or system is capable of producing, so the number of significant digits used to represent a quantity should reflect that smallest measurable or producible change.
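One way to honor this rule in code is to quantize a raw reading to the instrument's resolution before reporting it, so the reported digits never imply more precision than the hardware provides. The resolution value and function name below are hypothetical:

```python
def quantize(value, resolution):
    """Round a reading to the smallest change the instrument can resolve."""
    return round(value / resolution) * resolution

# Hypothetical thermometer with 0.25 degC resolution: reporting the raw value
# 23.6178 would imply false precision, so report the nearest resolvable step.
raw = 23.6178
print(quantize(raw, 0.25))   # -> 23.5
```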
In the next post we’ll see how these definitions all come into play for an electronic design engineer!
Nuvation Engineering provides the full range of electronic design services to bring products from concept to market for our clients. Contact Nuvation Engineering to learn more about the services our skilled engineering design teams offer.