How Can Precision Actually Be a Bias?

Let's say you've got a set of studies on the same subject, and they all yield very similar results. They must be showing an accurate answer, right? Nope. Precision and accuracy are not anywhere near the same thing. Mixing them up could get you in real trouble.

Most people explain precision bias as what happens when people mix up precision and accuracy. Perhaps a better explanation is that it's what happens when people assume precise results are a stronger indicator of accuracy than they really are. To make the difference clear: precision is what you get when you measure a quantity multiple times - the wavelength of a beam of light, the fraction of a population that's fully vaccinated, a city's average yearly rainfall - and the results all come out very similar. That shows precision, but does it give us the number we need?
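To put rough numbers on that distinction, here's a minimal sketch (plain Python with NumPy; the values are invented purely for illustration) contrasting a tightly clustered set of measurements that's systematically off with a scattered set that's centered on the true value:

```python
import numpy as np

# Hypothetical true value of whatever we're measuring.
TRUE_VALUE = 100.0

# Precise but inaccurate: tightly clustered, yet all offset by the
# same systematic error.
precise_but_biased = np.array([108.1, 107.9, 108.3, 108.0, 108.2])

# Imprecise but accurate: widely scattered, yet centered on the truth.
scattered_but_accurate = np.array([91.0, 112.0, 96.0, 109.0, 93.0])

for name, data in [("precise but biased", precise_but_biased),
                   ("scattered but accurate", scattered_but_accurate)]:
    print(f"{name}: mean = {data.mean():.1f}, "
          f"spread (std) = {data.std():.1f}, "
          f"error vs. true value = {data.mean() - TRUE_VALUE:+.1f}")
```

The first set has a spread of barely 0.1 but misses the true value by 8; the second wanders by nearly 9 in either direction but averages out to within a fraction of the truth.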

At first, the similarity of the numbers looks like accuracy. How could five different surveys or tests all turn up such similar results by sheer coincidence? To begin with, biased tests produce biased results. If you unknowingly survey only one slice of a population, or unknowingly measure a length with a ruler that's slightly too long, you can get a numerically tight set of results that are all thrown off by the same systematic error. Measure a lot of different populations, or measure with a lot of different instruments, and you can get a wide range of results that average out to something accurate, even if you're relatively haphazard.
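A small simulation makes the same point. The offsets and noise levels below are made up; the only claim is the mechanism: a shared offset keeps results tight but uniformly wrong, while independent offsets tend to cancel in the average.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_LENGTH = 50.0   # the quantity we're trying to measure (arbitrary units)

# One slightly mis-calibrated instrument: every reading shares the same
# offset, so the results are tight but uniformly wrong.
shared_offset = -2.5   # this instrument reads 2.5 units low
one_instrument = TRUE_LENGTH + shared_offset + rng.normal(0, 0.2, size=10)

# Ten different instruments, each with its own independent offset:
# individually sloppier, but the offsets tend to average out.
independent_offsets = rng.normal(0, 2.5, size=10)
many_instruments = TRUE_LENGTH + independent_offsets + rng.normal(0, 0.2, size=10)

print("one biased instrument: mean %.2f, std %.2f"
      % (one_instrument.mean(), one_instrument.std()))
print("many instruments:      mean %.2f, std %.2f"
      % (many_instruments.mean(), many_instruments.std()))
# Typically the first mean is off by about 2.5 while the second lands
# near 50, even though its spread is much wider.
```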

Even if you don't do anything biased, coincidences happen all the time. Take rainfall. Seven-to-ten-year droughts don't come along often, but they do come along. If you measure a city's rainfall and you happen to do it during drought years, you'll get a narrow range of results that look accurate. (There aren't that many ways to have very little water.) But they won't represent average rainfall. Measure during normal years and you'll get results that are all over the map, but they'll average out to something much closer to the truth than the precise numbers you got from the drought years.
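Here's a toy version of that scenario, with hypothetical rainfall figures: sampling only drought years gives a tight but badly low estimate, while a random mix of years is noisier but lands much nearer the long-run average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual rainfall (cm): mostly normal years around 80 cm,
# plus an occasional multi-year drought around 25 cm.
normal_years = rng.normal(80, 20, size=40)
drought_years = rng.normal(25, 5, size=8)
all_years = np.concatenate([normal_years, drought_years])

true_average = all_years.mean()

# Sample only during the drought: tight, precise, and badly wrong.
drought_sample = drought_years[:5]
# Sample a random mix of years: noisy, but usually far closer to the truth.
mixed_sample = rng.choice(all_years, size=5, replace=False)

print(f"long-run average rainfall: {true_average:.1f} cm")
print(f"drought-only sample: mean {drought_sample.mean():.1f} cm, "
      f"std {drought_sample.std():.1f} cm")
print(f"mixed sample:        mean {mixed_sample.mean():.1f} cm, "
      f"std {mixed_sample.std():.1f} cm")
```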

The trick is figuring out whether you're taking a precise measurement of an accurate result, or a precise measurement of your own bias and bad luck. Sometimes what you know about randomness helps. If you're measuring an independent quantity with some randomness in it, but your results don't look random or independent, chances are something is pushing them all in the same direction. Beyond that, you just have to keep coming up with new ways to test everything, and remember that your results might not be as accurate as you'd otherwise think, even if they're precise.
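One crude way to act on that intuition is sketched below: compare the spread you actually observe against the measurement noise you know the process should have. The 0.3 threshold is an arbitrary rule of thumb, not a real statistical test; a proper analysis would use something like a chi-squared test on the variance, but the toy check captures the idea of "too tight to be believable."

```python
import numpy as np

def spread_looks_suspicious(measurements, expected_noise_std):
    """Rough sanity check: if repeated measurements of a somewhat random
    quantity cluster far more tightly than the known measurement noise
    would allow, suspect a shared bias rather than genuine accuracy.
    (The 0.3 factor is an arbitrary rule of thumb, not a real test.)"""
    observed_std = np.std(measurements, ddof=1)
    return observed_std < 0.3 * expected_noise_std

# Example: we believe each reading carries roughly 5 units of honest noise,
# yet the readings barely vary at all -- too good to be true.
readings = np.array([108.1, 107.9, 108.3, 108.0, 108.2])
print(spread_looks_suspicious(readings, expected_noise_std=5.0))  # True
```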

[Via Accuracy and Precision, The Truth About Precision and Bias.]