Let's say you've got a set of studies on the same subject, and they all yield very similar results. They must be showing an accurate answer, right? Nope. Precision and accuracy are not anywhere near the same thing. Mixing them up could get you in real trouble.

Precision bias is usually explained as what happens when people mix up precision and accuracy. A better explanation might be that it's what happens when people assume precise results are a stronger indicator of accuracy than they really are. To make the difference clear: precision is what you have when you measure a certain quantity multiple times - the wavelength of a beam of light, how many people in a given population are fully vaccinated, a city's average yearly rainfall - and get results that are all very similar. That shows precision, but does it give us the number we need?

At first glance, the similarity of the numbers looks like accuracy. How could five different surveys, or tests, all turn up such similar results by sheer coincidence? Easily: biased tests produce biased results. If you unknowingly survey only one segment of a population, or unknowingly measure lengths with an instrument that reads consistently long, you can get a numerically tight set of results that are all thrown off by the same amount. Measure many different populations, or measure with many different instruments, and you can get a wide spread of results that nonetheless averages out to something accurate, even if you're relatively haphazard.
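A quick simulation makes the contrast vivid. This is a toy sketch, not anything from a real study: the true value, the +5 systematic offset, and the noise levels are all invented for illustration. One "instrument" shares a fixed calibration error across every reading (precise but inaccurate); the other readings come from many independent sources with large random error but no shared offset (imprecise but accurate on average).

```python
# Toy sketch: precise-but-biased vs. scattered-but-unbiased measurements.
# All numbers (true value, offset, noise) are invented for illustration.
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0  # the quantity we are trying to measure

# One miscalibrated instrument: every reading shares the same +5 systematic
# error, with only a little random noise. The readings agree closely with
# each other (precise) but all miss the truth by the same amount.
biased = [TRUE_VALUE + 5 + random.gauss(0, 0.5) for _ in range(1000)]

# Many different instruments/populations: each reading has a large random
# error but no shared offset. The readings disagree widely (imprecise),
# yet their average lands near the truth (accurate).
unbiased = [TRUE_VALUE + random.gauss(0, 5) for _ in range(1000)]

print("biased:   mean=%.2f  stdev=%.2f"
      % (statistics.mean(biased), statistics.stdev(biased)))
print("unbiased: mean=%.2f  stdev=%.2f"
      % (statistics.mean(unbiased), statistics.stdev(unbiased)))
```

The biased readings cluster tightly around roughly 105, looking far more trustworthy than the unbiased ones scattered across a wide range around 100 - but it's the scattered set whose average is actually close to the truth.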