Two heads really can be better than one...as long as those heads are similar

Two heads are supposedly better than one, but is this true when it comes to how we see the world? Combining people's initial estimates really can make for better measurements, but only if both observers know what they're doing.

Although it seems intuitively true that two or more people would come up with a more accurate assessment than one person alone, it's surprisingly tricky to actually demonstrate that. University College London neuroscientist Bahador Bahrami and his team took up the challenge, trying to figure out what factors make combining two people's observations more or less likely to improve their accuracy.

The researchers presented pairs of test subjects with a simple observational task and asked them to evaluate what they saw. To replicate the observational "noise" that people experience in the real world, the subjects were each shown slightly different parts of the same display, and one used a mouse to interact with the screen while the other used a keyboard. (For a more detailed explanation of the experimental setup, check out their original paper.) The test subjects were allowed to discuss what they had each seen, but neither knew how good they actually were at assessing the visual data.

They found that the final, combined measurement was generally better than either of the original guesses, but only when certain conditions were met. It helped immensely for the two subjects to share how certain or uncertain they were of their respective estimates, which allowed them to weight the combined guess in favor of the more certain observer. This held true even though the subjects were never told how accurate their guesses really were (and, thus, whether their confidence was actually well-placed), which suggests the subjects were generally pretty good at evaluating their confidence in their own observations.
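To get a feel for why sharing confidence helps, here is a minimal sketch of reliability-weighted averaging, the standard statistical recipe for pooling two noisy estimates. The function name and the numbers are purely illustrative, not taken from the paper:

```python
def combine_estimates(est_a, sigma_a, est_b, sigma_b):
    """Pool two noisy estimates by inverse-variance weighting.

    The more confident (lower-sigma) observer gets the larger weight,
    mirroring how the pairs used shared confidence to tilt the joint guess.
    """
    w_a = 1.0 / sigma_a**2  # reliability of observer A
    w_b = 1.0 / sigma_b**2  # reliability of observer B
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    combined_sigma = (1.0 / (w_a + w_b)) ** 0.5  # lower than either alone
    return combined, combined_sigma

# Illustrative numbers: observer A (sigma 1.0) is more reliable than
# B (sigma 2.0), so the joint guess lands much closer to A's estimate.
print(combine_estimates(10.0, 1.0, 14.0, 2.0))  # (10.8, ~0.89)
```

Done this way, the pooled estimate is never noisier than the better observer's alone; the catch, as the next result shows, is that real pairs talking it out don't always manage this ideal weighting.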

Perhaps most intriguingly, the subjects only improved their measurements if the two of them were reasonably well matched in observational skill. In other words, two good observers could combine for a great guess, but a great observer and a poor observer would not improve upon the great observer's initial estimate; in fact, the addition of new, bad information would only hurt the quality of the ultimate estimate.
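Bahrami's team captured this with a signal-detection model. Under their weighted confidence sharing prediction, a pair with individual sensitivities s1 and s2 behaves roughly like a single observer with sensitivity (s1 + s2)/√2, which only beats the better member when the two are not too far apart. A quick sketch, assuming that formula and made-up sensitivity values:

```python
import math

def dyad_sensitivity(s1, s2):
    # Weighted confidence sharing prediction from the paper's model:
    # the pair behaves like one observer with sensitivity (s1 + s2) / sqrt(2).
    return (s1 + s2) / math.sqrt(2)

# Well-matched observers: the pair beats either member (~1.34 > 1.0).
print(dyad_sensitivity(1.0, 0.9))

# Badly mismatched observers: the pair does WORSE than the better member
# alone (~0.92 < 1.0); the weaker partner's noise drags the joint call down.
print(dyad_sensitivity(1.0, 0.3))
```

Under this model, the crossover falls where the weaker observer's sensitivity drops below roughly 40% of the stronger one's (s2 < (√2 - 1) · s1), which is why only similar observers gain from collaborating.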

So how can we apply these new findings to problem-solving strategies? In a Perspective article accompanying the team's paper, Marc O. Ernst of Germany's Max Planck Institute for Biological Cybernetics explained why simple information-combining strategies don't really work, using the example of two soccer referees deciding whether the ball went into the goal:

If the referees disagree in their individual judgments, the simplest way to resolve the conflict is to flip a coin. This strategy is less than optimal because it would be wrong half of the time, such that joint decision performance, taking all decisions together, will be in-between that of the two referees and not better than either one alone. To improve on this outcome, more information is needed. For example, if we knew from previous experience that Referee 1 usually makes more accurate decisions, we would ask that person to always make the final call. However, decision performance is just as good as having one referee present at the match, and there is still no improvement.
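Ernst's coin-flip arithmetic is easy to verify: when the referees disagree, exactly one of them is right, so the coin is correct half the time, and the joint accuracy lands exactly at the average of the two individual accuracies. A quick sanity check with hypothetical numbers, assuming the referees' errors are independent:

```python
# Hypothetical accuracies for illustration only.
p1, p2 = 0.80, 0.60  # referee 1 is right 80% of calls, referee 2 just 60%

both_right = p1 * p2                      # they agree on the correct call
disagree = p1 * (1 - p2) + (1 - p1) * p2  # exactly one referee is right

# A coin flip resolves disagreements correctly half the time, so the
# joint accuracy is exactly the mean of the individual accuracies.
joint = both_right + 0.5 * disagree
print(joint, (p1 + p2) / 2)  # ~0.70 for both
```

As the algebra works out, joint accuracy equals (p1 + p2)/2 no matter what the two accuracies are: better than the weaker referee, worse than the stronger one, and an improvement on neither.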

Ernst suggests that the best remedy is to provide feedback to people who frequently observe the same event from different angles, with referees being the obvious example. That feedback could improve not just how well the two observers see events, but also how similarly they assess them. This trained similarity would eventually meet the conditions Bahrami and his team outline for combined guesses to beat individual ones, making for more accurate assessments overall.

[Science]