Will CAPTCHA-breaking bots soon make it impossible to prove we're human?

Have you noticed that CAPTCHAs are getting more difficult to solve? CAPTCHAs, for those who've never heard of the internet, are those annoyingly ubiquitous prompts asking you to make sense of a marginally legible string of letters to prove that you're human.

And yes, these bot-defying, rage-inducing tests are indeed getting harder to solve, on account of an arms race against hackers who are constantly trying to defeat them. But as these bots get better at solving CAPTCHAs, and as humans get worse, are we going to lose our last hope of proving we're human?

This is the question recently posed by David J. Hill of Forbes, who worries about just how difficult the tests are becoming:


For example, an app developer named Andrew Munsell recently published a post about his own frustration with reCAPTCHA, Google's own version of the CAPTCHA system, after a few failed login attempts on an account. In the post, he includes a sample of the reCAPTCHAs he was presented [to the left], and many commenters chimed in with their own CAPTCHA moments. It's understandable why this is such a common experience for web users when 280 million CAPTCHAs are solved daily, according to Businessweek.

And as Hill notes, a fundamental assumption behind the whole CAPTCHA system is that, while humans may fail to solve some of the tests, bots must always fail — otherwise the system isn't secure. The challenge, then, is to create CAPTCHA tests that are beyond the capacities of a bot's optical character recognition (OCR) software. This, says Hill, is a problem:

First, a boatload of strategies are posted around the web about how to improve recognition in scripts in order to break CAPTCHAs. These may involve ways to enhance OCR by removing noise, such as those annoying intentionally introduced lines or dots peppered throughout images. Another strategy is to manipulate the characters in an image by rotating, aligning, or warping them - basically, many of the features that come standard to today's photo editors. Libraries of solved CAPTCHA images have also been collected, thanks to sites around the web that pay people fractions of pennies to solve tons of CAPTCHAs. Amazon Mechanical Turk used to be a popular one, but now a number of independent sites are around, such as Death by CAPTCHA. Clever hacks have even been developed for audio CAPTCHAs that merely deconstruct waveform shapes to identify what numbers are being spoken.
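To make the "removing noise" step Hill mentions a little more concrete, here's a toy sketch (not any real CAPTCHA-breaker's code) of how speckle noise can be scrubbed from a binarized CAPTCHA image. The image is modeled as a grid of 0s and 1s, and a simple 3×3 median filter wipes out isolated noise dots while leaving solid character strokes intact — real attacks would then hand the cleaned image to OCR software:

```python
def median_filter(img):
    """Return a copy of a binary image (list of 0/1 rows) with each
    pixel replaced by the median of its 3x3 neighborhood.
    Out-of-bounds neighbors count as background (0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # collect the in-bounds neighbors around (x, y)
            neighborhood = [
                img[ny][nx]
                for ny in range(y - 1, y + 2)
                for nx in range(x - 1, x + 2)
                if 0 <= ny < h and 0 <= nx < w
            ]
            # pad edge pixels' missing neighbors with background
            neighborhood += [0] * (9 - len(neighborhood))
            neighborhood.sort()
            out[y][x] = neighborhood[4]  # median of 9 values
    return out

# A tiny strip: a solid two-pixel-wide "character stroke" in columns
# 2-3, plus two isolated noise dots (top right, middle left).
noisy = [
    [0, 0, 1, 1, 0, 0, 1],  # noise dot at far right
    [0, 0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0, 0],  # noise dot at far left
    [0, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0],
]
cleaned = median_filter(noisy)
# The lone dots vanish, while the stroke's interior survives.
```

Notice the trade-off the quoted strategies exploit: isolated dots have almost no like-colored neighbors, so the median votes them away, but thickening the noise (or thinning the characters) makes the same filter start eating the letters themselves.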

Developers are trying to come up with new ideas to foil the hackers, including the use of input devices like a keyboard, mouse, or even Google Glass (which could track eye movements as an image moves across the field of view). But Hill doesn't put much faith in these approaches, arguing that even these software-driven solutions can be exploited.

And as a consequence of all these efforts and counter-efforts, companies are devising increasingly sophisticated Turing tests while hackers build increasingly capable solvers — a cycle that is helping to evolve artificial intelligence. "So every step backward for CAPTCHA is a step forward for AI," writes Hill.

The end result, he says, is that it will soon be very difficult to prove to a computer that we are human and not a bot:

Perhaps eye scanners and blood samples are inevitable, but those are exploitable too (watch Gattaca to see a masterful exploit at work). Truth is, artificial intelligence will find more and more ways to make computers look human until finding the difference between them will be a painstaking process, something akin to the [attached] classic scene from Blade Runner.

Be sure to read Hill's entire article at Forbes.
