Saturday, November 16, 2013

Captcha FAIL: Researchers Crack the Web's Most Popular Turing Test

Captcha is the gold standard for Turing tests on the web: Whenever an online form wants to check that you're a human being and not a spambot, it asks you to decipher one or two distorted words, presented as images. But what if there were a way for machines to defeat it?

That's exactly what researchers at Vicarious AI say they've done. In trying to develop a machine that thinks like a human — a multi-decade project — the small team of computer scientists says it has its first breakthrough: a computer that can process visual information much the way a human does. That brings with it the ability to solve Captchas from the major web services of Google, Yahoo and PayPal up to 90% of the time.

"Past solutions may have solved a Captcha at a particular point in time, whereas this solution solves Captcha," says D. Scott Phoenix, one of the co-founders of Vicarious. "Past solutions were hacks that were not part of a general vision system, whereas we're trying to build an intelligent machine, and it happens to solve Captcha along the way."

Captcha (short for Completely Automated Public Turing test to tell Computers and Humans Apart) has been around since the mid-1990s, and many people have claimed over the years to have developed ways to get around it. Luis von Ahn, one of the inventors of Captcha, told Mashable that most techniques target a specific weakness of a specific Captcha system, and don't go far.

"The more common [approach] is to exploit specific weaknesses in the Captcha," says von Ahn. "That's dealt with very easily. I'd say 75% of the approaches do that sort of thing."

However, Vicarious' technique falls into another group, one that takes on Captcha with a broader approach: examining what makes Captcha so hard for computers, and changing the way they interpret visual data to make it easy, or at least easier. Although Vicarious isn't the first research team to defeat Captcha in this way, its success rate compared to past attempts is much higher.

"We [train] the system by showing it images of letters," says Dileep George, Vicarious' other co-founder. "It needs just a few examples of letters to learn about them. Previous work would require in the order 10,000 examples of a letter even to understand minor variations."

Some websites offer Captcha defeaters, although these are usually based on a computational trick that's easy for Captcha operators to adapt to, collections of known Captcha images, or even human drones. Since Captcha is present on many web services (such as logging into Gmail accounts), anything that defeats it could be a potential gold mine to spammers.

Vicarious isn't trying to help them out. Its goal is to get machines to think like humans, not to make their lives harder, and it has no intention of making its Recursive Cortical Network (RCN) technology available to the public. In time, however, it hopes to apply the tech to fields such as medicine (think analyzing x-rays), robotics and search engines. At the very least, it's probably created the most sophisticated optical character recognition (OCR) software ever made.

It seems inevitable that Captcha will eventually fall in the wake of ever-advancing computer technology, but von Ahn isn't worried. Even if computers get really good at defeating Captcha, there are many other signals web services can use to weed out machines.

"At the end of the day, these programs are not perfect — they still make a number of mistakes," he points out. "Every time they make a mistake, they stick out a little. If an IP address is making more mistakes than a normal human, it'll get blocked. Also, a human takes 11 or so seconds to solve a Captcha, so if it's too far off, that's weird. Normal people only try three times. The Captcha itself is usually just one signal in a lot of signals."

In other words, Captcha will evolve, not go away. While computers will eventually solve text-based Captchas, image- and animation-based Captchas are waiting to replace them. After all, if it's hard for computers to tell what a distorted word is, imagine how flummoxing "what's in this picture?" would be.

Only when computers truly think like humans will the Turing test become obsolete. And when that happens, there will be many more troubling questions to address before figuring out how to keep our Facebook accounts secure.

Have something to add to this story? Share it in the comments.

Image: Flickr
