CAPTCHAs Defeated, But Who Cares?
Columbia University researchers made headlines by defeating Google and Facebook CAPTCHAs through artificial intelligence, but the real fraud for enterprises happens via cheap labor, not AI.
Columbia University researchers recently disclosed that they had created an artificial intelligence system to defeat CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). Since CAPTCHAs are often used to combat fraud, you would think that the ability to use an automated system to defeat CAPTCHAs at 80% or greater accuracy would be a big deal. It's actually not, at least from a business standpoint.
Here's what I mean. For use-cases that don't involve criminal activity and financial gain, there's very little incentive for bad actors to spend money to compromise systems. Your hobbyist bulletin board system, for example, is likely pretty safe with a CAPTCHA.
But for use-cases that attract criminal activity, like retail or credit card processing, there is significant incentive for bad actors to spend some amount of money to defeat the CAPTCHA.
Enter the global economy, stage left.
Anyone who has worked in the jaded, weary world of fraud prevention knows that there are entire global businesses that provide rooms full of people in third-world nations to do things like solve CAPTCHAs. These businesses provide an API so that criminals can write code to provide an interface for people in a CAPTCHA-solving call center. It's like a Cory Doctorow novel, except that it's real.
Take a look at the Google results for "read CAPTCHA API." You'll see businesses that claim to be "a human-powered image and CAPTCHA recognition service" and urge you to "earn with us." One vendor promises the ability to "solve any CAPTCHA. All you need to do is implement our API, pass us your CAPTCHAs, and we'll return the text. It's that easy!"
As Brian Krebs has reported, these are virtual sweatshops that charge around a dollar to solve 1,000 CAPTCHAs. They operate in third-world economies, where 35 cents per 1,000 solved CAPTCHAs is enough of an incentive for an operator.
This is probably a lot cheaper and perhaps even faster than the artificial intelligence that the Columbia University researchers implemented.
So, again, the results at Columbia are interesting, but not terribly relevant to fraud prevention. With human labor so cheap, it's clear that relying on a Turing test doesn't work, and isn't going to work, for anything serious.
What will? A more comprehensive anti-fraud service that keeps track of a number of variables, not simply whether the person solved a puzzle. It will work kind of like really good anti-spam services, which keep analytics about how often they've seen an actor, how many different credit cards are used, what geographies the actor is coming from, how many accounts are associated with that actor, and other items.
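To make that concrete, here's a minimal sketch of the idea in Python. The signal names, weights, and thresholds are invented for illustration; no real anti-fraud service publishes its scoring logic.

```python
# A toy multi-signal risk scorer. The signals and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class ActorProfile:
    times_seen: int          # how often we've seen this actor before
    distinct_cards: int      # how many different credit cards they've used
    distinct_geos: int       # how many geographies they've come from
    linked_accounts: int     # how many accounts are associated with them

def risk_score(profile: ActorProfile) -> float:
    """Combine several weak signals into a single fraud-risk score (0-1)."""
    score = 0.0
    if profile.times_seen == 0:
        score += 0.2          # never seen before: mildly suspicious
    if profile.distinct_cards > 3:
        score += 0.3          # many cards is a classic fraud marker
    if profile.distinct_geos > 2:
        score += 0.2          # bouncing between regions
    if profile.linked_accounts > 5:
        score += 0.3          # account farming
    return min(score, 1.0)

# A solved CAPTCHA would be, at best, one more weak signal in this mix.
print(risk_score(ActorProfile(times_seen=0, distinct_cards=5,
                              distinct_geos=3, linked_accounts=1)))  # high risk
```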
Facebook and Google actually do use such systems. Ask anyone who gets an email when they sign in from a different location. A high-security system like the LastPass password manager works this way: If you don't have its cookie in your browser, it sends a confirmation email when you log in, asking whether it's really you. Even the gaming service Steam does this, for crying out loud.
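The logic behind that pattern is simple enough to sketch. The data and helpers below are hypothetical stand-ins for a real site's credential, cookie, and email machinery; the point is only the decision flow.

```python
# Device-confirmation pattern: trust a known browser cookie, otherwise
# challenge the login via an out-of-band email. Everything here is a stub.
import secrets

TRUSTED_COOKIES = {"alice": {"a1b2c3"}}     # cookies previously marked trusted

def send_confirmation_email(address: str, token: str) -> None:
    # Stand-in for a real mailer: just show what would be sent.
    print(f"To {address}: click to confirm this login, token={token}")

def handle_login(username: str, email: str, browser_cookie: str) -> str:
    # Assume the password check already passed; we're deciding whether to
    # trust this particular browser.
    if browser_cookie in TRUSTED_COOKIES.get(username, set()):
        return "allow"                       # known browser: let them in
    # Unknown browser: hold the session and confirm out-of-band.
    token = secrets.token_urlsafe(16)
    send_confirmation_email(email, token)
    return "challenge"

print(handle_login("alice", "alice@example.com", "a1b2c3"))   # allow
print(handle_login("alice", "alice@example.com", "zzz999"))   # challenge
```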
Frankly, the enterprise is somewhat behind. What can we do? We can start to investigate anti-fraud startups. This gives us a sense of what's possible for early-adopter enterprises.
I got a demo of one such service, Smyte, recently. Like other startups on the move, it relies not just on data, but on big data to come up with its conclusions. As the founder, Pete Hunt, told me, "We aggregate many weak signals to come up with a strong signal when it comes to detecting fraud." That is, the service doesn't rely on any single signal, such as someone's IP address, their email address, or whether they're coming through a proxy; it takes all of these signals into account.
Smyte, backed by Y Combinator, uses algorithms and machine learning to provide a verdict to its customers. But Smyte doesn't rely exclusively on big data and machine learning to infer patterns. Like a good anti-spam service, it has analysts who monitor trends and incidents, like an anti-fraud network operations center, and then apply new rules for new types of threats.
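In outline, that combination of a model score with analyst-written rules might look something like this. Nothing here reflects Smyte's actual internals; the rule, threshold, and toy "model" are made up to show the pattern.

```python
# A model score plus analyst-authored rules, where a rule can escalate the
# verdict when a new threat emerges. All values here are illustrative.
def model_score(event: dict) -> float:
    # Placeholder for a trained model; here, just a toy heuristic.
    return 0.9 if event.get("new_account") and event.get("proxy") else 0.2

ANALYST_RULES = [
    # (name, predicate) pairs an analyst might push out for a new threat.
    ("burst_of_gift_cards", lambda e: e.get("gift_cards_last_hour", 0) > 10),
]

def verdict(event: dict) -> str:
    if any(rule(event) for _, rule in ANALYST_RULES):
        return "block"                        # rules override the model
    return "block" if model_score(event) > 0.8 else "allow"

print(verdict({"new_account": True, "proxy": True}))   # block
print(verdict({"gift_cards_last_hour": 25}))           # block
print(verdict({"new_account": False}))                 # allow
```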
Hunt claims customers have prevented millions of dollars in fraudulent activity using the service, which, through its own API, gives customers a verdict so they can stop a questionable transaction before letting it go through. Though the service supports automation based on that verdict, it also provides a human-readable reason for the verdict for later manual review.
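From the customer's side, an integration with that kind of verdict API might look roughly like the sketch below. The field names and response shape are invented; the article only establishes that the service returns a verdict plus a human-readable reason.

```python
# A hypothetical customer-side check before committing a transaction.
import json

def check_transaction(txn: dict) -> dict:
    # In a real integration this would be an HTTPS call to the vendor's API;
    # here we fake a response with the described shape: verdict plus reason.
    return {
        "verdict": "block",
        "reason": "5 cards used from 3 countries in the last hour",
    }

txn = {"user_id": "u-1042", "amount_usd": 249.99, "card_fingerprint": "fp-77"}
result = check_transaction(txn)

if result["verdict"] == "block":
    # Automated path: stop the transaction before it goes through.
    print("Transaction held:", json.dumps(result))
else:
    print("Transaction approved")
# The "reason" string is what a human reviewer would see later.
```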
Smyte is serving social media and financial clients at the moment, but Hunt says that the company is open to new use cases. Could the next use-case be the enterprise? Whether the need is served by Smyte or another vendor, something more than CAPTCHA is needed.