Stanford University researchers have found a security flaw in audio CAPTCHAs (completely automated public Turing tests to tell computers and humans apart), which are designed to keep Internet security features accessible to the visually impaired. Audio CAPTCHAs require users to listen to a string of spoken letters or numbers disguised with background noise. However, Stanford professor John Mitchell and postdoctoral fellow Elie Bursztein developed Decaptcha, a program that can decipher the commercial audio CAPTCHAs used by Digg, eBay, Microsoft, Yahoo, and reCAPTCHA. In testing, Decaptcha decoded Microsoft's audio CAPTCHA about 50 percent of the time and broke about one percent of reCAPTCHA's codes; even that small a success rate can mean a major security breach for Web sites such as YouTube and Facebook, which receive hundreds of millions of page views a day. Decaptcha recognizes the distinct sound of each letter and number, comparing the sounds it hears in an audio CAPTCHA against sample sounds stored in its memory. The researchers created four million audio CAPTCHAs mixed with white noise, echoes, or music, and found that background music gave the solver the most trouble.
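The matching step the article describes, comparing a heard sound against stored per-character samples, amounts to nearest-neighbor template matching. The sketch below illustrates that idea only; the feature vectors and labels are invented toy values, not anything from Decaptcha itself, which would extract real spectral features from audio.

```python
import math

# Hypothetical templates: one toy feature vector per known character.
# (A real solver would derive these from labeled audio recordings.)
TEMPLATES = {
    "a": [0.9, 0.1, 0.2],
    "b": [0.2, 0.8, 0.3],
    "7": [0.1, 0.3, 0.9],
}

def euclidean(u, v):
    """Distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(segment):
    """Return the label of the stored template nearest to the segment."""
    return min(TEMPLATES, key=lambda label: euclidean(TEMPLATES[label], segment))

# A noisy observation of "b" still lands nearest the "b" template.
print(classify([0.25, 0.75, 0.35]))  # → b
```

Background noise, echoes, or music shift the observed vectors away from their templates, which is why heavier distortion makes this kind of matching less reliable.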