Video frames, it turns out, aren't as easy for computers to read as Web pages, making them an ideal place to advertise illegal products and services without a machine-readable text trail.
F-Secure's chief research officer, Mikko Hypponen, on Wednesday published a series of screenshots documenting the trend. "Online criminals regularly post their ads on YouTube, looking for buyers for their products," he explained.
One ad offers stolen credit card numbers that have purportedly been skimmed from cruise liners, casinos, and hotels, with PINs captured by a covert camera.
Credit card numbers also are advertised, bought, and sold on Web sites, but such sites can be filtered, blocked, or taken down.
Uploading a video ad to YouTube is a way to reach a mass audience without worrying about Web hosting. And make no mistake, YouTube's audience is massive: according to comScore, YouTube surpassed 100 million unique viewers in the United States in January.
Identifying unlawful video content isn't as easy as spotting sites that host malware. Watching a video to determine whether it advertises stolen data or illegal software can take several minutes. It's far more time-intensive than blocking a URL identified algorithmically as malicious.
"It certainly much more difficult to filter video than a phishing attack," acknowledged David Frazer, director of technology services at F-Secure. Videos advertising stolen data are nonetheless easy to find through metadata provided by the video creator. One only needs to search YouTube for keywords popular among cybercriminals, like "carding" or "credit card numbers" to find YouTube videos hawking stolen data, unethical hacking techniques, or programs of questionable legality.
As an example, search YouTube for the keywords "credit card generator" and you get 477 results, some of which advertise software that can be used for credit card fraud and some of which are harmless.
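To illustrate the metadata angle Frazer describes, here is a minimal, hypothetical Python sketch of a keyword watchlist applied to creator-supplied titles and descriptions. The Video records, the WATCHLIST terms, and the flag_suspicious helper are illustrative assumptions, not YouTube's or F-Secure's actual tooling, and such a screen only surfaces candidates; as the article notes, someone still has to watch the footage to confirm what it actually advertises.

```python
# Hypothetical sketch: flag uploads whose creator-supplied metadata matches
# a watchlist of cybercrime-related keywords. The sample records and the
# watchlist terms are illustrative, not YouTube's or F-Secure's data.
from dataclasses import dataclass

WATCHLIST = {"carding", "credit card numbers", "credit card generator", "dumps"}

@dataclass
class Video:
    video_id: str
    title: str
    description: str

def flag_suspicious(videos):
    """Return (video, matched_terms) pairs where the title or description
    contains any watchlist term (case-insensitive substring match)."""
    flagged = []
    for v in videos:
        text = f"{v.title} {v.description}".lower()
        hits = {term for term in WATCHLIST if term in text}
        if hits:
            flagged.append((v, hits))
    return flagged

if __name__ == "__main__":
    uploads = [
        Video("a1", "Fresh dumps with PIN", "carding service, skimmed tracks"),
        Video("b2", "Credit card generator tutorial", "for testing checkout forms"),
        Video("c3", "Vacation vlog", "cruise liner trip highlights"),
    ]
    for video, terms in flag_suspicious(uploads):
        print(video.video_id, sorted(terms))
```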
Google has long dealt with undesirable content in its Web index and on its other properties, but it's hard to gauge the effectiveness of its actions. YouTube has policies prohibiting various kinds of bad content and encourages its members to flag videos that violate them. Yet unlawful content persists and, like a weed, returns after being pruned.
A spokesman for YouTube said the site prefers to rely on its community to police itself. "We don't prescreen content," he said. "That's the key to our community. Fifteen hours of video are uploaded to YouTube every minute. That's a staggering amount of user-generated content. We count on our community to know our community guidelines and flag content that violates the guidelines."
While Google has a mechanism for users to flag inappropriate content, the penalty -- video removal and possibly account cancellation -- isn't much of a punishment given how easy it is to open a new account and post a similar video.
The spokesman acknowledged that the system isn't perfect but contended that the community usually flags objectionable content quickly.
Frazer observed that YouTube doesn't have a specific mechanism for reporting cybercrime and argued that the site would benefit from more fine-grained reporting options.
Even so, scammers are likely to keep using user-generated content to generate revenue.