Real-Time Acoustic Processing Has Big Data Potential
Ready for a wearable that listens to your snoring -- or your stomach? Meet audio machine-learning tech.
March 10, 2014
CES 2014: 8 Technologies To Watch
You're jogging down a busy city street, cranking tunes on your smartphone, oblivious to the world around you. The intersection ahead looks clear, and you're unaware of loud sirens signaling that a speeding ambulance is coming your way. But before disaster strikes, your smartphone shuts off the music and warns you of the approaching vehicle.
This is just one of many potential uses of real-time acoustic processing, a machine-learning system that analyzes ambient audio to predict near-future outcomes. In the example above it saved a clueless jogger from being squashed like a bug, but the technology has other potential uses too. It could, for instance, detect when industrial equipment is about to fail, alert deaf people to alarms and other auditory warnings, help ornithologists analyze bird calls, and even monitor bodily sounds -- such as heartbeats, stomach rumblings, and snoring -- for use by mobile medical apps.
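To make the siren scenario concrete: one simple way a phone could flag an approaching emergency vehicle is to check whether the microphone's audio energy is concentrated in the frequency band where sirens wail. This is only a toy sketch -- One Llama Labs has not disclosed its actual algorithm, and the band limits and threshold below are illustrative assumptions:

```python
import numpy as np

def band_energy_ratio(signal, rate, low_hz, high_hz):
    """Fraction of the signal's spectral energy between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[band].sum() / spectrum.sum()

def looks_like_siren(signal, rate, threshold=0.5):
    """Flag audio whose energy clusters in a typical siren band (~600-1600 Hz)."""
    return band_energy_ratio(signal, rate, 600, 1600) > threshold

# Synthesize one second of test audio at 8 kHz.
rate = 8000
t = np.arange(rate) / rate
# A wailing tone sweeping between roughly 700 and 1500 Hz, built from its
# instantaneous frequency via cumulative phase.
inst_freq = 1100 + 400 * np.sin(2 * np.pi * 0.5 * t)
siren = np.sin(2 * np.pi * np.cumsum(inst_freq) / rate)
noise = np.random.default_rng(0).normal(scale=0.3, size=rate)

print(looks_like_siren(siren + noise, rate))  # energy concentrated in band -> True
print(looks_like_siren(noise, rate))          # broadband street noise -> False
```

A production system would of course need to learn many sound classes from labeled recordings rather than rely on one hand-tuned band, but the same front end -- short audio frames reduced to spectral features -- underlies both approaches.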
Rapid improvements in mobile devices, most notably faster processors and longer battery life, are helping audio machine-learning technology go mainstream, says One Llama Labs, a New York City-based developer of acoustic-processing software.
[There's more to wearable tech than just smartwatches. Read Wearables To Watch At CES 2014.]
"Wearable technology is now powerful enough to do serious machine learning, even at the audio level. And that technology will change the world in terms of monitoring," said David Tcheng, One Llama Labs' cofounder and chief science officer, in a phone interview with InformationWeek.
The company's Audio Aware machine-learning app is capable of analyzing hundreds of sounds, including music, from its surroundings. It will be available this month in the Google Play store; One Llama Labs plans to develop iOS and Windows Phone versions too, but no timetable was given.
The audio technology is based on research started a decade ago at the National Center for Supercomputing Applications' Automated Learning Group (which Tcheng cofounded) at the University of Illinois at Urbana-Champaign. One Llama Labs' original focus was on music recommendation technologies -- "sort of like what Pandora does, but using supercomputers," explained company cofounder and EVP of business development Hassan Miah, who joined the call.
"The core acoustic, artificial-intelligence machine learning could apply to a lot of things," said Miah. "And now with the emergence of wearable technology, the cloud, and other factors, [our] technology can be used well beyond music. So that's the genesis of how we came out with the... Audio Aware system."
The company sees three primary markets for Audio Aware on mobile devices. The first: deaf users. "They can't hear alarms and other alerts," said Tcheng. "With my previous work with audio recognition and bird-call analysis and speech recognition -- in general, machine learning -- I knew we could detect these sounds with some of the audio machine-learning software I've created."
The second group: music lovers wearing headphones. "There is an epidemic of people just walking around -- kind of like zombies -- attached to their cellphones," said Tcheng with a chuckle. "And in the worst case [they're] cranking music so loud that they can't hear common threats."
The third group: people who want to be notified of specific sounds -- for example, nature lovers or users who study birds and other wildlife in outdoor settings.
Medical applications have potential as well, although identifying bodily sounds may present its own set of technical challenges. "We've been thinking about doing a sleep apnea application, because all the system needs to learn is how to recognize a breath," said Tcheng. "But as soon as you put the microphone on a body, you pick up all sorts of bodily sounds, from heart rate to the digestion system. If you've ever heard someone's tummy, it makes all sorts of noise."
In industrial settings, audio machine-learning technology might be used to distinguish between normally functioning machines, those in need of maintenance, and those about to fail, Tcheng said.
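The three-way distinction Tcheng describes -- healthy, needs maintenance, about to fail -- maps naturally onto a classifier trained on recordings of each state. The sketch below is a hypothetical illustration, not the company's method: it uses a nearest-centroid classifier over coarse spectral-band features, with synthetic "machine" audio standing in for real recordings:

```python
import numpy as np

def spectral_features(signal, rate):
    """Coarse spectral signature: normalized energy in low/mid/high bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    edges = [0, 500, 1500, rate / 2]
    feats = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in zip(edges, edges[1:])])
    return feats / feats.sum()

def classify(signal, rate, centroids):
    """Return the label whose mean feature vector is nearest to this signal's."""
    f = spectral_features(signal, rate)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

rate = 8000
t = np.arange(rate) / rate
rng = np.random.default_rng(1)

def machine(hum_hz, rattle=0.0):
    """Synthetic recording: a motor hum plus an optional high-pitched rattle."""
    return (np.sin(2 * np.pi * hum_hz * t)
            + rattle * np.sin(2 * np.pi * 3000 * t)
            + rng.normal(scale=0.1, size=rate))

# "Training": one reference recording per machine state.
centroids = {
    "healthy": spectral_features(machine(120), rate),
    "worn":    spectral_features(machine(120, rattle=0.4), rate),
    "failing": spectral_features(machine(120, rattle=1.2), rate),
}

print(classify(machine(120, rattle=0.05), rate, centroids))  # -> "healthy"
```

Real deployments would average many labeled recordings per state and use richer features, but the core idea -- a worn bearing shifts energy into higher frequencies, and a classifier learns that shift -- is the same.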