Google Voice Search Now Groks Meaning Behind Queries - InformationWeek


Commentary
11/17/2015 12:45 PM
Eric Zeman

Google Voice Search Now Groks Meaning Behind Queries

An update to Google voice search improves its ability to understand complex inquiries.

CES 2016 Sneak Peek: 9 Cool Gadgets

Google voice search is about to get a whole lot better. Improvements to how voice search understands language will give it the ability to answer questions that contain multiple phrases and variables.

Satyajeet Salgar, product manager of Google's mobile app, writes that the company has been taking baby steps forward with voice search.

Salgar points out several milestones reached in 2008 and 2012, such as the introduction of the Knowledge Graph, which helped push the app forward in incremental ways. Voice was first able to recognize simple entities and later the properties associated with those entities.

Now, Google is able to parse the relationships between all the entities in a voice query and deliver the desired answer.

(Image: TARIK KIZILKAYA/iStockphoto)

"The Google app is starting to truly understand the meaning of what you're asking," wrote Salgar in a November 16 blog post. "We can now break down a query to understand the semantics of each piece so we can get at the intent behind the entire question. That lets us traverse the Knowledge Graph much more reliably to find the right facts and compose a useful answer. And we can build on this base to answer harder questions."

Google's mobile app can now handle superlatives, points in time, and combinations thereof. This ability means end-users can ask ever more complex questions and receive appropriate answers.

For example, support for superlatives means search can answer questions about the biggest, smallest, or fastest of something, such as, "What's the biggest ocean on Earth?"
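In rough terms, resolving a superlative amounts to identifying a set of entities, a property to rank them on, and a direction. A minimal sketch of that idea over a toy knowledge graph follows; the entity set, property name, and figures are illustrative stand-ins, not Google's actual data or API.

```python
# A toy knowledge graph: each entity maps to a dict of properties.
# The area figures are approximate and included only for illustration.
OCEANS = {
    "Pacific Ocean":  {"area_km2": 165_250_000},
    "Atlantic Ocean": {"area_km2": 106_460_000},
    "Indian Ocean":   {"area_km2": 70_560_000},
    "Arctic Ocean":   {"area_km2": 15_560_000},
}

def answer_superlative(entities, prop, largest=True):
    """Resolve 'biggest/smallest X' by ranking entities on a property."""
    pick = max if largest else min
    return pick(entities, key=lambda name: entities[name][prop])

print(answer_superlative(OCEANS, "area_km2"))         # "biggest ocean"
print(answer_superlative(OCEANS, "area_km2", False))  # "smallest ocean"
```

The real system obviously works at a vastly larger scale, but the shape of the problem -- map the question to entities and a ranking criterion, then rank -- is the same.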

Support for points in time allows for questions such as, "What year did The Beatles release Abbey Road?"

The complex relationships between entities? Well, that's where all the real fun is. Google says its voice search app should be able to answer questions such as, "What was the population of New York City while Fiorello La Guardia was mayor?" and, "Which movie is Robert De Niro's biggest box office hit?"
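A question like the La Guardia example requires chaining two lookups: first find the city and term of office, then filter that city's population facts to those years. The sketch below illustrates the idea over a hypothetical two-relation graph; the structure and population numbers are simplifications for illustration, not Google's Knowledge Graph schema.

```python
# Toy graph: (subject, relation) -> object. La Guardia's 1934-1945
# term is historical; the population figures are rounded census-style
# numbers used only to make the example concrete.
GRAPH = {
    ("Fiorello La Guardia", "mayor_of"): ("New York City", (1934, 1945)),
    ("New York City", "population"): {
        1930: 6_930_000,
        1940: 7_455_000,
        1950: 7_892_000,
    },
}

def population_during_tenure(person):
    """Find the city a person led, then keep only the population
    facts whose year falls within that person's term of office."""
    city, (start, end) = GRAPH[(person, "mayor_of")]
    populations = GRAPH[(city, "population")]
    return {year: pop for year, pop in populations.items()
            if start <= year <= end}

print(population_during_tenure("Fiorello La Guardia"))
# {1940: 7455000} -- the one census year inside the 1934-1945 term
```

The point is the composition: the answer to the first relation (which city, which years) becomes a constraint on the second (which population facts), which is what "traversing the Knowledge Graph" means in practice.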

Google contends that it still has plenty of work to do. The Knowledge Graph still struggles to disambiguate some relationships -- especially when words have multiple meanings. What's not clear is just how much artificial intelligence Google is using to explore the Knowledge Graph.

[Read about Google and machine learning.]

Google isn't the only company working on this type of technology. IBM's Watson continues to evolve, and Apple and Microsoft are still hard at work improving Siri and Cortana, respectively. Apple's iOS 9 brought with it a much more powerful version of Siri that is able to handle far more complex questions than it did when it first appeared several years ago.

The new natural language voice recognition powers should reach the Google mobile search app this week for both the Android and iOS platforms. The app is free to download.


Eric is a freelance writer for InformationWeek specializing in mobile technologies.
Comments

Thomas Claburn, Author, 11/18/2015 4:45:55 PM
Re: Snooping
I find that as soon as I try a vocal command that's misunderstood or doesn't work, I'm back to using typed text. I think we have a few more years yet before computers are so good at understanding that we can talk to them without fear of misunderstanding.
Whoopty, Ninja, 11/18/2015 7:17:24 AM
Snooping
As much as I like the idea of vocal commands -- they have huge potential for those with limited mobility -- I do worry about them always 'listening' for commands. We've already seen how the NSA and GCHQ are happy to snoop into anything we put online, so if we essentially have open microphones in our houses and pockets, will they snoop in there too?