3 Challenges To Designing A Voice Interface - InformationWeek
Software // Enterprise Applications
Commentary
11/3/2014
01:30 PM

As we increasingly rely on voice to interact with connected, smart devices, here's how designers can overcome three key challenges.

Today we're using our voice to engage with our smartphones and cars, and in the not-so-distant future, voice interfaces will extend to other areas of our lives, perhaps even to our favorite appliances as part of a more intelligent connected home. Your kitchen, for example, could become a voice-enabled control center of sorts for the entire house.

But designing for a voice interface -- and integrating voice as part of the overall device experience -- requires different thinking than designing for the keyboard, mouse, and touchscreen. As designers for Nuance, a global leader in voice and natural language technologies, we're focused on this fundamental challenge. For companies looking to incorporate voice interfaces into their products, we see three broad adoption challenges that designers should focus on overcoming: a lack of trust, discovery issues, and simple usability concerns.

These issues matter because voice and natural language understanding have become table stakes for device interactions. The technology has become highly accurate, with elements of artificial intelligence that enable intuitive, natural conversation.

[Names, ideally, tell us what an object can and can't do. See Self-Driving Cars: 10 More Realistic Names.]

Even the most sophisticated speech system in the world, however, will fail if it does not support users the way they expect. To deliver a better voice experience, we can combine fundamental design concepts with an understanding of natural conversational principles to build systems that listen, understand, and respond to get us relevant information. Here's how to overcome these three broad challenges.

Challenge #1: Lack of trust
When people talk, there is a natural cadence that leads us from start to finish within a conversation. A chat begins with input, which could be a nudge for attention ("Hey!") or a request ("Is there a coffee shop around here?"). The other party will recognize ("He said, 'Is there a ...'"), interpret this ("He's looking for a local place to get coffee ..."), and respond ("Travis Café is five minutes down the street"), based on contextual knowledge like location.
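The recognize, interpret, and respond steps above can be sketched as a minimal pipeline. This is an illustrative toy, not Nuance's implementation: the keyword matching stands in for a real speech recognizer and natural language understanding model, and the hard-coded `nearby` lookup stands in for a real knowledge source.

```python
def recognize(audio_text):
    # Stands in for speech-to-text; the input here is already text.
    return audio_text.lower().strip()

def interpret(utterance):
    # Crude keyword matching, standing in for natural language understanding.
    if "coffee" in utterance:
        return {"intent": "find_place", "category": "coffee shop"}
    return {"intent": "unknown"}

def respond(intent, context):
    # Contextual knowledge (what's near the user) shapes the answer.
    if intent["intent"] == "find_place":
        place = context["nearby"].get(intent["category"])
        if place:
            return place + " is five minutes down the street"
    return "Sorry, I can't help with that yet."

context = {"nearby": {"coffee shop": "Travis Cafe"}}
reply = respond(interpret(recognize("Is there a coffee shop around here?")), context)
print(reply)  # Travis Cafe is five minutes down the street
```

The point of separating the three stages is that each can fail independently, and a trustworthy assistant handles each failure gracefully rather than silently.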

Virtual personal assistants should offer the same, because the closer an experience follows the path of natural conversation, the more trusted and understandable it will be. In our coffee shop conversation, we also may want to continue the dialogue to get more information -- is Travis Café popular among the locals? How do I get there from here? Allowing personal assistants to understand context and navigate an extended dialogue is all part of the design process.

Ending a conversation abruptly at an unexpected point, without explanation (most often because the system lacks the resources to continue), erodes trust in a virtual personal assistant. It's OK to end conversations or refer users to other sources to continue -- even humans don't know everything. The key is to establish a framework that users can recognize and understand.

(Source: Alex Washburn of Wired, under Creative Commons license)

One of the biggest barriers to a voice system achieving trust is inconsistency. Product designers are applying voice and natural language to devices that already boast established and accepted input methods, and a key step toward trust will be voice technologies first replicating and then improving upon these established methods. When using a television, for example, pressing the "Guide" button on the remote brings up a corresponding interface. When voice is incorporated, it is vital that a spoken request for the "Guide" brings up the same interface. Once users understand these consistencies between input methods, they will develop greater trust in the system. Once designers build that trust, they can turn to offering a better experience than that handheld remote through more sophisticated natural language and reasoning capabilities.
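The consistency principle in the television example can be sketched as routing both input methods to one action registry, so a button press and a spoken request always produce the same interface. The action names and matching logic here are hypothetical placeholders.

```python
# One registry shared by every input method: consistency by construction.
ACTIONS = {"guide": "show_program_guide", "mute": "mute_audio"}

def on_button_press(button):
    # The remote sends a canonical button name.
    return ACTIONS[button]

def on_voice_command(utterance):
    # Very naive matching; a real system would use natural language
    # understanding to map phrasings like "show me the guide" to the
    # same action the button triggers.
    for keyword, action in ACTIONS.items():
        if keyword in utterance.lower():
            return action
    return "not_understood"

assert on_button_press("guide") == on_voice_command("Show me the guide")
```

Because both paths resolve through the same table, a spoken "Guide" cannot drift out of sync with the button's behavior.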

Challenge #2: Discovery
Quite simply, people need to know that they can speak to a system, and what kinds of things they can say. Basic identification of speech may be simple -- a microphone icon is straightforward and recognizable -- but guiding users around what they can say is often more challenging.

In some cases, a proactive introduction could be a useful solution. If, for example, a personal assistant is the main way users will interact with the device, the assistant might introduce itself during device setup, engaging the user through voice interactions right from the start.

So, what can I say to a device? With natural language, the power and challenge are one and the same: You can say anything.

Context is important and an integral part of a well-designed speech system. For instance, if speech is part of a pizza-ordering application, it will probably only support pizza-related conversation. For applications with broader scope, like personal assistants, the challenge is greater -- these systems need to rely on context and user insights to hold a fruitful dialogue without being confined to a narrow set of topics.
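Domain scoping like the pizza example can be sketched as an intent classifier that only claims utterances inside its domain and explicitly declines the rest. The intent names and keyword lists are hypothetical.

```python
# Hypothetical intents for a pizza-ordering app: claim only what's in scope.
PIZZA_INTENTS = {
    "order": ["order", "pizza", "delivery"],
    "menu": ["menu", "toppings", "sizes"],
}

def classify(utterance):
    words = utterance.lower().split()
    for intent, keywords in PIZZA_INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    # Out of domain: better to admit the limit (and hand off or decline)
    # than to force every utterance into a pizza-shaped answer.
    return "out_of_domain"
```

An explicit out-of-domain result is what lets the app refer users elsewhere gracefully, rather than failing abruptly and eroding trust.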

Remember also that initiating human-to-human conversation is a two-way street -- we engage in conversations we're invited to by others, not just ones that we start. And it's often in conversations that others start that we receive new, sometimes surprising and delightful, information.

Starting a dialogue means we're looking for something and expect a response, and this is largely the premise for our interactions with personal virtual assistants. However, today's assistants are becoming much more anticipatory and proactive, offering up information that we're likely interested in without having asked for it, such as sports scores, music recommendations, or an urgent email. Such proactivity can further reduce the challenge of discovering what you can talk to a personal assistant or voice system about.

With that possibility in mind, it's important to design thoughtful systems. They should apply context to provide proactive insight at the right times, such as offering traffic updates when you're heading out, not in the middle of the night when you're sleeping. If they're offering to read out news headlines, they should do it when you're getting into the car, not when you're stepping into a meeting.
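The traffic-update example above amounts to gating a proactive prompt on context before speaking up. A minimal sketch, assuming hypothetical `activity` and `hour` context fields:

```python
def should_offer_traffic_update(context):
    # Speak only when the moment is right: the user is heading out,
    # and it's not the middle of the night.
    heading_out = context.get("activity") == "leaving_home"
    awake_hours = 7 <= context.get("hour", 0) < 22
    return heading_out and awake_hours

assert should_offer_traffic_update({"activity": "leaving_home", "hour": 8})
assert not should_offer_traffic_update({"activity": "leaving_home", "hour": 3})
assert not should_offer_traffic_update({"activity": "sleeping", "hour": 8})
```

Every proactive feature can carry its own gate like this, so the assistant's initiative is filtered through the same contextual judgment a polite human would apply.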

Challenge #3: Usability
As natural language systems build this trust and become easy to discover, people will experiment with them and make requests that aren't supported. Systems must be flexible enough to account for the unknown inquiry. For instance, a person may direct their avatar in a computer game

Tim Lynch leads all design activities for Nuance Communications' Mobile-Consumer division, encompassing a range of devices, including smartphones, televisions, the connected car, wearables, and many others. His experience ranges from leading design efforts for several ...
Comments
ChrisMurphy, User Rank: Author
11/6/2014 | 5:15:44 PM
Re: Significant
You make an interesting point -- that wearables and other Internet of Things uses could drive demand for voice interfaces. With a lot of devices, the question is where you put the interface. A lot of times the answer will be an app on your phone, like a FitBit's, but voice could be the convenient choice for some.
David F. Carr, User Rank: Author
11/4/2014 | 10:21:25 AM
What's the voice equivalent of SQL injection?
Someone will eventually figure out a way to hack an application with their voice. Mr. Spock rendered automated systems helpless by barraging them with logical paradoxes or asking them to calculate the ultimate value of Pi. Your software probably won't be that dumb, but what might you have to worry about when users try to fool a voice app into doing something it wasn't designed for?
danielcawrey, User Rank: Ninja
11/3/2014 | 5:19:06 PM
Significant
These are some fairly significant issues – voice interfaces are still not up to par for replacing other methods of input – at least not yet.

However, these interfaces are improving, and I think with more wearables hitting the market we should see more ease of use with this technology as companies are forced to innovate user inputs. 