to choose a weapon by saying, "The cheapest one!" However, the system needs to know which is the cheapest within a menu of reasonable choices, versus only understanding a command to select a rock.
Further, systems should be designed to provide responses to known-unknown scenarios through a conversational dialogue, such as, "I'm sorry, I can't create a playlist for you just yet," and then explain what elements are missing.
But we can't anticipate everything. When we communicate with other people, we sometimes misunderstand the other party, don't know how to help them, or simply can't hear them -- yet we typically resolve those issues within the conversation itself. Voice experience design should work the same way: the system should identify what went wrong so users know how to re-engage, or better yet, redirect the conversation so it can still complete the task.
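The recovery behavior described above can be sketched as a single dialogue turn. This is an illustrative example, not any real assistant's API; the function and intent names are assumptions for demonstration.

```python
def handle_turn(utterance, known_intents):
    """Return a response that either completes the task or explains
    what went wrong so the user knows how to re-engage."""
    if not utterance:
        # Nothing was heard: say so and invite the user to retry.
        return "Sorry, I didn't catch that. Could you repeat it?"
    intent = next((i for i in known_intents if i in utterance.lower()), None)
    if intent is None:
        # Known-unknown: admit the limitation, then redirect the
        # conversation toward tasks the system can complete.
        return ("I'm not able to do that yet, but I can play music "
                "or set a timer. Which would you like?")
    return f"Okay, doing '{intent}' now."
```

The key design choice is that every failure branch tells the user what happened and what to do next, rather than failing silently.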
People often bring their experiences with previous speech systems -- including ones that require specific voice commands. A virtual personal assistant may be listening for "Hey, can you put on some jazz for me?" but instead the user might say, "Play ... music ... jazz." A true natural language system shouldn't prescribe what "natural" means, and should support "simple" requests as well as full dialogues. Natural conversation is in the eye, or mind, of the beholder.
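One minimal way to support both terse and conversational phrasings is to map them to the same intent via keyword spotting. The sketch below is an assumption for illustration, not a real natural language model; the genre list and function name are invented.

```python
import re

GENRES = {"jazz", "rock", "classical"}

def parse_music_request(utterance):
    """Map both 'Hey, can you put on some jazz for me?' and
    'Play ... music ... jazz' to the same (intent, genre) pair."""
    text = utterance.lower()
    words = set(re.findall(r"[a-z]+", text))
    # Accept either a command-style keyword or conversational phrasing.
    wants_music = bool(words & {"play", "music"}) or "put on" in text
    genre = next((g for g in GENRES if g in words), None)
    if wants_music and genre:
        return ("play_music", genre)
    return None
```

A production system would use a trained language-understanding model, but the principle is the same: the system, not the user, absorbs the variation in phrasing.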
As we think about what else makes for a successful speech interaction, we need to keep in mind the two tracks of conversational feedback. In face-to-face conversations, people subconsciously follow a content track and a management track. The content track carries the ideas themselves -- recognizing, interpreting, and responding. The management track is where we monitor the other party -- can they hear us, are they attentive, are they confused? If anything on this management track goes awry, we can solicit feedback or change the dialogue to get back to the content.
People look for the same feedback from virtual personal assistants and voice systems. It should be very obvious when the system is available and attentive, and when it is listening, processing, understanding, and responding. If people aren't sure when a system is listening, for example, they won't know when to talk, resulting in partial speech being captured and misrecognitions -- and of course, frustration.
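These user-visible states -- available, listening, processing, responding -- can be modeled as a small state machine in which every state has an explicit cue and only legal transitions are allowed. The state names, cues, and transition table below are illustrative assumptions, not any vendor's design.

```python
from enum import Enum

class VoiceState(Enum):
    IDLE = "idle"
    LISTENING = "listening"
    PROCESSING = "processing"
    RESPONDING = "responding"

# Every state maps to an explicit cue (light, tone, on-screen text)
# so the user always knows whether it is their turn to talk.
CUES = {
    VoiceState.IDLE: "dim light: not listening",
    VoiceState.LISTENING: "bright light and chime: talk now",
    VoiceState.PROCESSING: "spinner: thinking",
    VoiceState.RESPONDING: "speech output: answering",
}

# Legal transitions only. Jumping straight from IDLE to PROCESSING,
# for example, would mean speech was captured without the user knowing.
TRANSITIONS = {
    VoiceState.IDLE: {VoiceState.LISTENING},
    VoiceState.LISTENING: {VoiceState.PROCESSING, VoiceState.IDLE},
    VoiceState.PROCESSING: {VoiceState.RESPONDING, VoiceState.IDLE},
    VoiceState.RESPONDING: {VoiceState.IDLE},
}

def advance(current, target):
    """Move to the next state, rejecting transitions that would skip
    a user-visible cue."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

Making the transitions explicit is what prevents the "was it even listening?" confusion the paragraph above describes: partial speech capture corresponds to an illegal state jump.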
When we interact with other people, we respond to the physical and visual cues of our partners -- "Take that small orange piece, and place it next to the wheel here." We communicate with our bodies, our hands, our expressions, and our words.
This is where designing voice interfaces with context is key. People increasingly expect devices and their virtual personal assistants to have a basic understanding of us and the world -- to know where we are, what we are doing or just did -- and to surface that knowledge through responses. We expect existing modalities -- touch, mouse, gesture, and others -- to coexist with speech. Browsing and selecting a photo may be easier via touch, but texting it may be easier through speech. Speech should not be viewed as the solution, but should work with and keep up with other input methods for a holistic conversational experience.
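Architecturally, this means speech and touch should feed the same action layer and share state, so a photo selected by touch can then be sent by voice. The sketch below is a hypothetical dispatcher; the event shapes and function names are invented for illustration.

```python
def handle_event(event, state):
    """Route a modality-tagged event to a shared action layer.
    `state` carries context (e.g. the selected photo) across modalities."""
    modality, payload = event
    if modality == "touch" and payload.get("tap") is not None:
        # Touch is the easier modality for browsing and selecting.
        state["selected_photo"] = payload["tap"]
        return f"selected photo {payload['tap']}"
    if modality == "speech" and "text" in payload.get("command", ""):
        # Speech acts on the context the touch interaction established.
        if state.get("selected_photo") is None:
            return "No photo selected yet."
        return f"texting photo {state['selected_photo']}"
    return "unhandled"
```

Because both modalities read and write the same state, neither has to be "the" interface; each handles the step it is best at.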