Algorithm Predicts Relationship Success
It's not just what you say but how you say it that can indicate whether a romantic relationship is destined to last, and data provides the clues.
“He loves me. He loves me not.” Flower-petal predictions have a 50 percent accuracy rate. Marriage therapists have a somewhat better rate of accuracy, but a computer algorithm beats most of them with nearly 79 percent accuracy. What puts the odds in its favor is measuring tone of voice in couples' interactions.
A research team led by Shrikanth Narayanan and Panayiotis Georgiou of the University of Southern California Viterbi School of Engineering, along with Brian Baucom of the University of Utah and USC doctoral student Md Nasir, recorded numerous conversations from the marriage therapy sessions of more than 100 couples over two years and followed up on the state of each marriage five years later. They published their findings in the Proceedings of Interspeech on September 6, 2015, under the title “Still Together?: The Role of Acoustic Features in Predicting Marital Outcome.”
As the abstract says, the results demonstrate “that acoustic features can predict marital outcome more accurately than those based on behavioral descriptors provided by human experts.” But it's not just about the sound; it's about the sound in context: “that the impact of the behavior of one interlocutor on the other is more important than the behavior itself looked in isolation.”
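To make that context idea concrete, here is a minimal sketch of how one might pair each speaker turn's acoustic features with the partner's immediately preceding turn, so a model sees behavior in relation to what it responds to rather than in isolation. This is not the researchers' code; the speaker labels and feature names are illustrative placeholders.

```python
# Illustrative sketch: attach the partner's preceding-turn features to each
# turn, so a downstream model can weigh behavior in context. The structure
# and feature names are assumptions, not the study's actual pipeline.

def contextual_features(turns):
    """turns: list of (speaker, feature_dict) in conversation order.
    Returns one combined feature dict per turn that has a preceding
    partner turn to pair with."""
    examples = []
    prev_by_speaker = {}
    for speaker, feats in turns:
        partner = "wife" if speaker == "husband" else "husband"
        partner_prev = prev_by_speaker.get(partner)
        if partner_prev is not None:
            example = {f"self_{k}": v for k, v in feats.items()}
            example.update({f"partner_{k}": v for k, v in partner_prev.items()})
            examples.append(example)
        prev_by_speaker[speaker] = feats
    return examples

turns = [
    ("husband", {"pitch_mean": 120.0, "energy": 0.42}),
    ("wife",    {"pitch_mean": 210.0, "energy": 0.61}),
    ("husband", {"pitch_mean": 135.0, "energy": 0.70}),
]
print(contextual_features(turns))  # two context-paired examples
```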
These researchers are not the first to discover that predictive accuracy depends on looking at the interaction rather than at each spouse individually. It is something John Gottman (see {doclink 251889}) discovered long ago. His accuracy actually tops the algorithm's: it's consistently above 85 percent and reportedly has even topped 93 percent in some studies.
I did reach out to the researchers to ask whether they were aware of Gottman's methods but never got an answer. I'd have to surmise that his approach was not represented in the “expert-created behavioral codes” that proved less accurate than their direct study of acoustics. That direct study includes “acoustic features characterizing speech prosody (pitch and energy)” and what they call “voice quality (jitter, shimmer).” The idea is that such factors register a speaker's emotions more accurately than the words themselves, and so they are regarded as a more objective measure of the state of a couple's interactions.
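The paper doesn't publish its extraction code, but here is a rough approximation of the kinds of features it names, using the librosa audio library: pitch and energy for prosody, plus simple frame-level stand-ins for jitter and shimmer (proper voice-quality measurement would typically use Praat-style cycle-by-cycle analysis).

```python
# Rough, illustrative feature extraction: pitch and energy (prosody) plus
# crude jitter/shimmer proxies (voice quality). Not the study's pipeline.
import numpy as np
import librosa

def acoustic_features(path):
    y, sr = librosa.load(path, sr=16000)

    # Prosody: fundamental-frequency track (pYIN) and frame-level energy.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    rms = librosa.feature.rms(y=y)[0]

    f0_voiced = f0[~np.isnan(f0)]      # keep voiced frames only
    periods = 1.0 / f0_voiced          # approximate glottal period per frame

    # Jitter proxy: normalized frame-to-frame period variation.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    # Shimmer proxy: normalized frame-to-frame amplitude variation.
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)

    return {
        "pitch_mean": float(np.mean(f0_voiced)),
        "pitch_std": float(np.std(f0_voiced)),
        "energy_mean": float(np.mean(rms)),
        "jitter": float(jitter),
        "shimmer": float(shimmer),
    }
```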
In the press release on the USC study, Georgiou stressed the importance of drawing on a large enough sample for the data analytics. “Looking at one instance of a couple's behavior limits our observational power,” he said. “However, looking at multiple points in time and looking at both the individuals and the dynamics of the dyad can help identify trajectories of their relationship.” In all, the study drew on 139 outcomes that ranged from “deteriorated” to “recovered.”
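The prediction step itself is conceptually simple: aggregate each couple's per-session features into one vector and cross-validate a classifier against the follow-up outcome. Here is a hedged sketch with scikit-learn and synthetic stand-in data; the model choice is an assumption, not the paper's reported setup, which achieved its nearly 79 percent accuracy with its own methods.

```python
# Hedged sketch of outcome prediction: one aggregated feature vector per
# couple, cross-validated against the follow-up label. Data and model
# choice are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_couples, n_features = 100, 10                 # stand-in dimensions
X = rng.normal(size=(n_couples, n_features))    # aggregated acoustic features
y = rng.integers(0, 2, size=n_couples)          # 1 = still together at follow-up

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```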
While the algorithm may prove useful for therapists, it has great potential for assessing all kinds of relationships, a possibility the researchers raise in their conclusion. They also point to more aspects of communication to include in future research, including “an assessment of the visual (e.g., head-movement and other face and body gestures).” That may be a bit more complicated to track, but with advances in software that processes visual information, it may be quite viable in the near future.
Even working with acoustics alone, this kind of algorithm could help determine whether a business team will prove viable. Perhaps some managers should look into that possibility to better understand what sort of interaction indicates successful team dynamics and what raises a red flag.