Artificial intelligence is getting smarter, steadily narrowing the gap between machines and humans. But are some technological developments in AI in danger of becoming too human?
This may sound strange coming from someone whose mission is to make technology more human and conversational. Don’t get me wrong, I’m all for companies using AI in innovative ways. But when humans and machines can perform many of the same functions, we run into a few problems.
Most people haven’t realized that AI has already blended seamlessly into their lives. Only 33% of consumers think they use technology with AI, but in reality, 77% are using an AI-powered service or device. With the touch of a button or a voice command, virtual assistants can play music, make calls, and even order toilet paper when we’re running low.
Using AI like this is mostly unproblematic. That’s because until recently, humans initiated interactions with computers, and not vice versa. But with new advancements in communication, this has started to change. As a result, the line between humans and machines is beginning to blur.
Can computers learn the art of conversation?
Google recently unveiled Duplex, a virtual assistant that can carry out “real world” tasks over the phone. It can perform functions like scheduling a dentist appointment or making a dinner reservation. These are tasks that typically require human interaction on both ends.
Not anymore. Duplex’s AI voice sounds so natural that the person taking the call could be unaware they’re chatting with a machine. It’s an exciting technological leap for sure, but it also creates a bit of a dilemma.
On one hand, it’s easy to think of the benefits, things like assisting those with hearing impairments or communication barriers. But in some situations, mistaking an AI’s voice for a person’s could cause serious issues.
Imagine a scenario where fake AI calls clog up emergency phone lines, or an AI phishing scam that calls people pretending to be from a bank to get their card details. It would be a Duplex-powered disaster.
While it’s inspiring to imagine how the future could look as AI tech improves, it’s also important to think about the ethical dilemmas and risks that pop up along the way.
Human tech is trending tech
Duplex’s uncanny human sound and use of language certainly carry a shock factor the first time you hear it. But it’s part of a wider tech trend, where products are being designed to behave more like humans than machines. The future of technology now aims to observe how people interact without the help of machines, then mimic these human traits.
The consequence? Our relationship with technology is rapidly changing. So far, we’ve been comfortable with machines acting like humans because we’re able to spot the difference. But soon that might change, too.
So, as AI advances and becomes more emotionally aware, companies have a responsibility to come up with ethical guidelines to keep technology in check. And these can’t just be limited to protecting tech consumers, as the influence of technological leaps spreads far beyond early adopters.
This year, we saw new rules related to technology go into effect with GDPR. The EU regulation has completely restructured how companies think about transparency and accountability. It’s also changed the way millions of people interact with the Internet on a day-to-day basis.
As our relationship with technology continues to evolve, companies themselves need to start developing similar rules to govern more human-machine interactions. Because one day, the line between humans and machines may completely disappear.
David Okuniev is chief executive officer and co-founder of Typeform.