Potential societal benefits of artificial intelligence outweigh concerns, although people want some transparency.
From tutors and travel agents to financial advisers and office assistants, the rise of artificial intelligence (AI) raises the concern that machines will replace many jobs and daily roles commonly held by humans, while leaving personal data open to privacy and security harms. Despite this, only 23% of consumers believe that AI will have serious negative implications for society, according to PwC’s latest Consumer Intelligence Series research.
In a world where more than four billion records of personal information were stolen or lost during 2016 and data breaches at large corporations dominate news headlines, privacy has become a hot-button issue for any new technology, including AI. Although consumers remain concerned about protecting their privacy and the vulnerability of their personal information, most are more interested in the potential for positive societal impact.
When asked about the importance of AI being used to solve today’s bigger issues for the benefit of our society, consumers told us that they would be willing to share their personal information if it meant doing so could further medical breakthroughs (57%), relieve city traffic and improve infrastructure (62%), solve cybersecurity and privacy issues (68%), and help solve personal financial security and fraud (61%).
That said, our survey found that consumers would be more comfortable if they could control when, and for how long, AI platforms retain the data they provide. They remain open to advances in the technology yet vigilant about maintaining personal privacy.
The growing “risk versus reward” conversation surrounding AI goes beyond the consumer. Companies are also feeling the pressure to address privacy, particularly as more automated ways of tracking and collecting data are being developed.
Creating a sound accountability framework is therefore imperative to bolster public confidence in the development and application of AI. Going forward, continued public acceptance of AI and automated decision-making will be conditioned on the development and application of fair and equitable algorithms. In short, transparency is key. AI developers need to be able to show both a process and a product that instill trust in consumers; further, consumers need to see what is being done with their data, and they need a meaningful opportunity to weigh in.
We’re only at the forefront of AI and algorithmic machine learning, and with AI expected to contribute over $15 trillion to the global economy by 2030, there is an unequivocal need to balance this burgeoning technological innovation with consumer privacy protection. Even for those most concerned with privacy, AI might be their greatest hope: consumers in the survey identified cybersecurity and privacy as the most important issue AI could help solve today, and this application is already being realized.
Jocelyn Aqua, PwC
Jocelyn Aqua is privacy and cybersecurity principal at PricewaterhouseCoopers.