Is Synthetic Media a Poison Pill for Truth?
The dangers posed by the misuse of AI-based synthetic media, such as deepfakes used in politics or fraud, are real, and they call for better security strategies. However, there is also an upside.
You may not have heard of “synthetic media” before, but it’s likely that you have experienced it.
Synthetic media is media (like voicemail, video, or audio clips) that has been either produced or modified by artificial intelligence. Deepfakes, in which AI-generated videos, images, and audio clips impersonate real people, are a common example of how synthetic media is deployed.
Back in January, for example, New Hampshire investigated a deepfake robocall that used President Biden’s voice to discourage people from voting in the state’s primary. That is just one example among many.
Concerns Rising
To the casual observer, deepfakes can be very convincing. The technology can be used in personal, business and societal settings. For instance:
Deepfakes could be used to fake a distressed phone call from someone’s child or grandchild asking for money.
The technology can be used to impersonate company CEOs or other C-suite leaders, instructing someone in finance to transfer funds or share sensitive information. A finance worker in Hong Kong, for example, paid out $25 million at the direction of a deepfake impersonating his company’s CFO.
In broad societal settings, deepfakes can be used for political manipulation or election interference, delivering very convincing voice or video messages from candidates saying things they never said.
The technology is not perfect yet, but it is getting better; we are still in the early days of AI. For instance, the Microsoft VASA-1 project is quite frightening in terms of its potential. VASA stands for “Visual Affective Skills Animator.” Feed VASA a single headshot and a voice sample, and the system can generate realistic video, with facial expressions and lip movements that are eerily true to life. VASA is a lab project (for now), but how long will it take before it, or a similarly powerful technology, is leaked and exploited for malicious intent? Could these technologies effectively become a “poison pill” against truth?
While the potential can be frightening, it can also be enlightening.
The Positive Power of Synthetic Media
There are positive potential applications for synthetic media. For instance:
Organizations can use synthetic media to extend the reach of their training and sales staff members, creating product demos, explainer videos, webinars and other outputs in seconds.
AI chatbots and virtual assistants can use the voices of recognized staff members to deliver messages.
Synthetic media can be used to deliver company messages across geographic boundaries and language barriers by translating a presentation by the CEO into multiple languages.
As long as there is transparency behind these efforts, the power of synthetic media can potentially extend limited resources in increasingly creative ways. At the same time, there are steps organizations can take to protect their business from pernicious abuse of technologies like synthetic media.
Combatting the Dangers of Synthetic Media
The gap between rapid tech advancement and human adaptability creates fertile ground for exploitation. This gap is where cybercriminals live, continually adopting and adapting methods to weaken organizational defenses. Despite their efforts, though, there is hope. Organizations can mitigate the risks from technologies like synthetic media by fostering a robust security culture through:
Raising awareness is step one. Employees need to be trained and educated on the latest synthetic media threats and how to identify them.
Controlling access to sensitive information through stringent verification processes, which might include multi-factor authentication and pre-arranged code words or phrases (a minimal sketch of a code-phrase check follows this list).
Applying technical controls that help detect deepfakes by analyzing inconsistencies in videos or audio-visual mismatches. This could be an excellent application for AI automation (see the second sketch below).
Collaborating with industry peers and reaching out to cybersecurity experts to stay up to date on emerging threats and the best practices to combat them.
Advocating for state and federal policies and regulations that combat the misuse of synthetic media.
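To make the verification idea concrete, here is a minimal sketch, in Python, of one layer of such a process: checking a pre-arranged code phrase before acting on a high-risk request. It is illustrative only; the function names and the example phrase are hypothetical, and in practice this check would sit alongside multi-factor authentication and an out-of-band callback to a known number.

```python
import hashlib
import hmac
import os

def hash_phrase(phrase: str, salt: bytes) -> bytes:
    """Derive a salted hash so the raw code phrase is never stored."""
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)

def verify_phrase(candidate: str, salt: bytes, stored_hash: bytes) -> bool:
    """Compare in constant time to avoid leaking information via timing."""
    return hmac.compare_digest(hash_phrase(candidate, salt), stored_hash)

# Enrollment: the team agrees on a phrase and stores only its salted hash.
salt = os.urandom(16)
stored = hash_phrase("correct horse battery staple", salt)

# Verification during a suspicious "CFO" call:
print(verify_phrase("correct horse battery staple", salt, stored))  # True
print(verify_phrase("please wire the funds now", salt, stored))     # False
```

The design choice worth noting is that the phrase itself is never stored or logged, only a salted hash, so a compromised verification system does not hand attackers the phrase.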
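Similarly, here is a rough sketch of the kind of technical control described above: a heuristic that checks whether lip movement in a video tracks the energy of its audio track. A genuine recording should show some correlation; crudely dubbed audio or a poor face swap may not. The file paths are placeholders, the landmark indices follow the MediaPipe FaceMesh convention, and a real detector would combine many such signals with trained models, so treat this as an assumption-laden toy rather than a working deepfake detector.

```python
import cv2                 # pip install opencv-python
import librosa             # pip install librosa
import mediapipe as mp     # pip install mediapipe
import numpy as np

def mouth_openness_per_frame(video_path: str) -> tuple[np.ndarray, float]:
    """Measure how far the mouth is open in each frame of the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    openness = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                # Landmarks 13 and 14 are the inner upper and lower lips.
                openness.append(abs(lm[13].y - lm[14].y))
            else:
                openness.append(0.0)  # no face found in this frame
    cap.release()
    return np.array(openness), fps

def audio_energy_per_frame(audio_path: str, fps: float) -> np.ndarray:
    """Compute audio RMS energy, one value per video frame."""
    y, sr = librosa.load(audio_path, sr=None)
    hop = int(sr / fps)
    return librosa.feature.rms(y=y, hop_length=hop)[0]

def lip_sync_score(video_path: str, audio_path: str) -> float:
    """Correlate mouth movement with audio energy; low scores are suspect."""
    mouth, fps = mouth_openness_per_frame(video_path)
    energy = audio_energy_per_frame(audio_path, fps)
    n = min(len(mouth), len(energy))
    return float(np.corrcoef(mouth[:n], energy[:n])[0, 1])

# Assumes the audio track was extracted first (e.g., with ffmpeg).
print(lip_sync_score("suspect_clip.mp4", "suspect_clip.wav"))
```

A low or negative score is a flag for human review, not proof of a fake; a genuine clip with little speech will also score poorly.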
The same AI algorithms that create deepfakes can be applied to defend against them. Detection systems are becoming increasingly sophisticated and capable of identifying even very subtle signs that something is amiss in a video or audio clip. AI can also be used to help train employees to recognize and respond to deepfake threats.
Will synthetic media poison our ability to discern truth? Arguably, it already has, given the prevalence of deepfakes. The capacity for these manipulations to be exploited by malicious actors is a concerning reality. But just as businesses and governments have successfully addressed various cyber threats by building and nurturing a healthy security culture, they can do the same to combat the dangers posed by synthetic media and deepfakes.