Is Synthetic Media a Poison Pill for Truth?

The dangers presented by misuse of AI-based synthetic media for exploits such as deepfakes in politics or fraud are real, and they call for better security strategies. However, there also is an upside.

Perry Carpenter, Chief Evangelist and Security Officer, KnowBe4

August 1, 2024

[Image: skull made up of pills. Credit: Mopic via Alamy Stock]

You may not have heard of “synthetic media” before, but it’s likely that you have experienced it.  

Synthetic media is media (like voicemail messages, videos, or audio clips) that has been produced or modified by artificial intelligence. Deepfakes are the most familiar example: AI-generated videos, images, and audio clips created to impersonate real people.  

Back in January 2024, for example, New Hampshire officials investigated a deepfake robocall that used President Biden's voice to discourage people from voting in the state's primary. That is just one example among many. 

Concerns Rising 

To the casual observer, deepfakes can be very convincing. The technology can be used in personal, business and societal settings. For instance: 

  • Deepfakes could be used to fake a phone call from someone's child or grandchild who is in distress and asking for money. 

  • The technology can be used to impersonate company CEOs or other C-suite leaders, instructing someone in finance to transfer funds or share sensitive information. In one widely reported case, a finance worker in Hong Kong transferred $25 million after taking instructions from a deepfake impersonation of his company's CFO. 

  • In broad societal settings, deepfakes can be used for political manipulation or election interference, delivering very convincing voice or video messages from candidates saying things they never said.  


The technology is not perfect -- yet -- but it is getting better, and we are still in the early days of AI. For instance, Microsoft's VASA-1 research project is quite frightening in terms of its potential. VASA stands for "Visual Affective Skills Animator." Feed VASA a single headshot and a voice sample, and the system can generate realistic video, with facial expressions and lip movements that are eerily true to life. VASA is a lab project (for now), but how long will it take before it -- or a similarly powerful technology -- is leaked and exploited with malicious intent? Could these technologies effectively become a "poison pill" against truth? 

While the potential can be frightening, it can also be enlightening.  

The Positive Power of Synthetic Media 

There are positive potential applications for synthetic media. For instance: 

  • Organizations can use synthetic media to extend the reach of their training and sales staff members, creating product demos, explainer videos, webinars and other outputs in seconds.  

  • AI chatbots and virtual assistants can use the voices of recognized staff members to deliver messages.  

  • Synthetic media can be used to deliver company messages across geographic boundaries and language barriers by translating a presentation by the CEO into multiple languages.  


As long as there is transparency behind these efforts, synthetic media can extend limited resources in increasingly creative ways. At the same time, there are steps organizations can take to protect their businesses from its pernicious abuse. 

Combatting the Dangers of Synthetic Media 

The gap between rapid tech advancement and human adaptability creates fertile ground for exploitation. This gap is where cybercriminals live, continually adopting and adapting methods to weaken organizational defenses. Despite their efforts, though, there is hope. Organizations can mitigate the risks from technologies like synthetic media by fostering a robust security culture through: 

  • Raising awareness. Awareness is step one: employees need to be trained on the latest synthetic media threats and how to identify them. 

  • Controlling access to sensitive information through stringent verification processes, which might include multi-factor authentication and the use of code words or phrases. 

  • Applying technical controls that help detect deepfakes by analyzing inconsistencies in videos or audio-visual mismatches. This is an excellent application for AI automation (a minimal sketch of how such a screening step might be wired together appears after this list). 

  • Collaborating with industry peers and cybersecurity experts to stay up to date on emerging threats -- and the best practices to combat them. 

  • Advocating for state and federal policies and regulations that combat the misuse of synthetic media.  
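
To make the technical-controls point concrete, here is a minimal sketch of how a frame-level screening pass might be wired into an upload or review workflow. The deepfake classifier itself is a stated assumption -- a hypothetical object exposing a predict_proba(frame) method that returns the probability a frame is synthetic -- and a real deployment would use a vetted detection model or vendor service. OpenCV is used only to pull frames from the video.

```python
# Hypothetical sketch: sample frames from a video and flag it for human review
# if a (placeholder) deepfake classifier scores many frames as likely synthetic.

import cv2  # pip install opencv-python


def sample_frames(video_path: str, every_n: int = 30):
    """Yield every Nth frame of the video for lightweight screening."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame
        index += 1
    capture.release()


def screen_video(video_path: str, classifier, frame_threshold: float = 0.8,
                 flag_ratio: float = 0.1) -> bool:
    """Return True if the video should be escalated for human review.

    `classifier` is an assumed placeholder that exposes
    predict_proba(frame) -> float, the estimated probability that a
    single frame is synthetic.
    """
    suspicious, total = 0, 0
    for frame in sample_frames(video_path):
        total += 1
        if classifier.predict_proba(frame) >= frame_threshold:
            suspicious += 1
    # Escalate rather than auto-block: detection models produce false positives.
    return total > 0 and (suspicious / total) >= flag_ratio
```

The design choice is deliberate: the script only escalates suspicious content to a person rather than blocking it outright, reflecting the broader point that technical controls work best as an aid to a security-aware workforce, not a replacement for it.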


The same AI algorithms that create deepfakes can be applied to defending against them. Detection systems are becoming increasingly sophisticated, capable of identifying even very subtle signs that something is amiss in a video or audio clip. AI can also be used to help train employees to recognize and respond to deepfake threats. 

Will synthetic media poison our ability to discern truth? Arguably, it already has, given the prevalence of deepfakes. The capacity for these manipulations to be exploited by malicious actors is a concerning reality. But just as businesses and governments have successfully addressed various cyber threats by building and nurturing a healthy security culture, they can do the same to combat the dangers posed by synthetic media and deepfakes. 

About the Author

Perry Carpenter

Chief Evangelist and Security Officer, KnowBe4

Perry Carpenter is co-author of "The Security Culture Playbook: An Executive Guide to Reducing Risk and Developing Your Human Defense Layer" (Wiley, 2022), his second Wiley book on the subject. He is chief evangelist and security officer for KnowBe4, provider of security awareness training and simulated phishing platforms used by more than 65,000 organizations and 60 million users worldwide. 
