How to Protect Your Enterprise from Deepfake Damage

As fraudulent content proliferates, it's important to build a defense and response strategy. Here's how to get started.

John Edwards, Technology Journalist & Author

June 6, 2024

6 Min Read

AI deepfakes are cheap, relatively easy to create, and ready to damage your enterprise's reputation. That's why it's important to develop a comprehensive defense and response strategy now. 

The deepfake threat is a large problem that's growing larger with easy access to AI tools and services, says Ari Lightman, professor of digital media at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, in an email interview. "Part of the problem is that it’s hard to classify the intent," he notes. "In many cases, they are deliberately designed to deceive for a political, ideological, or financial reason -- in other cases, the intent is harder to assess." 

Deepfake technology has advanced rapidly, making it increasingly difficult to distinguish between real and manipulated content, says Rob Rendell, global head of fraud market strategy and fraud prevention at financial crime compliance support provider NICE Actimize in an email interview. "This poses serious risks to various aspects of society, including politics, business, and personal reputation," he explains. "Developments in deepfakes and AI have caused a wave of misinformation and confusion, with many consumers falling victim to AI-generated phone calls." 

Technology has become democratized to the point where virtually anyone can create a passable fake with a consumer-grade computer or smartphone and an Internet connection, observes Arik Atar, senior threat intelligence researcher for security technologies provider Radware, via email. "We're rapidly approaching an era where audiovisual content is no longer inherently trustworthy." 

Multiple Threats 

Deepfakes can harm enterprises in several ways. "They can damage the reputation of the company or its executives by spreading false information or creating fake videos or audio recordings," Rendell says. "Deepfakes can also be used to impersonate employees, executives, or customers, leading to fraudulent activities or damaging interactions with intended parties." 

Rendell notes that a deepfake generally falls into one of four basic categories: 

Face swaps. Replacing one person's face with another in videos or images. 

Voice synthesis. Generating realistic speech from text, allowing for the creation of fake audio recordings. 

Contextual manipulation. Altering the context of a video or audio clip to change its meaning or implications. 

Full-body deepfakes. Creating entirely fake videos of people engaging in activities they never actually participated in. 

Facing a combination of social media posts, public polarization, and an erosion of trust, brands are now struggling to monitor their online perception while addressing misconceptions, Lightman says. "In many cases, satire through AI might be mistaken as information and result in real world consequences," he notes. Meanwhile, using deepfake AI to impersonate a brand has often succeeded in tricking employees and compromising potentially sensitive information. 

Prevention Tactics 

Preventing or quickly neutralizing deepfakes requires a multi-faceted approach, Rendell says. "This may involve implementing authentication mechanisms for verifying the authenticity of media content, educating employees and customers about the existence of deepfakes and how to recognize them, and developing advanced detection technologies to identify and mitigate the spread of deepfake content." 
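Rendell doesn't name a specific mechanism, but one common building block for verifying media authenticity is content hashing: an organization publishes digests for its official audio and video, and anything that doesn't match is treated as suspect. The sketch below is illustrative only -- the file name and hash registry are hypothetical, and production systems typically rely on signed provenance manifests (for example, C2PA-style records) rather than a hard-coded dictionary.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of digests the organization publishes for its official media.
# In practice this would be a signed, centrally managed provenance manifest.
OFFICIAL_HASHES = {
    "ceo_statement_q2.mp4": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_authentic(path: Path) -> bool:
    """Return True only if the file's digest matches the published value."""
    expected = OFFICIAL_HASHES.get(path.name)
    return expected is not None and sha256_of(path) == expected

if __name__ == "__main__":
    clip = Path("ceo_statement_q2.mp4")  # hypothetical file
    if clip.exists():
        print("verified" if is_authentic(clip) else "unverified -- treat as potentially manipulated")
```

The same idea extends to employee and customer education: staff can be trained to distrust any "executive" clip or recording that doesn't trace back to a verifiable source.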

Both manual and automated methods can help detect deepfakes by analyzing unnatural movements, visual artifacts, audio distortions, contextual inaccuracies, and other signatures, Atar says. "AI-based detection systems can identify fakes across large datasets, but it's an arms race as deepfake creators learn to overcome imperfections." He warns that some security experts now estimate that current deepfake detection methods will be unreliable within 12 to 18 months. 
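None of the experts quoted here names a specific detection tool, and real detectors rely on trained models. Purely to make the "visual artifacts" signal concrete, the toy heuristic below (assuming OpenCV and a hypothetical input file) scores frame-to-frame residuals and flags frames that deviate sharply from the clip's own baseline -- the kind of abrupt, unnatural transition that crude face swaps can introduce.

```python
import cv2  # OpenCV: pip install opencv-python
import numpy as np

def frame_residual_scores(video_path: str) -> np.ndarray:
    """Mean absolute difference between consecutive grayscale frames."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return np.array(scores)

def flag_anomalous_frames(scores: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    """Indices where the residual deviates sharply from the clip's own statistics."""
    if len(scores) < 2:
        return np.array([], dtype=int)
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return np.where(np.abs(z) > z_threshold)[0]

if __name__ == "__main__":
    scores = frame_residual_scores("suspect_clip.mp4")  # hypothetical file name
    suspicious = flag_anomalous_frames(scores)
    print(f"{len(suspicious)} frames exceed the anomaly threshold: {suspicious[:10]}")
```

As Atar's arms-race warning suggests, a heuristic like this is easily defeated by polished fakes; it illustrates the category of signal, not a dependable defense.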


Damage Containment 

Rendell says that for now and the immediate future, IT leaders can take proactive steps to mitigate the impact of deepfakes by implementing robust fraud control measures across all of their transaction channels. "This involves having layers of fraud controls in place at every stage of a transaction, from initiation to completion, and ensuring that these controls operate in real-time." 

By continuously monitoring transactions for suspicious activity, detecting anomalies, and intervening promptly when necessary, organizations can effectively mitigate their overall financial fraud risk exposure, Rendell says. "Additionally, investing in advanced technologies, such as AI-powered fraud detection systems and biometric authentication methods, can further strengthen an enterprise's ability to detect and prevent fraud attempts enabled by deepfakes." 
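To make the layered-controls idea concrete, here is a minimal sketch of two stacked checks on a single transaction: a policy rule requiring out-of-band verification for voice-initiated transfers (a common control against voice-clone social engineering) and a statistical anomaly check against the account's own history. The fields, thresholds, and rules are assumptions for illustration, not any vendor's implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    account_id: str
    amount: float
    channel: str           # e.g., "wire", "ach", "voice"
    caller_verified: bool   # did the requester pass out-of-band (callback) verification?

def layered_checks(txn: Transaction, history: list[float]) -> list[str]:
    """Return the controls this transaction trips; an empty list means it passes."""
    flags = []

    # Layer 1: policy rule -- voice-initiated requests must be verified through a
    # separate channel before funds move.
    if txn.channel == "voice" and not txn.caller_verified:
        flags.append("voice request without callback verification")

    # Layer 2: statistical anomaly -- amount far outside this account's baseline.
    if len(history) >= 5:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(txn.amount - mu) / sigma > 3:
            flags.append("amount deviates >3 sigma from account history")

    return flags

if __name__ == "__main__":
    past = [1200.0, 950.0, 1100.0, 1300.0, 1050.0]
    txn = Transaction("ACCT-42", 48000.0, "voice", caller_verified=False)
    issues = layered_checks(txn, past)
    print("hold for review:" if issues else "clear:", issues)
```

In a real deployment these checks would run at every stage of the transaction, from initiation to completion, and feed a case-management queue rather than a print statement.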

Undoing the damage caused by a deepfake attack can be challenging, says Damir J. Brescic, CISO at security technology and services provider Inversion6, via email. "Companies may need to invest in public relations efforts to rebuild their reputation, provide compensation to affected parties, and work with law enforcement to hold the attackers accountable," he explains. "It's essential to take a proactive approach to cybersecurity and invest in the necessary tools and training to prevent deepfake attacks from occurring in the first place." 

Final Thought 

The key to effective deepfake defense is acting swiftly and decisively -- like a firefighting crew rushing to contain a blaze before it spreads out of control, Atar says. The longer false information circulates unchecked, the more damage it can cause, he notes. "Organizations need a rapid response playbook ready at the first sign of smoke."

About the Author(s)

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
