The Rise of Deepfakes and What They Mean for Security

While deepfakes are not new, questions remain around the practical applications of using a deepfake as an attack vector, how easy it is to perform this kind of attack, and what they mean for our security.

Nabil Hannan

January 4, 2024

4 Min Read

Deepfakes have become a prominent modern technology phenomenon, primarily because the source code and software to create them are now readily available to the public.

At the same time, recent data indicates general awareness of deepfakes continues to increase, especially as high-profile figures like Mark Zuckerberg are mimicked through the technology. However, while deepfakes are no longer new, questions remain around the practical applications of using a deepfake as an attack vector, how easy it is to perform this kind of attack, and what they mean for our security.

How Deepfakes Are Used

Deepfakes are artificial audio, video, or image creations that use known, valid data and artificial intelligence to produce a synthetic output. In the context of cybersecurity, deepfakes are categorized as social engineering attacks that can be used to breach an organization’s systems and compromise internal data. 

A common use case for deepfakes is impersonating someone to steal their identity. Audio deepfakes, specifically, have been used to gain access to personally identifiable information by impersonating the real individual. We see this use case most commonly among attackers who impersonate an individual to gain access to internal systems by changing that person's credentials.


Consider, for example, that many financial institutions rely on voice recognition to make authenticating an employee's or customer's identity more seamless. Under this scheme, if someone calls the helpdesk, provides their credentials, and their voice matches, they are authenticated and connected with a support agent. Or, if a financial institution implements a "your voice is your password" feature, attackers could use a cloned voice to gain access to customer accounts.
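To make the weakness concrete, here is a minimal sketch of why voice-only authentication fails against a convincing clone, and how a step-up factor closes the gap. All names and the similarity function are illustrative assumptions for this example, not any vendor's real speaker-verification API.

```python
# Hypothetical voice-authentication flow. A real system would use a trained
# speaker-verification model; this toy similarity function only illustrates
# the control flow.

SIMILARITY_THRESHOLD = 0.85  # assumed voiceprint match cutoff


def voiceprint_similarity(enrolled: str, sample: str) -> float:
    """Toy stand-in for a speaker-verification score in [0, 1]."""
    matches = sum(a == b for a, b in zip(enrolled, sample))
    return matches / max(len(enrolled), len(sample))


def authenticate(enrolled_print: str, caller_sample: str, otp_ok: bool) -> bool:
    """A "your voice is your password" policy would return voice_ok alone,
    so a high-quality deepfake sails through. Requiring a second factor
    makes the voice match necessary but not sufficient."""
    voice_ok = voiceprint_similarity(enrolled_print, caller_sample) >= SIMILARITY_THRESHOLD
    return voice_ok and otp_ok  # step-up: deepfaked voice without the OTP is rejected
```

With this policy, an attacker whose cloned sample scores a perfect voice match but who cannot produce the one-time passcode is still denied.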

Beyond identity theft, most deepfake use cases involve spreading misinformation and manipulating public opinion, often to pressure or extort people with money and influence. A deepfake of Ukrainian President Volodymyr Zelenskyy that showed him telling his soldiers to lay down their arms and surrender is a prime example of misinformation with dire implications.

Common Misconceptions about Deepfake Attacks

One of the biggest misconceptions about deepfake attacks is that an individual's face can simply be edited into a video of someone else doing anything the creator wants to display. In reality, to create realistic deepfake videos, an attacker needs access to pictures and videos of the person from as many angles as possible. Audio deepfakes similarly require large amounts of voice samples. This makes celebrities a good target because the data needed is easily accessible for AI to analyze as input and generate a realistic-looking deepfake. Ultimately, the quality of the input data will usually drive the quality of the output.


Another common misconception is that, because photo and video editing software such as Adobe Photoshop has existed for a long time without causing widespread damage, deepfakes will be no different. But as with any technology exploited for cyberattacks, the objective is more than simply creating an artificial replica. Attackers go where the money is, which again makes those who are financially affluent or in positions of power the primary targets of deepfakes. It's not just public figures, but business leaders as well.

How Deepfakes Impact the Security Landscape  

Similar to how phishing gained popularity, deepfakes as an attack vector will continue to evolve and become more sophisticated, especially as the use of AI skyrockets. That further complicates the security landscape.

For voice deepfakes especially, even when a communication or interaction seems unfamiliar or suspicious, human error leads many people to fall victim to the attack. Returning to the financial institution example, this means that even if the organization has security measures in place for detecting deepfakes and preventing access, those measures can be bypassed. The chances become exponentially higher if there is a lack of awareness among employees and others with access to business systems.


Moreover, with how quickly misinformation propagates over the internet through social media and messaging applications, the implications of deepfakes are far more far-reaching and consequential than they may seem. These attacks carry a sense of urgency, and, depending on the nature and accuracy of the deepfake deployed, an environment of chaos and compromise may ensue if the threat is not mitigated.

Organizations need to start planning how to proactively protect against deepfakes by incorporating the phenomenon into regular training and security testing. Security teams must now understand how to detect when an attacker is using a deepfake to impersonate an employee, vendor, partner, or customer, and determine how to protect their most sensitive business assets, namely data.

At a minimum, this requires educating and training employees on social engineering attacks. People are still an organization's weakest link, and attackers know this. Proper training, along with technical controls such as strong authentication, multi-factor authentication (MFA), and rigorous verification processes for sensitive actions, can reduce the likelihood of a successful attack. Most importantly, protecting against deepfakes requires fostering a culture where everyone takes security seriously. This allows organizations and individuals to learn and adapt to emerging attack methods intelligently, offensively, and efficiently.
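The "rigorous verification processes for sensitive actions" above can be expressed as a simple policy: any high-risk request must clear both MFA and an out-of-band check before it is approved. The sketch below is illustrative only; the action names, fields, and `approve` function are assumptions for this example, not a specific product's API.

```python
# Illustrative out-of-band verification policy for sensitive requests
# (e.g., a credential reset requested over the phone). A deepfaked voice
# call alone can satisfy neither required control, so it cannot authorize
# the action by itself.

from dataclasses import dataclass

# Assumed set of actions deemed high-risk for this sketch
SENSITIVE_ACTIONS = {"credential_reset", "wire_transfer", "access_grant"}


@dataclass
class Request:
    action: str
    requester: str
    verified_via_callback: bool  # agent called back a number already on file
    mfa_passed: bool             # requester completed an MFA challenge


def approve(req: Request) -> bool:
    """Routine requests pass through standard checks; sensitive ones
    require both MFA and an out-of-band callback verification."""
    if req.action not in SENSITIVE_ACTIONS:
        return True
    return req.mfa_passed and req.verified_via_callback
```

The design point is that the two controls fail independently: cloning a voice defeats neither the MFA challenge nor a callback to a number the attacker does not control.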

About the Author(s)

Nabil Hannan

Field CISO, NetSPI

Nabil Hannan is the Field CISO at NetSPI. He leads the company’s advisory consulting practice, focusing on helping clients solve their cyber security assessment and threat and vulnerability management needs. His background is in building and improving effective software security initiatives, with deep expertise in the financial services sector. Most notably, in his 13 years of experience in cyber security consulting, he held a position at Cigital/Synopsys Software Integrity Group, where he identified, scoped, and delivered on software security projects, including architectural risk analysis, penetration testing, secure code review, malicious code detection, vulnerability remediation, and mobile security assessments.

