Dealing With Deepfakes

We all know about creepy celebrity deepfakes -- even deepfakes that disseminate inaccurate political messages. But now these eerie digital specters are creeping into daily life and affecting businesses.

Richard Pallardy, Freelance Writer

February 8, 2024


At a Glance

  • A deepfake generated in 1997 by a program called Video Rewrite reanimated the lips of the people depicted in the video.
  • Researchers put producers of deepfakes into categories: hobbyists, political entities, scammers, and media corporations.
  • Algorithms can detect inconsistencies in movement and speech using authenticated videos.

We live in a deepfake world now. There’s no way around it. Digital reality is eminently susceptible to alteration and our brains are simply not tuned to reliably detect those alterations.

Some are obvious. A recent video that appeared to depict Ukrainian President Volodymyr Zelenskyy dancing in a red leotard was not entirely improbable -- the politician did win the Ukrainian “Dancing with the Stars” in 2006. But as the sparkling figure twirls around the dance floor, his facial expressions are oddly wooden.

As it turns out, Zelenskyy’s face was digitally grafted onto someone else’s body using deepfake technology. The intent, presumably, was to make the president look foolish.

While this may appear to be a digital parlor trick, an amusing sleight of hand, the implications are far more insidious. Videos of other politicians show them saying things they never said -- the British thriller series “The Capture” has already leveraged the narrative potential of that scenario.

Both celebrities and private citizens have had their faces inserted into pornographic images and videos. And individuals and businesses have been targeted by convincing videos and voices that appear to be known parties but are in fact scammers impersonating them -- sometimes resulting in the loss of thousands of dollars.


Resistance thus far has been tepid and piecemeal. Legislation has targeted specific aspects of the deepfake phenomenon but is far from comprehensive. And while some technologies have emerged to quickly analyze deepfakes, they are not widely available. Ethical guardrails are largely nonexistent outside of the fourth estate. In the meantime, deepfake technology advances largely untrammeled.

“In the 20th century, notions of progress shifted from being about social conditions to being about technology,” laments Deborah Johnson, emeritus professor of applied ethics at the University of Virginia. “We used to think progress meant people’s lives getting better. Now it means automating this or that. It’s ridiculous.”

So, what are we to do?

Here, InformationWeek investigates, with insights from Johnson; Nir Kshetri, a professor of management at the University of North Carolina at Greensboro who has written about the economic impacts of deepfakes; and Andrew Newell, chief scientific officer at biometric authentication firm iProov.

History of Deepfakes

One of the first known deepfakes was generated in 1997 by a program called Video Rewrite, which reanimated the lips of people depicted in existing footage and matched them to different audio, so that the subjects appeared to be saying something entirely different from the original recording.


The first paper on generative adversarial networks (GANs), the basis of deepfake technology, was published in 2014. GANs pit two models against one another: the generator, or actor, which produces the fake, and the discriminator, or critic, which tries to tell the fake from the real thing. Each model improves by training against the other, and GANs have rapidly become more sophisticated. A toy sketch of the idea appears below.
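To make the generator/discriminator contest concrete, here is a minimal, hypothetical sketch in PyTorch. It trains a toy GAN to mimic a one-dimensional Gaussian rather than faces -- real deepfake systems are vastly larger -- and every name in it is our own illustration, not code from any actual deepfake tool.

```python
# Toy GAN sketch: the generator learns to produce samples that the
# discriminator can no longer distinguish from "real" data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data: a Gaussian at 4.0
    fake = generator(torch.randn(64, 8))   # generator's attempt at a fake

    # The discriminator (critic) learns to label real as 1, fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # The generator (actor) learns to make the critic output 1 for fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```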

Deepfake technology became more widely known two decades later, in November 2017, when a Reddit user posted source code for creating deepfakes. He is credited with coining the word deepfake, a portmanteau of “deep learning,” referring to artificial intelligence technology, and “fake.” He began posting pornographic deepfakes depicting female public figures, and a deepfake Reddit thread exploded, garnering tens of thousands of users in a matter of months.

What had once been an obscure academic pursuit was now available to people with minimal skill sets.

This new phenomenon built on earlier modes of digital deception, Newell explains. “In the past, we were dealing with threats such as people trying to impersonate another person by using a silicone mask,” he says. “That’s changed over the last four or five years to greater emphasis on deepfakes.”



Once that post was made on Reddit, it was open season for deepfakes, and production hasn’t slowed since. There are believed to be millions of deepfakes online; the vast majority are pornographic.

“That [stat] refers to video, but deepfakes are not necessarily only videos,” Kshetri says. “They may be audio, or even text.”

Researchers have subdivided producers of deepfakes into four categories: hobbyists, political entities, scammers, and media corporations. Some are motivated by harmless entertainment or by the desire to inform, as in the case of reanimating historical figures in museum displays. More often, they are compelled by a potential monetary reward or by the desire to cause harm.

Four types of deepfake are now possible: face replacement, face reenactment (including reenactment of expressions, gaze, and movements), face generation, and speech synthesis. And they are becoming more convincing by the day.

“Five years ago, deepfakes were things that were complicated to build. You needed to be quite expert,” Newell recalls. “They were expensive. And to be honest, they didn't look very good. Now, they are extremely easy to make. There are tools that are essentially free. The level of expertise [required] is very, very low.”

The possibilities for bad actors are thus nearly limitless and they have wasted no time in taking advantage. From creating revenge pornography to impersonating people in order to bilk them out of money, deepfakes have proven enormously useful to criminals and others with more malevolent goals in mind.

Cause Célèbre

Some of the most attention-grabbing deepfakes have involved celebrities and other public figures.

In 2020, a computer science professor presented a series of deepfakes at the World Economic Forum. His installation allowed participants to swap their faces with those of actors including Leonardo DiCaprio and Will Smith. He later made a presentation highlighting the dangers of this technology, entitled “Do Not Believe What You See.”

Many celebrity deepfakes have been created mostly for the sake of amusement. Russian mystic Grigori Rasputin was made to sing Beyoncé’s song “Halo,” for example. A clip of actor Bill Hader doing impressions of Tom Cruise and Seth Rogen on “Late Show with David Letterman” went viral after someone replaced Hader’s face with those of the actors he was impersonating. A spate of Cruise deepfakes followed.

Other celebrity deepfakes have been created by media organizations and artists in order to edify the public. The New York Times posted a video of a media expert impersonating singer Adele, who then goes on to explain the dangers of deepfakes. An installation called “Spectre” featured deepfakes of such figures as Mark Zuckerberg and Kim Kardashian extolling the virtues of a fictitious data harvesting service and how it had benefited their business.

The ability to impersonate famous people has already been leveraged against seniors, according to reporting by NBC. A spokesperson for the American Association of Retired Persons (AARP) told reporters that members had been targeted by deepfakes featuring actors such as Brendan Fraser and Kevin Costner and singers including Carrie Underwood and Andrea Bocelli.

The Big Lie: Deepfakes and Politics

A 2018 deepfake of US President Barack Obama appeared to depict him making inflammatory remarks -- notably, calling US President Donald Trump a “dipshit.” Initially structured as a PSA by Obama about deepfakes, the video later reveals that it actually depicts a deepfake using actor and director Jordan Peele’s voice and Obama’s image. The video apparently employed 56 hours’ worth of recordings of Obama’s facial expressions and mannerisms to generate his likeness.

The video was essentially a warning to viewers, and its prognostications proved accurate in short order. Even the suspicion of a deepfake can provoke unrest. A 2018 video of Gabonese President Ali Bongo Ondimba, who had been in ill health, precipitated an attempted coup. Some analysts believe that the odd expressions in the video were due to a stroke, but others are convinced his features were grafted onto someone else’s face. Another video purported to show Trump denying climate change in an effort to garner support for a Belgian climate change petition.

Events like these have led leaders including Congressman Adam Schiff to call on security services to assess the threats posed by deepfake technology.

Such videos could easily precipitate international crises if a leader is depicted making false statements about another nation or threatening military action. The Chinese government is already using deepfakes to sow more generalized discord. Videos of apparently Western news anchors criticizing various policies were found to be deepfakes -- and were linked to Chinese propaganda operations.

Deepfakes also pose a significant threat to elections -- inaccurate statements purportedly made by politicians could easily influence the democratic process. In the upcoming US election, they will almost certainly be used to mock candidates: in May 2023, Donald Trump, Jr. posted a video depicting Republican candidate Ron DeSantis as Michael Scott, Steve Carell’s buffoonish character from “The Office.”

Through a Scammer Darkly: Business and Personal Deepfakes

Turning celebrities into unwilling pornstars and putting dangerous words into the mouths of politicians are disturbing enough. But perhaps more exigent is the use of deepfake technology to scam businesses and private citizens -- if only because the volume of these attacks is almost certain to be higher.

There is, predictably, some overlap here. Faux celebrities are scamming vulnerable seniors out of their money. But even the ostensibly digitally savvy have fallen victim to con artists leveraging the images of well-known people. According to reporting in 2022, Japanese manga artist Chikae Ide was conned out of some $500,000 by someone on Facebook pretending to be actor Mark Ruffalo; the scammer convinced her that he needed money for plane tickets and medical treatment. Ide later wrote a manga describing her experience.

In 2019, a California widow was scammed out of $287,928 by a person (or people) posing as two different men on a dating site. One posed as an admiral in the US Navy. The other posed as a man trying to build a hospital in Turkey, who claimed that he needed additional funds and, later, that he had been wrongfully imprisoned. The woman contacted the supposed admiral, who instructed her to wire him money. As it turned out, both men were real people whose identities had been appropriated. The woman was able to recover some of her money, and the case is under investigation.

But lonely singles aren’t the only targets. Even businesspeople have been duped by deepfakes.

In 2019, a UK energy firm was relieved of $243,000 when someone posing as a German executive at its parent company called the CEO and requested an urgent transfer. Because the apparently deepfaked voice was convincing enough to pass for the executive’s, the CEO complied without hesitation. Employees became suspicious only after follow-up calls from the same scammer.

In 2020, a much larger scam was pulled on the Hong Kong branch of a Japanese company. Someone posing as a director of the parent company persuaded staff, over the phone and by email, to transfer some $35 million from a United Arab Emirates bank, ostensibly to facilitate an acquisition.

These are essentially versions of a well-known mode of fraud: the business email compromise (BEC) scam. According to data from the Association for Financial Professionals, 71% of fraud victims among the 450 treasury practitioners polled in 2022 experienced attempted or successful BEC scams. Audio deepfakes can be generated using as little as five seconds’ worth of recorded speech, making fraud attempts that pair email with a convincing voice that much more plausible. According to New York Times reporting, VALL-E, a Microsoft program, can do it with as little as three seconds.

The Times report describes an investor who attempted to make a transfer from Bank of America, then apparently called back to redirect the money elsewhere. The second call in fact came from a scammer emulating the original caller’s voice. The piece also cites a “60 Minutes” report in which a white hat hacker convinced a reporter’s assistant to hand over her passport number using a deepfake of the reporter’s voice generated from publicly available audio.

Deepfaked voices have even been used to stage fake kidnappings, using snippets of the supposed victim’s voice to extort money from loved ones. Even more creative deepfake scams are almost certainly on the horizon.


Kshetri cites a recent report that found deepfake YouTube videos directing seniors to sites that scammed them out of their social security benefits. “The president is talking, or the commissioner of the Social Security Administration is talking. People are convinced they are getting new social security benefits.” Viewers are directed to a website where they fill in detailed personal information.

Newell suggests that harvesting faces and voices may be useful in other types of scams, such as accessing public services. “I might try and make a high-quality face that I can use either to access accounts of existing people or to create an account using a synthetic identity. That’s right at the forefront of the threat right now,” he claims.

“The technology makes it possible. And it makes it easier,” Johnson says. “That means that more and more people are likely to do it. And then you could go any number of directions.”

The Devil You Don’t Know: Protecting Yourself From Deepfakes

As with all cybersecurity threats, awareness is key. If something seems fishy, it probably is. You have little to lose if you flag the issue and much to lose if you don’t.

“The people who really aren’t aware of the existence of technology to identify whether something is fake, or people who don't really know anything about deepfakes -- they haven't even heard of it -- those are the type of people [who] are likely to be victimized,” Kshetri warns.

Simple steps like establishing a protocol for verifying an incoming call or video chat may be useful -- code words or visible hand signals, for example. And being attentive to such deepfake signifiers as distortion of facial features, misalignment of sound and mouth movements, odd movements of other body parts, and motion of objects that should be stationary can help in identifying both recorded videos and real-time deepfakes, such as those generated on video calls.

“We just need to make people more literate,” Johnson exhorts. “We need tools that help people detect what's a deep fake and what's real.”

Indeed, as deepfakes become exponentially more sophisticated, your spidey sense will likely become less and less useful. A number of technological solutions have been proposed to close the gap.

Some proposals leverage blockchain technology to certify the authenticity of images. Smart contracts attached to an image or video record each transfer, so that its provenance can be traced and tampering detected. Any platform that displays images could potentially use such records. If a given image or video cannot be traced through the contract -- which also includes provisions for authorized editors who reuse content for other purposes -- it is likely a fake. Companies such as Truepic and Serelay now offer variations of this type of service.
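The core of such provenance schemes is a cryptographic fingerprint recorded at capture time. Here is a minimal, hypothetical sketch of that idea in Python -- the dictionary stands in for a blockchain ledger, and none of these function names come from Truepic or Serelay, whose actual APIs differ.

```python
# Hash-based provenance in miniature: register a fingerprint at capture,
# verify it later. Any pixel-level tampering changes the hash.
import hashlib

ledger: dict[str, dict] = {}  # stand-in for an on-chain record

def register_capture(image_bytes: bytes, device_id: str) -> str:
    """At capture time, record a fingerprint of the image in the ledger."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    ledger[digest] = {"device": device_id}
    return digest

def verify(image_bytes: bytes) -> bool:
    """Later, check whether the image matches a registered capture."""
    return hashlib.sha256(image_bytes).hexdigest() in ledger

original = b"...raw image bytes..."
register_capture(original, device_id="camera-001")
print(verify(original))          # True: untouched image traces back
print(verify(original + b"!"))   # False: altered image has no record
```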

This is much more difficult in real time. “You have to produce a system where you know that you're dealing with a real person, and that person is in front of the camera,” Newell says. “You have to capture something using biometrics that is quite hard to fake.”

The iProov system flashes a one-time pattern of colored light across the user’s face to ensure that a real person is present before they interact with someone else online. The pattern cannot be reused; it is essentially a verification code.

“Because the sequence of colors changes every time, we can tell that we’re dealing with a real human being, at that very moment in time,” Newell claims. But these proprietary technologies are not widely available yet.
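iProov’s system is proprietary, but the underlying principle is a classic challenge-response check: the verifier issues a fresh, unpredictable challenge, and only a live capture can reflect it back. Here is a toy sketch of that principle, with entirely hypothetical names and a color palette of our own invention.

```python
# A toy sketch of challenge-response liveness. This is NOT iProov's
# implementation -- their system analyzes light actually reflected off a
# face -- only the general idea in miniature.
import secrets

PALETTE = ["red", "green", "blue", "yellow"]

def issue_challenge(length: int = 6) -> list[str]:
    """The verifier picks a fresh, unpredictable color sequence per session."""
    return [secrets.choice(PALETTE) for _ in range(length)]

def verify_response(challenge: list[str], observed: list[str]) -> bool:
    """A live face reflects the pattern issued for THIS session; a
    prerecorded video can only show a pattern from some earlier one."""
    return observed == challenge

challenge = issue_challenge()
print(verify_response(challenge, challenge))      # True: live capture
print(verify_response(challenge, ["white"] * 6))  # False: stale replay
```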

In the interim, internet users need to be cautious about posting their images and recordings on public social networks. A single image or recording can be used in the production of a deepfake.

Newell sees this as an ethical concern for digital service providers. “We don't place the onus of security on the end user,” he says. “I would not want to ask my 80-year-old father to look out for deepfakes when he is contacted by his bank.”

Deepfake producers are highly attentive to the emerging research on detection. They have, for example, corrected an easily detected problem in early deepfake videos: the rate at which the subject blinked. What were once disturbing, unblinking replications of actual people now blink just as a real person might.

Thus, the arms race proceeds. Algorithms can now compare questionable footage against authenticated videos, detecting inconsistencies in movement and speech to determine whether a video has been altered. While such artifacts may escape the naked eye, these programs pick up factors like inconsistencies between foreground and background, changes in contrast, patches of grayscale pixels inserted when a new face is superimposed on the original, alterations in the pose of the head, and shifting reflections in the eyes and teeth.
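As a flavor of how one such cue might be checked, here is a simplified, hypothetical sketch that flags frames with an unusually high fraction of grayscale pixels (red, green, and blue channels exactly equal), one of the artifacts mentioned above. Real detectors combine many cues with learned models; the 5% threshold here is an arbitrary value chosen purely for illustration.

```python
# Flagging grayscale pixel patches that some face-swap pipelines leave behind.
import numpy as np

def grayscale_pixel_ratio(frame: np.ndarray) -> float:
    """frame: H x W x 3 uint8 RGB array. Returns the fraction of pixels
    whose three color channels are exactly equal."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return float(((r == g) & (g == b)).mean())

def looks_suspicious(frame: np.ndarray, threshold: float = 0.05) -> bool:
    """Flag a frame if grayscale pixels exceed the (illustrative) threshold."""
    return grayscale_pixel_ratio(frame) > threshold

# Toy usage: a random color frame should not trip the heuristic.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(looks_suspicious(frame))  # False for typical natural color frames
```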

These detection methods will, more than likely, be circumvented in short order, though. “As long as we have this kind of open innovation, I don't see how we're going to stop it,” cautions Johnson. “I don't see how you regulate it.”

Legislative Protections Against an Unquantified Problem

Regulation of deepfakes is an ongoing concern. A scant patchwork of legislation has emerged to address some of the most visible instances of deepfake abuse. But it is clearly insufficient protection against the current risk. Most protections now rest on laws covering fraud, copyright infringement, harassment, and defamation.

“The issues around responsibility are really going to be complicated,” Johnson predicts. “But when it comes to deep fakes, it's fraud. The laws that we have already in place should be able to take care of it. The fact that it's so easy is the problem.”

Some current laws specifically address the use of deepfakes as revenge porn. Virginia, for example, updated its 2014 law against revenge porn in 2019 to cover deepfaked versions of people in pornographic scenarios. A 2019 California law similarly prohibits pornographic deepfakes, as does a 2023 New York law.


A 2019 Texas law specifically prohibits the use of deepfakes in election campaigns and a California bill that expired in 2023 did the same. Similar federal legislation has been introduced.

Additional proposed legislation aims to protect against the deepfake misuse of the likenesses of public figures more broadly, particularly in entertainment.

None of these proposed laws appear to protect against harms to businesses and private individuals. As Johnson notes, victims will need to rely on existing fraud protections. And those will likely be far from adequate in many cases. Kshetri observes that deepfake scam perpetrators are often from other countries, meaning that enforcement of laws that might penalize them is much more difficult.

So, for the time being, we must all navigate this dangerous new landscape without guardrails and hope that our own common sense and the tenuous array of tools at our disposal will be sufficient until more comprehensive solutions arrive.

About the Author

Richard Pallardy

Freelance Writer

Richard Pallardy is a freelance writer based in Chicago. He has written for such publications as Vice, Discover, Science Magazine, and the Encyclopedia Britannica.
