AI-Built Deepfake Videos Did Not Alter the Election

Things could change by the next election, putting truth and democracy at grave risk.

Guest Commentary

December 7, 2018


There is no credible evidence so far that deepfake videos produced using artificial intelligence technologies had an impact on the 2018 US midterm elections. However, the potential for deepfakes to affect future elections is growing rapidly, and the consequences, should that happen, could be severe. Deepfakes are videos doctored to alter reality, “showing” events that never occurred or depicting a speech that was never given.

Developers are using deep learning technology – hence the term deepfake – to identify the facial movements of a targeted person and then render highly realistic, computer-generated fake videos with real-looking lip movement, facial expression and background that can accompany any piece of authentic or manufactured audio. The most recent techniques produce fake videos that are nearly indistinguishable in quality from the source materials. These videos are not yet perfect, and thus not entirely convincing, but the technology is improving rapidly and will soon be able to fool even an expert eye and ear.

Deepfakes are developed using generative adversarial networks (GANs), which pit two neural networks against each other. The first network, the generator, learns the patterns in a collection of digital media and produces fake content; the second, the discriminator, tries to determine whether a given video is real or fake. Using the discriminator’s feedback, the generator steadily improves the quality and believability of the deepfake video. A Wired article notes there are free and readily accessible open source tools for developing deepfakes that require no programming skills.
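To make the adversarial feedback loop concrete, here is a minimal sketch of GAN training in PyTorch (the framework is an assumption; the article names no specific tools). Simple 1-D vectors stand in for video frames so the example stays self-contained; real deepfake systems apply the same loop to face imagery.

```python
# Minimal GAN sketch: generator vs. discriminator (illustrative only).
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
# Discriminator: outputs a logit for "real" vs. "fake".
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)   # placeholder for real media
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator step: learn to label real as 1, fake as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: use the discriminator's feedback to look "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Detaching the fake batch during the discriminator step keeps the two networks in genuine competition: each update touches only that network’s own weights, which is the feedback loop that drives deepfake quality upward.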

There are positive applications for the underlying technologies, such as audio dubbing for films. For example, if a film is made in English, it would now be much easier to alter an actor’s lip movements to match audio in German or French for those markets. However, these fakes mean it is now possible to portray someone – say, the leader of a country – saying pretty much anything the video creator wants, in whatever setting they desire.

The implications of deepfakes are staggering. The line between what is real and what is not is blurring. Many of the early deepfakes are pornographic, with developers replacing the faces of porn stars with those of celebrities, or even the person next door. Beyond that, fake videos could falsely depict an innocent person participating in criminal activity, show soldiers committing atrocities, or show world leaders declaring war on another country, possibly triggering a very real military response. An example can be seen in this report about a deepfake of President Obama, developed to portray the inherent dangers.

Because people tend to lend substantial credence to what they see and hear, deepfakes could soon become a very real danger. Just as with “fake news,” if fake videos with extreme agendas become common on social platforms and websites, people may start questioning real videos. As stated in a Wall Street Journal story about the impact of deepfakes, seeing isn’t believing anymore.

An instructive current example is the recent controversy surrounding a video of CNN reporter Jim Acosta tweeted by White House press secretary Sarah Sanders. According to a report from the Washington Post, the video appears to have been altered to make Acosta’s actions at a news conference look more aggressive toward a White House intern, and parts of the audio were stripped out. The White House used the doctored video as part of its justification for suspending Acosta’s press credentials. While technically not an AI-generated deepfake, the video, which sped up the movement of Acosta’s arms in a way that dramatically changed the journalist’s response, was deceptively edited to score political points. The report said the incident shows how video content – long seen as an unassailable verification tool for truth and confirmation – has become as vulnerable to political distortion as anything else.

Widespread use of deepfakes has society-wide ramifications. It is of genuine concern that, in a digital environment saturated with fake videos, people may lose the ability to discern what is real and what is not, what is truth and what is fake, and reality will lose its meaning. This leads to any number of concerns, not the least of which is the question of whether people are capable of living in a world where there is no credible “truth.” This could easily undercut our basis for rational decision making. The New York Times commented that we find ourselves on the cusp of a new world, one in which it will be impossible, literally, to tell what is real from what is invented.

The threat is growing, and fortunately so is awareness among decision makers. This is evidenced by a recent bipartisan letter from US congressional leaders to Daniel Coats, the Director of National Intelligence. In part, the letter said: “Deepfakes could become a potent tool for hostile powers seeking to spread misinformation.” The issue is also receiving attention in a white paper from the Senate Intelligence Committee, which notes deepfakes are “poised to usher in an unprecedented wave of false and defamatory content.” Even though deepfakes have had little influence on elections in the US or elsewhere to date, it is only a matter of time before they do. In a report issued in the summer of 2018 by the Center for a New American Security, the authors note that deepfakes “are likely less than five years away from being able to fool the untrained ear and eye.”

Researchers are at work developing approaches, also using AI, that could identify these fakes. Among them is the Pentagon’s Defense Advanced Research Projects Agency (DARPA), which has started the Media Forensics (MediFor) program to identify deepfakes and other deceptive imagery. Recent advances pinpoint eye blinking as a weakness in fake video development. Associate Professor of Computer Science Siwei Lyu notes that people typically blink every two to ten seconds, but that is not what happens in many deepfake videos. Startup companies are working on additional means of identifying and defeating deepfakes.
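The blink heuristic can be illustrated with a short sketch (the function names and thresholds below are illustrative assumptions, not Lyu’s published method). Given one eye-openness score per video frame – for example, an eye aspect ratio computed from facial landmarks – it counts blinks and flags clips whose average blink interval falls outside the typical two-to-ten-second range.

```python
# Illustrative blink-rate check for a video clip (assumed thresholds).

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from eyes-open to eyes-closed as blinks."""
    blinks, eyes_closed = 0, False
    for score in eye_openness:
        if score < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif score >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30.0):
    """Flag a clip whose average blink interval is atypical for humans."""
    seconds = len(eye_openness) / fps
    blinks = count_blinks(eye_openness)
    if blinks == 0:
        return True  # no blinking at all is the classic deepfake tell
    interval = seconds / blinks
    return not (2.0 <= interval <= 10.0)  # typical human range

# Example: a 30-second clip whose subject never blinks gets flagged.
print(looks_suspicious([0.35] * (30 * 30)))  # True
```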

While software could soon be available to detect weaknesses in these videos, it is very likely that developers will then improve their deepfake techniques. Much as cybersecurity attackers and the defenders trying to thwart them leapfrog one another, deepfake creators and detectors will do the same. At that point, people will increasingly be on their own to discern fact from fiction.

Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.

