AI Manipulation Threatens the Bonds of Our Digital World

AI-generated deepfakes pose significant risks to election integrity, public trust, and global institutions, as seen with recent high-profile cases involving political figures and international organizations.

Christophe Van de Weyer, CEO, Telesign

October 25, 2024


Artificial intelligence manipulation is no longer a theoretical threat. It’s here. Steps are being taken to protect people and institutions from fraudulent, AI-generated content. However, more can be done proactively to preserve trust in our digital ecosystem. 

Deepfakes Seek to Disrupt Free and Fair Elections 

In August, Elon Musk shared a deepfake video of Vice President Kamala Harris on X. He wrote, “This is amazing,” with a crying-laughing emoji. His post received more than 100 million views and plenty of criticism. Musk called it satire. Pundits, however, condemned it as a violation of X’s own synthetic and manipulated media policy. Others sounded alarms about AI’s potential to disrupt free and fair elections or called for a stronger national response to stop the spread of deepfakes. 

2024 is a consequential election year, with nearly half of the world’s population heading to the polls. Moody’s warned that AI-generated deepfake political content could contribute to election integrity threats -- a sentiment shared by voters globally, with 72% fearing that AI content will undermine upcoming elections, according to The 2024 Telesign Trust Index.  

The risk of AI manipulation cuts across all spheres of society.  


Stoking Fear and Doubt in Global Institutions  

In June, Microsoft reported that a network of Russia-affiliated groups was running malign influence campaigns against France, the International Olympic Committee (IOC), and the Paris Games. Microsoft attributed a deepfake of Tom Cruise criticizing the IOC to a well-known, Kremlin-linked organization. It also blamed the group for a highly convincing deepfake news report designed to stoke terrorism fears.  

It’s important to remember that this isn’t the first time bad actors have sought to manipulate perceptions of global institutions. It’s even more important to distinguish the real problem from the red herring.  

The real problem is not that generative AI has made it cheap and easy to create believable fake content. It is the lack of adequate protections to stop that content’s proliferation, which has, in turn, effectively democratized the ability to mislead, disrupt, or corrupt -- convincingly -- on a massive, global scale.  

Even You Could Be Responsible for Scaling a Deepfake 

One way deepfakes proliferate is through fake accounts; another is through what we in the cybersecurity world call account takeovers. 

On January 9, 2024, a hacker took control of a social media account owned by the Securities and Exchange Commission (SEC) and quickly posted false regulatory information about bitcoin exchange-traded funds, causing bitcoin prices to spike. 


Now, imagine a different -- yet not far-fetched -- hypothetical: A bad actor takes over the official account of a trusted national journalist. Fraudsters can do this relatively easily if the right authentication measures are not in place. Once inside, they could post a misleading deepfake of a candidate a few days before polls open, or of a CEO just before a major news announcement.  
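To make “the right authentication measures” concrete, here is a minimal Python sketch of risk-based step-up authentication. Everything in it -- the signal names, the weights, the 0.5 threshold -- is a hypothetical illustration, not any vendor’s actual product or API.

    # Illustrative sketch of risk-based step-up authentication.
    # All signals, weights, and thresholds here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class LoginAttempt:
        username: str
        new_device: bool             # device fingerprint never seen before
        new_location: bool           # IP geolocation far from usual activity
        recent_password_reset: bool  # resets often precede takeovers

    def risk_score(attempt: LoginAttempt) -> float:
        """Combine simple signals into a 0.0-1.0 risk score."""
        score = 0.0
        if attempt.new_device:
            score += 0.4
        if attempt.new_location:
            score += 0.3
        if attempt.recent_password_reset:
            score += 0.3
        return min(score, 1.0)

    def login(attempt: LoginAttempt, password_ok: bool,
              second_factor_ok: bool = False) -> bool:
        """Grant access only with risk-appropriate proof of identity."""
        if not password_ok:
            return False
        if risk_score(attempt) >= 0.5:
            # Step up: a stolen password alone must not be enough.
            # Require a one-time code sent to the verified phone number.
            return second_factor_ok
        return True

    # A takeover attempt from a new device and location fails without
    # the second factor, even with the correct (stolen) password.
    risky = LoginAttempt("newsdesk", new_device=True,
                         new_location=True, recent_password_reset=False)
    assert login(risky, password_ok=True) is False
    assert login(risky, password_ok=True, second_factor_ok=True) is True

The point of the design is that a phished or leaked password stops being sufficient the moment the surrounding signals look unusual.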

Because the deepfake came from a legitimate account, it could spread and gain a level of credibility that could change minds, impact an election, or move financial markets. Once the false information is out, it’s hard to get that genie back in the bottle. 

How Do We Stop the Spread of AI Manipulation? 

Important work is being done in the public and private sectors to protect people and institutions from these threats. The Federal Communications Commission (FCC), for instance, banned the use of AI-generated voices in robocalls and proposed a disclosure rule for AI-generated content used in political ads.  

Large technology firms are also making strides. Meta and Google are working to quickly identify, label, and remove fraudulent AI-generated content, and Microsoft is working to make deepfakes harder to create in the first place.   


But the stakes are too high to sit idle while we wait for a comprehensive national or global solution. And why wait? Three crucial steps are available now, yet vastly underutilized: 

  1. Social media companies need better onboarding to prevent fake accounts. With around 1.3 billion fake accounts across various platforms, more robust authentication is needed. Requiring both a phone number and an email address, and using technologies that analyze risk signals, can improve fraud detection and ensure safer user experiences (a minimal sketch of such a verification gate follows this list).  

  2. AI and machine learning can be deployed in the fight against AI-powered fraud. Seventy-three percent of people globally agree that if AI were used to combat election-related cyberattacks and to identify and remove election misinformation, they would have greater trust in the election outcome.  

  3. Finally, there must be more public education so that the average citizen better understands the risks. Cybersecurity Awareness Month, observed each October in the United States, is an example of the kind of public/private cooperation needed to raise awareness of the importance of cybersecurity. A greater focus on building security-conscious workplace cultures is also needed: a recent CybSafe report found that 38% of employees admit to sharing sensitive information without their employer’s knowledge, and 23% skip security awareness training because they believe they “already know enough.” 
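As a rough illustration of the onboarding gate in the first step, the Python sketch below activates an account only after both a phone number and an email address are verified. The class and helper names are invented for this example; a real system would deliver the codes out of band and feed telephony and email-reputation risk signals into the same decision.

    # Illustrative sketch: activate an account only after BOTH the
    # phone number and the email address are verified. Hypothetical code.
    import secrets

    def issue_code() -> str:
        """Generate a random six-digit one-time code."""
        return f"{secrets.randbelow(10**6):06d}"

    class Onboarding:
        def __init__(self, email: str, phone: str):
            self.email, self.phone = email, phone
            # In production, these would be delivered out of band:
            # one code by email, one by SMS to the supplied number.
            self.codes = {"email": issue_code(), "phone": issue_code()}
            self.verified = {"email": False, "phone": False}

        def confirm(self, channel: str, code: str) -> bool:
            """Mark a channel verified if the user echoes its code back."""
            if secrets.compare_digest(code, self.codes[channel]):
                self.verified[channel] = True
            return self.verified[channel]

        def can_activate(self) -> bool:
            # The account goes live only when both channels are proven.
            return all(self.verified.values())

    signup = Onboarding("reporter@example.com", "+15555550100")
    signup.confirm("email", signup.codes["email"])
    assert signup.can_activate() is False  # phone still unverified
    signup.confirm("phone", signup.codes["phone"])
    assert signup.can_activate() is True

Requiring two independent, verified channels raises the cost of bulk fake-account creation, which is exactly the economics the first step aims to change.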

Trust is a precious resource, and it deserves better protection in our digital world. An ounce of prevention is worth a pound of cure. It’s time we all take our medicine. Otherwise, we risk the health of our digital infrastructure and our faith in our democracy, economy, institutions, and one another.

About the Author

Christophe Van de Weyer

CEO, Telesign

Christophe Van de Weyer is the CEO of Telesign, the leading provider of customer identity and engagement solutions that help to build trust and reduce fraud in the digital economy. 
