Weaponized Disinformation Threatens Democratic Values

Fortifying democracy against AI-driven disinformation will involve public awareness campaigns and childhood education.

Steve Durbin, Chief Executive, Information Security Forum

June 11, 2024

4 Min Read
[Image: robot pushing a boulder labeled "misinformation" uphill. Credit: GoodIdeas via Alamy Stock]

The headline blares, "China is targeting US voters and Taiwan with AI-powered disinformation" -- and spending billions to do so. Artificial intelligence is not just a symbol of innovation but a source of digital threat, especially through the spread of misinformation and disinformation. (The two terms are often used interchangeably, but "misinformation" refers to false information shared unintentionally, while "disinformation" is false information spread deliberately.) Below are insights from my interview with Brian Lord, CEO of Protection Group International, a British firm specializing in risk management since 2013. We discussed looming AI-related security threats that could undermine democratic values, and the urgent need for proactive measures to thwart them. As we navigate these challenges, it's crucial to examine AI's multifaceted role in shaping public opinion and policy, and the vigilant, educated public required to protect democratic principles.

Changing Dynamics of Cybersecurity Threats  

The digital world is teeming with cyber threats that use AI to create and disseminate disinformation. The misuse (and abuse) of AI goes beyond manipulating facts and exploiting network and software vulnerabilities; it makes contemporary issues even more divisive. AI-fueled cyber manipulation is especially insidious because it subtly reshapes public discourse -- with the power to affect elections and policymaking -- while appearing sincere and truthful. The potency of AI-driven disinformation lies in its ability to imitate human behavior, creating images and messages that resonate deeply but distort the truth.


Societal Implications of Mis- and Disinformation  

With significant democratic elections scheduled around the world, it's crucial to examine society's responsibility for safeguarding electoral integrity. While direct attacks on voting systems can cause temporary disruption, they are rare and limited in effect, paling in comparison to the threat posed by AI-powered disinformation. The real potency of AI lies in its capacity to create and spread false narratives that sway public opinion, often exacerbating existing societal divisions. Disinformation campaigns don't emerge out of a void; rather, they amplify highly controversial topics such as immigration, where public sentiment is already polarized.

By exploiting these divisive issues, such malicious operations erode trust in the media and government, undermining democratic institutions. At bottom, the goal of these cybercriminal adversaries is to distort public perception and discourse, ultimately influencing electoral outcomes more enduringly than a wave of direct hacking attempts ever could. The damage from electoral disinformation campaigns extends beyond the creation of false narratives; it breeds wider societal discord, with far-reaching consequences.


AI-driven operations contribute to societal tensions, causing rifts that could prove difficult to mend. It is crucial for everyone with a stake in this issue -- from policymakers and technology firms to educators -- to take a deliberate stance against these attack vectors and to strengthen society's resilience to the creeping threat of digital falsehoods.

Strategies to Combat AI-Fueled Disinformation  

A multifaceted strategy is crucial for safeguarding the integrity of democratic processes. The strategies below emphasize the need for collaboration across multiple sectors to mitigate these threats and combat the spread of disinformation.

1. Educate on cyber awareness from a young age: Incorporating cyber awareness into educational curricula is not a far-fetched idea given the drastic rise in AI-driven intrusions. This proactive educational strategy should go beyond the basics of digital literacy to include the critical thinking skills needed to question the validity and biases of digital content. Foster an environment where questioning and verifying online information becomes standard practice. Training young minds to navigate the minefield of digital content will enable future generations to discern reliable information from potential falsehoods.


2. Public campaigns to boost critical thinking and media literacy: Societies should consider running public awareness campaigns focused on enhancing media literacy across all age groups. By encouraging a deeper understanding of how information is created and shared online, such campaigns help individuals assess the credibility of sources and the content they engage with. This strategy plays a vital role in building an informed electorate that can resist the influence of misleading content built on lies.

3. Collaboration among governments, tech companies, and civil society: Addressing the spread of AI-driven disinformation requires collaborative approaches to develop stronger technological solutions and effective regulatory frameworks. Partnerships involving governments, tech companies, and civil society can promote the exchange of best practices and advances in AI governance. These collaborations are vital for establishing resilient systems that not only identify and counter mis- and disinformation but also uphold freedom of speech and the dissemination of authentic information.

Reflecting on the Importance of Preserving Principles  

The complex issue of AI-driven disinformation threatens the core foundations of democratic societies. Against the dual threat of cyberattacks and the widespread dissemination of false information, education emerges as an adaptable, dynamic defense, empowering individuals with the skepticism and insight needed to navigate digital content more intelligently.

Policymakers, educators, and tech experts should prioritize investment in practical societal solutions that uphold democratic values, address current threats, and anticipate future AI risks while those risks are still nascent. Regulations should be implemented to hold platforms accountable for the content they generate. By promoting awareness and fostering collaboration, we can strengthen our democracies against the profound impact of AI-facilitated disinformation.

About the Author

Steve Durbin

Chief Executive, Information Security Forum

Steve Durbin is Chief Executive of the Information Security Forum, an independent association dedicated to investigating, clarifying, and resolving key issues in information security and risk management by developing best practice methodologies, processes, and solutions that meet the business needs of its members. ISF membership comprises the Fortune 500 and Forbes 2000. Find out more at www.securityforum.org.  

