Fake News, Deepfakes: What Every CIO Should Know

AI advances are making fake news and deepfakes easier to produce than ever. Most organizations are not yet adequately prepared.

Lisa Morgan, Freelance Writer

May 21, 2024


Imagine attending a meeting only to discover that everyone else present was a deepfake. That’s precisely what happened to a finance worker in Hong Kong who, on a video call, believed the CFO had asked for a $25.6 million transfer; every other participant in the meeting was a deepfake. As the technology continues to improve, so will the efficacy of such attacks.

“Imagine this scenario: A ransomware attack happens at your business, and along with a demand for money comes a threat to release a convincing deepfake video of your CEO taking a hardline stance on a divisive social issue. Would you know how to respond?” says Cody Shultz, director, investigations and private client protection at global security, investigations, and compliance firm Guidepost Solutions, in an email interview. He is also a former CIA officer. “Perhaps an employee finds a ‘news story’ on YouTube about an executive’s criminal past with a video confession. Are policies in place for how to address this?”

If your company’s security policy doesn’t anticipate deepfakes, the time to fix that is now.

“Deepfakes are the next evolution in social engineering weaponry,” says Jake Williams, former US National Security Agency (NSA) hacker and faculty member at cybersecurity research and advisory firm IANS Research, in an email interview. “The idea of social engineers convincingly imitating a person's voice or image is the kind of thing that used to happen in Hollywood, but not in real life. With deepfakes, threat actors [can] deploy Hollywood-like techniques in real life. Threat actors are using deepfakes to scam organizations, often by tricking people into redirecting funds (invoice scams). [Though] these invoice scams are nothing new, deepfakes make the scam more believable.”


Why 2024 Is Such a Pivotal Year

Deepfakes and fake news are among the most anticipated threats this year, especially given that 2024 is a US presidential election year pitting a sitting president against a former president in a deeply divided nation. It’s the perfect storm for fake news and deepfakes.

Also, AI has advanced significantly since the interference seen in the 2020 presidential election, enough to make the difference between an obviously fake video and content that looks legitimate.

“Threat actors are increasingly leveraging deepfake technology to spread disinformation and propaganda, as evidenced by its prominent role during the Russia-Ukraine conflict on platforms like Twitter. With a major election looming, the proliferation of misinformation and propaganda is expected to intensify,” says Shawn Waldman, CEO and founder of Secure Cyber Defense, a cybersecurity consulting and managed detection and response services company, in an email interview. “The successful attack on a multinational company resulting in significant financial losses is a stark reminder of the effectiveness of such tactics, likely emboldening further attacks.”


Deepfakes are also becoming more believable over time, in part because attackers add natural obfuscation effects to increase believability. According to Jimmy White, CTO at generative AI security and enablement company CalypsoAI, a phone call with perfect audio quality can trigger the uncanny valley effect, causing human brains to question the validity of what they are hearing or seeing.

“Take the same voice and add some natural background sounds or signal noise and it can become more believable,” says White. “We are witnessing quality leaps akin to the advances made by Hollywood over a 30-year period in one to two years. The pace of improvement will make it almost impossible to distinguish [between] what is real versus generated in a very short time frame and the most vulnerable of our society will be at the most risk.”

What to Do

Deepfakes are a very real cybersecurity threat for which every organization should be prepared. As with other cybersecurity challenges, defending against them takes a combination of people, processes, and technology. Procuring technology is the easiest part. The harder part is getting employees to think and act the right way and ensuring that processes help prevent attacks rather than facilitate them.


“The potential harm of deepfakes is significant, and effective detection methods and legal safeguards are urgently needed to protect individuals and society from this threat,” says Shawn Loveland, chief operating officer at cyber solutions and services company Resecurity, in an email interview. “To combat the threat of deepfakes, IT and cybersecurity professionals have a crucial role. They must educate employees about risks, invest in AI and machine learning detection tools, strengthen verification processes, maintain robust cybersecurity practices, collaborate with experts, and participate in information-sharing networks.”

Employees need to be trained on the deepfake threat: what it can do, and what to do if a deepfake is suspected.

Lisa O’Connor, global leader of security research and development at Accenture, also recommends putting control processes in place around what the organization considers important.

“How are we implementing the controls around what we said was important? Do any of those controls use multiple factors, such as a phone call? How are we using that connection to get that secondary approval or that authorization, and really scrutinizing those methods so that we de-risk the method in our control framework and say, ‘No, that one's no longer good’?” says O’Connor. “That's something that companies can do right away.”
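
In a payments workflow, a control like the one O’Connor describes boils down to a gate that refuses to release a high-value transfer until confirmation arrives through a separate, pre-established channel. The sketch below is illustrative only: the threshold, function names, and callback mechanism are assumptions, not any particular product’s API.

```python
# Minimal sketch of an out-of-band secondary-approval control for
# high-value transfers. All names (APPROVAL_THRESHOLD,
# request_callback_approval) are hypothetical.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 50_000  # dollars; set by the control framework

@dataclass
class TransferRequest:
    requester: str
    amount: float
    destination_account: str

def request_callback_approval(request: TransferRequest) -> bool:
    """Placeholder for an out-of-band check: a call placed by the approver
    to a number on file, never to a number supplied in the request itself."""
    print(f"Call {request.requester} on the number of record to confirm "
          f"${request.amount:,.2f} to {request.destination_account}")
    return False  # deny until the out-of-band confirmation is recorded

def execute_transfer(request: TransferRequest) -> None:
    # The gate: nothing above the threshold moves without a second channel.
    if request.amount >= APPROVAL_THRESHOLD and not request_callback_approval(request):
        raise PermissionError("High-value transfer blocked pending out-of-band approval")
    print("Transfer released")  # hand off to the payment system here
```

The key design choice is that the confirmation channel is chosen by the approver from records on file, so an attacker who controls the request, even with a convincing deepfake voice, cannot also control the verification path.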

Guidepost’s Shultz says awareness of potential threats is a first step, such as understanding whether the company has an official social media presence and whether executives have profiles. If not, it’s important to verify that impostor or parody accounts don’t exist, because they can damage reputations by sharing deepfake photos or inflammatory posts from accounts that appear legitimate. The problem is exacerbated if those fake accounts remain unknown until an incident requires a response.
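
One lightweight way to surface impostor accounts is to fuzzy-match candidate handles against the official ones. The following sketch uses Python’s standard-library difflib; the handle names are hypothetical, and a real monitoring workflow would feed it candidates pulled from platform search results.

```python
# Minimal sketch of lookalike-handle monitoring: flag candidate social media
# accounts whose names closely resemble official ones.
from difflib import SequenceMatcher

OFFICIAL_HANDLES = {"acmecorp", "acmecorp_ceo"}  # hypothetical official accounts

def lookalike_score(candidate: str, official: str) -> float:
    """Similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, candidate.lower(), official.lower()).ratio()

def flag_impostors(candidates: list[str], threshold: float = 0.8) -> list[str]:
    flagged = []
    for handle in candidates:
        if handle.lower() in OFFICIAL_HANDLES:
            continue  # this is the real account
        if any(lookalike_score(handle, o) >= threshold for o in OFFICIAL_HANDLES):
            flagged.append(handle)
    return flagged

print(flag_impostors(["acme_corp", "acmec0rp_ceo", "unrelated_user"]))
# -> ['acme_corp', 'acmec0rp_ceo']
```

Anything flagged would then go to a human reviewer, since parody accounts and legitimate fan pages can score highly too.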

“Additionally, family offices should consider employing a code word or key phrase for phone transactions over a certain threshold. The code word must be something that would not naturally come up in a financial conversation, such as ‘dinosaur’ or ‘paraglider,’” says Shultz. “We also recommend changing the code word at least every six months.”
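
At transaction time, the code word practice Shultz describes reduces to a simple check. Here is a minimal sketch under stated assumptions: the names and threshold are hypothetical, the six-month rotation is enforced as a staleness test, and the word is stored only as a salted hash rather than in plaintext.

```python
# Minimal sketch of a code word control: phone transactions over a threshold
# require a shared code word, and the word must be rotated every six months.
from datetime import date, timedelta
import hashlib
import hmac

CODEWORD_MAX_AGE = timedelta(days=182)  # roughly six months
THRESHOLD = 10_000  # dollars; hypothetical

# Stored server-side: salted hash of the current code word plus its rotation date.
_salt = b"per-record-random-salt"
_stored_hash = hashlib.sha256(_salt + b"paraglider").hexdigest()
_rotated_on = date(2024, 3, 1)

def codeword_is_stale(today: date) -> bool:
    return today - _rotated_on > CODEWORD_MAX_AGE

def verify_transaction(amount: float, spoken_word: str, today: date) -> bool:
    if amount < THRESHOLD:
        return True  # below the threshold, no code word required
    if codeword_is_stale(today):
        raise RuntimeError("Code word overdue for rotation; refuse and escalate")
    candidate = hashlib.sha256(_salt + spoken_word.encode()).hexdigest()
    return hmac.compare_digest(candidate, _stored_hash)  # constant-time compare

print(verify_transaction(25_000, "paraglider", date(2024, 5, 21)))  # True
```
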

In addition, he recommends:

  • A digital vulnerability assessment to understand the universe of information about oneself on the internet and dark web, such as phone numbers, email addresses, vehicle information, names of children and their schools, photos from inside the home, political contributions and copies of signatures.

  • In-residence device hardening and training by a professional security firm to scan physical devices for the presence of malware, outdated firmware and more.

  • A physical security assessment to understand gaps in security and methods to mitigate or remove those gaps.

  • The creation or review of an incident response plan.

“For many companies and family offices, the answer to ‘How would you handle a ransomware attack, compromising faked photos of an executive or principal, or impersonation attempts of the C-suite?’ is ‘I don’t know,’” says Shultz. “Employees are both your best shield and your biggest risk. Make sure your employees know your expectations and their responsibilities, and conduct regular training and testing to ensure compliance.”

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
