The never-ending threat to corporate and personal security and assets is reaching new levels with emerging social engineering schemes, often utilizing AI tools.

Perry Carpenter, Chief Evangelist and Security Officer

February 15, 2024

5 Min Read

Imagine a world where a co-worker asking you to click a video could be a trap, where a familiar voice on the phone could be a digital illusion, and where a text from a trusted friend might be a fabricated persona.

Welcome to social engineering in 2024: one of the most formidable threats, exploiting human frailties such as our tendency to act impulsively under stress. E-mail phishing, voice-based vishing, and text-based smishing are well-known social engineering ploys. Yet fraud has evolved; scammers have upped their game, deploying deceptive practices that blend AI-fueled tactics with social manipulation.

Understanding social engineering risk factors and how they manipulate human behavior is the first step to shielding against threats that often fly under the radar.

1. Business e-mail compromise

Business e-mail compromise (BEC), a scheme that often takes the form of CEO fraud, accounted for $2.7 billion in losses across 21,832 complaints made in a single year, according to the FBI's Internet Crime Complaint Center (IC3).

BEC is a sophisticated scam that tricks an individual (usually in accounting) into transferring funds to an attacker's account. This is typically achieved by impersonating a high-ranking company official, such as the CEO, and making a fraudulent request for a wire transfer. The power of BEC lies in its exploitation of trust and authority. Attackers meticulously research their targets, often using social media and corporate websites to gather background information. The e-mail used in the scam closely mimics a legitimate one; attackers often hijack actual e-mail accounts through phishing or credential stuffing.


In a striking 2022 incident, a multinational corporation fell victim to a BEC scam. The fraudster, masquerading as the company's CEO, targeted a junior finance officer via e-mail. The message demanded an urgent wire transfer for what was claimed to be a sensitive acquisition deal. The convincingly crafted e-mail prompted the officer to bypass standard verification processes, leading to the unauthorized transfer of $1.2 million to an offshore account.

2. Pretexting/impersonations

Pretexting is a form of social engineering in which attackers fabricate scenarios to obtain sensitive information under false pretenses. This tactic often involves impersonating authority figures, such as law enforcement or company executives, or posing as technical support personnel. The success of pretexting relies heavily on the attacker's ability to appear convincing and authoritative, manipulating the victim into divulging confidential information.


The techniques used in pretexting are diverse and can range from simple phone calls to elaborate schemes involving multiple actors and props. For instance, an attacker might call an employee posing as an IT staffer, claiming there is an issue with the company's network that requires immediate password verification. This scenario played out at MGM Resorts last year, shutting down Las Vegas casino operations and making national news.

3. Deepfake phishing

In deepfake phishing attacks, scammers use manipulated audio, video, or text to impersonate individuals or entities. For instance, they might create a video of a CEO issuing urgent instructions for fund transfers or confidential data sharing. Deepfakes can be delivered via e-mail, social media, or direct messaging platforms, leveraging the perceived authenticity to trick victims into compliance.

One notable incident of deepfake phishing was reported in 2023, involving a deepfake video in which a CEO appeared to instruct the finance team to initiate a money transfer to a vendor. Sent via a compromised e-mail account, the video appeared convincing enough to bypass initial scrutiny. Believing the request to be legitimate, the finance team transferred $243,000 to the scammer's account.


4. The long game

The “long-game” fraud in social engineering is a methodical strategy in which attackers gradually build trust with their target over an extended period. These long-game tactics involve patience and persistent communication, often spanning months or even years. To initiate a rapport and sustain interactions, attackers typically use AI-generated profiles (sock puppet accounts) that appear credible, complete with backstories and social media footprints. These profiles engage with the target on shared interests or professional matters, slowly ingratiating themselves. An example of the long-game ploy, which began with spear-phishing, was cited in a CISA advisory involving the Russian threat actor Star Blizzard.

5. AI-persona manipulation

A recent trend in social engineering exploits the increasing reliance on automation and AI in everyday tasks. AI-persona manipulation involves creating AI-generated personas that interact with targets through automated systems such as chatbots or virtual assistants. Tools abound for creating these AI personas, which marketers use to better understand target audiences. Programmed to mimic human conversational patterns and behaviors, these AI personas can appear genuine and trustworthy. In the hands of threat actors, they become insidious: the personas integrate seamlessly into platforms where potential victims have lowered their guard, making them a sophisticated social engineering ploy.

Mitigation Strategies and Best Practices

Organizations must adopt a multifaceted cybersecurity approach to combat advanced social engineering techniques. This includes implementing endpoint threat detection and following basic security protocols, such as multi-factor authentication and long, unique passwords, defenses fundamental to creating barriers against cyberattacks.

But technology alone is not enough. Employee awareness and training form the backbone of an effective defense strategy. Regular training programs that simulate social engineering threats help employees learn to recognize bogus phishing attempts and reveal which employees are most susceptible. Businesses that conduct continuous cybersecurity awareness training can greatly reduce the risk of falling prey to social engineering scams.

Cyber threats constantly evolve, so defenses need to evolve accordingly. This includes testing disaster response plans and conducting regular security audits to ensure patches are up-to-date and effective against known threats, such as those reported by CISA.

Preparing for advanced social engineering tactics like BEC, pretexting, deepfake phishing, and AI-persona manipulation presents unique challenges because each exploits basic human vulnerabilities. The key to defense lies in cybersecurity best practices, continuous employee education, and a culture of vigilance. By staying informed, organizations can significantly reduce the risk of falling prey to sophisticated social engineering attacks and safeguard their data, finances, and reputation.

About the Author(s)

Perry Carpenter

Chief Evangelist and Security Officer, KnowBe4

Perry Carpenter is co-author of “The Security Culture Playbook: An Executive Guide to Reducing Risk and Developing Your Human Defense Layer” (Wiley, 2022), his second Wiley book on the subject. He is chief evangelist and security officer for KnowBe4, a provider of security awareness training and simulated phishing platforms used by more than 65,000 organizations around the globe.
