RSA Takeaways: New Awareness Needed vs Sophisticated Cyberattacks
Bad actors are taking advantage of generative AI and social media, but defenders are also changing tactics.
![](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blt2fce4ee06269ae20/64caeb0368e3f07e693228b9/LesleyRitter_Moodys-JPRUTH.jpg?width=700&auto=webp&quality=80&disable=upscale)
Lesley Ritter, analyst with Moody's, at the RSA Conference in San Francisco. Photo: Joao-Pierre S. Ruth
Increased sophistication in attack methods and tools was a central concern at this year's RSA Conference in San Francisco. The widespread availability of resources such as generative AI brought to light new possibilities in targeting cyberattacks through social engineering and other means. Furthermore, bad actors are finding ways to quietly lift data from vulnerable networks before they commit to more obvious ransomware attacks.
While this all sounds like the digital world is riddled with threats, awareness of such malicious efforts is also increasing. What follows is a glimpse of some of the panels and keynote presentations that spoke to not only new cyber dangers but the policies and efforts being adopted to stop them.
In the “Modern Bank Heists” panel, Matt O’Neill, deputy special agent in charge for cyber with the US Secret Service, said he focuses on transnational criminal groups that are financially motivated. The panel included Ronald Green, chief security officer at Mastercard, and moderator Steve Wilson, chief product officer with Contrast Security.
Business email can be compromised in targeted ways, O’Neill said, “because of the high degree of self-disclosure that we all have on social media, places like LinkedIn, these bad actors are able to really identify the weak links in a lot of organizations.”
This can lead to email phishing attacks, O’Neill said, that use information gleaned from social media feeds to target such individuals in personalized ways and get them to let their guard down.
He also noted a trend at the C-suite level that may open up vulnerabilities: executives with enough sway within a company might skirt additional security measures, such as multifactor authentication, “because it’s a pain,” O’Neill said. That lack of complete participation in security measures could give bad actors a way in.
Raymond Umerley, field CISO with Coveware, gave a presentation, “The Devil's in the Data: Role of Data Governance in Cyber Risk Mitigation,” about data that can be at risk within organizations and how to set up guardrails to minimize some of the damage that might be done if a network is compromised.
He described a scenario where security teams might catch and contain ransomware attacks to encrypt data, celebrate their success, and only discover days later that the bad actors already captured data from the network long before the attack was noticed. “This double extortion scheme is incredibly common,” he said. “Ninety percent-plus attacks have some level of data exfiltration component.”
Such double-edged attacks occur, Umerley said, because companies that suffer ransomware attacks might refuse to pay their attackers, instead working around the encrypted data by recovering from backups or using other methods to decrypt it.
That can lead to bad actors stealing even small amounts of information from companies to pressure them into paying up for the sake of brand reputation and regulatory exposure. “Once the bad actor has your data, nobody should ever pay to get the data back,” Umerley said. “When you’re not getting the data, it’s already out there. It doesn’t absolve you of your notification obligations.”
The potential for financial loss was a key concern brought up during the panel on “Preparing for the New Era of Cybersecurity Disclosure.” There may be ways for financial modeling to help organizations understand the possible scale and magnitude of cyber attacks. “There’s this emerging approach of cyber risk quantification, which should allow us to unlock this understanding,” said Lesley Ritter, analyst and vice president with the cyber risk group at Moody’s.
Cyber governance, disclosures that are in place, and cyber insurance are all becoming more significant parts of the equation for companies not only from a technology perspective but in terms of financial impact.
John Dwyer, head of research at IBM Security X-Force, spoke one-on-one with InformationWeek about research and response in threat intelligence, and how he works both on offense and defense in cybersecurity. “We can really understand what the attackers want and when they pivot,” he said. The dwell time, the interval it takes for an organization to detect a malicious cyber event, has been shrinking from hundreds of days, Dwyer said. “That number has been going down pretty drastically since the boom of ransomware, mostly because it’s really hard not to detect that you’ve been ‘ransomwared’ because your systems don’t work anymore.”
Ransomware attacks are happening faster, putting pressure on defenders to detect and respond to attacks before hackers can complete their objectives, he said. “Everyone’s day-to-day life is so connected, whenever there’s a cyber attack it could cause real world implications,” Dwyer said. “The stakes have never been higher and the speed in which you need to detect and respond has never been faster.”
In addition to ransomware, which drives a lot of the criminal ecosystem, he said phishing rings have evolved from stealing credit card data to stealing credentials and PII [personally identifiable information] data that could be sold. “The whole world has shifted to disruptive and destructive attacks,” Dwyer said. “Everything has shifted away from financial crimes in terms of credit card data or fraud.” There has also been some shift from ransomware to data extortion, he said. “That is something that we probably should be prepared for.”
How government addresses privacy, particularly among national security entities, can be a point of concern for individuals and organizations. Karen Dunkley, senior associate privacy and civil liberties officer in the CIA Office of Privacy and Civil Liberties, said during the panel on “Integrating Privacy Protections into National Security Systems” that she wanted attendees to know there are evolving national efforts and policies to safeguard privacy.
“The federal government is protecting privacy in PII,” she said. That includes updated guidelines on security categorization and control for National Security Systems. “We definitely want stakeholders to pull in that person with privacy expertise before making privacy decisions,” she said.
With the SEC taking on a new role in cybersecurity, rules are expected to be proposed in the coming months. This is likely to include a requirement that publicly traded companies report cyber incidents within four days. Public companies would also be required, under the rule, to have a cybersecurity expert on their respective boards. What those changes could mean for organizations was the focus of the “Cybersecurity in the Boardroom - The Impact of the Latest SEC Proposal” panel. “Every organization will have to determine what materiality impact means to their bottom line,” said Erica Wilson, vice president of global cybersecurity and privacy risk management with Reinsurance Group of America. “There’s going to be a requirement probably to develop a process and a standard way for determining what that means and will have to be reported.”
So if there is a cyber incident that includes a material amount of money spent on legal fees, litigation, or other material effects, she said the regulation might require that to be reported for the sake of concern to shareholders.
Heather Mahalik, senior director of digital intelligence with Cellebrite, presented a sobering, cautionary warning about ignorance during the panel on “The Five Most Dangerous New Attack Techniques.” She spoke about how generative AI might be used as part of social engineering attacks to get targets, including children, to reveal information.
“You could use it to write a nice email to your wife; you could use it to write a tone-safe email to a coworker,” she said. “What I’m worried about is ignorance, and how it can be used against us, against you, against our organizations.”
Mahalik is also the digital forensics and incident response (DFIR) curriculum lead with SANS Institute. She tested her nine-year-old son's awareness with phishing attempts, using ChatGPT to help write text messages meant to convince him he was being contacted by another kid. While he was able to spot the bogus messages, she said generative AI still raised security concerns.
“Do you know who is actually talking to these people in your workplaces, in your households?” she asked. “What if it’s someone using AI to be someone different?”
The other attack techniques discussed during the panel:
- Search engine optimization, where bad actors game the system to get their malicious links to appear at the top of search results. This might bypass security proxies because users search for and click on links they believe will lead to the content they need -- but actually feed them malware.
- Malicious advertising, where adversaries buy ads to get their malicious content at the top of search engine results.
- Developer tools and tool extensions that might be malicious, giving bad actors access to development environments through the developers themselves.
- AI and machine learning used for writing malware and finding unknown, zero-day vulnerabilities in code before security patches can be introduced to protect software.