The Right Balance Between People and AI for Your SOC’s Sake

When it comes to security leaders’ use of AI to augment their security operations center, it’s about carefully selecting use cases and models -- and remembering that it’s an augmentation, not a replacement, for humans.

Chaz Lever, Senior Director of Security Research

November 2, 2023


As adoption and application of AI rises rapidly, new advancements like generative AI are changing the landscape of possibility across all strata of life and work. Organizations want to remain competitive and use the latest technology, but they also need to ensure they’re doing so in a safe manner.

For security teams, in particular, these new technologies offer a lot of opportunity. But how can you ensure you’re using these technologies in a responsible way?

The Rise of AI and Its Application in Security

To say that AI adoption is exploding is an understatement. In many ways, 2023 has been a watershed year for adoption, in large part due to the availability of ChatGPT and other forms of generative AI.

Organizations are looking to quickly reap the benefits and stay ahead of the competition with these latest tech innovations. Almost 25% of C-suite executives in one McKinsey survey said they directly use generative AI tools for work, and over 25% of those surveyed from companies using AI said generative AI is already on their boards' agendas. This transforms AI from a topic confined to tech workers into a priority for company leaders. Furthermore, 40% of respondents said that because of improvements in generative AI, their companies plan to increase their investment in AI overall.


For a function like cybersecurity that is experiencing both an ongoing skills gap and a high rate of burnout, AI and machine learning (ML) offer a lot of potential. According to one survey, 71% of security operations center (SOC) professionals said they're likely to quit their job, with the top reasons being information and work overload, lack of tool integration, and alert fatigue. And the global talent gap of 3.4 million cybersecurity professionals persists. AI and ML can go a long way toward automating manual tasks, shifting tedious work off analysts and freeing them up to do more meaningful work.
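To make that concrete, here is a minimal sketch of one such automation: grouping near-duplicate alerts so analysts triage clusters rather than individual events. The alert messages and tuning values are hypothetical, and a real pipeline would need richer features.

```python
# Minimal sketch: cluster similar alert messages to reduce alert fatigue.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

alerts = [
    "Failed SSH login for root from 10.0.0.5",
    "Failed SSH login for root from 10.0.0.7",
    "Outbound connection to known C2 domain evil.example",
    "Failed SSH login for admin from 10.0.0.5",
]

# Represent each alert message as a TF-IDF vector.
vectors = TfidfVectorizer().fit_transform(alerts)

# Cluster similar alerts; cosine distance works well for sparse text vectors.
labels = DBSCAN(eps=0.7, min_samples=2, metric="cosine").fit_predict(vectors)

for label, alert in zip(labels, alerts):
    print(f"cluster {label}: {alert}")
```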

One caveat, though, is that there are ethical issues to acknowledge and address to ensure you’re using AI responsibly.

Questions of Ethics and Responsibility

The concept of responsible AI entails asking the question, "How do we work with AI in a way that makes sense while also limiting the potential downsides that can come with it?" Responsible use brings up the question of ethics, which means that training data matters a great deal. Since the models an organization uses are only as good as the data they're trained on, how do you ensure you aren't training those models on biased data?


Bias in training data can take several forms, each of which matters from an ethical perspective:

  • Data bias: Bias can originate from the training data used to build the ML system, and if it’s not representative of the real-world population or contains historical biases, the system may learn and perpetuate those biases.

  • Label bias: If the labels assigned to the training data are biased, the system will learn those biases.

  • Prejudice bias: If the training data reflects existing prejudices in society, the ML system can inadvertently perpetuate those biases in its predictions.

  • Underrepresentation bias: If certain groups or behaviors are underrepresented in the training data, the system may perform poorly on those groups in real-world scenarios (a simple check for this appears in the sketch after this list).
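As one illustration, here is a minimal sketch of checking for underrepresentation, assuming a labeled training set with a hypothetical device_type attribute. Real bias audits go far deeper than a frequency count.

```python
# Minimal sketch: flag groups whose share of the training data falls below
# a chosen threshold, a rough signal of underrepresentation bias.
from collections import Counter

training_samples = [
    {"device_type": "workstation", "label": "benign"},
    {"device_type": "workstation", "label": "malicious"},
    {"device_type": "server", "label": "benign"},
    {"device_type": "workstation", "label": "benign"},
    {"device_type": "iot", "label": "benign"},
]

MIN_SHARE = 0.25  # arbitrary threshold, for illustration only

counts = Counter(s["device_type"] for s in training_samples)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    if share < MIN_SHARE:
        print(f"'{group}' is underrepresented: {share:.0%} of training data")
```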

There are also concerns about privacy and data poisoning. With generative AI, what if you’re submitting proprietary data to those models? Once you’ve input that, it’s hard or impossible to get it back out.
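One practical mitigation is to scrub obviously sensitive values before a prompt ever leaves your environment. The sketch below is a minimal illustration; the patterns and the send_to_model() call are placeholders, not a complete data-loss-prevention solution.

```python
# Minimal sketch: redact known sensitive patterns before sending a prompt
# to an external generative AI service.
import re

REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_KEY]"),
]

def redact(text: str) -> str:
    """Replace values matching known sensitive patterns."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Summarize this log: user alice@example.com from 10.1.2.3, api_key=abc123"
safe_prompt = redact(prompt)
print(safe_prompt)
# send_to_model(safe_prompt)  # hypothetical call to the external service
```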

Meanwhile, there’s data poisoning, which is an intentional attack that tampers with training data. It’s a way for bad actors to manipulate systems for their benefit so their attacks will be more successful.

The risks posed by data poisoning include the following (a simple pre-training screen appears in the sketch after this list):

  • Data integrity and reliability: If an attacker injects malicious data points into the training dataset, the model might learn from incorrect information, leading to reduced accuracy and reliability in its predictions.

  • Privacy violations: Data poisoning attacks can also involve the injection of private or sensitive information into the training data. This can lead to privacy violations, potentially exposing private details about individuals.

  • Unintended information leakage: Poisoned training data can be engineered to make the model leak information about itself, which malicious actors can exploit to reverse-engineer the model or gain insights into its functioning.

  • Regulatory compliance: In regulated industries, data poisoning attacks can lead to violations of privacy and security regulations. Organizations may be subject to legal penalties for failing to protect their systems against such attacks.
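One narrow but useful defense is to screen new training data for outliers before it is folded into a retraining run. The sketch below assumes numeric feature vectors and uses an off-the-shelf anomaly detector; it will not catch carefully crafted poison that mimics the existing distribution.

```python
# Minimal sketch: flag suspicious samples in a new training batch before
# retraining, using previously vetted data as the reference.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(500, 4))   # previously vetted data
new_batch = np.vstack([
    rng.normal(0, 1, size=(50, 4)),         # plausible new samples
    np.full((3, 4), 8.0),                   # suspiciously extreme points
])

detector = IsolationForest(random_state=0).fit(trusted)
flags = detector.predict(new_batch)         # -1 marks likely outliers

suspect = np.where(flags == -1)[0]
print(f"{len(suspect)} of {len(new_batch)} new samples flagged for review")
```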

How to Get it Right

When it comes to security leaders’ use of AI, automation, and ML to augment their SOC, it really comes down to the use cases. Think carefully about each use case and whether AI is the right tool for it.

For one, not everything requires AI. It’s important to avoid doing AI for the sake of AI, or to “keep up with the Joneses.” Carefully consider whether the application of AI will provide real value in each scenario.

Choose your models thoughtfully. Once you’ve determined AI is a fit, evaluate what algorithms/models would best suit your particular use case. Don't just use deep learning or large language models (LLMs) because they are the latest trend. For instance, maybe an LLM isn’t the best approach for detection. Recognize the limitations of the different models.
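As a sketch of that "start simple" mindset, the example below benchmarks a lightweight baseline on a synthetic detection task before anything heavier is considered. The dataset here is a stand-in for whatever labeled telemetry you actually have.

```python
# Minimal sketch: evaluate a simple baseline before reaching for deep
# learning or an LLM on a detection problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, baseline.predict(X_test)))

# Only if this baseline falls short of requirements is a heavier model
# worth its added cost, opacity, and attack surface.
```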

Think about the threat model. Consider how your models could be attacked and build defenses to help mitigate those attacks.

Finally, don’t think of AI as a replacement for humans. At this point, there still needs to be a human in the loop. The level of autonomy we’re seeing now is about augmenting human work. If you remove the human altogether, you will run into problems. Instead, use AI to let people focus on the higher-level tasks like explainable detection and identifying patterns.
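A minimal sketch of that human-in-the-loop pattern: only clear-cut scores are handled automatically, while anything ambiguous is queued for an analyst. The alert IDs, scores, and thresholds below are stubbed for illustration.

```python
# Minimal sketch: automate only the clear-cut cases and keep humans on
# everything ambiguous.
ALERT_SCORES = {            # hypothetical alert IDs -> model P(malicious)
    "alert-001": 0.02,
    "alert-002": 0.55,
    "alert-003": 0.98,
}

AUTO_CLOSE_BELOW = 0.05      # confident benign: close automatically
AUTO_ESCALATE_ABOVE = 0.95   # confident malicious: trigger response playbook

analyst_queue = []
for alert_id, score in ALERT_SCORES.items():
    if score < AUTO_CLOSE_BELOW:
        print(f"{alert_id}: auto-closed (score {score:.2f})")
    elif score > AUTO_ESCALATE_ABOVE:
        print(f"{alert_id}: automated containment triggered (score {score:.2f})")
    else:
        analyst_queue.append(alert_id)
        print(f"{alert_id}: routed to analyst (score {score:.2f})")

print("analyst queue:", analyst_queue)
```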

AI Balance for the SOC

The rapid adoption of AI offers significant opportunities for cybersecurity. As a priority for leaders, it can alleviate the skills gap and burnout in the field. However, ethical concerns like data bias and privacy must be addressed. When using AI in security, carefully select use cases and models, and remember that it's an augmentation, not a replacement, for humans. AI can enhance cybersecurity, but its responsible and thoughtful application is paramount.

About the Author

Chaz Lever

Senior Director of Security Research, Devo

Chaz is a security researcher with over a decade of experience who is focused on security solutions leveraging big data.
