New Secure AI Development Rules are Historic, But Do They Matter?

18 nations have signed on to the new non-binding 'Guidelines for Secure AI System Development,' but, without enforcement, will they make an impact?

Carrie Pallardy, Contributing Reporter

December 1, 2023

At a Glance

  • UK's National Cyber Security Centre and US's Cybersecurity and Infrastructure Security Agency led effort to draft guidelines.
  • Some say guidelines lack teeth and specificity.
  • However, guidelines are another act of global cooperation on AI security that may send a message to the AI industry.

The development and adoption of artificial intelligence (AI) are booming while regulators across the world work to understand this technology and its potential dangers. The European Union is an early mover with the EU AI Act. President Joe Biden released a broad executive order calling for standards to ensure the safety and security of AI systems. Standards bodies are releasing frameworks addressing AI. The National Institute of Standards and Technology (NIST), for example, launched the AI Risk Management Framework. Now, a new set of guidelines for AI developers to consider has come onto the scene.

The UK’s National Cyber Security Centre (NCSC) and the US’s Cybersecurity and Infrastructure Security Agency (CISA) led the development of the Guidelines for Secure AI System Development. Along with the US and UK, 16 additional countries signed on to the non-binding guidelines.

What do these latest guidelines bring to the table as safety and security concerns continue to mount alongside the proliferation of AI’s exciting possibilities?

The Guidelines

These new guidelines are primarily directed at AI system providers, whether those systems are built from scratch or on top of services provided by others. But the document’s authors urge all stakeholders to read the guidelines to help them make informed decisions about how AI is used in their organizations.

The document is separated into four sections: secure design, secure development, secure deployment, and secure operation and maintenance.

The first section of the guidelines focuses on measuring risks and threat modeling when AI systems are in the design phase of their lifecycle.

The secure development guidelines urge AI system providers to consider supply chain security, documentation, and technical debt.

In the deployment phase of an AI system’s lifecycle, the guidelines call for proactive protection against model compromise, the development of incident management processes and responsible release.

The secure operation and maintenance guidelines focus on safety and security once a system has been deployed. The document calls for AI system providers to monitor their systems’ behavior and inputs and to prioritize secure-by-design principles for any system updates. Finally, the guidelines call for system providers to embrace transparency and information sharing on system security.

The Potential Impact

These guidelines are another indication of a willingness to collaborate globally on AI safety, security, and risk. They send a message to the AI industry that these concerns are not going to go away. “Right now, we know that a lot of people are doing things without any real sense of accountability, and they're doing things [without] any real sense of ethics,” says Davi Ottenheimer, vice president of trust and digital ethics at Inrupt, a data infrastructure solutions company.

So, what do these guidelines mean for AI system developers? First and foremost, these guidelines are just that: guidelines. They are not enforceable. “You're going to have organizations that are focused on speed to market and getting new features out there and capabilities and driving revenue and things like that. Appeasing investors,” says Chris Hughes, chief security advisor at Endor Labs, an open source dependency lifecycle management platform, and Cyber Innovation Fellow at CISA. “That's going to be contrasted against the demands for security and rigor and governance.”

But Hughes points out that there are benefits to following these guidelines even if they are not mandated. “I think if large AI developers integrate these recommendations and these practices that it talks about throughout the entire software development lifecycle, it can mitigate a lot of things that we're hearing concerns about whether it's biases or poisoning model data or essentially tampering with the integrity of a model or the system,” he explains.

While these guidelines are targeted at AI system developers, they could have downstream benefits for end users if implemented. End users also have a responsibility to familiarize themselves with the risks of using AI systems, points out Randy Lariar, practice director of big data and analytics at cybersecurity advisory services and solutions company Optiv. “As users of AI, I think it's important to focus on the outcomes, focus on the best efforts to try to align to security frameworks and ultimately choose use cases and activities…that have been vetted from the risk perspective,” he says.

While these guidelines are a useful foundation for thinking about risk, they are fairly high-level and lack specific, tactical recommendations, according to Hughes. Work remains to be done “to translate and bring these guidelines into alignment with the reality of the technology environment, the policy environment,” says Lariar.

What Comes Next for AI?

Adherence to these guidelines is voluntary, and there are a multitude of other frameworks for AI safety, security, and risk. The enterprises building and providing AI systems are challenged to navigate a morass of recommendations and regulations. “We don't know yet which one is going to emerge as the most useful and most widely adopted,” says Lariar. “Decide for your organization with your risk environment, your regulatory environment the things that you need to protect, how you're going to engage with these different frameworks and what you're going to do.”

As regulators consider how to proceed, they will be challenged to strike a delicate balance: overseeing the AI industry without unduly impeding progress. Ottenheimer points out that industry self-regulation will likely play a role going forward. “It works in areas where the industry knows how to do what's best for the industry that also benefits people who are affected by it,” he says.

Ottenheimer offers the Payment Card Industry Data Security Standard (PCI DSS) as an example. Credit card companies knew consumers would not use their products if fraud was rampant, which led to the development of PCI DSS for credit card security.

The cloud space could also give an idea of how security could evolve in the AI space. “When cloud took off there weren't a lot of security considerations,” says Lariar. “Over time, the cloud vendors themselves as well as an ecosystem of partners have emerged to ensure your cloud is configured securely, that you're doing detection response in a mature way and that you're really securing the whole life cycle of cloud development and use.”

Industry and regulators could potentially work together to create specific standards and bake security into AI, but there are more risks with which to grapple. “The problem itself is fiendishly hard because how do you regulate something which is universal?” asks Martin Rand, cofounder and CEO of Pactum AI, an autonomous negotiations company. “Anyone who has played around with ChatGPT sees that it can be used for anything: for science or writing a poem.”

Rand argues that practical guidelines addressing AI risks like synthetic content generation, misinformation, and the propagation of harmful biases have yet to emerge. What about the existential risks of AI? “What will happen if an AI system [suddenly starts] making itself smarter and there's an explosion of intelligence, which leaves people behind?” Rand asks.

Putting regulations in place to address those risks on a global scale is an enormous challenge. Different countries have competing interests, making a joint regulatory body an unlikely solution, according to Rand. Instead, he suggests the development of an international scientific research body that focuses on the existential threats of AI.

The new guidelines demonstrate an awareness of risk, but they are just an early step toward addressing the big questions of AI safety and security. “I think it shows that there's still a lot of work to be done,” says Ottenheimer.

About the Author(s)

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
