Government agencies that rely upon cloud service providers have to trust that cloud providers will protect their data or services from risk and harm. In a perfect world, cloud providers would have a complete understanding of the unique missions -- and risks -- that agencies face.
In reality, cloud service operators tend to provide a "one size fits all" approach to services that often overlooks specific or unique mission risks. As a result, government agencies must ultimately accept responsibility for ensuring that cloud providers offer the appropriate level of protection to manage risk. That responsibility also requires agencies to directly address some fundamental questions regarding risk.
According to NIST Special Publication 800-30, "Guide for Conducting Risk Assessments," risk is a function of threats that can exploit vulnerabilities that in turn can lead to an undesirable impact to the organization. There are three aspects of this equation (threats, vulnerabilities, and impact), and agencies must consider all of them to understand ongoing risk in their cloud environments.
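As a rough illustration of that relationship (a sketch only; the 1-5 scales, the multiplication, and the bucket thresholds below are invented for the example and are not NIST's prescribed method), the three factors can be combined into a single qualitative rating:

```python
# Illustrative sketch of the NIST SP 800-30 relationship: risk is a
# function of threats exploiting vulnerabilities to produce impact.
# The 1-5 scales and the scoring math are assumptions for this example.

def risk_score(threat_likelihood: int, vulnerability_severity: int,
               impact: int) -> int:
    """Combine the three factors into a single 1-125 score."""
    for factor in (threat_likelihood, vulnerability_severity, impact):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be on a 1-5 scale")
    return threat_likelihood * vulnerability_severity * impact

def risk_level(score: int) -> str:
    """Bucket the score into the qualitative levels agencies report."""
    if score >= 64:
        return "high"
    if score >= 27:
        return "moderate"
    return "low"

# A severe, easily exploited vulnerability with major mission impact:
print(risk_level(risk_score(4, 5, 4)))  # high
```

The point of the exercise is the dependency, not the arithmetic: if an agency cannot see one factor, such as the vulnerabilities in a provider's environment, it cannot compute the product at all.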
Agencies can ascertain potential threats using information provided by NIST, US-CERT, the organization's security operations center (SOC), or several other industry threat feeds. They can then determine the potential impact of these threats, using NIST's Federal Information Processing Standards Publication 199 and the guidance provided by NIST SP 800-60.
However, where and when do agencies gain an understanding of vulnerabilities in a cloud provider's offering?
In the past, agencies would conduct vulnerability scans, penetration tests, and security assessments as part of the FISMA authorization and continuous monitoring processes. The agency controlled or owned the infrastructure and could therefore perform testing at its leisure. This led to volumes of vulnerability information in the form of scanner output and assessment findings. These results were fed into the risk management equation along with impacts and threats to determine an organization's risk posture.
With cloud computing, vulnerability information can be difficult, if not impossible, to thoroughly, regularly, and accurately obtain.
The early CONOPS -- the Concept of Operations for FedRAMP (the Federal Risk and Authorization Management Program for cloud security) -- required agencies to negotiate the exchange of vulnerability information with the cloud provider. The most recent version of the CONOPS and the Continuous Monitoring and Strategy Guide require agencies to delineate between agency controls and cloud service provider controls for reporting.
While incident sharing is covered extensively, vulnerability-information sharing is barely addressed; it amounts to an annual self-attestation event with monthly scanning reports sent to the FedRAMP information system security officer. No mention is made of sharing information in near real-time with the customer agencies that will experience the negative impact of an exploited vulnerability. Additionally, vulnerability information discovered outside of assessments and scanning is not addressed whatsoever.
Vulnerability scanning and FISMA assessments are not the only way cloud providers receive vulnerability information. Security researchers are constantly testing cloud providers' capabilities.
When agencies move to a cloud provider, they often sacrifice any sense of a staged or development environment because most cloud providers are production only. Therefore, researchers and engineers are often left to test on the production environment. Sometimes they discover massive vulnerabilities, which can lead to administrative-like access to the cloud provider systems.
Ethical researchers will report the vulnerability to the cloud service provider for confirmation and remediation. Often they will provide a proof of concept as part of the submission. A question that often remains unaddressed is:
At what point does the cloud provider have an obligation to inform a customer agency of this potential vulnerability?
It is important for agency senior management to understand that when an assessment identifies a new risk, that risk did not materialize the moment the report reached their desk. Rather, it has likely existed for some time; the agency was simply unaware of it and, by default, accepting it.
This is similar to a homeowner discovering radon in a basement. While the health risk has always been there, the homeowner didn't know about it until the home was tested. In both IT and health risks, the damage may have already been done. Finding the vulnerability and quickly determining appropriate mitigation actions are musts.
Part of the risk equation is probability. This is the likelihood of the threat exploiting the vulnerability. If the likelihood is very low, the risk drops. However, the longer a vulnerability is allowed to exist, the more time and opportunity a threat has to find and exploit it.
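The effect of exposure time on likelihood can be made concrete with a simple compounding model (an assumption for illustration: it treats each day as an independent chance of discovery and exploitation, which real attacks are not):

```python
# Sketch of why an unshared vulnerability grows riskier over time.
# Assumption: an attacker has an independent per-day probability
# p_daily of finding and exploiting the open vulnerability, so the
# cumulative probability of compromise after `days` days is
# 1 - (1 - p_daily) ** days.

def cumulative_exploit_probability(p_daily: float, days: int) -> float:
    """Probability of at least one successful exploit within `days`."""
    return 1.0 - (1.0 - p_daily) ** days

# Even a modest 1% daily chance compounds sharply over the gap
# between monthly scan reports and an annual attestation:
for days in (30, 90, 365):
    pct = cumulative_exploit_probability(0.01, days) * 100
    print(f"{days:>3} days of exposure: {pct:.0f}% cumulative chance")
```

Under these assumptions the risk is far from static: the same vulnerability is markedly more dangerous after a year in the open than after a month, which is why near-real-time sharing matters more than periodic reporting.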
While the Department of Homeland Security and the General Services Administration are making progress in implementing continuous monitoring in the cloud, one area that demands immediate attention is vulnerability sharing. Should an agency be expected to wait months for public notice of a potential high-risk vulnerability when possible workarounds are available?
As written, the FedRAMP policies leave agencies in this exact situation. Until DHS and GSA can require sharing of vulnerability information in near-real-time, agencies will continue to be in the dark regarding their overall risk postures. An agency can outsource IT services but not risk or responsibility.
Agencies should ensure that sharing vulnerability information is addressed in the procurement review process and contractual agreements with cloud service providers. Additionally, organizations can take active steps to form collaborative groups of cloud service provider customers to express a common voice and common concern.
In Bruce Schneier's December 3, 2012, blog post, "Feudal Security," he describes cloud security as an experience in feudalism:
Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.
In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won't be exposed to hackers, criminals, and malware.
The Feudal Lords of Cloud have promised to protect agencies and keep them safe from harm, and it is time they be required to prove it continuously.
The authors are members of the (ISC)2 U.S. Government Advisory Board Executive Writers Bureau, which includes federal IT security experts from government and industry. The experts write anonymously through the Bureau so they can be more forthcoming with their analysis.