NASA is known for big science, both on Earth and in the farthest reaches of the solar system, and all of its research requires a lot of computer processing. Like many federal government agencies, NASA and its laboratories have tapped cloud computing and virtualization to cut costs and improve efficiency.
NASA's Jet Propulsion Laboratory relies on the cloud to help process and manage the massive amounts of data collected by its various science programs, including the Mars rovers. But in setting up and launching those cloud capabilities, JPL's IT staff had to meet a number of compliance, governance, and risk management requirements.
JPL, managed by the California Institute of Technology (Caltech) in Pasadena, Calif., selected Amazon Web Services in 2009 because it wanted the flexibility to scale up its infrastructure to meet high-demand computing needs and then scale down or recommit resources to other projects on short notice, says Jonathan Chiang, JPL's chief IT engineer. Previously, NASA would buy data storage and processing capacity up front for the requirements of an entire mission. So if a planned scientific satellite was forecast to require 100 TB of storage, "we would buy 100 TB today, and that storage would go minimally utilized for years," Chiang says.
But to use the AWS cloud to do this rapid research and analysis work, JPL had to develop a governance model that met NASA compliance requirements such as Federal Information Security Management Act rules and National Institute of Standards and Technology (NIST) guidelines. Auditability was another requirement; JPL is audited both by NASA's Office of the Inspector General and by Caltech.
That governance and compliance work quickly proved its value when the initial cloud deployment ran into problems with account credentials. When JPL launched its cloud services, it issued a number of accounts to its in-house developers and project managers, but it quickly became evident that this model was flawed.
Specifically, NASA administrators realized that they had to apply security credentials and auditability to each account. What had initially been about 60 accounts ballooned into more than 250 sublevel accounts, none of which could be traced back to a JPL user, Chiang says.
These subaccounts were created when account holders gave access privileges to colleagues. But the use charges for the subaccounts all came out of JPL's funds, Chiang says. To stop this activity, JPL's IT staff had to contact all of the initial account users and gain full administrative access to their accounts, he says.
After the IT group realized that there were multiple root accounts, it had to design and implement a governance model for managing the accounts and apply that model to the existing network infrastructure and cloud deployment, says JPL cybersecurity engineer Matt Derenski.
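The core of that governance problem is traceability: every account must resolve to a named, accountable JPL user. A minimal sketch of such a check is below; the record fields (`account_id`, `owner`) are illustrative assumptions, not details of JPL's actual model.

```python
# Hypothetical account-governance check: flag any cloud account that
# cannot be traced back to a named owner. Field names are assumptions
# for illustration, not JPL's real schema.

def find_untraceable(accounts):
    """Return the IDs of accounts with no resolvable owner."""
    return [a["account_id"] for a in accounts if not a.get("owner")]

accounts = [
    {"account_id": "jpl-dev-001", "owner": "jdoe"},
    {"account_id": "sub-114", "owner": None},  # subaccount, owner lost
    {"account_id": "sub-207"},                 # no owner recorded at all
]
print(find_untraceable(accounts))  # -> ['sub-114', 'sub-207']
```

In practice a check like this would run against the cloud provider's account inventory on a schedule, with flagged accounts escalated to the IT staff for remediation, much as JPL had to do manually.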
The highly automated AWS cloud system helped JPL's IT staff sort out the accounts issue and then meet other governance and compliance requirements. Chiang notes that JPL IT staffers sometimes have difficulty tracking usage and application data on the laboratory's internal network, but the Amazon cloud offers complete visibility into the number of active accounts and which servers they're running on.
JPL maintains an IT security database that tracks all running instances across its physical and virtual servers. Previously, staff had to manually enter CPU makes, models, and memory statistics into the system. That function is now handled by an AWS API, which provides up-to-date instance information that is fed into the IT security database, Chiang says. Using AWS cloud services was a double-edged sword, he says: it initially created a problem, but it also quickly provided JPL with a means to solve it -- an API for automated servicing.
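The article doesn't name the specific API JPL uses, but AWS does expose running-instance metadata through the EC2 `DescribeInstances` call (available via the `boto3` library). A sketch of how such a response could be flattened into rows for an inventory or security database follows; the chosen row fields and the database-update step are assumptions for illustration.

```python
# Sketch: flatten an EC2 DescribeInstances response into per-instance
# rows suitable for loading into an inventory/security database.

def flatten_instances(response):
    """Extract one row per instance from a DescribeInstances response."""
    rows = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            rows.append({
                "instance_id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
            })
    return rows

# Against a live account (credentials required), usage would look like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   rows = flatten_instances(ec2.describe_instances())

# Example with a response of the shape DescribeInstances returns:
sample = {"Reservations": [{"Instances": [
    {"InstanceId": "i-0abc123", "InstanceType": "m5.large",
     "State": {"Name": "running"}},
]}]}
print(flatten_instances(sample))
```

Polling an API like this on a schedule replaces the manual data entry the article describes: the inventory stays current without anyone transcribing CPU and memory specs by hand.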
Read the rest of the story in the new issue of InformationWeek Government Tech Digest (free registration required).