New Rating System Aims To Take Mystery Out Of Open-Source Tools

Carnegie Mellon University, Intel, and SpikeSource are developing a system to help IT departments determine which open-source tools they should be adopting.
The very qualities that have made open-source software such a sensation--its rogue nature, its free and abundant availability, and the ease with which it can be tweaked and extended--are also the reasons it hasn't become more pervasive in business environments. Quite simply, there's too much choice, and too much uncertainty, for IT managers to take risks on open-source applications they don't know much about and that are constantly changing.

But the triumvirate of Carnegie Mellon University, Intel, and open-source software certifier SpikeSource is looking to change that by making it easier for IT departments to determine which open-source tools they should adopt.

The three sponsors revealed plans Monday for Business Readiness Ratings, a proposed standard model for evaluating open-source software. The ratings would be determined in open-source fashion, by gathering feedback from the fast-growing community of open-source developers and letting that feedback serve as a guide for others who are assessing the readiness of open-source tools for potential adoption within their companies. The sponsors are spinning the initiative as an extension of earlier open-source "maturity models," one developed by Navica Inc. CEO Bernard Golden and a second from Capgemini.

Despite those previous efforts, selection of open-source business tools has been largely an ad hoc process that's sometimes based on considerations as arbitrary as how many posts a certain tool has generated on an open-source message board, SpikeSource CEO Kim Polese says. Often, such adoption leads to apps that don't get a whole lot of use because of some weakness that wasn't immediately apparent.

The Business Readiness Ratings will give developers tools they can use to tune their code, beefing up the weak spots in open-source apps. "Open-source developers more than anything want to see their users using what they're writing," Polese says. "This is a way to ensure that."

The model could also be used to evaluate commercial software, since its dozen evaluation categories--ranging from functionality and scalability to security and documentation--apply to any product under consideration. But the focus is squarely on open-source software, which is characterized by a more chaotic and less supported marketplace that's screaming for quality assurance.
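To make the idea concrete, here is a minimal sketch of how a category-based rating like this might be aggregated. The category names, weights, and scoring scale below are illustrative assumptions, not the official Business Readiness Ratings specification:

```python
# Hypothetical sketch: combine per-category community scores into one
# overall rating via a weighted average. Categories, weights, and the
# 1-5 scale are assumptions for illustration only.

def business_readiness_rating(scores, weights):
    """Combine per-category scores (e.g., on a 1-5 scale) into a single rating."""
    total_weight = sum(weights.values())
    return sum(scores[cat] * w for cat, w in weights.items()) / total_weight

# Example: four of the dozen categories mentioned in the article.
scores = {"functionality": 4.5, "scalability": 3.0,
          "security": 4.0, "documentation": 2.5}
weights = {"functionality": 0.4, "scalability": 0.2,
           "security": 0.3, "documentation": 0.1}

rating = business_readiness_rating(scores, weights)
print(round(rating, 2))  # prints 3.85
```

A weighted average lets an IT department emphasize the categories that matter most to its deployment, which is one plausible way a standard model could stay flexible across different adopters.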

Given the opportunity to increase adoption within businesses, some open-source software makers may try to manipulate the process by contributing lots of positive feedback about the tools they're pushing. But Polese believes the nature of a community-oriented ratings model will snuff out any hint of impropriety. She points to Zagat Survey LLC's entertainment ratings as a community-oriented effort in which any suspected agendas are overwhelmed by the sheer volume of objective contributors. "Generally you find accurate results when you've got a large number of people weighing in," she says.

The Business Readiness Ratings initiative is expected to be ready as a resource for business IT departments and open-source developers by the end of the year. The project's sponsors, who are accumulating public feedback, will spend the next few months beating the bushes for more community participation and collecting evidence that their model works. And while the sponsors expect to stick with the framework they've established, in true open-source spirit, they're willing to tweak it if their community tells them to do so. Says Polese, "We have no preset notion of how the model will look."
