Critics Say Open-Source Ratings Don't Measure Up

Carnegie Mellon and partners have a template for evaluating the business readiness of open-source applications. But be wary of making any decisions based solely on it.

Tim Wilson, Editor in Chief, Dark Reading

August 10, 2005


You've been eyeing that open-source software for months now, but you're reluctant to make your move. It looks viable, and the price certainly is right, but is it mature enough for your organization? A new benchmarking group aims to help you decide.

Carnegie Mellon University, Intel and open-source software certifier SpikeSource earlier this month launched Business Readiness Ratings, a proposed standard template for evaluating open-source applications. The three sponsors have defined a set of parameters and metrics for measuring the maturity of just about any open-source software, ostensibly making it easier for businesses to decide whether they should bet the farm on the emerging applications. The ratings are designed to operate in open-source fashion, gathering feedback from developers over the Internet about each application's performance against the established metrics, then posting that feedback as a guide to others. Essentially, it's a report card template, and each developer who evaluates the software gets to grade the application.
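To see how such a report card might roll up into a single grade, consider a minimal sketch: per-category scores on a 1-to-5 scale, combined as a weighted average. The category names, weights, and scale below are hypothetical stand-ins for illustration, not the BRR's actual parameters and metrics.

```python
# Illustrative sketch of a report-card-style weighted rating.
# Category names, weights, and the 1-5 scale are hypothetical;
# they stand in for whatever parameters and metrics the BRR defines.

def business_readiness_score(scores: dict[str, float],
                             weights: dict[str, float]) -> float:
    """Combine one reviewer's per-category grades into a single weighted rating."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("category weights must sum to 1.0")
    return sum(scores[cat] * w for cat, w in weights.items())

# One developer's grades for a hypothetical application.
grades = {"functionality": 4.0, "documentation": 2.5,
          "security": 3.5, "community": 4.5}
weights = {"functionality": 0.4, "documentation": 0.2,
           "security": 0.2, "community": 0.2}

print(f"Overall rating: {business_readiness_score(grades, weights):.2f} / 5")
```

In the spirit the sponsors describe, many such grades from individual developers would then be aggregated and posted as a guide to others.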

As IT professionals, we applaud the sponsors for their efforts to help enterprises navigate the increasingly bewildering sea of open-source applications available for download. As hardware and software reviewers, however, we caution enterprises against basing any decisions solely on the BRR.

For one thing, we've learned the hard way that technology testing is not a one-size-fits-all proposition. Some software is too complex for a small business but works brilliantly in a well-staffed and highly skilled enterprise IT environment. Other applications don't deliver enough functionality for a large corporation yet serve a mom-and-pop shop very effectively. "Business readiness" depends entirely on the size and nature of the business, and users should be wary of any evaluation process that attempts to apply a single rating to all open-source apps in all business environments.

Second, some BRR metrics are subjective. At Network Computing, we strive to keep our testing processes objective: if something can't be measured empirically in the lab, we often reject it as a criterion for evaluation. But the BRR employs criteria, such as "end user UI experience," that are clearly matters of opinion. True, each application will be reviewed by a number of developers, but technology decisions should be based on hard data, not a popular vote.

We see the BRR as a potentially useful data point for open-source software evaluation, but probably not much more than that. It might help you decide whether and when to test an emerging app, but it can't replace the value of testing in your own enterprise.


About the Author

Tim Wilson, Editor in Chief, Dark Reading

Tim Wilson is Editor in Chief and co-founder of DarkReading.com, UBM Tech's online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories. Wilson has been recognized as one of the top cybersecurity journalists in the US in peer voting conducted by the SANS Institute, and in 2011 he was named one of the 50 Most Powerful Voices in Security by SYS-CON Media.
