Examine this analogy closely. Open-source hardware and open-source software involve different processes with different levels of user participation.
The analogy between Open Compute and Linux was drawn repeatedly at Open Compute Summit V this week. The analogy is fair -- open-source hardware shares many underlying values with open-source software -- but I found myself quietly disagreeing each time I heard it.
The Open Compute Project (OCP) is a bold initiative to put hardware designs into the public sphere and let many parties use them. Collaborative groups have formed to specify what they want in an OCP-certified server, storage device, or datacenter switch, giving hardware manufacturers the option to choose to produce it or not. The goal is to reduce vendor lock-in, put more power into the hands of end users, and standardize key pieces of hardware in the datacenter to create more interchangeable parts.
These are worthy goals, ones that could overturn many of the established ways of producing datacenter hardware. So why, after spending two days in San Jose at Open Compute Summit V, am I thinking "but... but..." as Facebook's Frank Frankovsky and Nick Corddry assert that Open Compute is just like Linux?
References to Linux come up naturally because it is one of the most successful, sustained, and widely adopted open-source software projects. New releases of the Linux kernel now appear roughly every 70 days. Each contains up to 10,000 updates and patches, a rate of change that works out to roughly six an hour. But Linux's fame rests not merely on the fact that it's frequently modified; rather, it is frequently modified while also being respected as having a long-term future in the enterprise datacenter. The way things are shaping up, it also very likely has a permanent place in cloud architectures.
The most comparable effort might be the thousands of programmers that Microsoft has working on Windows Server, which also appears poised for a sustained run as a datacenter operating system. But even it can't compare to Linux, which continually integrates work from thousands of contributors and accepts code from hundreds of newcomers who appear casually, then disappear. From the start of 2012 until the end of 2013, Linux incorporated changes from 1,100 contributors working at 225 different companies. And of course the results of all this work are given away freely, on faith that working on Linux will be reward enough for its continuing legions of programmers. That's never been true for Windows.
And in a way, it's also not true of the Open Compute Project.
The Open Compute designs are freely available, but someone still has to produce the hardware resulting from the design. There has to be a price tag attached to that hardware, even if it's an OCP design. And once a design is set, it would be unwise for the end user to fiddle with it too much. An end user who wanted to substitute one component for another, or perhaps give a component a different type of connector, would have to convince the manufacturer to change a production line. The price tag would have to be adjusted accordingly -- and significantly.
This is another way of saying hardware isn't software. The free-wheeling nature of Linux in its early days attracted developers from around the world. It always seemed to me the Scandinavian countries had a share of contributors out of proportion to their population, although all their work on MySQL may have also colored my view.
Also, a talented high school student can't download the bits of an Open Compute design and do much of anything with them. High school students today, by contrast, routinely learn to program with code they've downloaded from a favorite open-source site.
By its nature, Open Compute is an organization of hardware interests and their auxiliary participants: datacenter builders, component suppliers, integrators, and custom builders. There were users at Open Compute as well, but they needed to be large, well-heeled users with the ability to order thousands of copies of a design they've influenced. And they were. Goldman Sachs, Bank of America, and Fidelity Investments are giving Open Compute the breath of life it needs and the user influence it must try to expand.
To compare this process to the frenetic, churning, but somehow managed process of building the Linux operating system is still a stretch. It may be unfair to compare the two when Open Compute, at three years old, is still in its infancy. But hardware is hardware, and software is something else entirely. Until we can see better how open-source hardware will work, it's best to be wary of comparisons to Linux.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...