This story was written by Keith Dawson for UBM DeusM’s community Web site Business Agility, sponsored by IBM. It is archived here for informational purposes only because the Business Agility site is no more. This material is Copyright 2012 by UBM DeusM.

Facebook Is Trying to Lower Your Costs

The social giant is opening up the design of everything from data centers to hard disks.

Facebook believes that making the data center stack transparent all the way down will reduce costs -- theirs and, eventually, everyone's. They are now designing their own open-source disk drives.

Last year Facebook shook up the world of data center design (and took a poke at Google) by opening up the design of their Prineville, OR data center -- one of the most energy-efficient in the world. They also released details on their home-built servers. The company set up the Open Compute Project in the spring of 2011, and by the fall they had relinquished control of it to a non-profit foundation and its associated open-source community.

At this point the Open Compute Project has released, under an Open Web Foundation license, 1.0 specs for the data center electrical system, server racks, and battery backup. Moving down the stack, they have also released design specs for the server chassis and power supply, as well as for Intel- and AMD-based motherboards. Fifty percent of the contributions to the project's open-source designs now come from outside Facebook -- from Rackspace and Goldman Sachs, among others.

The design of data centers has long been viewed as a competitive advantage by the likes of Google and Yahoo. Sometimes they advanced a security argument to explain why data centers needed to operate in deep, dark secrecy. This thinking held until Facebook came along. Wired has an account of a tour of Facebook's Prineville data center and a profile of its managing director, Ken Patchett, who came to the company from Google's data center operation. Once at Facebook, Patchett was free to voice his belief that neither the security argument nor the competitive-advantage argument makes much sense.

Google released some particulars about their data-center design as early as 2004, but most details remain closely held. Google's data centers are built on modular shipping-container-sized components, and the modular design works well as a substrate to run Google's distributed operating system. Facebook's Prineville data center doesn't go the modular route -- its approach may best be described as "holistic." Patchett "believes [Google's] setup doesn't quite suit the un-Googles of the world," which need to deal with a more varied load of computational tasks, according to Wired.

Facebook designed its servers to be "vanity-free" -- anything that was not functional was left out. Now it is approaching the design of disk storage with the same philosophy, and will release the new storage designs in early May at the next Open Compute Summit.

Manufacturers in Taiwan and China are now making components based on Open Compute specs. Facebook expects that as more companies embrace its philosophy for data-center and server design, manufacturing will ramp up and prices will fall across the board.

This can only be good news for consumers of cloud services, however delivered.