Facebook's Data Center: Where Likes Live
Welcome to the Oregon high desert, where Facebook stores all of your likes while pursuing data center energy efficiency on a new scale. Coming soon to the neighborhood: Apple.
February 28, 2013
The road to one of the world's mega data centers is lined with scattered sagebrush, tumbleweeds and gnarly, undersized junipers.
No crops grow without irrigation in the central Oregon high desert; it's not promising territory for large populations. But it serves as home to the most heavily populated Internet application, the 901 million user-strong Facebook.
Every "like," comment or other befriending action on Facebook, at least by users on the West Coast, is executed inside one of two massive 330,000-square-foot "data halls" in a complex on a mesa outside Prineville, Ore. The buildings are identical; if each could be tipped on its end, you would see two 81-story skyscrapers. They are the first data centers that Facebook has designed, owned and operated -- and they're built to a whole new scale.
From a distance, the unfenced complex has a remarkably open look. There is a manned security gate, but Building Two, closest to the gate, has an accessible air about it. The glass of the office space shows up prominently in the middle of the building, and from a distance it appears that a series of big cards line the approach to the front door. Those "cards" are two-story concrete slabs backed by heavy steel supports. The office space itself is fronted by a stone wall four feet high and thick, contained in a heavy wire mesh. No vehicle is going to get too close to the more vulnerable parts of the building.
Although it isn't self-evident, "there's a pretty sophisticated security system both in the building and on the premises," said Andy Andres, project executive on the Facebook job for DPR Construction of Redwood City, Calif., which co-built the complex with Fortis Construction of Portland, Ore.
The Prineville complex -- there's room for a third 330,000-square-foot data hall on the site -- is Facebook's answer to leasing data center space in Silicon Valley, which is what it did before Prineville Building One opened at the end of 2010. It's building another center on the Prineville pattern in Forest City, N.C., slated to begin operations next year. Facebook also has one operating in Lulea, Sweden, where cheap hydropower abounds. In each case, it's striving for data centers that run with a high degree of automation, minimal energy consumption and a lean staff. The mammoth Prineville buildings operate with a total of 72 people (although the second data hall is still being equipped).
The design, using ambient air for cooling, has cut energy consumption so much that Prineville was named the number one green engineering project of 2011 by the U.S. State Department's director of office construction and other judges selected by Engineering News-Record, a McGraw-Hill publication.
But it stands out in another way. Until recently, the giants of the Internet -- Amazon.com, eBay, Google -- didn't talk about how they built their data centers or the servers inside. These service-oriented data centers -- the first "cloud" data centers -- were different; the servers that went into them were stripped down and optimized in ways that distinguished them from servers that sit in a business department or enterprise data center.
Intel officials recognized where this class of server was being employed -- in the new style of data centers being built by Google, Microsoft, Apple and similar companies. Based on the demand Intel saw unfolding for related server components, Intel calculated at the end of 2011 that $450 billion a year was being spent on data center construction. Those data centers fuel the iPhone apps, instant searches, eBay trades and Amazon.com e-commerce that make up the unfolding digital economy.
Thus, facilities like Prineville matter. If the world has an expanding appetite for compute power, it's important that the data centers providing the backend services be added to the environment in the most efficient manner possible. The Prineville facility is LEED Gold-certified, meaning it has undertaken industry-leading power-conserving measures. Where the typical enterprise data center pipes twice as much power into the building as its computing devices actually need, Prineville lowers that ratio (its power usage effectiveness, or PUE) to 1.06 or 1.07, one of the best figures established anywhere.
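To put that ratio in concrete terms, here's a back-of-the-envelope sketch. The 10-megawatt IT load is a hypothetical figure chosen for illustration, not Facebook's; only the two ratios come from the comparison above.

```python
# Illustrative PUE arithmetic. PUE = total facility power / power delivered
# to the IT equipment. The 10 MW IT load is hypothetical; only the ratios
# (roughly 2.0 for a typical enterprise, 1.07 for Prineville) come from the
# article.

it_load_kw = 10_000  # hypothetical 10 MW of servers, storage and network gear

for label, pue in [("typical enterprise", 2.0), ("Prineville", 1.07)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"{label}: {total_kw:,.0f} kW drawn from the grid, "
          f"{overhead_kw:,.0f} kW of it spent on cooling and power conversion")
```

At a PUE of 2.0, every watt of computing drags a second watt of overhead along with it; at 1.07, the overhead shrinks to seven cents on the dollar.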
While some of the features the data center incorporates were invented by others, Facebook is unique in publishing the details of its designs and specifications. In April 2011, Facebook founded the Open Compute Project, through which it makes its server and data center designs available as open source. "We feel our competitive advantage is in our software and the service we deliver," not in the design of the data center, said Tom Furlong, Facebook director of site operations, in the announcement of the Engineering News-Record award.
Joshua Crass, Prineville data center manager and a former Google operations team manager from 2006 to 2010, had a more down-to-earth way of summing up the difference: "When I was working at Google, my wife never saw the inside of my office. Here my two kids come in and play" around Building Two's sprawling open office space.
To highlight this openness, Facebook sponsored a tour of Building Two recently, led by Crass. DPR Construction, which builds data centers and other advanced buildings for financial services, healthcare and technology companies, said the Facebook example is having an impact. "Facebook has taken the lid off the secrecy about how to bring power and cooling into a modern data center," Andres said. Its example is being copied by other leading data center builders.
Who, you might ask? Standing on the roof of the Prineville facility, Crass looks to the south and can see another mega data center going up next door. At 338,000 square feet, it appears to resemble Facebook's. It's being built by Apple.
One of the keys to power conservation in the mega data center is cooling by evaporation rather than by air-conditioning compressors. There are no industrial air conditioners -- chillers, they're called in data center circles -- in the Prineville facility. Some advanced data centers, such as those built by Vantage in Silicon Valley, install chillers as a backup for hot spells. But in Prineville, the dry desert air is a boon to evaporation, and Facebook has built what has to be one of the most massive air-movement systems in the world to bring its data center servers to the temperature it wants.
The air warmed by running disks and servers rises into a plenum above the equipment. It's siphoned off to mix with cooler outside air, pushed through a deep set of air filters that look like square versions of the round air filters that used to sit atop a car engine's carburetor -- a thick, porous, soft corrugated paper.
The air is then pushed through something called Munters Media, a cellulose material that wicks up a small amount of water trickling down its vertical, wavy slats. The air cools as the water evaporates into it, and Facebook's sensitive equipment faces a reduced risk of static electrical shock as the outside air is cleansed, cooled and humidified. A product of a Swedish company, Munters Media was used by European farmers in simple fan-and-airflow cooling systems for poultry and cattle before being drafted into advanced data centers.
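The principle at work is ordinary direct evaporative cooling: the supply air can be chilled only part of the way toward the outside air's wet-bulb temperature. Below is a minimal sketch of the standard effectiveness formula, using hypothetical desert-afternoon temperatures rather than measurements from Prineville.

```python
# Direct evaporative cooling, the principle behind media like Munters':
# T_out = T_dry - effectiveness * (T_dry - T_wet)
# The temperatures and the 90% effectiveness below are hypothetical values
# for illustration, not figures from the Prineville facility.

def evaporative_outlet_temp_f(t_dry_f, t_wet_f, effectiveness=0.9):
    """Estimate supply-air temperature after an evaporative media pad."""
    return t_dry_f - effectiveness * (t_dry_f - t_wet_f)

# A dry 86-degree afternoon with a 60-degree wet-bulb temperature:
print(evaporative_outlet_temp_f(t_dry_f=86.0, t_wet_f=60.0))  # ~62.6 F
```

The drier the air, the bigger the gap between dry-bulb and wet-bulb temperatures, which is why the high desert suits this approach so well.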
Server fans draw the cool air over server motherboards that have been designed to be long and narrow rather than the common rectangular shape. Components are arranged to encourage air flow, and unnecessary components, like video and graphic processors, are stripped away. Instead of memory chips acting as dams to the air flowing over the hot CPU, they are aligned in parallel with the direction of air flow. Warm air from two rows of servers, standing back to back, is exhausted into a shared hot aisle, rises to the plenum, and is either pushed out of the building or sent back to the ambient air mixing room to restart its journey.
Touring one of his "server suites," Crass declined to overstate the sophistication of the processes he supervises. At its heart, he said, the building management system is "deciding how much to open the dampers to bring in the cool air, putting it in the data center and getting rid of the hot air." It's also pushing air through banks of filters, controlling the valves that let water trickle onto the Munters Media, monitoring the humidity as well as the temperature of the data center air, and adjusting the speeds of the six-foot fan blades that exhaust air from the last in a row of rooftop chambers.
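Stripped to its essentials, that loop can be sketched in a few lines. Everything below -- the setpoints, the sensor readings, the way the actuators are scaled -- is a hypothetical illustration, not Facebook's building management system.

```python
# A much-simplified sketch of the control loop Crass describes. Every
# setpoint, reading and scaling rule here is hypothetical; the real building
# management system is far more elaborate.

COLD_AISLE_TARGET_F = 80.0   # hypothetical cold-aisle target
HUMIDITY_MIN_PCT = 20.0      # guard against static discharge

def control_step(outside_f, cold_aisle_f, humidity_pct):
    """Return damper, water-valve and exhaust-fan settings from 0.0 to 1.0."""
    # Open the outside-air dampers wider as the cold aisle drifts above target.
    damper = min(1.0, max(0.1, (cold_aisle_f - COLD_AISLE_TARGET_F + 5) / 10))
    # Trickle water onto the evaporative media only when outside air runs warm.
    water_valve = 1.0 if outside_f > 70.0 and cold_aisle_f > COLD_AISLE_TARGET_F else 0.0
    # Add some water anyway if the air is dry enough to risk static shocks.
    if humidity_pct < HUMIDITY_MIN_PCT:
        water_valve = max(water_valve, 0.3)
    # Spin the rooftop exhaust fans only as fast as the heat load demands.
    exhaust_fan = damper
    return {"damper": damper, "water_valve": water_valve, "exhaust_fan": exhaust_fan}

# A cold February day like the one during the tour: dampers barely open,
# no water flowing, exhaust fans mostly idle.
print(control_step(outside_f=30.0, cold_aisle_f=72.0, humidity_pct=35.0))
```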
The building's design lets fans and passive systems accomplish most of the work. Once humidified and cooled, the air flows down a nine-by-nine-foot chute built between the filter and Munters Media rooms to the cold aisle of the data hall below. As it falls, it hits a big deflector plate at ceiling level of the cold aisle and is spread out over the tops of the running servers, where it's drawn across the warm server motherboards.
In earlier data center design, "the cool air flowed down to the floor of the data center. That didn't work as well. It didn't scatter as far and you need it at the top of the server rack, not just the bottom," said Crass.
The air-flow cooling also works because of the cool temperatures in the high desert. The hottest days of July and August will not go above 86 degrees, on average. As the Prineville complex was being designed, the American Society of Heating, Refrigerating and Air-Conditioning Engineers raised the acceptable temperature for air used to cool computer equipment to 80.6 degrees. The conventional wisdom had held that it was better to run computer equipment with air-conditioned air that might be in the low 60s or even 50s as it emanated from the chiller.
The Facebook data center runs hotter than many enterprise data centers at the peak of the summer, but its servers are not enclosed in metal cases; rather they sit on simple, pull-out trays in server racks that maximize the air flow over their components.
Crass thinks the electrical equipment in his data hall can be adequately cooled with 85-degree air, given the powerful air flows the building's design makes possible, and Facebook engineers have raised the upper limit of the cold aisle to that level without ill effects. The higher the operating temperature, the less energy needs to be poured into the pumps that move water and into the cooling fans.

The small operating staff is seldom discomforted. All network connections and servicing of the equipment are done from the front of the racks -- the cold aisle; none is done from the back, the hot aisle. During our visit on Feb. 20, the hot aisle was about 72 to 74 degrees, mainly because the temperature outside was 30, with snowflakes in the air. In the fan room on the roof, most of the big exhaust fans were idle and the surplus heat was going toward heating the cafeteria, office space and meeting rooms.
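The fan-energy payoff follows from a general rule of thumb in air handling, the fan affinity laws: power drawn rises roughly with the cube of fan speed, so a warmer cold aisle that lets the fans slow down saves a disproportionate amount of power. The figures in this sketch are illustrative, not Prineville's.

```python
# Fan affinity laws, roughly: power scales with the cube of fan speed.
# The 30 kW rated draw is a hypothetical figure for one rooftop exhaust fan.

def fan_power_kw(rated_kw, speed_fraction):
    """Approximate power draw at a fraction of full fan speed."""
    return rated_kw * speed_fraction ** 3

rated = 30.0
for speed in (1.0, 0.8, 0.5):
    print(f"{speed:.0%} speed -> {fan_power_kw(rated, speed):.1f} kW")
# 100% -> 30.0 kW, 80% -> 15.4 kW, 50% -> 3.8 kW
```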
Facebook has applied for a patent on how it steps 12,500-volt power from a grid substation down to the server racks. It brings 480-volt power into the data center to a reactor power panel at each cold aisle of servers, which delivers 240-volt power to three banks of power supplies on each server rack. The process eliminates one transformer step, an energy-saving move, since some power is lost with each step down in the conversion process. Most enterprises lose about 25% of the power they bring into the data center through these steps; Facebook loses 7%.
Not every idea implemented at Prineville was invented by Facebook. Facebook executives credit Google with the idea of putting a distributed power supply unit and battery on each server, as opposed to a central battery backup at the point where power feeds into the data center. A central backup requires converting power from alternating current to direct current and back to alternating so the batteries stay charged and ready the instant the grid supply fails, a round trip that cost 5% to 8% of the power at predecessor data centers. By distributing battery backup to each server, Google found a way around that penalty and cut the loss to a much smaller percentage.
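A rough tally shows what those percentages are worth. Only the headline loss figures -- 25% for a typical enterprise chain, 7% at Prineville, 5% to 8% for a central backup's double conversion -- come from the paragraphs above; the 10-megawatt feed is hypothetical.

```python
# Back-of-the-envelope arithmetic on the loss figures quoted above. The 10 MW
# feed from the substation is hypothetical; the percentages are the ones cited
# in the article.

grid_kw = 10_000

# Conventional enterprise chain: extra transformer steps plus a central,
# double-converting (AC -> DC -> AC) battery backup.
enterprise_delivered = grid_kw * (1 - 0.25)

# Prineville-style chain: 12,500 V stepped to 480 V and then 240 V, with one
# transformer stage removed and battery backup distributed to the racks.
prineville_delivered = grid_kw * (1 - 0.07)

print(f"enterprise chain: {enterprise_delivered:,.0f} kW reaches the servers")
print(f"Prineville chain: {prineville_delivered:,.0f} kW reaches the servers")
print(f"difference: {prineville_delivered - enterprise_delivered:,.0f} kW saved "
      f"before a single server is cooled")
```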
But Facebook is happy to take credit for its own innovations as well. And perhaps more importantly, it's publishing the details of its power conserving servers in the Open Compute Project and opening up its data centers for wide inspection.
Crass, at an athletic 36 years old, garbed in a Facebook hoodie, jeans and sneakers, seems like he would be as much at home posting his latest achievements on the surfboard or ski board to a Facebook page as managing its massive complex day after day. But he says it's the job he was cut out for.
Much of the real work of managing the facility is done by software regulating the air flow and monitoring the systems. The servers themselves, he said, are governed by a system that can invoke auto-remediation if a server stalls for any reason.
"Maybe a server is really wedged and needs a reboot. The remediation system can detect if the image is corrupted on the drive and can't reboot. Then it will re-image the machine" with a fresh copy, he explained. No technician rushes down the cold aisle to find the stalled server and push a reboot button. The remediation system "just solves most problems," he said.
Crass isn't allowed to give a count of the total number of servers currently running; asked, he will say only "tens of thousands." For purposes of comparison, Microsoft built a 500,000-square-foot facility outside Chicago that houses 300,000 servers. Reports on the capital costs for one building at Prineville show a total expense of $210 million, but that's not the cost of the fully equipped building. Microsoft and Google filings for large data centers in Dublin, Ireland, show costs between $300 million and $450 million.
The Prineville complex sits in the middle of a power grid that pipes hydroelectric power from the Bonneville and other dams in the Northwest to California and Nevada. Visitors pass under a giant utility right of way that consists of three sets of towers not far from the Prineville site.
The mega data center is a new order of compute power, operated with a degree of automation and efficiency that few enterprise data centers can hope to rival. For Crass, it's the place he wants to be. He and his wife lived in Portland before he took a job on a project in Iowa. Given the option to take on Prineville, he jumped at it. He knew it would be an implementation of the Open Compute architecture and a working test bed for its major concepts.
"I love it. It's an amazing place to work. It's open to everybody. You're able to be here and walk through it and take pictures," he noted at the end of the tour. Everybody likes to be running something cool and letting the world know about it, he said.
The Prineville data center incorporates the latest cloud server hardware, a huge picture storage service and a lean staff, Crass points out. For at least a while, this complex sports the best energy efficiency rating of any major data center in the world, and the lessons being learned here will reverberate through data center design into the future.