Search giant talks disk drives, file systems, data repositories in research report
February 24, 2007
Google has fleshed out more of its storage infrastructure story, explaining how it monitors the performance of its systems and can store data "forever."
The information was revealed in a recent report by Google researchers Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso and provides a glimpse into the internal workings of the notoriously secretive search giant. (See Google Grumbles and Google Earnings Up.)
According to the report, Google has built an infrastructure that collects vital statistics from its systems "every few minutes" and a repository that can store this data "essentially forever."
At the heart of this effort is what Google describes as its "System Health infrastructure," a large distributed software system that collects information from all the firm's servers.
Google keeps its technology specifics close to its chest, and the report does not name any vendors, although the company is said to use around 10,000 Linux-based servers in 13 data centers dotted around the world. (See Tracking Google's IT Booty, Gettin Googly, and Tech Cash Slashed? Learn From Google.)

The researchers reveal that Google runs a daemon, a background process that extracts monitoring data, on all of these servers. This, in turn, provides information such as temperature and disk-drive performance.
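The report does not include the daemon's code, but a minimal sketch of the idea might look like the following, assuming a central collector endpoint and the open-source smartctl utility for reading drive attributes. Every name here is illustrative, not Google's.

```python
# Illustrative sketch only: Google has not published its monitoring daemon.
# COLLECTOR_URL, HealthSample fields, and the use of smartctl are assumptions.
import json
import subprocess
import time
import urllib.request

COLLECTOR_URL = "http://collector.example.internal/ingest"  # hypothetical endpoint
POLL_INTERVAL_SECONDS = 300  # "every few minutes," per the report

def read_smart_attributes(device: str) -> str:
    """Dump SMART attributes (temperature, reallocated sectors, etc.)."""
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=False
    )
    return out.stdout

def report_loop(device: str = "/dev/sda") -> None:
    """Periodically sample one drive and ship the data to the repository."""
    while True:
        sample = {
            "timestamp": time.time(),
            "device": device,
            "smart": read_smart_attributes(device),
        }
        req = urllib.request.Request(
            COLLECTOR_URL,
            data=json.dumps(sample).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # send the sample to the central collector
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    report_loop()
```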
Information from the daemons is then stored in a distributed data repository. (See Storage Gets Scattered.) Google's interest in big, distributed storage technologies is well known, and the firm has built a repository called "Bigtable" for its server and drive performance data.
Bigtable "takes care of all the data layout, compression and access chores associated with a large data store," according to the report, and was at the center of a recent Google study into its disk drive performance.
The search giant examined 100,000 drives used since 2001. These, apparently, were a combination of serial and parallel ATA "consumer grade" hard-disk drives, ranging in speed from 5,400 to 7,200 rpm, with capacities of 80 to 400 Gbytes.
The study also produced some surprising results. "We found very little correlation between failure rates and either elevated temperature or activity levels," said the report. Google also found that frequently used drives less than three years old were much less likely to fail than infrequently used drives of a similar age.
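For a sense of what such an analysis involves, the sketch below computes a Pearson correlation between a per-drive attribute and a failure flag. The input numbers are made-up placeholders, and this is only the general shape of a correlation check, not the paper's methodology.

```python
# Sketch of the kind of correlation check behind the finding above.
# The drive records here are fabricated examples, not Google's data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One record per drive: average temperature (C) and whether it failed (0/1).
temps  = [34, 41, 38, 45, 29, 36, 43, 31]
failed = [0,  1,  0,  0,  1,  0,  0,  1]
print(f"temp/failure correlation: {pearson(temps, failed):+.2f}")
```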
Although Google execs remain tight-lipped whenever questioned on their storage infrastructure, the report reveals that Bigtable is built on top of the Google File System (GFS). GFS is essentially the firm's storage backbone and consists of thousands of inexpensive commodity storage devices clustered to store large, multi-Gbyte files.
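Google's published GFS paper describes files split into fixed-size 64-Mbyte chunks, each replicated (three times by default) across commodity chunkservers. The sketch below illustrates that layout with a simplified round-robin placement policy; real GFS balances placement on disk utilization and load.

```python
# Sketch of GFS's published file layout: fixed-size chunks, each replicated
# across distinct chunkservers. Placement policy here is deliberately naive.
CHUNK_SIZE = 64 * 1024 * 1024  # 64 Mbytes, per the GFS paper
REPLICAS = 3                   # default replication factor, per the paper

def place_chunks(file_size: int, chunkservers: list[str]) -> list[dict]:
    """Assign each chunk of a file to REPLICAS distinct chunkservers."""
    n_chunks = -(-file_size // CHUNK_SIZE)  # ceiling division
    layout = []
    for index in range(n_chunks):
        # Round-robin placement; real GFS weighs disk usage and server load.
        replicas = [chunkservers[(index + r) % len(chunkservers)]
                    for r in range(REPLICAS)]
        layout.append({"chunk": index, "replicas": replicas})
    return layout

servers = [f"chunkserver-{i:03d}" for i in range(8)]
for entry in place_chunks(file_size=200 * 1024 * 1024, chunkservers=servers):
    print(entry)
```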
According to a note on the Website of Google's research labs, the firm's largest cluster provides hundreds of terabytes of storage across thousands of disks on over a thousand machines. Exactly who provides this kit remains something of a mystery.
SEC documents show Google spent more than $1.5 billion on property and equipment in the nine months ending September 30, 2006, almost triple the $592 million spent in the same period a year earlier. And at least one Google exec has confirmed that the company consumes storage, compute power, and bandwidth at an alarming rate. (See Google Groans Under Data Strain.)
The sheer compute and storage power available within Google was highlighted by the firm's recent technology partnership with NASA. (See NASA and Google's Space Oddity.)

With Google expected to announce an enhanced version of its enterprise search platform this week, the vendor's storage infrastructure is likely to come under increased scrutiny during the coming months. (See Content Classifiers Glom Onto Google.)
— James Rogers, Senior Editor, Byte and Switch