What's So Great About Grid?

Financial services firms are jumping on the grid computing bandwagon.

June 22, 2004


Grid technology uses divide-and-conquer tactics to distribute computationally intensive tasks among any number of separate computers for parallel processing. It allows unused CPU capacity - including, in some cases, the downtime between a user's keystrokes - to be used in solving large computational problems. While this technique has long been used to satisfy the insatiable computational needs of Wall Street's "quant jocks" at trading firms and investment banks, grid computing is taking hold in other areas of financial services, including insurance, corporate banking and even retail finance.
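To make the divide-and-conquer idea concrete, here is a rough Python sketch - not any vendor's product - of how a large pricing job might be split into chunks, farmed out to whatever workers are available and reassembled. Local processes stand in for the grid's separate machines, and the toy Monte Carlo payoff is purely illustrative.

```python
# A minimal sketch of the divide-and-conquer idea behind grid computing:
# a big job is split into independent chunks, farmed out to whatever
# workers are available, and the partial results are combined at the end.
# Local processes play the role of grid nodes; a real grid would dispatch
# chunks to idle machines across the network.
import random
from multiprocessing import Pool

def price_chunk(num_paths: int) -> float:
    """Monte Carlo estimate over one chunk of simulation paths
    (a toy stand-in for a compute-heavy pricing job)."""
    total = 0.0
    for _ in range(num_paths):
        total += max(random.gauss(100.0, 15.0) - 100.0, 0.0)  # toy payoff
    return total

if __name__ == "__main__":
    paths, chunks = 1_000_000, 8
    with Pool() as grid:  # each pool worker stands in for a grid node
        partials = grid.map(price_chunk, [paths // chunks] * chunks)
    print("estimate:", sum(partials) / paths)
```

The same pattern scales from eight local processes to thousands of networked machines; only the dispatch layer changes.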

Like the Internet, grid computing started in academia and the national defense industry and has moved into the commercial marketplace, along with its proponents. Phil Cushmaro used to work on grid computing applications for the defense industry, designing device drivers and operating systems for Duluth, Ga.-based Concurrent Computer Corp. Now, he's CIO of Credit Suisse First Boston, the corporate and investment-banking arm of the Credit Suisse Group with $689 billion in assets.

Grid technology has come a long way since Cushmaro's earlier experiences. Fifteen or 20 years ago, he says, "distributed-computing" solutions existed, but they were heavily embedded in vendor solutions. "You couldn't re-deploy those solutions, or the components of those solutions," he explains. Now, the software market has evolved to the point where it's possible to allow disparate applications to take advantage of any available computing power in a heterogeneous computing environment, at any time of day.

Drawing upon the work of the Globus Alliance, a technology consortium that publishes industry standards for grid computing, IT administrators can mix and match applications and processors. "Now, you can basically take anybody's computer and anybody's software, and as long as it obeys this standard, it can participate in the grid," says Jason Bloomberg, analyst at Waltham, Mass.-based technology consultancy ZapThink.

That's in theory, at least. True interoperability will take time. "Once you agree on a standard, you have to implement it, and once you implement it, you have to do interoperability testing on different products using that standard," says Lawrence Ryan, director of the financial-services industry practice at Hewlett-Packard. "There are standards today, yes, but these standards are also evolving very rapidly. They're not there yet." That's why companies including HP, Fujitsu Siemens Computers, Intel, NEC, Network Appliance, Oracle and Sun Microsystems have formed the Enterprise Grid Alliance, based in San Ramon, Calif., which promotes standards for enterprise grid applications.

The relative youth of standards hasn't stopped firms from pressing ahead. For its part, CSFB uses grid-management software from DataSynapse to mediate between requests by specific applications and the pool of available processor capacity. Part of that job is figuring out which requests should get priority. "It could be based on the relative time-sensitivity of each department, or even on how profitable a given trader is," says Frank Cicio, chief marketing and strategy officer at DataSynapse.
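As a hedged illustration of the kind of brokering Cicio describes - this is not DataSynapse's software, and the weighting formula is invented - a priority-based dispatcher can be sketched with a simple priority queue:

```python
# Illustrative sketch of a grid broker deciding which application
# requests get processor capacity first. The score combines a request's
# time-sensitivity with the profitability of the requesting desk, per
# Cicio's description; the weights are made up for this example.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: float                # lower value = dispatched sooner
    task: str = field(compare=False)

def submit(queue: list, task: str, time_sensitivity: float, trader_pnl: float) -> None:
    # Hypothetical weighting; a real broker would apply site policy.
    score = -(2.0 * time_sensitivity + 0.001 * trader_pnl)
    heapq.heappush(queue, Request(score, task))

queue: list = []
submit(queue, "risk report (overnight batch)", time_sensitivity=1, trader_pnl=0)
submit(queue, "reprice swaps book",            time_sensitivity=8, trader_pnl=250_000)
submit(queue, "intraday VaR for top desk",     time_sensitivity=9, trader_pnl=900_000)

while queue:                       # dispatch in priority order as nodes free up
    print("dispatching:", heapq.heappop(queue).task)
```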

So the most profitable traders are rewarded with quicker response times. But grid isn't just about speed. Even if CSFB didn't enjoy a substantial speed boost from grid (which it does), Cushmaro would still embrace the technology. That's because it's easier to guarantee the availability of a centrally managed grid than it is to provide the same quality of service for mission-critical applications in separate areas, he says. "Complex environments tend to break and they're very difficult to fix," explains Cushmaro. "If you look at the reasons for outages on the Street today and the time duration of coming back from an outage, it is usually somehow related to the complexity of the environment."

Grids reduce complexity by de-emphasizing the importance of any given processor. A malfunctioning node in a grid doesn't bring a process grinding to a halt. In fact, even individual transactions can make it through an outage unharmed. "When a transaction is sent out to be executed, if the network or the power fails, we automatically move that over to an available processor, right where it left off," says DataSynapse's Cicio.
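A minimal sketch of that failover behavior follows, with a deliberately "dead" node standing in for a real network or power failure. The node names and failure check are invented for illustration, and this toy version simply restarts the task on another node; production middleware, as Cicio describes, checkpoints state so the work resumes where it left off.

```python
# Sketch of re-dispatching work when the node running it fails, rather
# than letting the whole job grind to a halt.
class NodeFailure(Exception):
    pass

FAILED_NODES = {"node-a"}              # pretend this node just lost power

def run_on_node(node: str, task: str) -> str:
    if node in FAILED_NODES:           # simulate a hardware/network failure
        raise NodeFailure(f"{node} lost while running {task!r}")
    return f"{task!r} completed on {node}"

def dispatch_with_failover(task: str, nodes: list) -> str:
    for node in nodes:                 # try nodes until one finishes the work
        try:
            return run_on_node(node, task)
        except NodeFailure as err:
            print("failover:", err)    # real middleware would also checkpoint
    raise RuntimeError("no healthy nodes left")

print(dispatch_with_failover("settle FX batch", ["node-a", "node-b", "node-c"]))
```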

For some firms using commodity computing resources, it's hardly worth the trouble to troubleshoot a hardware problem. "With one of our clients, when there's a problem with a particular node in the grid, they don't even bother repairing the node," says Bob Boettcher, vice president of financial services for Platform Computing, a Toronto-based grid software company. "They just rip out the board and insert another" - a far cry from the careful tending that Wall Street's legacy systems demand.

But massive legacy systems aren't the only part of Wall Street that can benefit from grid computing. Platform Computing introduced an adapter that supports data-crunching on a smaller scale. "Microsoft Excel is ubiquitous on the trading floors of banks," says Platform Computing's Boettcher. "We can allow people to just highlight a range of cells within a spreadsheet, and those calculations can be run anywhere on the grid."
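As a rough illustration only - this is not Platform Computing's actual adapter - the concept amounts to treating the calculation behind each selected cell as an independent task and evaluating it wherever the grid has spare capacity, then writing the results back into the sheet:

```python
# Hedged sketch: the formulas behind a highlighted range of cells are
# shipped off as independent tasks and evaluated on whatever workers are
# free. Local processes stand in for grid nodes; the per-cell calculation
# is busywork standing in for a slow repricing job.
from concurrent.futures import ProcessPoolExecutor

def revalue(cell_inputs: tuple) -> tuple:
    """Stand-in for a slow per-cell calculation (e.g. repricing a position)."""
    cell, notional = cell_inputs
    value = sum(notional * 0.0001 * i for i in range(100_000))  # busywork
    return cell, value

if __name__ == "__main__":
    selected_range = {"B2": 1_000_000.0, "B3": 2_500_000.0, "B4": 750_000.0}
    with ProcessPoolExecutor() as grid:   # workers stand in for grid nodes
        results = dict(grid.map(revalue, selected_range.items()))
    print(results)                        # computed values flow back to the sheet
```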
