The Next xAN

We know about SANs. But what's next for enterprise data centers?

March 15, 2003

We're all familiar with SANs, but what about BAN, CAN, DAN, PAN (no, not Personal Area Network), and other variations of the "S" in SAN?

Nothing appeals to a marketing manager more than coining a new term of art, particularly one that sticks the way LAN, the acronym for local area network, did over a decade ago. In recent months, the number of new acronyms popping up to describe and define new networking paradigms has been remarkable, and though potentially excessive, it reflects an important new trend in networking: a renewed focus on optimizing and scaling the enterprise data center.

We've been talking with a few companies focusing on networking solutions specifically targeting next-generation enterprise server environments. We began our search at the grand payoff – the "Grid" – and worked our way back to today's reality: the increasingly complex and unwieldy data center. We talked with every big server vendor out there, including Sun Microsystems Inc. (Nasdaq: SUNW), IBM Corp. (NYSE: IBM), Hewlett-Packard Co. (NYSE: HPQ), and Silicon Graphics Inc. (SGI) (NYSE: SGI). Then we met with the Ethernet switch vendors, and ended with a number of interesting startups ranging from pure-play software vendors to others offering complete data center solutions, with a mix of CPU, storage, interconnect, and switching technologies all in a chassis managed as a single unit.

The ambitions of these companies couldn't be more diverse. One wants to improve Oracle database performance by improving I/O on servers and SANs. Another is looking to enable on-demand computing by virtualizing storage and CPU within a data center, using commodity servers and Linux OS with a distributed computing software platform. Yet another envisions building the Grid by developing the APIs, tools, and middleware necessary to support the distribution of massive data sets across a global computing infrastructure.

One thing is undeniably clear: While telecom vendors decide which product lines to cut, which R&D to lay fallow, and which customers to abandon or court, companies focusing on the enterprise are tirelessly attempting to reinvent the entire notion of the data center. The activity here is truly progressive. During this exploration, we've been hearing new terms, as upstarts and incumbent vendors alike try to position for the next xAN. You'll see that inventing new acronyms is no easy business: It's simple enough to envision Server Area Networks, for example, but how do you go out and market "SANs" without customers expecting to see racks of RAID arrays?

Here's our running list, after four months of listening to pitches:

  • Blade Area Network (BAN): Blade servers brought together via high-speed interconnect and switching to create a dynamic, available pool of processing for the enterprise.

  • Cluster Area Network (CAN): Computing clusters networked together to increase performance and availability.

  • Compute Area Network (CAN): More generic than those above, built from any kind of compute resource, from PCs to servers or even mainframes.

  • Database Area Network (DAN): A resource structure built from servers, switching, and storage optimized around the demands of large-scale databases.

  • Processor Area Network (PAN): We're not entirely sure what this means, but it's not hard to imagine. Much like compute area network, we would presume.

  • Server Area Network (SAN): Like the BAN, but not necessarily based exclusively on the blade server, yet somehow different from today's server clusters.

  • System Area Network (SAN): An ambitious term that is general enough to approach meaninglessness.

What's the endpoint? The all-powerful Datacenter Area Network (DAN)? Hard to say, but the enterprise data center is undergoing a top-to-bottom makeover, conceived by many as a unified, addressable resource, without the artificial boundaries imposed by the sheet metal around individual boxes.

These concepts all relate to improving utilization and/or performance in multiserver and cluster environments through network enhancements. Many enterprise server environments are underutilized to the tune of 80 to 90 percent, so there is considerable headroom to reclaim. Many clustered server/computing environments suffer application performance degradation caused by latency, limited throughput, and host CPU cycles diverted to tasks other than application processing. Solving these problems, among others, is part of the charter for such xAN initiatives.

These improvements are achieved through a variety of solutions, including network interface cards (NICs), I/O controllers, switches, and networking software. The newer solutions go beyond existing products such as TCP Offload Engines (TOEs), simple Layer 2/3 switches, InfiniBand switches, and proprietary high-speed interconnect technologies like Myrinet and Quadrics. Their common thread is Ethernet, but with enhancements like Remote Direct Memory Access (RDMA), hardware-assisted protocol acceleration, and intelligence (for example, database-routing decisions enabling optimal use of server resources). (See RDMA Rumbles Along, Cenata Plots 'Transparent' Clusters, Vendors Chip Into IP Storage, and Will Offload Chips Be Uploaded?.)

The result is reduced latency, increased throughput, improved utilization, and network awareness of server characteristics. The significance is that enhancements to a technology as ubiquitous as Ethernet could finally move these deployments from niche implementations to widespread enterprise adoption, as blade servers continue to proliferate in enterprise data centers and IT efficiency remains a top priority.
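
To make the copy-avoidance idea concrete, here's a minimal, purely illustrative Python sketch (ours, not any vendor's product). It contrasts a conventional send loop, in which every byte is copied through user space and consumes host CPU cycles, with a zero-copy transfer using the standard os.sendfile call; an RDMA-capable NIC pushes the same principle further in hardware, moving data directly between the application memory of two servers without either host CPU touching the payload.

```python
# Illustrative sketch: two ways to push a file over a connected TCP
# socket. The first copies every byte through user space; the second
# lets the kernel feed the NIC directly (a loose software analogue of
# the data-movement offload that RDMA NICs perform in hardware).
import os
import socket

CHUNK = 64 * 1024  # read size for the copy-through-CPU path

def send_with_cpu_copies(conn: socket.socket, path: str) -> None:
    """Conventional path: the payload crosses into user space and back,
    spending host CPU cycles on data movement rather than real work."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            conn.sendall(chunk)

def send_zero_copy(conn: socket.socket, path: str) -> None:
    """Zero-copy path: os.sendfile() (Linux/macOS) moves data
    kernel-to-kernel; the application never touches the bytes sent."""
    with open(path, "rb") as f:
        remaining = os.fstat(f.fileno()).st_size
        offset = 0
        while remaining > 0:
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, remaining)
            offset += sent
            remaining -= sent
```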

However, there are a few high-level usage problems that persist with existing technologies:

  • TCP Offload Engines relieve the host server of some protocol processing, but they actually process the protocols more slowly than the host CPU – so they don't really help the latency problem.

  • InfiniBand has yet to take off and continues to struggle for broad acceptance. Cost issues and slow adoption are signals that enhancements to Ethernet will likely be the technology of choice over the long term. We'll likely know the answer this time next year (see Whither InfiniBand?).

  • Technologies like Myrinet and Quadrics are closed systems that require end-to-end technology (server interface, switch port/fabric, etc.) to solve the problem. They do relatively well in university labs and R&D centers, but their outlook in the mass market remains limited.

  • Tight binding between applications and specific servers leads to inefficient application provisioning and underutilized database server resources.

In some enterprise clusters that use Ethernet or Gigabit Ethernet switches today, the switch itself can become the weak link, dropping packets and failing to perform at wire rate. In some cases, simple switch configuration changes help (e.g., turning off all higher-layer services). In others, optimized switching solutions yet to be developed could solve performance problems and, eventually, coordination problems.
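
As a quick illustration of how an administrator might spot such a weak link, here's a hypothetical probe sketch (our illustration, not a vendor tool; the port, count, and timeout values are arbitrary). It fires sequence-numbered UDP datagrams at an echo server running on another cluster node and reports loss and round-trip latency; sustained loss at modest offered load is one sign that a switch isn't keeping up.

```python
# Hypothetical cluster-switch probe (illustrative sketch): send
# sequence-numbered UDP datagrams to an echo server on another node
# and report loss and round-trip latency.
import socket
import sys
import time

PORT = 9999     # assumed-free port on both nodes
COUNT = 1000    # datagrams to send
TIMEOUT = 0.2   # seconds to wait for each echo

def echo_server() -> None:
    """Run on the far node: echo every datagram back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def probe(host: str) -> None:
    """Run on the near node: measure loss and round-trip latency."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    lost, rtts = 0, []
    for seq in range(COUNT):
        start = time.perf_counter()
        sock.sendto(seq.to_bytes(4, "big"), (host, PORT))
        try:
            data, _ = sock.recvfrom(2048)
        except socket.timeout:
            lost += 1
            continue
        if int.from_bytes(data, "big") == seq:
            rtts.append(time.perf_counter() - start)
        else:
            lost += 1  # stale or reordered echo; count it as a miss
    if rtts:
        avg_ms = 1000 * sum(rtts) / len(rtts)
        print(f"loss: {lost}/{COUNT}, avg RTT: {avg_ms:.3f} ms")
    else:
        print(f"all {COUNT} probes lost")

if __name__ == "__main__":
    if len(sys.argv) > 1:
        probe(sys.argv[1])  # client role: probe the named host
    else:
        echo_server()       # server role: echo datagrams back
```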

A couple of the companies involved in this general area are Ammasso Inc. and Savantis Systems Inc. Though they're not doing the same thing, both are dealing with some of the matters discussed above. Savantis is involved specifically with Database Area Networks; Ammasso is still in stealth mode, but it says it's focused on high-performance computing clusters. Even behemoth networking vendor Nortel Networks Corp. (NYSE/Toronto: NT) recently called out the future importance of "Server Area Networks" as computing and networking continue to converge.

In addition to the benefits provided by the technology, there are several other market forces that suggest now is the time for such developments to take shape, including:

  • Increased deployments of blade servers;

  • Companies like Dell Computer Corp. (Nasdaq: DELL) viewing clustering as a path up the value chain;

  • Enterprise IT initiatives, including increased interest in virtualization, cost reduction (blade clusters vs. costly symmetric-multiprocessing machines), and better equipment utilization;

  • Ethernet advances including 10-Gigabit Ethernet and RDMA over Ethernet; and

  • Growing availability of clustered middleware and applications for commodity platforms.

Most of what we've been talking about so far applies to a single cluster, or group of servers, that is "virtualized" through various technologies to increase utilization or improve the performance of demanding applications. In the evolution of enterprise computing, this represents a critical step toward the grander visions of "autonomic computing" (IBM's terminology), grid networking, and the distribution of computing resources on a global scale. The path forward is essentially defined by the limits of virtualization: Today, storage is virtualized through the implementation of SANs; going forward, computing resources are being virtualized in clusters; and soon all the resources of an enterprise data center will be virtualizable.

The next step – if it can be technically and economically justified to the businesses relying on high-performance computing – extends that model of virtualization across the campus area, the metro area, and ultimately the wide area. Once that happens, we'll have arrived at what got us interested in the first place: the Grid.

— Scott Clavenna, Director of Research, Light Reading, and Rick Thompson, Principal Analyst, PointEast Research LLC
