The Survivor's Guide to 2004: Storage and Servers

It's a buyer's market for servers, users are making their voices heard, and AMD is coming on strong with a new 64-bit processor. Swing into action as consolidation and commoditization take hold.

December 19, 2003


Another trend that should make you smile: The price of storage and storage networking hardware is falling dramatically. There was a time when only the largest organizations could afford networked storage. Now, smaller companies are demanding some of the benefits of storage networking, and vendors are responding with lower-priced products, though not just from the goodness of their hearts. The impetus: ATA-based storage (parallel ATA or Serial ATA) is surging into the mainstream for both near-line and nontransactional storage. On the networking side, competition among Fibre Channel vendors and a looming threat from iSCSI and other Ethernet/IP-based storage networks are adding pressure.

Mind you, not every small business requires a full-blown SAN (storage-area network)--SAS (serial-attached SCSI) can suffice in some situations. Still, we'll see a profusion of new products next year designed and priced to help companies with their storage needs, from management and reporting to capacity and speed.

Other segments of the storage market are on the move, too. Electrical limitations, cable bulk and costs have brought traditional parallel SCSI to the end of its useful life, and SAS is the replacement for Ultra320 SCSI, the last parallel version of the protocol. SAS boasts a simplified physical layer--one it shares with SATA (Serial ATA). This, coupled with new silicon that handles both SATA and SAS, lets vendors design one box and sell the customer whichever type of storage is appropriate for the application. The result is significant cost savings for vendors and increased flexibility for customers. Look for the first SAS boxes in early 2004.

On the protocol front, iSCSI will continue to blossom. Companies large and small can take advantage of iSCSI's commodity-level pricing and easy installation. Other benefits include carrying data and management on a single wire and protocol, as opposed to Fibre Channel's out-of-band management over a separate Ethernet connection.

Not up for rip and replace? Don't worry: New products to turn your current storage into an iSCSI target will continue to come on the market, while native iSCSI wares will also roll out apace. In addition, we expect new iSCSI accelerator cards to debut in 2004, providing even more choices for consumers.

Does all this point to iSCSI replacing Fibre Channel? Not yet. Although iSCSI is great technology riding on two of the most ubiquitous technologies available, Ethernet and TCP/IP, the reality is that for at least the next year, iSCSI will remain at 1 Gbps, half the raw speed of top-end 2-Gbps Fibre Channel (FC). For now, iSCSI's place is in businesses that don't need FC's top-end speed, in secondary SANs for departmental use and in the SMB (small and midsize business) market.

Another protocol that will make news next year is 4-Gbps FC. This controversial middle implementation of the FC specification uses the same fiber cabling and many of the same parts as 2-Gbps FC, and it will likely be priced the same as 2-Gbps FC. Still, we expect 4-Gbps FC to be adopted mainly for green-field installations, with few 2-Gbps FC users upgrading. That's because vendors are already showing 10-Gbps FC products, which makes 4-Gbps FC something of an oddity.

At the same time, many vendors say they are planning to replace their 2-Gbps offerings with 4-Gbps FC by 2005 at the latest, with 2-Gbps positioned as the less expensive, lower-data-rate version of FC if 10-Gbps takes off. Don't worry too much about 4- or 10-Gbps FC in 2004, because the short-term market for 10-Gbps FC is interswitch links. Although 4-Gbps products may start trickling into the channel as early as the first quarter of next year, they will be backward-compatible with 1- and 2-Gbps devices, so they shouldn't pose a problem.

Next up in the buzzword category: information life-cycle management. Or, for your continued acronym pleasure, ILM. ILM is similar to the HSM (hierarchical storage management) of days past, with one big difference: HSM never really enjoyed full market acceptance, but ILM can--and should--be welcomed with open arms by storage admins suffering massive headaches induced by HIPAA and Sarbanes-Oxley. These government regulations and new best practices for data retention have spawned a need for more sophisticated administrative tools.

But beware: Many a vendor is dusting off old HSM strategies and relabeling them as ILM. Don't be fooled--ILM means more granular control. We need to manage data at more than just a backup-and-restore level, not only because of regulations but to optimize our best, fastest and most expensive storage by handling only the data that requires that kind of storage. How? We need a layered approach. At the top is fast, ultrareliable (and, therefore, ultraexpensive) storage for our constantly accessed data. From there, we move down the stack in speed and price until we reach long-term retention.

That's where ILM comes in. Generally speaking, data needs to be accessed often only during the first stage of its life. As data ages, access becomes more infrequent; after a time, the data may need to be kept only for archival reasons. ILM lets us manage data according to government and business rules, moving it down the stack and into permanent storage as needed. For example, monthly sales data can walk down the stack as it is accessed less often, freeing up valuable space on our primary disk arrays. The beauty is that data-retention requirements are met, with the added benefit of keeping each piece of data on the storage tier its access frequency warrants.
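To make the layered approach concrete, here is a minimal sketch of the kind of policy engine an ILM tool applies. The tier names, age thresholds and migration stub are hypothetical, used only to illustrate moving data down the stack by last-access age while honoring a retention rule; no vendor's product works exactly this way.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical storage tiers, fastest and most expensive first.
TIERS = ["primary-fc-array", "midrange-ata-array", "tape-archive"]

@dataclass
class DataSet:
    name: str
    last_access: datetime
    retain_until: datetime      # driven by business or regulatory rules
    tier: str = TIERS[0]

def target_tier(ds: DataSet, now: datetime) -> str:
    """Pick a tier from the age of the last access (thresholds are illustrative)."""
    age = now - ds.last_access
    if age < timedelta(days=30):
        return TIERS[0]          # hot data stays on the fast array
    if age < timedelta(days=180):
        return TIERS[1]          # cooling data moves to cheaper ATA disk
    return TIERS[2]              # cold data goes to long-term tape

def migrate(datasets: list[DataSet], now: datetime) -> None:
    for ds in datasets:
        if now > ds.retain_until:
            print(f"{ds.name}: retention expired, eligible for deletion")
            continue
        dest = target_tier(ds, now)
        if dest != ds.tier:
            print(f"{ds.name}: {ds.tier} -> {dest}")  # a real tool would copy, verify, then delete
            ds.tier = dest

if __name__ == "__main__":
    now = datetime(2004, 1, 1)
    sales = DataSet("monthly-sales-2003-06",
                    last_access=now - timedelta(days=200),
                    retain_until=now + timedelta(days=365 * 7))
    migrate([sales], now)   # prints: monthly-sales-2003-06: primary-fc-array -> tape-archive
```

A real product layers data naming, verification and an audit trail on top of a policy like this, which is precisely the standards work the industry has yet to do.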

You'll hear an earful about ILM in the coming year, if you haven't already. Major end-to-end storage players, including EMC and Hewlett-Packard, are formulating plans to make ILM technology available to the masses. Veritas, Fujitsu Softek and other software vendors are planning plays in this space, too.

But, as tempting as it might be, early adoption is a no-no. Right now, vendors are simply retooling their current products to play to this market trend. To do real ILM, you need a scheme for access-frequency-based data migration, a scheme for data naming and a boatload of standards the industry seems unwilling, and unlikely, to build, partly because of commoditization fears. ILM will happen, but it needs to be well thought out and driven by users, not by the industry's need to sell complex solutions.

Tape Time

Tape technology in the data center will continue on its modified course. Disk-to-disk-to-tape backup is going to become the norm--if you haven't implemented DDT in your backup scheme, next year might be the time. Many companies, including Quantum and StorageTek, offer disk systems that emulate tape drives to make for a seamless install in your environment. A key reason for using disk with tape emulation is that much of today's tape-backup software doesn't support writing directly to disk, though by next year we expect backup software vendors to address this problem.
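As a rough illustration of the DDT flow, the sketch below stages a backup set on inexpensive disk first, then streams it to a tape device. It assumes a Linux host with a tape drive at /dev/st0 and a hypothetical staging path; it stands in for what the backup application or a DDT appliance does internally, not for any vendor's product.

```python
import shutil
import tarfile
from pathlib import Path

STAGING = Path("/staging/backups")     # fast, inexpensive disk: quick backups and restores
TAPE_DEVICE = "/dev/st0"               # physical tape drive for long-term, off-site copies

def stage_to_disk(source: Path) -> Path:
    """Step 1 (disk-to-disk): copy the backup set to the disk staging area."""
    dest = STAGING / source.name
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

def copy_to_tape(staged: Path) -> None:
    """Step 2 (disk-to-tape): stream the staged copy to the tape device as a tar archive."""
    with tarfile.open(TAPE_DEVICE, mode="w") as tape:
        tape.add(staged, arcname=staged.name)

if __name__ == "__main__":
    staged = stage_to_disk(Path("/data/exchange-backup"))
    copy_to_tape(staged)
```

Recent backups restore from the disk stage at disk speed, while the tape copy still goes off-site, which is why disk augments tape rather than replacing it.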

Remember, however, disk isn't a replacement for tape, but an augmentation. Only in certain, very specialized situations where off-site backup isn't an issue can inexpensive disk take the place of tape. Don't believe the hype about tape being dead; it simply isn't true. (Need convincing? See "Don't Count Tape Out Just Yet".)

On the tape-technology front, 2004 will see more of the same: bigger and faster tape drives and automation. There is also a push to make tape automation more affordable for the masses as DDT appliances begin to appear on the market from companies like Breece Hill and Spectra Logic. These appliances bring the convenience and speed of DDT to small and midsize businesses. New products will continue to come to the aid of small businesses in an attempt to bring them up from their current abysmal level of data protection.

Optical storage will see some changes, too. Technology breakthroughs this year that allow dual-layer DVD will continue to receive attention in 2004. In addition, consumer-level Blu-Ray (a blue-laser DVD standard) products in Japan are already providing 27 GB of capacity per disc. Because a standards war is still being fought over the next generation of DVD, we recommend holding off for the time being. The standards mess spawned during the first generation of writable DVDs, coupled with the continued paralysis of the DVD Forum standards body, means the smart money will wait until there is a clear market winner.

As we said, the 2004 server market will be eerily similar to the 2003 server market, thanks to our old friend commoditization. Case in point: Many times, the big three vendors--Dell, HP and IBM--use the same chipsets, most often those from Broadcom subsidiary ServerWorks.

On the whole, this is a boon to customers. Today's servers are rock-solid and have many advanced features--including hot-swappable memory and two Gigabit Ethernet ports on the main board--that yesterday's servers could only dream about. Vendors know it's getting hard to differentiate the mainboard features of their systems, so they're competing on other fronts: price, software and the specific features your organization is seeking.

For software, Dell, HP and IBM include some useful code with their systems to simplify access and maintenance, putting vendors that don't offer such ease-of-use software at a disadvantage. Point features include the comprehensiveness of the remote-management package, the number of internal disks, and the type and location of ports on a given server. Price is the easiest: If everything else is equal and you're satisfied with the companies bidding for your business, price can be the deciding factor.

As for specifications, there has been some movement on the slot-interconnect front. The PCI-SIG (Peripheral Component Interconnect Special Interest Group) is moving forward with two new specs that should hit in late 2004. The first, aimed at servers, is PCI-X 2.0, the successor to the PCI-X 1.0 spec commonly found in servers today. The 2.0 version encompasses two new speeds, 266 MHz and 533 MHz, and is backward-compatible with older PCI specifications, with the exception of 5.0-volt PCI, which is rare these days and found only on very old cards. This specification will help companies take full advantage of 10 Gigabit Ethernet and other high-bandwidth technologies.

The other specification to watch is PCI Express, which is targeted at desktops. With its maximum theoretical speed of 133 MBps, the current PCI specification for desktops is woefully inadequate considering the speed of today's hard disks and Ethernet connections. PCI Express will provide 2.5 Gbps of raw signaling per lane in each direction, and it is software-compatible--but not electrically or mechanically compatible--with the current PCI specification. Keep an eye out for desktop systems equipped with this new technology.
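For a sense of scale, the back-of-the-envelope figures below compare peak theoretical bandwidth across these slot specs. The numbers are the standard published peaks; the small script merely does the arithmetic.

```python
# Peak theoretical bandwidth in megabytes per second (1 MBps = 8 Mbps).

def parallel_bus_mbps(bus_width_bits: int, clock_mhz: float) -> float:
    """Parallel PCI/PCI-X: bus width (in bytes) times clock rate."""
    return bus_width_bits / 8 * clock_mhz

def pcie_lanes_mbps(lanes: int) -> float:
    """PCI Express: 2.5 Gbps raw per lane; 8b/10b encoding leaves about 250 MBps usable."""
    return lanes * 250

print(f"PCI 32-bit/33 MHz : {parallel_bus_mbps(32, 33.33):7.0f} MBps")   # ~133 MBps
print(f"PCI-X 1.0 64/133  : {parallel_bus_mbps(64, 133):7.0f} MBps")     # ~1,064 MBps
print(f"PCI-X 2.0 64/266  : {parallel_bus_mbps(64, 266):7.0f} MBps")     # ~2,128 MBps
print(f"PCI-X 2.0 64/533  : {parallel_bus_mbps(64, 533):7.0f} MBps")     # ~4,264 MBps
print(f"PCIe x8           : {pcie_lanes_mbps(8):7.0f} MBps per direction")
```

Even a single 10 Gigabit Ethernet port (roughly 1,250 MBps of raw line rate) saturates plain PCI and PCI-X 1.0, which is why the higher PCI-X 2.0 speeds matter on servers, while a handful of PCI Express lanes comfortably outruns desktop PCI.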

The Emperor Has No Clothes

There is a brouhaha fomenting in the server-processor market, and it goes something like this: It is becoming increasingly apparent that Intel's Itanium processor line, with its EPIC architecture, is not taking off. The lack of software for the Itanium processor's native mode and mediocre performance in legacy 32-bit mode have made the 64-bit offering something of a lame duck.

At the same time, AMD has released its Opteron, a 64-bit server-class chip based on classic x86 technology with extensions to 64 bits. This means it runs 32-bit x86 code--billions and billions of lines of it--at full speed. Other processor enhancements make the Opteron one of the top processors in the 32-bit space as well as a player in the 64-bit space. IBM has even released an Opteron-based server that has posted some pretty impressive numbers (see "IBM Champs at the 64 Bit").

We aren't advocating that you run right out and buy an Opteron server, but you should keep a close eye on the chip in the next year. Keep an even closer watch on Intel and see if it makes a chip that takes advantage of the x86-64 extensions that AMD has developed: Cross-licensing agreements between Intel and AMD give Intel full access to the technology, and if it looks like AMD might be too successful, you can bet Intel will release a processor based on those same extensions.
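If you want to see which camp a given x86 box falls into, here is a quick check you can run on a Linux host: the kernel reports an "lm" (long mode) flag in /proc/cpuinfo for processors that implement the 64-bit extensions. This is only an illustrative sketch, not part of any vendor's tooling.

```python
# Report whether the first processor in /proc/cpuinfo advertises the
# "lm" (long mode) flag, i.e. the x86 64-bit extensions.
def supports_x86_64(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "lm" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("64-bit x86 extensions:", "yes" if supports_x86_64() else "no")
```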

The bottom line on servers for 2004 is better, cheaper and faster. A consumer's delight.

Steven J. Schuchart Jr. covers storage and servers for Network Computing. Previously he worked as a network architect for a general retail firm, a PC and electronics technician, a computer retail store manager, and a freelance disc jockey. Write to him at [email protected].


AMD: AMD's new Opteron processor could be a winner.

Breece Hill: Breece Hill reinvented itself this year and enters the market at year's end with a combo autoloader/disk array priced to move in the SMB market (where the pain of compliance with regulations around data protection is acutely felt).

EMC: EMC has made several company and software acquisitions and is repositioning itself; look for more changes.

Intel: Intel is sure to make a move against AMD's Opteron.

Nexsan: Delivers lots of low-cost ATA arrays.

Revivio: Revivio is fixing point-in-time mirroring and returning wasted disk to productive use.

Veritas: Veritas' core business is being threatened by the emergence of intelligent switching platforms that are moving storage intelligence from the host to the switch.

Network Computing storage & server technology white papers

Network Computing storage & server technology research reports

"Utility Computing: Have You Got Religion?"

"Reach for the Masses""HP Takes New Aim At Small, Medium Businesses"

"IT May Want Its 'Storage On Demand'"
