Survivor's Guide to 2005: Storage and Servers

The upcoming year will bring new technologies and options that extend the life of your SAN, while 64-bit computing and blade servers continue to evolve.

December 17, 2004


The IP SAN field had been left to small players like Network Appliance and Adaptec's acquisition, Snap Appliance. But in 2005, this market will hit the mainstream. Most major vendors are planning support for IP SANs, using iSCSI along with either FC-IP (Fibre Channel over TCP/IP) or iFCP (Internet Fibre Channel Protocol).

FC-IP and iFCP both aim to interconnect and extend your SANs, but they differ in approach. FC-IP makes Fibre Channel the dominant element, sending FC data through an IP tunnel, so your SAN management software shouldn't see a difference between a local SAN and a remote one. In contrast, iFCP exposes IP networking's management capabilities so you can manage your SANs separately. At this writing, only McData Corp. supports iFCP; the other vendors support FC-IP.
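To make the tunneling idea concrete, here's a toy Python sketch that treats a Fibre Channel frame as an opaque payload and ships it over an ordinary TCP connection, which is roughly what an FC-IP gateway does on your behalf. The length-prefix framing is invented for illustration and the gateway address is a placeholder; this is not the real FCIP encapsulation.

```python
# A toy illustration of the FC-IP tunneling idea: the Fibre Channel frame is
# treated as an opaque payload and shipped over an ordinary TCP connection, so
# each side still "sees" a single fabric. This is NOT the real FCIP
# encapsulation; the length-prefix framing is invented, and the port number is
# only the commonly cited FCIP default.
import socket
import struct

def tunnel_fc_frame(fc_frame, gateway_ip, port=3225):
    """Ship one raw FC frame to the remote gateway with a simple length prefix."""
    message = struct.pack("!I", len(fc_frame)) + fc_frame
    with socket.create_connection((gateway_ip, port)) as sock:
        sock.sendall(message)

# Hypothetical use: push a captured frame toward a remote-site gateway.
# tunnel_fc_frame(b"...raw FC frame bytes...", "10.1.200.5")
```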

Vendors are rolling out IP-enabled SANs: full-blown FC SANs with IP connectivity. They're also rolling out FC SAN switches with IP functionality that let you hook machines outside the data center to your existing SAN. In the short term, this is the most practical answer to growing storage needs outside the data center: hooking a remote server that needs storage into your SAN lets you leverage that SAN investment beyond its original bounds.

As long as the server doesn't need FC speeds to talk to the SAN, you'll improve the SAN's ROI without much overhead. If you choose FC-IP, you'll still need Fibre Channel cards in your servers and connectivity to an FC switch that can convert between FC and IP. For remote data centers or departmental server rooms, however, this is an ideal solution--and the one you're most likely to consider for extending the life of your SAN investment in 2005.

Feature-Complete iSCSI SANs

If you don't yet own a SAN, you may skip the Fibre Channel option altogether and move straight to iSCSI. This Internet technology has finally come of age, and the market now offers a wealth of new iSCSI targets. In fact, the field is getting so crowded with both new and familiar faces that vendors are striving for ways to differentiate their products. Among the hot features: multiple iSCSI ports, and built-in redundancy and storage management.
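Because an iSCSI target is just another TCP service, even a quick sanity check is ordinary network code. The sketch below (the portal addresses are placeholders) simply confirms that each portal of a target answers on the standard iSCSI port, 3260, before you hand it to your initiator; a multi-port target exposes several such portals for redundancy.

```python
# A minimal sketch: confirm each portal of an iSCSI target answers on the
# standard iSCSI TCP port (3260) before handing it to your initiator. The
# portal addresses are placeholders, not real devices.
import socket

ISCSI_PORT = 3260

def portal_reachable(address, timeout=2.0):
    try:
        with socket.create_connection((address, ISCSI_PORT), timeout=timeout):
            return True
    except OSError:
        return False

# A multi-port target exposes several portals for redundancy.
for portal in ("192.168.10.20", "192.168.10.21"):
    print(portal, "up" if portal_reachable(portal) else "down")
```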

Year-over-Year Revenue Growth, Q2 2004 (chart)

The iSCSI vendors think of themselves as "SAN killers," but we don't see it that way. Rather, Fibre Channel and iSCSI address two different needs. Maintaining an FC SAN takes a lot of specialized knowledge, so customers who need large amounts of storage but don't want to build a dedicated Fibre Channel skill set will turn to iSCSI. Some price-conscious customers will choose it, too, as a feature-rich iSCSI SAN still runs cheaper than the equivalent FC SAN. In the long run, iSCSI will fare better than Fibre Channel, simply because it uses technologies your networking staff already understands.

Still, enterprises with full-blown FC SANs are unlikely to replace them over the next few years. Even if 10-Gigabit Ethernet brings the promised performance boost to iSCSI SANs, you won't want to scrap your original FC SAN investment unless iSCSI suddenly and dramatically improves its ROI potential. Upcoming enhancements in IP technology--10GigE primarily--will improve the performance of iSCSI SANs, making them very appealing against FC SANs. But even today, multiple 1-Gbps cards are standard fare in higher-end IP SANs.
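The price gap is easy to model. This back-of-the-envelope sketch uses purely hypothetical component prices--substitute your own quotes--to compare the per-server cost of attaching to an FC fabric versus an IP SAN.

```python
# Back-of-the-envelope per-server attach cost, FC vs. iSCSI. Every price below
# is a hypothetical placeholder -- substitute your own quotes.
FC_HBA = 900           # Fibre Channel host bus adapter (assumed)
FC_SWITCH_PORT = 400   # one FC switch port (assumed)
GIGE_NIC = 80          # Gigabit Ethernet NIC (assumed)
ETH_SWITCH_PORT = 50   # one Ethernet switch port (assumed)

print(f"FC attach cost per server:    ${FC_HBA + FC_SWITCH_PORT}")
print(f"iSCSI attach cost per server: ${GIGE_NIC + ETH_SWITCH_PORT}")
```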

What if you had a collection of applications that could be installed on your SAN switch and worked out-of-band to handle data tasks ranging from security to optimization? That day is coming fast. Brocade Communications Systems and Cisco Systems already offer programmable blades for their SAN switches, and McData won't be far behind. Both Brocade and Cisco have persuaded other vendors to write applications for those blades and have verified the results--an interesting model that we believe will lead to more real-world solutions.

Look for compression and encryption applications, followed by ones for bandwidth monitoring, authentication and throughput. EMC Corp. and StoreAge are at the forefront of this field, with products to be deployed directly on your SAN switch.

If you wish to monitor your SAN throughput, the best place to do so is the switch. The same can be said of authentication. Both can be implemented without dragging down performance because neither requires intrusive manipulation of the payload. Switch vendors already build basic throughput-monitoring tools into their systems; watch for health-monitoring applications that fold these tools into a larger package to help you manage your SAN resources.
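The arithmetic behind switch-level throughput monitoring is simple counter math. In the sketch below, the byte counters are passed in as plain numbers; on a real switch they would come from its management interface.

```python
# A minimal sketch of the arithmetic behind switch-level throughput monitoring.
# The two byte-counter samples are passed in as plain numbers; on a real switch
# they would come from its management interface.
def throughput_mbps(bytes_at_t0, bytes_at_t1, interval_seconds):
    """Average megabits per second between two counter samples."""
    return (bytes_at_t1 - bytes_at_t0) * 8 / (interval_seconds * 1_000_000)

# Example: 1.5 GB crossed one port during a 10-second sampling window.
print(f"{throughput_mbps(0, 1_500_000_000, 10.0):.0f} Mbps")
```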

Although compression and encryption are available today, they're handled at the target or host level. At the target level, you introduce a single point of failure; at the host level, you make many round-trips to get the data where it needs to be. Authentication is also currently handled at the application level, but that is shortsighted. If a hacker gets access to the host in question, he or she has access to the SAN. By putting authentication at the switch, you can use the context (application name) to determine access from a given host. Thus, Explorer.exe lacks access, but "customerServ.exe"--an application developed internally--has it.
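As a rough sketch of what context-aware authentication at the switch could look like, the table below keys access on (host, application) pairs rather than on the host alone, so a compromised host running Explorer.exe still can't reach the LUN. The host names, application names and LUN labels are hypothetical.

```python
# A hypothetical sketch of context-aware authentication at the switch: access
# is granted to (host, application) pairs rather than to the host alone. Host
# names, application names and LUN labels are invented for illustration.
ACCESS_TABLE = {
    ("appserver01", "customerServ.exe"): {"lun_12", "lun_13"},
}

def allowed(host, application, lun):
    """True only if this application on this host is authorized for the LUN."""
    return lun in ACCESS_TABLE.get((host, application), set())

print(allowed("appserver01", "customerServ.exe", "lun_12"))  # True
print(allowed("appserver01", "Explorer.exe", "lun_12"))      # False: wrong app
```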

Our only concern with smart SAN switches is that they may compromise performance. Some security measures--such as authentication, which happens only once per resource--can be taken with minimal interference. But others, such as encryption, must occur as data passes through the switch. Whenever you introduce encryption, there's some effect on data transfer rates, but switch vendors have nonetheless kept their code as tight as possible to maintain good performance. In the coming year, we'll watch how the vendors control emerging applications' impact on performance.
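To get a feel for why in-line processing worries us, this toy benchmark times software compression of a 16-MB buffer. Switch blades use dedicated hardware rather than a general-purpose host CPU, so treat the result as an illustration of the cost-per-byte principle, not of any vendor's performance.

```python
# A toy benchmark: how much CPU time does in-line compression cost per byte?
# Real switch blades use dedicated hardware rather than a host CPU, so the
# number is illustrative of the principle, not of any vendor's product.
import os
import time
import zlib

buffer = os.urandom(16 * 1024 * 1024)  # 16 MB of incompressible (worst-case) data

start = time.perf_counter()
zlib.compress(buffer, 1)               # fastest compression level
elapsed = time.perf_counter() - start

print(f"Compressed {len(buffer) / 1_000_000:.0f} MB in {elapsed:.2f} s "
      f"({len(buffer) / elapsed / 1_000_000:.0f} MB/s on this host)")
```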

ILM: Real or Imagined?

Vendors will continue to push ILM in 2005. There is common sense to what they're trying to sell. Do you store the company logo on a drive whose cost per megabyte is high or low? Most of us should answer "low." Your high-cost, high-performance disk space should be reserved for high-volume applications like databases and e-mail.

But ILM isn't the only way to prioritize your storage. Many storage vendors keep the ILM-speak to a minimum yet offer solid ways to reduce storage costs, such as moving files to cheaper disk once they've gone unopened for several days--a common-sense way to free up space on your expensive arrays. EMC offers good advice for implementing ILM incrementally.

Other vendors, such as Advanced Digital Information Corp. and OuterBay Technologies, are betting the farm on the concept and eager to sell you their complete ILM packages. Before diving in, though, you'll face months of analysis. First, you must determine what storage you have, where it lives and what it costs you per megabyte. Next, you'll need to classify each document or file your organization produces and give it a life-cycle definition that moves it from expensive storage through archival to tape over its lifetime. ILM software tools will help, but you must make the manpower investment.

The problem with the complete vision of ILM is that you won't have the time for the analysis or the upkeep. The savings are there, but we recommend sticking to the low-hanging fruit in 2005. Move unaccessed files and those of employees who have left the company to cheaper storage media, and spend your time on one of the other 10,000 problems that storage professionals face daily.
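Much of that low-hanging fruit can be picked with a simple script. The sketch below moves files that haven't been read in 90 days from an expensive tier to a cheaper one; the mount points and the age threshold are placeholders, and you'd want to test it against a copy of your data before pointing it at production storage.

```python
# A minimal "low-hanging fruit" ILM sketch: move files that haven't been read
# in 90 days from an expensive tier to a cheaper one. The mount points and the
# age threshold are placeholders; run it against a copy of your data first.
import shutil
import time
from pathlib import Path

EXPENSIVE_TIER = Path("/mnt/fast_array/shared")  # hypothetical high-cost volume
CHEAP_TIER = Path("/mnt/sata_array/archive")     # hypothetical low-cost volume
MAX_IDLE_DAYS = 90

cutoff = time.time() - MAX_IDLE_DAYS * 86400

for path in EXPENSIVE_TIER.rglob("*"):
    # st_atime is the last-access time; files untouched since the cutoff move.
    if path.is_file() and path.stat().st_atime < cutoff:
        destination = CHEAP_TIER / path.relative_to(EXPENSIVE_TIER)
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(destination))
```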

Tape is still cheaper than disk, but that margin continues to shrink. Most organizations that aren't moving petabytes of data daily are in a position to use tiered disk as an ILM enabler.

Blade Servers

Blade servers are here, and they're real. Their obvious benefit is how much computing power they pack into a small amount of rack space. If you need many systems, blade servers are a cost-effective route to centralizing those servers. But the real payoff--the ability to redistribute your computing power dynamically across applications as demand shifts--will come in 2005.

Imagine being able to say, "From 6 a.m. to 10 a.m., this Oracle instance will get five CPUs and 5 GB of memory, but from 10 a.m. to noon, it needs only two CPUs and 1 GB of memory." Of course, the OS and/or application in question must support this type of provisioning, but OSs with that support built in are available today, as are the blade servers. High-end machines have had this capability for a while; Intel- and AMD-based blade servers are the first lower-cost machines to offer such scalability. When you need more power, you buy another blade.

The cost of entry is higher than that of a standard server--you must first buy the blade-server enclosure--but the incremental cost of adding power to an application, or adding applications, will be lower as long as you have room in the enclosure to expand. If your shop is adding or upgrading servers all the time, blade servers are worth a look.
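The policy side of that provisioning is easy to model, even though enforcing it depends entirely on your OS or workload manager. This sketch maps windows of the day to CPU and memory allocations for a hypothetical Oracle instance.

```python
# A minimal sketch of a time-based provisioning policy for a blade-hosted
# application. The schedule and the Oracle-instance framing are hypothetical;
# actually applying the allocation depends on your OS or workload manager.
from datetime import time

# (window start, window end, CPUs, memory in GB)
SCHEDULE = [
    (time(6, 0), time(10, 0), 5, 5),
    (time(10, 0), time(12, 0), 2, 1),
]
DEFAULT_ALLOCATION = (1, 1)  # outside any defined window

def allocation_at(moment):
    """Return (cpus, memory_gb) that the policy grants at a given time of day."""
    for start, end, cpus, memory_gb in SCHEDULE:
        if start <= moment < end:
            return cpus, memory_gb
    return DEFAULT_ALLOCATION

print(allocation_at(time(8, 30)))   # (5, 5) during the morning peak
print(allocation_at(time(11, 15)))  # (2, 1) late morning
```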

64-Bit Computing

The age of 64-bit computing arrived with a quiet replacement of old technology. AMD's Opteron is fully backward-compatible with 32-bit applications, as is the newest 64-bit Intel Xeon family of chips. Prices will rapidly erode, making it affordable for your OEM to ship you a machine with a 64-bit chip in it. Soon, whether you care or not, you're likely to have 64-bit computers running in your data center.

Linux vendors and Microsoft are preparing to take advantage of 64-bit extensions. That means you'll slide into the world of 64-bit computing without the major overhaul of systems Intel had originally planned for its Itanium rollout. Intel envisioned a massive upgrade of all your core servers to Itanium, but as the chip moved toward reality, it became clear that the cost of upgrading would outstrip the short-term benefits for many applications. Then AMD brought out the 64-bit Opteron, with 32-bit backward compatibility, to fill the gap.

Eventually, Intel saw the usefulness of such a chip. The vendor now offers 64-bit Xeon processors for most purposes while positioning the Itanium for high-end uses.

CPUs with 32-bit compatibility on-chip are a good thing in many ways, but there will be problems along the way. When 64-bit servers enter your data center in 2005, watch for hidden incompatibilities. The transition won't be flawless, and you'll undoubtedly find applications that perform poorly or won't run at all in 64 bits. Testing is your safeguard: before you move an application to a 64-bit server, test it thoroughly in the new environment.
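A little scripting helps with the first pass. The sketch below merely reports the word and pointer sizes of the environment it runs in--code that assumes a 4-byte pointer or C long is a classic source of hidden 64-bit breakage--but it's no substitute for running your application's own test suite on the new box.

```python
# A quick first-pass check of the environment before moving an application to
# a 64-bit server. Code that assumes a 4-byte pointer or C long is a classic
# source of hidden breakage; this script only reports sizes -- it's no
# substitute for running the application's own test suite on the new box.
import ctypes
import platform
import struct

print("Machine:      ", platform.machine())
print("Pointer size: ", struct.calcsize("P"), "bytes")   # 8 on a 64-bit build
print("C long size:  ", ctypes.sizeof(ctypes.c_long), "bytes")
```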

Some true 64-bit applications will be available in 2005, but many of the server applications you run today won't be available until 2006. This is just a reflection of market adoption trends. For application vendors to make money on a platform, there must be enough customers that want it.

Don MacVittie is a technology editor at Network Computing, specializing in storage and servers. Write to him at [email protected].

SAN in a Box

Apple Computer and Xiotech Corp. are the front-runners in what will become a growing trend: the Fibre Channel SAN in a box--a full-blown SAN with a low price tag and easy configuration. Pressure from other storage sources--primarily iSCSI, but also Windows Storage Server derivatives and traditional NAS appliances--will continue to drive prices down and ease of use up. That will open a window of opportunity for heterogeneous SAN management tools to gain a real foothold in the enterprise. If you can drop a Fibre Channel SAN into a remote location or department for $15,000, then configure it in 15 minutes or less, you'll probably do so. When the competing technologies cost about the same and are no easier to configure--your staff already has the FC know-how--it becomes hard to justify not doing so.

The biggest stumbling block we see for low-end FC SANs is the continued requirement for an FC host bus adapter in every server that accesses the SAN. That requirement won't go away--you need some way to communicate with the SAN--and the configuration time, coupled with the cost, makes FC less appealing than alternatives that use software to interface with storage. For a large enterprise's data center, FC SAN-in-a-box capacity may be too limited to consider. But for the data center of a small or midsize organization, such a SAN may be just the thing to provide storage on demand at the performance of Fibre Channel and the cost of iSCSI or NAS.
