Mark Lewis

"People say, 'I need to get some IP-based storage.' But no, you really don't."

June 26, 2001

The Byte and Switch Interview

In the Spotlight: Mark Lewis, vice president and general manager of Compaq's Enterprise Storage Group

If you are like many people, you may think of Compaq Computer Corp. (NYSE: CPQ) as a PC vendor. While the company has its roots in the PC business, Compaq is also a big player in both the server and storage markets.

To illustrate, during the company's last fiscal year, its enterprise computing business segment pulled in $14.316 billion in revenue, representing 34 percent of the company's total revenue of $42.383 billion. Within the enterprise computing business segment, storage products accounted for $5.24 billion in revenue, better than 36 percent of that segment's total revenue.

Additionally, the recently released Gartner/Dataquest storage study ranks Compaq number two in worldwide host-attached external RAID storage with 27.3 percent of that market, behind Sun Microsystems Inc. (Nasdaq: SUNW) with 31.6 percent; and number two in worldwide external hardware RAID controller-based storage at 10.8 percent, behind EMC Corp. (NYSE: EMC) with 34.6 percent.

Within the field of enterprise computing players, Compaq has a unique position. Through the acquisition of Tandem Computers in 1997 and Digital Equipment Corp. in 1998, Compaq has the most diverse high-end server product line of any of the major systems vendors, supporting its Intel-based ProLiant servers, its Alpha-based AlphaServers, and its MIPS-based NonStop Himalaya line of parallel-processing systems. (Compaq's NonStop Himalaya line is scheduled to switch to Alpha processors in 2003.) Supporting that diverse server product line poses an unusual challenge for the company's Enterprise Storage Group, but also provides it with an edge.

Thus, the direction Compaq takes with its storage networking products has a major influence on the market. That direction is under the watchful eye of Mark Lewis, vice president and general manager of Compaq's Enterprise Storage Group. Lewis joined Digital Equipment Corp. back in 1984, and served as that company's director of engineering for enterprise storage; he was instrumental in the development of the StorageWorks product line.

Based on that time in the trenches, Lewis sees virtualization as a key element of storage networking's future, and has a wide-angle view of issues like interoperability and the battle between Fibre Channel and the emerging storage transport protocols such as iSCSI.

Here's how Lewis sees the world of storage networking:

Interoperability
Vaulting into the Virtual
Protocols and the Future

Byte and Switch: Through its various mergers and acquisitions over the years, Compaq has one of the broadest product lines around. To what extent have you been able to maintain cross-platform interoperability for storage networking products within your various product families, and how do you manage the process?

Mark Lewis: We've done very well in that respect. Since the [Compaq and Digital] merger, we have had various high-end products and entry-to-midrange products, and we truly integrated the organization and got traction that way. The good news, and it goes to our ability to support both Compaq platforms and those of other vendors, is that we have developed a heterogeneous product line. While EMC has done a good job in this area, too, as server suppliers go, when you look at Hewlett-Packard Co. [NYSE: HWP] or IBM Corp. [NYSE: IBM] or Sun, these other companies are really struggling to support even their own platforms. We are not only supporting our own systems, but also Dell Computer Corp. [Nasdaq: DELL] and other NT systems, as well as Solaris, AIX, and HP-UX.

Process-wise, we constantly work to improve, because it costs real money to have Sun expertise, HP expertise, and Tru64 and VMS expertise, at least to some degree, within the storage group to build storage systems. I don't think we have any huge secret in how we do this, but suffice it to say you have to spend money, build labs, and build expertise. It does not come free. If folks aren't willing to pony up for some serious investment, it's going to be difficult for them to support the number of platforms we do.

Byte and Switch: What do you see as the biggest interoperability issues facing the storage networking industry today?

Lewis: I think the largest issue we face is building a true process and methodology to validate the operation of large, heterogeneous SANs. What you have in this environment is more configurations than you could possibly test, so we need to develop testing methodologies that can validate them. Storage is going through the same growing pains as when we first adopted networking. Obviously, no one could build a NIC or a router today and test it against the entire Internet. With these large heterogeneous storage networks, you have different people's components, so we're working a lot on standardization and test planning to be able to assure customers that a large storage array added into a SAN is going to operate properly.
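To make the scale of that problem concrete, here is a minimal sketch of pairwise (combinatorial) test planning, one standard way to validate far more configurations than could ever be tested exhaustively. The component lists and names are hypothetical illustrations, not Compaq's actual qualification matrix.

from itertools import combinations, product

# Hypothetical dimensions of a SAN configuration to qualify.
DIMENSIONS = {
    "host_os": ["Windows NT", "Solaris", "AIX", "HP-UX", "Tru64", "OpenVMS"],
    "hba": ["hba_vendor_a", "hba_vendor_b", "hba_vendor_c"],
    "switch": ["switch_x", "switch_y"],
    "array": ["StorageWorks", "Symmetrix", "Shark"],
}
NAMES = list(DIMENSIONS)

def pairs_of(cfg):
    # All (dimension, value) pairs that one configuration covers.
    return {((d1, cfg[d1]), (d2, cfg[d2])) for d1, d2 in combinations(NAMES, 2)}

def all_configs():
    return [dict(zip(NAMES, combo)) for combo in product(*DIMENSIONS.values())]

def pairwise_suite():
    # Greedily pick configurations until every pairwise combination of
    # settings appears together in at least one chosen configuration.
    uncovered = set()
    for cfg in all_configs():
        uncovered |= pairs_of(cfg)
    suite = []
    while uncovered:
        best = max(all_configs(), key=lambda c: len(pairs_of(c) & uncovered))
        uncovered -= pairs_of(best)
        suite.append(best)
    return suite

print(f"{len(all_configs())} exhaustive configurations")   # 6*3*2*3 = 108
print(f"{len(pairwise_suite())} configurations cover every pair")

Even this toy matrix yields 108 full configurations, while a suite that covers every pair of settings needs only a fraction of that; a real lab's matrix is orders of magnitude larger, which is why test planning matters as much as lab capacity.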

Byte and Switch: Do you maintain other vendors' servers in your labs for ongoing interoperability testing? And if so, which vendors?

Lewis: Absolutely. I walked through the lab here in Colorado Springs this morning; we probably have over 100 Sun servers, for example, just in this one building. So we maintain not just some Sun servers, but one of almost every model.

With the open SAN environment, we're starting to build more of our competitors' products into those SANs to make sure that we don't have any interoperability issues there. That means buying EMC Symmetrix, IBM Sharks, and other products to put in our labs for our own interoperability testing. We support all the major UNIX platforms, including Linux. We support NT across the board, and do specific testing on the major NT supplier platforms as well. For example, we bring HP servers and Dell servers into the labs to make sure there is nothing unique about those servers.

Byte and Switch: Do any particular platforms present more interoperability issues than others?

Lewis: Each has its advantages and disadvantages. I don't think I'll name any names, but in getting support from certain suppliers, there are some operating systems that are fairly open in understanding how they operate and what they do, and there are one or two that work a little harder at being a little more closed about how you interact with that operating system. I think there is a general push to be more open, particularly at this interface layer. Customers are starting to push vendors to open up more there.

Byte and Switch: What does the Compaq interoperability-testing budget look like? You must spend a ton of money on this.

Lewis: Even before you look at the capital, it is definitely many millions of dollars. I'm not going to provide any specific number, but it's a lot. We're hoping it will get easier; storage networking is improving greatly in that respect. We're doing better things with networking to make things more interoperable. We're not just investing in brute-force testing, but also in better specifications, more work with the Storage Networking Industry Association (SNIA) to put in more interoperability groups, more work with outside groups to develop certification testing, and so forth. In the end, we want to spend less.

Byte and Switch: What is the typical interoperability testing process that you go through? For example, do you test real backups using leading software packages to see if real data flow breaks anything?

Lewis: That's a great example. When we build backup solutions into SANs, both access to online storage and near-line backup are absolutely critical. We spend a lot of time with the backup apps because backup is such a direct, interactive part of storage. We also build large Oracle databases, for example, to be sure the applications don't have any built-in timeouts that might interact with the environment.

Another thing that we do a lot of: most people put everything in a nominal state, do a backup, and it works fine. You actually need to put things into a degraded state, meaning that while that backup is running, you need to be failing redundant disk drives, failing data paths, doing things that perturb the system. That's the hard part. Most of this stuff works right out of the box in ideal configurations and situations. What customers pay us for is for everything to work when things aren't going so well. If you look at the firmware, most of the code in an array controller is error-handling code -- code that deals with operation under fault conditions. That's the code that really needs to be exercised in these tests.
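A minimal sketch of that degraded-state idea: keep the workload running while a second thread perturbs the system underneath it. The fault list and the backup stand-in are hypothetical placeholders for real lab tooling, not Compaq's actual test harness.

import random
import threading
import time

# Hypothetical faults a lab harness might inject while a backup runs.
FAULTS = [
    "fail redundant disk drive",
    "fail primary data path",
    "power-cycle a switch port",
    "restore failed component",
]

def run_backup(stop: threading.Event) -> None:
    # Stand-in for a real backup job driven by a backup application.
    chunks = 0
    while not stop.is_set():
        time.sleep(0.1)          # pretend to move one chunk of data
        chunks += 1
    print(f"backup loop stopped cleanly after {chunks} chunks")

def inject_faults(stop: threading.Event) -> None:
    # Perturb the system while the workload is still running.
    while not stop.is_set():
        time.sleep(random.uniform(0.2, 0.5))
        print("injecting:", random.choice(FAULTS))

if __name__ == "__main__":
    stop = threading.Event()
    workers = [threading.Thread(target=f, args=(stop,))
               for f in (run_backup, inject_faults)]
    for w in workers:
        w.start()
    time.sleep(3)                # run the degraded-state scenario briefly
    stop.set()
    for w in workers:
        w.join()

The point of the pattern is exactly what Lewis describes: the pass criterion is not that the backup completes in a quiet system, but that it completes while redundant components are failing underneath it.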

Byte and Switch: Compaq is among the founding members of the SNIA Supported Solutions Forum. How did that group come about, and what is your opinion about the impact it will have within the storage networking industry?

Lewis: The group came about, really, I'd say, because of our frustration with making progress on the idea of open SAN support. We wanted to start this forum; it had to be open, but we wanted to get it jump-started with just a few people so it could move quickly. Committees are good but can often take forever, so you have to compromise. But if you do everything in a closed-door session, you're not inclusive enough. We chose the top two switch providers and the top four storage providers. We felt that was enough critical mass to ensure that this got momentum.

Byte and Switch: Virtualization would appear to require that some intermediate layer deal with operating-system-specific issues, such as file system format differences and the like, in order to extend across platforms. If that is accurate, what is Compaq's approach to virtualization, and how do you deal with those issues?

Mark Lewis: That view is what I would call somewhat accurate, and I'll explain why. It is part of the issue we want to address with virtualization, and very key to the reason we designed our VersaStor virtualization the way we did. As we looked at virtualization, we saw that there were many ways to approach it. One of those was a file-system-based approach, where you have an installed file system that has all these capabilities, is all virtual, and you could do a lot with that, but it would require the [replacement] of file systems across any server you wanted to have in this virtual environment. A move like that would require us -- as I joke -- to get Bill Gates, Scott McNealy, and Larry Ellison in a room and have them agree that they are all going to scrap their file systems and go with ours. We didn't think that was realistic.

We decided to make the virtualization layer so simple, in terms of its interface, that it will interface with any legacy system, any existing system, any file system -- and, over time, would interface with any application, if the application decides it doesn't want a file system in the way and wants to talk directly to the virtual storage layer.

So we built our virtual storage layer as a block-to-block virtualization layer, meaning that the host thinks it is seeing a disk drive. Even though we have complex arrays, if you look at operating systems, they still look at volumes as if they were a physical disk drive. So, we felt that in terms of compatibility, the best approach was to leave that model in place.

So for all the people who have existing systems that they will be running for years to come, they are going to be able to use our virtualization right out of the chute. There is nothing special to install. Over time, what we expect is that applications will see what they can do by interacting directly with the virtual layer, and will start to say, "I don't need a file system if I have my database here; I'm going to write an API and write directly to the virtual layer, giving me more functionality." But, while that's happening, you can still get most of the functionality out of the base virtual layer. It is very critical for us to be able to provide most of the functionality across all platforms, and then even greater functionality once applications start to write directly to the virtual layer.
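A rough sketch of what a block-to-block mapping layer like the one Lewis describes might look like: the host addresses a flat virtual disk, and an extent table redirects each block to a physical device and offset. The names and the fixed-extent scheme are illustrative assumptions, not VersaStor's actual design.

from dataclasses import dataclass

# Illustrative only; not VersaStor's actual design. Space is mapped in
# fixed-size extents rather than per block to keep the table small.
EXTENT_BLOCKS = 1024

@dataclass
class Extent:
    device: str   # which physical array or disk backs this extent
    start: int    # first physical block of the extent on that device

class VirtualDisk:
    def __init__(self) -> None:
        self.map: dict[int, Extent] = {}   # virtual extent index -> extent

    def allocate(self, vextent: int, device: str, pstart: int) -> None:
        self.map[vextent] = Extent(device, pstart)

    def resolve(self, vblock: int) -> tuple[str, int]:
        # Translate a virtual block number into (device, physical block).
        extent = self.map[vblock // EXTENT_BLOCKS]
        return extent.device, extent.start + vblock % EXTENT_BLOCKS

# The host still "sees a disk drive": it issues reads by virtual block,
# and the extents behind it can live on different boxes.
vd = VirtualDisk()
vd.allocate(0, "array_a", 50_000)
vd.allocate(1, "array_b", 0)
print(vd.resolve(100))    # ('array_a', 50100)
print(vd.resolve(1500))   # ('array_b', 476)

Because the interface presented upward is just a disk, nothing has to change in the operating system or file system, which is the compatibility argument Lewis is making; pooling boxes from different vendors, as the next question raises, is then a matter of the map pointing extents at different arrays.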

Byte and Switch: There are also two aspects of virtualization. One is the ability to accommodate multiple server platforms within a given storage system. The second aspect is to extend the storage pool across storage systems from different vendors. Have we gotten to the point, for example, of being able to combine a Compaq storage system and an EMC box into the same virtual pool? Or is that still in the future?

Lewis: Well, it is still in the future for us. The product we're designing, the VersaStor product I mentioned, will be generally available in the second quarter of next year, 2002. That product is designed so that there are no requirements as to brand or type of storage. That product will do what you are asking about.

If you look at the steps we are taking, every step is moving us down that road. So, we're telling our customers, "No, you can't do that today, realistically. But, we are building more-or-less firewalled SANs that partition out the storage." Before, you could only build one-vendor SANs. Now there are ways of putting multiple vendors together in the SAN, but you still need to have some partitioning. Over time, we'll continue to add to the functionality to make it more ubiquitous.
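A minimal sketch of that partitioning idea: zones confine each vendor's storage and its qualified hosts to their own compartment of the shared fabric. The zone names and members below are hypothetical, not a real switch configuration.

# Hypothetical zones on a shared fabric; not a real switch config.
ZONES = {
    "zone_compaq": {"host_nt_1", "host_tru64_1", "storageworks_1"},
    "zone_emc":    {"host_sun_1", "symmetrix_1"},
}

def can_talk(a: str, b: str) -> bool:
    # Two ports may communicate only if some zone contains both.
    return any(a in members and b in members for members in ZONES.values())

print(can_talk("host_nt_1", "storageworks_1"))  # True: same zone
print(can_talk("host_nt_1", "symmetrix_1"))     # False: firewalled apart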

Byte and Switch: What sort of administrative or storage management savings can be achieved with virtualization?

Lewis: We believe that with virtualization you are going to get savings focused in three or four areas. The first is pure capacity savings. We believe that virtualization can be up to 40 percent more efficient at storing data than even a present-day SAN. To put that another way, we believe we can take storage utilization from roughly 70 to 75 percent to over 95 percent of raw capacity. So, virtualization will let you use more of the storage you've already got lying around. The second is pure management savings: we believe that by putting in a common virtualization and management layer, you'll be able to manage three times more storage with the same number of administrators. So there are major savings there.
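A quick back-of-the-envelope check of those figures, applying the utilization numbers from the interview to a hypothetical 10 terabytes of raw capacity:

# Back-of-the-envelope check of the utilization claim: moving from
# ~70 percent to ~95 percent utilization stores about 36 percent more
# data on the same hardware, in line with "up to 40 percent more
# efficient." Figures are from the interview, not measured results.
raw_tb = 10.0                       # hypothetical raw capacity
san_today = raw_tb * 0.70           # ~70% utilization in a current SAN
virtualized = raw_tb * 0.95         # ~95% with a pooled virtual layer

gain = (virtualized - san_today) / san_today
print(f"usable today: {san_today:.1f} TB, virtualized: {virtualized:.1f} TB")
print(f"extra data stored on the same raw capacity: {gain:.0%}")  # ~36%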

Another big savings of this format is in reducing unplanned downtime and, as a corollary to that, improving business flexibility. These are both soft costs, but very important ones. Virtualization will enable you to grow volumes, shrink volumes, move or reconfigure your SAN -- do all of these things that used to require you to take down your servers, do reboots, and so forth. [With virtualization] you'll be able to do that online. Elimination of planned and unplanned downtime is huge for companies right now. This will save companies literally millions of dollars in terms of lost productivity.

One of the issues businesses have always had is having the wrong resources in the wrong place at the wrong time. It's the old manufacturing-line problem of having two lines, one building product A and the other building product B in accordance with sales forecasts. Then reality comes along and you need double the amount of product A and half the amount of product B -- and you can't switch. Pooling your storage is like building one manufacturing line that can build anything. You have complete flexibility to respond to whatever happens with an application. It's hard to quantify that savings, but it's one that customers understand.

Byte and Switch: A point I'd like to clarify is that in moving from a distributed-storage architecture to a networked environment, there is one level of savings. Then, in moving to virtualization, there is a second level of savings. How would you characterize those two levels?

Lewis: You're right. One of the reasons we're talking so much about virtualization now, even though we're a year away from delivering product, is simply that you have to have the base network in place in order to add virtualization. It is value-added software that requires the network in order to operate. If you haven't built the storage area network, then the virtualization layer is ineffective.

There is a study published by The Enterprise Storage Group Inc.'s Steve Duplessie. His findings were that [by moving from distributed storage to a SAN] you increase management effectiveness by 3.7 times. That was customer feedback. Capacity utilization shows the same sort of effect: in a distributed storage environment, utilization is only at 50 percent, while, as I said, it's about 75 percent in current SANs.

Byte and Switch: Do you see virtualization and visualization going hand-in-hand?

Lewis: Absolutely. And you need multiple layers of visualization. We have SANWorks Network View today, which is a physical visualization of the SAN -- it draws the SAN out, how it is connected, the box-to-box connections -- which is an important element of management. When you are looking at performance management, choke points, and so forth, you need a physical view. When we add in the virtualization product, we'll add visualization capabilities that go beyond the physical view. This will show you how the storage is virtually allocated -- how you have pooled the resources.

Byte and Switch: What is your opinion regarding the battle among different storage transport protocols -- that is, the competition among FC, iSCSI, InfiniBand, etc.? Do you see one protocol winning the market? If so, which? Or, do you think they will coexist within the same SANs?

Mark Lewis: We think that they will coexist; there are needs for each. If you look at the setup, we definitely see Fibre Channel SANs lasting for a long, long time, due to their focus on storage protocols, their acceptance in the market, and how robust they are today. We see good product needs for iSCSI in wide-area connectivity and in extending the SAN into new areas -- and potentially even in low-end SANs, where you don't need the bandwidth.

InfiniBand will come along; I'm not sure how needed it will be as a storage interconnect. The interesting thing that I see is that people talk about different protocols, but it is really everyone else acquiescing to the storage protocol we know as SCSI. So, it is really running SCSI over Fibre Channel, or running SCSI over IP. People say, "I need to get some IP-based storage." But no, you really don't. You're going to get SCSI-based storage that happens to have an IP chip on it, or a GigE chip. To the people building hardware, software, and arrays, it is still SCSI. We're happy as a clam that Cisco Systems Inc. [Nasdaq: CSCO] acquiesced to using SCSI. It was really an IP-versus-SCSI war, and they had to layer SCSI on top because they were going to lose.
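A minimal sketch of that layering argument: the top of every stack is the same SCSI command set, and only the transport underneath changes. The stacks are simplified; real FCP and iSCSI encapsulations carry more framing than shown here.

# Simplified protocol stacks: the "new" transports all carry the same
# SCSI command set (CDBs); only the layers underneath the commands
# change. Real FCP and iSCSI framing is more elaborate than this.
STACKS = {
    "Parallel SCSI": ["SCSI commands (CDBs)", "SCSI parallel bus"],
    "Fibre Channel": ["SCSI commands (CDBs)", "FCP", "FC framing",
                      "optical/copper link"],
    "iSCSI":         ["SCSI commands (CDBs)", "iSCSI PDU", "TCP", "IP",
                      "Ethernet (e.g., GigE)"],
}

for name, layers in STACKS.items():
    # The top of every stack is the same: the host still speaks SCSI.
    print(f"{name}: " + " -> ".join(layers))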

Byte and Switch: Do you see a particular time frame for the dust settling?

Lewis: I was on a panel yesterday, and I heard both extremes. Gartner was saying that iSCSI will really be here in 2004, potentially 2005, while Cisco is saying "ready for prime time in Q3" of this year. Obviously, Cisco has a lot more bet on iSCSI than Gartner. While I think that Gartner may be a tad conservative, I think they may be closer to right than Cisco. My expectation is that iSCSI will initially have some limited market applications, particularly in the area of data replication. It will be the latter part of next year at least before we start to see any significant traction there, and then in 2003 or 2004 products will start to get real traction.

You can use history to predict the future. I remember in 1995 and '96, when I heard all of these same things about Fibre Channel, but it took three or four more years to really get momentum in the marketplace, get all the standards set out, and get all the pieces together.

Byte and Switch: With the impact of optical networking advances, storage networks are being extended over metro- and wide-area networks. What sort of demand are you seeing from Compaq customers for metro- or wide-area storage networks?

Lewis: It is becoming a strong market, especially in terms of data replication. We don't see a large number of customers putting all their storage offsite, or at some extreme distance from their servers. But we do see a lot of customers who want to replicate their storage, using DWDM, metro, and even wide-area ATM connectivity. So, bandwidth will definitely continue to help fuel storage consumption.

Byte and Switch: Do you have an estimate of how much of the market will shift to metro or wide-area?

Lewis: I don't see it as so much of a shift. They are still going to have their base site being local, for the most part. But a year from now, a small percentage -- single-digit percentages -- of people will be replicating their data for disaster tolerance. I think we'll see a huge uptick there -- close to a third of the enterprise folks wanting to have some site-level disaster-tolerant data protection. A third may be low.

Byte and Switch: What sort of distances do you think the majority of customers need to traverse with their storage networks?

Lewis: It goes in bands. We find a small band that just wants protection from fire, vandalism, and so forth; they are talking about a single campus -- under 5 kilometers. The next band has talked to its insurance companies about fire and flood, and is looking at the 5 to 20km range. Then there is a group in more of the earthquake-protection range, talking about 100km, minimum. Finally, there is a group that has offices around the world. The biggest grouping is in the 5 to 20km range.

Byte and Switch: IDC's Disk Storage report published in December projected that 2001 would see a downturn in the growth of storage shipments, compared to the 1999-2000 leap. Was their projection accurate? If so, when do you see storage volumes picking up again?

Lewis: We don't want to predict the market at this point.

Byte and Switch: Are your storage sales on track for your internal projections?

Lewis: I can't answer that one either. I can tell you that in Q1 we did well -- 26 percent year-over-year growth. It turned out to be a little shy of the expectations we set in December, but given how the economy turned out, we were pleased with that. We grew faster than EMC in all our competitive market segments -- external enterprise storage, SANs, and software. So we were happy with our Q1 numbers, but we aren't going to make any forward comments about the other quarters.

Byte and Switch: In the context of the developments we're in the midst of, what do you see as the things that people ought to be paying attention to over the next couple of years?

Lewis: I think one is the change in interoperability that we're seeing -- it is starting to become real, and I think that's going to open up some doors for people. Obviously, our bet, strategically, is on virtualization -- that it will be the killer piece of software that really makes storage utilities incredibly efficient. The third thing I'd comment on is storage utilities in general. The idea of delivering storage as a resource has really come of age, I think, and almost every mid- to large-size company should be thinking about how they are going to get there. The final thing I'd say is that, in the vertical markets, you'll continue to see storage prices go down. One of the reasons we believe the market will have a lot of help, even in coming years, is that we continue to see what I call analog-to-digital conversion as prices come down. So, as prices come down, more movie studios, for example, decide to digitize all their old films. Hospitals decide to digitize all their old X-rays, and all their new ones will be digital. So, we continue to see new applications open up.

Byte and Switch: Do you see the push from the network service providers and long-haul carriers having a significant impact on the direction that the storage networking industry takes?

Lewis: To some degree, wide-area networking has to have an impact, because you have to get the data delivered to where the consumer is. So we have to accommodate that. I don't know that they are a driving force in saying how it's going to be done, but we do have to pay attention to what's happening in the telco arena, because these are the people actually delivering the data.

-- Ralph Barker, Editor in Chief, Byte and Switch, http://www.byteandswitch.com
