Network Computing is part of the Informa Tech Division of Informa PLC


FCoE vs. SSD

Fibre Channel over Ethernet (FCoE) has, in many supplier camps, been preordained as the heir apparent to the infrastructure. Solid State Disks (SSDs) have been preordained by many, myself included, as the primary storage device for storage systems. Contrary to popular opinion, I think that SSD will take over its market long before FCoE dominates the infrastructure.
Let me be clear: as the videos on FCoE and SSD that I did for Information Week point out, I think that both technologies have their place, but neither should be looked at as a wholesale change-out for the legacy technology it will replace just yet. I just believe that SSD should be looked at sooner.
FCoE is going to have a longer, even unassured, road to domination, primarily because there are not as many compelling reasons to switch to it. As we point out in our article "Planning for FCoE," it's true that if you are building a new data center, or even adding a rack of servers to be used for server virtualization, you may want to look at the protocol, and as the standards are ratified it becomes more compelling to do so. But if you are not in that mode, there is less impetus to move on the technology than there may be with SSD.
If you don't want to go to FCoE, however, you don't have to. You have an overwhelming set of choices, all of which are available today and are ratified. On the storage, or "F," side of FCoE there is 8Gb Fibre Channel, which is available now, and Brocade has repeatedly committed to 16Gb in the future. They are also providing a prioritization capability that leverages NPIV. On the IP, or "E," side of FCoE there is 10Gb Ethernet, available today from many suppliers. Further, companies like SolarFlare and Neterion are both shipping cards that have QoS-type functionality in them. As I wrote in a prior entry, "The Need for Card Based QoS," I/O prioritization at the card level is critical to building denser virtualized server infrastructures.
With SSD, on the other hand, you really don't have a choice. As I have written before, mechanical drives seem to have hit their limits, at least from a performance standpoint, and now there is growing concern about their reliability. From a performance perspective, mechanical drives have been stuck at 15K RPM for almost a decade, while the need for storage I/O has kept increasing. While companies like 3PAR and Isilon have done a lot to get great performance out of today's mechanical drives, you can only wring so much performance out of 10-year-old technology. Ironically, some of the companies best positioned to take full advantage of SSDs are those pulling the highest performance out of mechanical drives today.
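To see why the 15K RPM ceiling matters, a back-of-envelope calculation makes the point: a random read has to wait out a seek plus, on average, half a rotation. The seek time below is an illustrative assumption, not a vendor spec.

```python
# Rough random-IOPS ceiling for a 15K RPM mechanical drive.
rpm = 15000
avg_rotational_latency_s = 0.5 * 60.0 / rpm  # half a rotation = 2 ms
avg_seek_s = 0.0035                          # assumed ~3.5 ms average seek

iops = 1.0 / (avg_rotational_latency_s + avg_seek_s)
print(f"~{iops:.0f} random IOPS per drive")  # roughly 180 IOPS
```

No amount of array-side cleverness changes that per-spindle ceiling; it only spreads the load across more spindles, which is exactly the game SSDs don't have to play.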
On the reliability front, you have to be concerned about good old RAID 5's and 6's ability to get you back up and running when a drive fails. The first challenge is that drive counts in array groups are getting larger and larger, and the likelihood of a single, dual or even triple drive failure goes up as the number of drives in the RAID group goes up. Also, as capacity increases, the time it takes to rebuild the RAID leaves you in a degraded mode, and more vulnerable, for a much longer period of time.
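Both effects compound. A quick sketch shows how the odds of a second drive failing during a rebuild grow with group size and rebuild time; the failure rate, group sizes, and rebuild windows here are all assumptions chosen for illustration.

```python
# Sketch: odds that another drive in a RAID group fails while a
# rebuild is in flight. AFR and the scenarios are illustrative.
AFR = 0.03                # assumed 3% annual failure rate per drive
HOURS_PER_YEAR = 8760.0

def second_failure_prob(drives_in_group: int, rebuild_hours: float) -> float:
    # Chance one surviving drive fails inside the rebuild window,
    # then the chance at least one of the survivors does.
    p_one = AFR * rebuild_hours / HOURS_PER_YEAR
    survivors = drives_in_group - 1
    return 1.0 - (1.0 - p_one) ** survivors

# Bigger groups and longer rebuilds (bigger drives) both raise the risk.
print(f"{second_failure_prob(8, 12):.4%}")   # small group, short rebuild
print(f"{second_failure_prob(16, 48):.4%}")  # larger group, larger drives
```

The absolute numbers depend entirely on the assumed failure rate, but the trend does not: double the group size and quadruple the rebuild time, and the exposure grows by nearly an order of magnitude.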
Then there is something called the bit error rate (BER). The chance of a bit error occurring is about one in every 100 trillion bits; unfortunately, that is only about 12TB. Most data center administrators I talk to now have more than 12TB of storage, and as drive capacities continue to increase, it is only going to take a few drives to reach that size. While mechanical drive manufacturers are quick to respond to the "RAID is falling" argument, the fact can't be denied that as capacities increase, RAID errors will continue and become more prominent.
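The arithmetic behind that 12TB figure is straightforward, and it also shows why the BER and the rebuild problem feed each other: a rebuild has to re-read every surviving bit in the group. The 10TB rebuild read size below is an illustrative assumption.

```python
# The 1-in-100-trillion figure converted to capacity: data read, on
# average, before one unrecoverable bit error.
bits_per_error = 1e14
tb_per_error = bits_per_error / 8 / 1e12
print(f"one expected bit error per ~{tb_per_error:.1f}TB read")  # ~12.5TB

# Odds of hitting at least one such error while a rebuild re-reads,
# say, 10TB of surviving data (illustrative figure).
rebuild_read_tb = 10.0
bits_read = rebuild_read_tb * 1e12 * 8
p_error = 1.0 - (1.0 - 1.0 / bits_per_error) ** bits_read
print(f"~{p_error:.0%} chance of a read error during the rebuild")  # ~55%
```

In other words, at today's capacities a single large-group rebuild has a coin-flip's chance of tripping over a bit error before it finishes, which is exactly the exposure the manufacturers are downplaying.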
This is the main reason why SSD will become dominant faster than FCoE does. SSD is faster now, the alternative will never catch up, and it is more reliable than mechanical drives. In the future, when you need high performance and high reliability, I'm not convinced that there will be better alternatives to SSD, whereas in the infrastructure there may be many options.