Howard Marks

Network Computing Blogger

SSDs and Understanding Storage Performance Metrics

Steven Hill, our resident tummler, left a comment on my recent post on SSDs working their way down to SMB storage devices. He questioned the logic, if not the sanity, of putting SSDs in a storage system that uses 1-Gbps Ethernet for server connections. Steve reasoned that the skinny little pipes running out of the Drobo B1200i would be saturated long before the SSDs' performance was realized.

The problem with that analysis is that storage performance isn't as simple as bandwidth alone. In this, the first installment of a three-part series, we'll discuss the basic storage performance metrics: throughput, IOPS and latency. In the second installment, we'll cover why you have to consider IOPS and latency together; in part three, we'll look at how RAID affects performance.

We tend to concentrate on storage network bandwidth because it's the only performance metric that's right there in the system specs. When you buy a disk array or HBA, you know if it has 4-Gbps Fibre Channel or 10-Gbps Ethernet connections. Network bandwidth defines the absolute maximum throughput a storage system can deliver, but very few of us run into throughput limits with our mission-critical applications.
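
As a rough sanity check, the link speed puts a hard ceiling on throughput. Here's a minimal sketch of that arithmetic; the 10-bits-per-payload-byte divisor is an assumption that folds in encoding and protocol overhead, which varies by transport:

```python
# Approximate payload ceiling a storage link can deliver.
# Assumes ~10 bits on the wire per byte of payload (encoding plus
# protocol overhead) -- a rough rule of thumb, not a measured figure.
def max_throughput_mbytes(link_gbps: float) -> float:
    return link_gbps * 1000 / 10  # Gbps -> approximate Mbytes/s of payload

for gbps in (1, 4, 10):  # 1-Gbps Ethernet, 4-Gbps FC, 10-Gbps Ethernet
    print(f"{gbps}-Gbps link: ~{max_throughput_mbytes(gbps):.0f} Mbytes/s")
```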

Throughput is the limiting factor for applications that read and write data sequentially in large chunks. These include many media applications like video editing or surveillance and, of course, backup. Backup appliance performance is all about how quickly the appliance can ingest one or more streams of data from the backup servers. As a result, backup appliance spec sheets prominently list ingest rates.

So while throughput is important, most of us don't have mission-critical applications other than backup that do large sequential data transfers. The applications we want to run faster are almost always some sort of database that reads and writes data more or less randomly in 4-Kbyte or 8-Kbyte pages. A typical database transaction requires tens or hundreds of small data accesses as the database searches an index for the right record in each affected table, reads the record in, and then writes it back out with new data in the fields the transaction changed.

For databases and other random-access applications, throughput is much less important than I/O latency and the number of I/O operations per second (IOPS) the storage system can perform.

Latency is simply the amount of time a device takes to store or retrieve data. On a spinning disk, total latency is the seek time plus the rotational latency as the drive waits for the right block of data to come under the heads, plus the data transfer time.
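
A quick sketch of that formula; the seek and transfer times below are illustrative assumptions, not vendor specifications:

```python
# Average total latency for a spinning disk:
# seek time + rotational latency + data transfer time.
def avg_disk_latency_ms(rpm: int, avg_seek_ms: float, transfer_ms: float) -> float:
    ms_per_rev = 60_000 / rpm        # time for one full revolution
    rotational_ms = ms_per_rev / 2   # on average, wait half a revolution
    return avg_seek_ms + rotational_ms + transfer_ms

# Illustrative 15K RPM drive (assumed 3 ms seek, 0.1 ms transfer):
print(avg_disk_latency_ms(15_000, avg_seek_ms=3.0, transfer_ms=0.1))  # ~5.1 ms
```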

In the real world, latency is also added as requests travel up and down through the server operating system's I/O stack, cross various network switches and work their way through an array's controller. Synchronous mirroring across data centers is a big source of write latency because data has to be written to both the primary and the replication target arrays before the write is acknowledged back to the application. Taken together, these delays often add enough latency to noticeably affect application performance.

Since today's disk drives have just one head positioner, the number of IOPS a drive can deliver is the reciprocal of its average latency. So a 10K RPM drive with an average total latency of about 7 milliseconds can deliver 1/0.007, or roughly 140, IOPS.
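
That reciprocal relationship is easy to check (a one-liner, using the drive latencies from the table below):

```python
# A single-actuator drive completes one random I/O per latency period,
# so IOPS is simply the reciprocal of average latency (in seconds).
def iops_from_latency(latency_ms: float) -> float:
    return 1000 / latency_ms

print(round(iops_from_latency(7.1)))  # ~141 IOPS, 10K RPM drive
print(round(iops_from_latency(5.1)))  # ~196 IOPS, 15K RPM drive
```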

SSDs and disk arrays can satisfy some I/O requests from their RAM caches and can access their multiple flash chips or disk drives in parallel. This parallelism allows them to deliver more IOPS than their latency alone would imply. Parallelism helps on the host side, too: a database server satisfying many requests for many users at the same time keeps more I/O requests in flight, and so can consume more IOPS than an application doing all its work sequentially.
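
Little's Law captures this: sustained IOPS is roughly the number of requests in flight divided by per-request latency. A minimal sketch with illustrative queue depths; this applies to devices and arrays with real internal parallelism, not to a single disk actuator:

```python
# Little's Law applied to storage: IOPS ~= outstanding requests / latency.
# Keeping more requests in flight extracts more IOPS at the same latency,
# provided the device can actually service them in parallel.
def iops(queue_depth: int, latency_ms: float) -> float:
    return queue_depth * 1000 / latency_ms

print(round(iops(1, 0.5)))    # ~2,000 IOPS: one 0.5 ms request at a time
print(round(iops(32, 0.5)))   # ~64,000 IOPS with 32 requests in flight
```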

So does it make sense to add SSDs to a storage system with 1-Gbps connections? Sure does, if that storage system is going to run a database application like Oracle, MySQL or even Exchange, all of which manage data in small pages. Saturating even one 1-Gbps connection would take roughly 15,000 8-Kbyte IOPS, while a 12-drive SATA system without SSDs would struggle to deliver 1,500.
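
The back-of-the-envelope arithmetic, as a sketch (the per-drive figure comes from the 7,200 RPM row of the table below; the 1-Gbps payload estimate ignores protocol overhead):

```python
# IOPS needed to fill a 1-Gbps link with 8-Kbyte random I/Os.
LINK_MBYTES_PER_SEC = 125        # 1 Gbps ~= 125 Mbytes/s, before overhead
IO_KBYTES = 8

print(round(LINK_MBYTES_PER_SEC * 1024 / IO_KBYTES))  # ~16,000 IOPS

# Raw random IOPS from 12 spinning 7,200 RPM SATA drives at ~75 IOPS each:
print(12 * 75)  # ~900 IOPS -- an order of magnitude short of the link
```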

Now, Steve does have a point that just sticking a bunch of SSDs into a low-end storage system that doesn't have the CPU, memory or proper software to manage them is a fool's errand.

Performance metrics for some basic storage devices:

Device                       | Transfer rate, MBps (read/write) | Avg. latency, ms (read/write) | IOPS (read/write)
-----------------------------|----------------------------------|-------------------------------|------------------
5,400 RPM disk               | 123                              | 15                            | 67
7,200 RPM disk               | 155                              | 13.7                          | 75
10K RPM disk                 | 168                              | 7.1                           | 140
15K RPM disk                 | 202                              | 5.1                           | 196
Micron P400e SATA MLC SSD    | 350/140                          | 0.5/3.5                       | 50,000/7,500
Micron P320h PCIe SLC SSD    | 3,200/1,900                      | 0.009/0.042                   | 785,000/205,000
