Special Coverage Series

Network Computing

Commentary

Howard Marks, Network Computing Blogger

Windows Server 2012 R2 Beefs Up Storage Spaces

Microsoft improves on the storage functionality in its upcoming Windows Server release with new SSD-supported automated tiering and write-back caching.

I must admit I wasn't terribly impressed when I first heard about Windows Server 2012 Storage Spaces. Microsoft was building direct-attached storage (DAS) support into applications like Exchange and SQL Server, and I viewed Storage Spaces as another way for Microsoft to shift IT costs from hardware to software. I looked at Storage Spaces as a cluster-aware volume manager: an easy way to turn a shared SAS JBOD and a pair of Windows Servers into a clustered NAS with software RAID. I sure never thought about it as a performance play.

But with Windows Server 2012 R2, scheduled for release later this year, Microsoft has improved Storage Spaces by adding the ability to use SSDs for automated tiering and/or write-back caching. It might be because I've seen a half-dozen "software-defined storage" packages in recent years and have become more open to the idea, but I find Storage Spaces a bit more impressive now. Let's take a look at the new functionality.

Microsoft beefed up Storage Spaces' strengths, including the software RAID engine, which, like a Compellent or 3PAR array, stripes data and parity information across all the drives in the storage space. The company added double parity using a Microsoft Research-developed algorithm that rebuilds with less I/O than standard Reed-Solomon RAID-6. Microsoft also added support for new SAS JBODs with expanders and enclosure services, making Windows Server aware of enclosure events and temperatures.
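
To picture what striping data and rotating parity across every drive in a pool looks like, here is a toy Python sketch of a single-parity, RAID-5-style layout. It's only my illustration of the general idea; Microsoft's actual on-disk layout, and especially the Research-developed dual-parity code, isn't public in this form and certainly isn't this simple.

# Toy illustration of rotating-parity striping across a set of drives.
# This is NOT Microsoft's Storage Spaces layout or its dual-parity code;
# it only shows the general idea of spreading data and parity chunks
# across every drive in a pool, RAID-5 style.
from functools import reduce

def xor_parity(chunks):
    """Byte-wise XOR parity over equal-length data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def stripe(data, n_drives, chunk_size=4):
    """Split data into stripes and rotate the parity chunk across drives."""
    data_per_stripe = n_drives - 1              # one chunk per stripe is parity
    stripe_bytes = chunk_size * data_per_stripe
    data = data.ljust(-(-len(data) // stripe_bytes) * stripe_bytes, b"\0")
    drives = [[] for _ in range(n_drives)]
    for s in range(len(data) // stripe_bytes):
        chunks = [data[(s * data_per_stripe + i) * chunk_size:
                       (s * data_per_stripe + i + 1) * chunk_size]
                  for i in range(data_per_stripe)]
        parity_drive = s % n_drives             # rotate the parity chunk each stripe
        it = iter(chunks)
        for d in range(n_drives):
            drives[d].append(xor_parity(chunks) if d == parity_drive else next(it))
    return drives

if __name__ == "__main__":
    for d, chunks in enumerate(stripe(b"spread me across the whole pool", n_drives=4)):
        print(f"drive {d}: {chunks}")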

The automated tiering feature uses the SSDs and HDDs in a Storage Space as a pool. It then tracks the access frequency of data chunks at a 1-Mbyte granularity so it can dynamically promote and demote chunks based on their I/O “temperature.” Tiering is a scheduled Windows task that by default runs at 1 a.m. every day. Administrators can adjust the schedule and set limits on the amount of the SSD tier that can be replaced in each scheduled tiering adjustment.
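
In rough Python, the mechanism looks something like the sketch below: count accesses per 1-Mbyte chunk, then, on the nightly run, promote the hottest chunks and demote the coldest, subject to an admin-set churn limit. The class and its counters are my own invention to illustrate the idea; they are not Microsoft's heat map or tiering engine.

# Conceptual sketch of temperature-based tiering at 1-Mbyte chunk granularity.
# An illustration of the general mechanism only, under my own assumptions;
# not Microsoft's heat-tracking or tier-optimization implementation.
from collections import Counter

CHUNK = 1024 * 1024  # 1-Mbyte tracking granularity, per the article

class TieredPool:
    def __init__(self, ssd_chunks, max_moves_per_run):
        self.heat = Counter()               # chunk index -> access count
        self.on_ssd = set()                 # chunks currently in the SSD tier
        self.ssd_chunks = ssd_chunks        # SSD tier capacity, in chunks
        self.max_moves = max_moves_per_run  # admin-settable churn limit per run

    def record_io(self, offset):
        """Called on every read or write; bump the containing chunk's temperature."""
        self.heat[offset // CHUNK] += 1

    def optimize_tiers(self):
        """The scheduled task (1 a.m. by default): promote hot chunks, demote cold ones."""
        by_heat = sorted(self.heat, key=self.heat.get, reverse=True)
        want_on_ssd = set(by_heat[:self.ssd_chunks])
        promotions = list(want_on_ssd - self.on_ssd)[:self.max_moves]
        demotions = list(self.on_ssd - want_on_ssd)[:self.max_moves]
        for chunk in demotions:
            self.on_ssd.discard(chunk)      # migrate chunk SSD -> HDD
        for chunk in promotions:
            self.on_ssd.add(chunk)          # migrate chunk HDD -> SSD
        self.heat.clear()                   # start a fresh measurement window

if __name__ == "__main__":
    pool = TieredPool(ssd_chunks=2, max_moves_per_run=2)
    for offset in [0, 0, 0, CHUNK * 5, CHUNK * 5, CHUNK * 9]:
        pool.record_io(offset)
    pool.optimize_tiers()
    print("chunks now on SSD:", sorted(pool.on_ssd))  # the two hottest: 0 and 5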

In addition to the automatic tiering, administrators can pin high-I/O or otherwise latency-sensitive files, such as VDI "gold images," to the SSD tier without having to dedicate volumes to them. Tiering traffic is sent to a disk only when that disk's queue depth is one or less, limiting the performance impact of a tiering operation. However, like any other daily tiering system, it will be most effective with repeatable workloads.
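
The throttle and the pinning rule are simple to picture. Here's a minimal sketch, assuming we could see each disk's outstanding I/O count (the outstanding_io callback is hypothetical); it shows the shape of the behavior described above, not Windows' actual I/O scheduler.

# Sketch of the throttle and pinning rules described above. The
# outstanding_io callback is hypothetical; Windows' real I/O scheduler
# is not exposed this way.
PINNED = {"gold-image.vhdx"}  # example files an admin pinned to the SSD tier

def may_issue_tiering_io(disk, outstanding_io):
    """Allow a background migration I/O only when the target disk is nearly idle."""
    return outstanding_io(disk) <= 1

def demotion_candidates(files_on_ssd):
    """Pinned files are never demoted, no matter how cold they look."""
    return [f for f in files_on_ssd if f not in PINNED]

if __name__ == "__main__":
    queue_depths = {"disk0": 0, "disk1": 7}
    print(may_issue_tiering_io("disk0", queue_depths.get))  # True: disk is idle
    print(may_issue_tiering_io("disk1", queue_depths.get))  # False: disk is busy
    print(demotion_candidates(["gold-image.vhdx", "cold-archive.vhdx"]))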

The write-back caching feature creates a separate write-back cache for each accelerated volume from available SSDs in the Storage Spaces pool. The cache is intended to live on an SSD shared via an external SAS JBOD so that it remains persistent across server failures in a Windows Server cluster, including Hyper-V clusters and clusters using Cluster Shared Volumes (CSV). Microsoft has been careful to promote the write-back cache as a solution for short write bursts, which leads us to believe it's only supporting a limited write-cache size in the initial release.
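
The "short write bursts" caveat makes sense if you picture the cache as a small SSD-backed log: writes are absorbed until the log fills, then they spill straight through to the HDDs while the log destages in the background. The sketch below shows that general mechanism under my own assumptions; it is not Microsoft's design, and the sizes are made up.

# Minimal sketch of a bounded write-back cache: writes are absorbed by a
# small SSD-backed log until it fills, then go straight to the HDDs while
# the log destages in the background. This illustrates why a small cache
# mainly helps short write bursts; it is not Microsoft's implementation.
from collections import deque

class WriteBackCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.dirty = deque()              # (offset, data) waiting to be destaged

    def write(self, offset, data, hdd_write):
        if self.used + len(data) <= self.capacity:
            self.dirty.append((offset, data))   # fast path: absorb on the SSD
            self.used += len(data)
            return "cached"
        hdd_write(offset, data)                 # cache full: write through to HDD
        return "write-through"

    def destage(self, hdd_write, max_bytes):
        """Background flush of dirty data from the SSD cache to the HDD tier."""
        flushed = 0
        while self.dirty and flushed < max_bytes:
            offset, data = self.dirty.popleft()
            hdd_write(offset, data)
            self.used -= len(data)
            flushed += len(data)

if __name__ == "__main__":
    hdd = {}
    cache = WriteBackCache(capacity_bytes=8)
    for i in range(4):
        print(cache.write(i * 4, b"data", hdd.__setitem__))
    # the first two 4-byte writes are absorbed; the rest of the burst spills to the HDDs
    cache.destage(hdd.__setitem__, max_bytes=64)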

[SSDs have a range of storage uses, including supporting VDI performance. Find out how in "Solving VDI Problems With SSDs and Data Deduplication."]

Conspicuously missing is any form of read caching. When we asked the Microsoft folks about this, they pointed to the various RAM caches in Windows, but those are very small compared to a modern SSD read cache. I found it interesting that all the ISVs--such as FlashSoft, VeloBit and Intel--decided to build read caches, but Microsoft chose to do the trickier write-back caching and automated tiering.

Storage Spaces is intended as a replacement for traditional SAN storage, with Windows Server directly addressing the disk drives, both SSDs and HDDs. While it may be theoretically possible to expose SSD and HDD logical drives from a SAN array to a Windows server and use Storage Spaces to create a tiered pool from those LUNs, this configuration is not supported by Microsoft. That leaves room in the marketplace for caching ISVs as an acceleration tool for SAN arrays.

Storage Spaces may turn out to be a perfectly good way to build a server cluster and its storage. But I'm afraid that, as good as it may be, it has two strikes against it:

1. It's built for shared SAS JBODs, not pooled internal server disks, which is the fashionable new software-defined storage model. Those SAS enclosures still cost money.

2. Longtime storage administrators think of software RAID as a technology they've outgrown. Unless it's wrapped in a shiny new SDS package, they're not going to trust it. Frankly, for these guys, a new version of Windows isn't that shiny a wrapper.

Luckily, there are as many Microsoft loyalists as there are haters, and some of them will fire up Storage Spaces. I’d like to hear about your Storage Spaces experience--good, bad and especially ugly. Share your thoughts in the comment section below.


