David Hill

Network Computing Blogger



Kaminario’s K2 Climbs the SSD Storage Performance Mountain

One of the most exciting development areas for the ICT (information and communications technology) industry is in the attempt to address the I/O gap challenge, whereby servers are often frustrated in their attempts to read and write data as fast as they would like from traditional storage. Various approaches using solid-state devices (SSDs) are targeting the problem. Innumerable smaller companies have been attempting to address the I/O gap, and more recently large IT vendors--notably, EMC with its VFCache and IBM with SSD caching for its XIV storage systems--have been joining the fray. The smaller vendors want to shout out, "Don’t forget about us!" Many of those companies are worth paying attention to, including Kaminario, which has a clear and solid vision.

The term SSD is often used interchangeably with flash memory, but while flash is indeed the most common medium configured as an SSD, DRAM (dynamic random access memory) is another technology commonly used in SSDs. (And a number of other SSD technologies are tickling the interest of R&D gurus.) A key differentiator is that flash memory is persistent, whereas DRAM is not. Persistence means that no data is lost when the power is turned off, which is why flash works in USB memory sticks and in the storage of tablet computers and smartphones. The opposite is true of DRAM, which loses its contents entirely when power is cut; the data that was in the DRAM device has to be recovered from another source.

Flash memory is also much less expensive per unit of capacity than DRAM, but it is also much slower. DRAM, by contrast, has traditionally been used at the top of the server host-network-storage hierarchy--from main memory all the way down to the cache sitting in front of the drives in a storage array.

Traditional approaches have not been able to bridge the widening I/O gap between the information servers demand and what storage arrays can deliver to them. Enter new techniques that use SSDs to overcome the performance limitations of HDDs (hard disk drives). SSDs can be placed as if they were disk drives in the array itself, used as another cache layer within the array, put in the network between servers and storage, or located in the same box as the host server. These implementations take several forms, including, but not necessarily limited to, the following:

  • Host cache: the cached data is transient, and the cache supports only one physical server.
  • Storage array: a number of tiers (including SSD) contain working production data.
  • Server-storage network-based storage appliance: focuses on solving the problem of storage I/O performance for multiple servers.
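A rough back-of-the-envelope calculation shows why this gap matters. The figures below are illustrative order-of-magnitude assumptions on my part, not numbers from any vendor: a spinning disk delivers on the order of 150 random IOPS, while a single enterprise flash SSD delivers tens of thousands.

```python
# Back-of-the-envelope I/O gap arithmetic.
# All figures are illustrative order-of-magnitude assumptions,
# not measurements of any particular product.
HDD_IOPS = 150            # random IOPS for a 15K RPM hard disk drive
FLASH_SSD_IOPS = 50_000   # random IOPS for one enterprise flash SSD
HDD_LATENCY_MS = 5.0      # seek + rotational latency, milliseconds
FLASH_LATENCY_MS = 0.05   # flash read latency (~50 microseconds)

# How many spinning disks would it take to match one flash SSD
# on random IOPS alone (ignoring capacity and cost)?
hdds_to_match = FLASH_SSD_IOPS // HDD_IOPS
latency_ratio = HDD_LATENCY_MS / FLASH_LATENCY_MS

print(f"~{hdds_to_match} HDDs to match one flash SSD on random IOPS")
print(f"HDD latency is ~{latency_ratio:.0f}x flash latency")
```

Under these assumptions, it would take hundreds of spinning disks to equal one flash device on random I/O, which is exactly why so many vendors are inserting SSDs somewhere in the path between server and disk.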

The number of combinations appears nearly endless today, and the creativity of storage architecture designers is by no means exhausted. The I/O bottleneck challenge is a large one, and there is a lot of money to be made in delivering solutions that correct the problem. The conundrum that has yet to be resolved--and that is likely to permeate the market during the next few years--is what portion of the business will devolve to the large IT vendors in the form of some kind of standard satisfactory SSD solutions, as opposed to specialized and targeted solutions from the innumerable new kids on the block. In essence, the "general-purpose" solutions of the large IT vendors will never satisfy all use cases, but what is the percentage of use cases that only the (for now) smaller players can satisfy and how much revenue does that represent? That is a multibillion-dollar question that continues to propel the SSD gold rush.

Let’s examine Kaminario, one of the up-and-coming challengers.

Kaminario sits squarely in the storage appliance camp with its family of K2 products. The name of the product line invokes thoughts of the mountain K2, which is the second-highest mountain on earth and has a reputation for being very hard (as well as dangerous) to climb. In context, the company’s K2 products offer customers a way to effectively ascend the steep slope of I/O performance. (Sorry, I couldn’t resist the comparison!)

Recall the discussion of SSD choices. Kaminario offers three models: the K2-F uses flash memory, the K2-H combines flash memory and DRAM, and the K2-D is purely DRAM. Choosing among them is a decision based on performance requirements, workload type and budget. Note, by the way, that Kaminario is a partner of Fusion-io and uses that company’s flash cards. The K2-F (flash only) scales to 100 Tbytes and up to 600,000 IOPS at a price of about $20,000 per Tbyte; its application focus is analytics, such as data warehouse workloads. The K2-H (hybrid flash and DRAM) scales to 100 Tbytes with up to 800,000 IOPS; the price depends on the mix but usually starts at $25,000 per Tbyte, and its focus is high-end OLTP/DBMS and analytics applications. The K2-D has a maximum capacity of 25 Tbytes, a maximum of 1.5 million IOPS and a price of $100,000 per Tbyte; its focus is the most demanding high-end OLTP/DBMS applications that don't require a huge amount of storage.
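As a quick sanity check on those numbers, here is a short sketch that tabulates the three models and computes the list price of a fully populated system (maximum capacity times price per terabyte). The figures come from the article; I've assumed the K2-H starting price is $25,000 per Tbyte, consistent with the other two models.

```python
# K2 model figures as reported in the article.
# The K2-H per-Tbyte price is an assumption ($25,000 as a starting
# point; the actual price depends on the flash/DRAM mix).
k2_models = {
    # model: (max capacity in Tbytes, max IOPS, price per Tbyte in USD)
    "K2-F (flash)":        (100,   600_000,  20_000),
    "K2-H (flash + DRAM)": (100,   800_000,  25_000),
    "K2-D (DRAM)":         (25,  1_500_000, 100_000),
}

for model, (cap_tb, iops, usd_per_tb) in k2_models.items():
    fully_populated = cap_tb * usd_per_tb
    print(f"{model}: {cap_tb} TB max, {iops:,} IOPS, "
          f"${usd_per_tb:,}/TB, ~${fully_populated:,} fully populated")
```

The takeaway: a maxed-out all-flash K2-F and a maxed-out all-DRAM K2-D land in the same multimillion-dollar range, even though the DRAM box holds a quarter of the capacity--you are paying for IOPS, not terabytes.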

Kaminario’s product family uses an architecture it calls the Scale-out Performance Architecture (SPEAR). The SPEAR storage OS manages a number of DataNodes (where the data is stored) through what the company calls ioDirectors. SPEAR’s capabilities include automated data distribution--the system scales out automatically when new DataNodes are added, with no tuning required (which administrators should find attractive)--and intelligent I/O processing that parallelizes all reads and writes, yielding higher performance than would otherwise be possible.

The solution also features self-healing data availability, which means not only that there is no single point of failure (always a must), but also that recovery is automatic. That shortens recovery time because an administrator does not have to figure out what the problem is and then take corrective action.

