
Kaminario’s K2 Climbs the SSD Storage Performance Mountain

One of the most exciting development areas for the ICT (information and communications technology) industry is the attempt to address the I/O gap challenge, whereby servers often cannot read data from and write data to traditional storage as fast as they need. Various approaches using solid-state devices (SSDs) are targeting the problem. Innumerable smaller companies have been attacking the I/O gap, and more recently large IT vendors--notably, EMC with its VFCache and IBM with SSD caching for its XIV storage systems--have been joining the fray. The smaller vendors want to shout out, "Don’t forget about us!" Many of those companies are worth paying attention to, including Kaminario, which has a clear and solid vision.

The term SSD is often used interchangeably with flash memory devices, but while flash memory is indeed commonly packaged as an SSD, DRAM (Dynamic Random Access Memory) is another common technology used in SSDs. (And a number of other SSD technologies are tickling the interest of R&D gurus.) A key differentiator is that flash memory is persistent, whereas DRAM is not. Persistence means that no data is lost when the power is turned off, which is why flash memory works in USB memory sticks and in the flash components of tablet computers and smartphones. The opposite is true of DRAM, which loses its entire contents when power is cut; the data that was in the DRAM device has to be recovered from another source.

Flash memory is also much less expensive on a per-unit basis than DRAM, but that’s because it’s much slower. DRAM, in contrast, has traditionally been used throughout the server host-network-storage hierarchy--from main memory in the server all the way down to the cache sitting in front of the drives in the storage array.

Traditional approaches have not been able to bridge the widening I/O gap between what servers demand and what storage arrays can deliver. Enter new techniques that use SSDs to overcome the performance limitations of HDDs (hard disk drives). That can involve placing SSDs in an array as if they were disk drives, using SSDs as another cache layer within the array, putting them in the network between servers and storage, or locating SSDs in the same box as the host server. These implementations may take several forms, including, but not necessarily limited to, the following: a host cache, which implies that the data is transient and that the cache supports only one physical server; a storage array with a number of tiers (including SSD) that contain working production data; and storage appliances that sit in the server-storage network and focus on solving the storage I/O performance problem for multiple servers.

The number of combinations appears nearly endless today, and the creativity of storage architecture designers is by no means exhausted. The I/O bottleneck challenge is a large one, and there is a lot of money to be made in delivering solutions that correct the problem. The conundrum that has yet to be resolved--and that is likely to permeate the market during the next few years--is what portion of the business will go to the large IT vendors in the form of standard, good-enough SSD solutions, as opposed to specialized and targeted solutions from the innumerable new kids on the block. In essence, the "general-purpose" solutions of the large IT vendors will never satisfy all use cases, but what percentage of use cases can only the (for now) smaller players satisfy, and how much revenue does that represent? That is a multibillion-dollar question that continues to propel the SSD gold rush.

Let’s examine Kaminario, one of the up-and-coming challengers.

Kaminario sits squarely in the storage appliance camp with its family of K2 products. The product line’s name evokes the mountain K2, the second-highest mountain on Earth, which has a reputation for being very hard (as well as dangerous) to climb. In context, the company’s K2 products offer customers a way to effectively ascend the steep slope of I/O performance. (Sorry, I couldn’t resist the comparison!)

Recall the discussion of SSD choices. Kaminario offers three models: the K2-F uses flash memory, the K2-H offers both flash memory and DRAM, and the K2-D is purely DRAM. Which to buy is a decision based on performance requirements, workload type and budget. Note, by the way, that Kaminario is a partner of Fusion-io, as it uses that company’s flash cards. The K2-F (flash only) scales up to 100 Tbytes with up to 600,000 IOPS and is priced at about $20,000 per Tbyte; its application focus is analytic applications, such as those using a data warehouse. The K2-H (hybrid flash and DRAM) scales up to 100 Tbytes with up to 800,000 IOPS; the price depends on the mix but usually starts at about $25,000 per Tbyte, and the application focus is high-end OLTP/DBMS and analytics applications. The K2-D has a maximum capacity of 25 Tbytes, a maximum of 1.5 million IOPS and a price of $100,000 per Tbyte; its application focus is the most demanding high-end OLTP/DBMS applications that don't require a huge amount of storage.
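Those per-terabyte figures are easier to compare when multiplied out to a fully configured system. Below is a rough back-of-the-envelope sketch using only the list prices quoted above--treating the hybrid's starting price as $25,000 per Tbyte--and ignoring real-world factors such as discounts, support contracts and the exact flash/DRAM mix:

```python
# Back-of-the-envelope cost comparison using the per-Tbyte list prices quoted above.
# Rough figures only; actual quotes depend on configuration, discounts and support.
models = {
    "K2-F (flash)":  {"max_tbytes": 100, "price_per_tbyte": 20_000},
    "K2-H (hybrid)": {"max_tbytes": 100, "price_per_tbyte": 25_000},   # starting price; varies with the flash/DRAM mix
    "K2-D (DRAM)":   {"max_tbytes": 25,  "price_per_tbyte": 100_000},
}

for name, m in models.items():
    full_config = m["max_tbytes"] * m["price_per_tbyte"]
    print(f"{name}: {m['max_tbytes']} TB x ${m['price_per_tbyte']:,}/TB = roughly ${full_config:,} fully configured")
```

Under those assumptions, a maxed-out K2-D lands in roughly the same price range as a fully configured K2-H despite holding a quarter of the capacity--a reminder that the DRAM model targets workloads where IOPS, not capacity, is the constraint.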

Kaminario’s product family uses an architecture that it calls the Scale-out Performance Architecture (SPEAR). The SPEAR storage OS manages a number of DataNodes (where the data is stored) through what it calls ioDirectors. Among SPEAR’s capabilities are automated data distribution--the system scales out automatically when new DataNodes are added and requires no tuning (which administrators should find attractive)--and intelligent I/O processing that parallelizes all reads and writes, yielding higher performance than would otherwise have been possible.
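Kaminario does not publish SPEAR's internals, but the general scale-out idea--stripe data across DataNodes and issue the reads and writes to them in parallel--can be sketched conceptually. The sketch below is purely illustrative: the DataNode and IoDirector classes, the round-robin placement and the thread pool are assumptions made for the example, not Kaminario's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

class DataNode:
    """Stand-in for a DataNode: just an in-memory dict keyed by block number."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def read(self, block_id):
        return self.blocks.get(block_id)

class IoDirector:
    """Toy ioDirector: stripes blocks across DataNodes and parallelizes the I/O."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pool = ThreadPoolExecutor(max_workers=len(nodes))

    def _node_for(self, block_id):
        # Simple round-robin placement; a real system would also handle
        # redundancy, rebalancing when DataNodes are added, and failure recovery.
        return self.nodes[block_id % len(self.nodes)]

    def write_blocks(self, blocks):
        # Issue all writes concurrently instead of one node at a time.
        futures = [self.pool.submit(self._node_for(i).write, i, data)
                   for i, data in enumerate(blocks)]
        for f in futures:
            f.result()

    def read_blocks(self, block_ids):
        # Reads are likewise fanned out across the nodes in parallel.
        futures = [self.pool.submit(self._node_for(i).read, i) for i in block_ids]
        return [f.result() for f in futures]

# Adding a DataNode simply widens the stripe -- the "automatic scale-out, no tuning" idea.
director = IoDirector([DataNode("dn1"), DataNode("dn2"), DataNode("dn3")])
director.write_blocks([b"block-%d" % i for i in range(6)])
print(director.read_blocks(range(6)))
```

The point of the toy example is only that adding another node widens the stripe and that every read and write can be issued concurrently, which is the behavior the scale-out and parallel I/O claims describe.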

The solution also features self-healing data availability, which means that not only is there no single point of failure (always a must), but recovery is automatic. That shortens recovery time because an administrator does not have to figure out what the problem is and then take corrective action.
