Let’s first look at the specifications for the two new Oracle ZS3 models, which both feature automated data migration from DRAM to flash and disk storage:
• Model ZS3-2: 15.2 TB of cache; 768 TB of raw uncompressed capacity; 12 PCIe slots for 10GbE, 40Gb InfiniBand (IB) and 16Gb Fibre Channel (FC).
• Model ZS3-4: 25.3 TB of cache; 3.5 PB of raw uncompressed capacity; 22 PCIe slots for 10GbE, 40Gb IB and 16Gb FC.
Oracle added new cache memory architectural capabilities in the ZS3 models and claims significant improvements in cached data handling efficiency. First, an in-memory data deduplication capability reduces cache consumption by an average of 4X, Oracle claims. Additionally, Oracle modified the write flash algorithms to provide parallel access sequencing, which reduces response-time latency and improves cache I/O responsiveness. Together, these two developments double the performance of the ZS3 compared to its predecessor, according to Oracle.
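Oracle hasn't published how its in-memory deduplication works, but the general technique is straightforward: index cached blocks by a hash of their content so identical blocks occupy memory only once. A toy sketch (illustrative only, not Oracle's implementation):

```python
import hashlib

class DedupCache:
    """Toy content-addressed cache: identical blocks are stored once,
    and logical addresses map to the shared copy. Not Oracle's design."""

    def __init__(self):
        self.blocks = {}   # content hash -> block data (one physical copy)
        self.index = {}    # logical address -> content hash

    def put(self, addr, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # store only if content is new
        self.index[addr] = digest

    def get(self, addr):
        return self.blocks[self.index[addr]]

    def unique_blocks(self):
        return len(self.blocks)

cache = DedupCache()
# Four logical 4 KB blocks, but only two distinct contents:
for addr, data in [(0, b"A" * 4096), (1, b"B" * 4096),
                   (2, b"A" * 4096), (3, b"A" * 4096)]:
    cache.put(addr, data)
print(cache.unique_blocks())  # 2 -> four logical blocks in two physical slots
```

The cache-saving ratio depends entirely on how repetitive the workload's data is, which is why Oracle frames its 4X figure as an average.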
From the specs, which include extensive modifications to L1ARC algorithms, it's clear that ZS3 leverages a massive amount of front-end cache memory. The cache size and architectural enhancements, combined with 2 TB of DRAM (and up to 80 cores), 12.8 TB of read cache and 10.5 TB of write cache, as well as 10GbE and IB connectivity, are significant and should deliver accelerated performance.
Application Engineered Storage
In developing the new ZS3 products, Oracle said its focus was on what it refers to as Application Engineered Storage (AES), which involves leveraging engineering activity in the areas of servers, virtual machines, OS, middleware, database, storage and applications. Having insight into features, functions and operational developments throughout the Oracle stack can help boost the efficiency of the stack and the storage systems.
One example of the AES effort is the release of the Oracle Intelligent Storage Protocol (OISP). OISP opens a direct line of communication between Oracle database software and ZS3 storage arrays. By passing metadata to storage that describes the incoming database data, the array can dynamically prepare and optimize itself for precisely that workload. Oracle claims this yields a 65% reduction in manual database-to-storage tuning and optimization procedures and helps eliminate human error.
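OISP itself is proprietary and its wire format isn't public, but the idea of metadata hinting can be sketched: the database tags each I/O stream with a file type, and the array maps that hint to a caching policy. All names and policies below are hypothetical illustrations, not the actual protocol:

```python
# Hypothetical sketch of database-to-storage hinting in the spirit of OISP.
# File types and policy settings are invented for illustration.

POLICIES = {
    "redo_log": {"write_cache": "low-latency", "read_ahead": False},
    "datafile": {"write_cache": "standard",    "read_ahead": True},
    "temp":     {"write_cache": "bypass",      "read_ahead": False},
}

def apply_hint(file_type):
    """Array side: pick a caching policy from the database's metadata hint,
    falling back to a general-purpose policy for unrecognized hints."""
    return POLICIES.get(file_type, POLICIES["datafile"])

print(apply_hint("redo_log"))
```

The point is that the array no longer has to infer workload characteristics from access patterns alone; the database tells it up front, which is what removes most of the manual tuning.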
[Read how Microsoft's ReFS stacks up against Oracle ZFS in "Microsoft ReFS and Oracle ZFS: How They Compare."]
Another outcome of the AES effort is Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC). This capability dynamically moves data across storage tiers and compresses it algorithmically: heat maps and usage patterns drive the automatic tiering, and the compression level can be tailored to usage activity -- for example, 10X compression for query data to speed analytics, or 15X compression for archiving prior years of application data.
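The tiering logic can be pictured as a simple policy over access heat. The thresholds and labels below are invented for illustration; Oracle's actual ADO heat-map algorithm is more sophisticated:

```python
def place(access_count_30d, hot=100, warm=10):
    """Toy heat-map tiering rule (illustrative, not Oracle's ADO algorithm):
    hotter data lands on a faster tier with query-friendly compression,
    cold data gets denser archive compression on disk."""
    if access_count_30d >= hot:
        return ("flash", "query-optimized ~10X")
    if access_count_30d >= warm:
        return ("disk", "query-optimized ~10X")
    return ("disk", "archive ~15X")

print(place(500))  # heavily queried data -> fast tier, lighter compression
print(place(1))    # rarely touched data -> disk, dense archive compression
```

The trade-off the two compression ratios encode is classic: lighter compression decompresses faster for queries, while denser compression minimizes the footprint of data you rarely read.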
These two AES-driven capabilities represent a unique form of dynamic database-to-storage automation, and if Oracle gets this right it can be a significant competitive differentiator for the company. Moreover, the symbiotic relationships spurred by AES will deliver greater value for users who deploy Oracle software products with Oracle storage.
From a virtual machine support perspective, the new ZS3 models support a greater number of VMs than previous models and conventional NAS filers. Oracle's publicly available test results show that the ZS3s can serve up more than 2,300 VMs vs. 250 VMs on a conventional NAS filer configuration. I believe these kinds of numbers are a result of ZS3's symmetric multiprocessing (SMP) operating system, which can process hundreds of thousands of threads concurrently. Because VMs present a highly threaded workload, they are a natural fit for the ZS3 Series.
Oracle continues to support its DTrace analytics capability on the new models. Users get real-time, VM-level visibility with application-aware performance monitoring and measurement, as well as a view of system health metrics. At the storage level, users can see CPU utilization, cache activity and cache consumption concurrently. Oracle claims the highly granular analytics enable more than 40% faster troubleshooting.
Now, let’s take a look at the published Storage Performance Council (SPC) and the Standard Performance Evaluation Corporation (SPEC) benchmark results comparing Oracle's ZS3-4 with competing products.
| Measurement | HP 9500 XP Array | IBM Storage DS8870 | Oracle ZS3-4 |
| --- | --- | --- | --- |
| Throughput (MB/sec) | 13,147.87 | 15,423.66 | 17,244.22 |
| Price/Performance ($/MBPS) | $88.34 | $131.22 | $25.53 |
Table 1. In SPC-2 tests, the ZS3-4 storage system provided greater throughput at a lower price performance point vs. HP and IBM storage systems.
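SPC-2 price/performance is defined as the total price of the tested configuration divided by its SPC-2 throughput in MBPS, so Table 1's figures let us back-derive an approximate total tested price for the ZS3-4 (approximate because the published figures are rounded):

```python
# Derive the approximate total tested-configuration price from Table 1.
# SPC-2 Price-Performance = Total Price / SPC-2 MBPS, so:
throughput_mbps = 17_244.22   # ZS3-4 SPC-2 throughput from Table 1
price_per_mbps = 25.53        # ZS3-4 price/performance from Table 1

total_price = throughput_mbps * price_per_mbps
print(f"${total_price:,.0f}")  # -> $440,245 (approximate derived price)
```

The same arithmetic applied to the HP and IBM columns shows why the price/performance gap is so much larger than the throughput gap: the competing configurations cost several times more for less throughput.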
| Measurement | NetApp Dual-node FAS6340 | NetApp Dual-node FAS3250 | Oracle ZS3-4 |
| --- | --- | --- | --- |
| Overall Response Time | 1.17ms | 1.76ms | 700us |
Table 2. In SPECsfs (system file storage) testing, the ZS3-4 performed more operations per second at a faster response time and at a lower overall cost vs. the NetApp dual-node FAS6340 and dual-node FAS3250.
To be sure, performance benchmark testing results are not a guarantee that you will experience similar results. But they are one factor to consider as you shop for products that fit in your infrastructure.
Some believe that when a vendor makes improvements or creates unique, proprietary capabilities, it is locking you into its products for the long haul. The reality is that you always have freedom of choice. By analogy, if you need a Ford Festiva, buy a Ford Festiva; if you need a twin-turbo Porsche, don't settle for six motorcycle engines and expect similar results. The bottom line is that Oracle has posted some impressive storage performance numbers versus its competitors and has had its performance capabilities independently validated. The question I have now is: who will step up and try to unseat the ZS3?
Are you an Oracle storage user? Would you consider the new ZS3? Please share your comments in the space below.
[Get deep insight into the technology foundation of the data center network for the next 10 years in Greg Ferro's session "Building Your Network For the Next 10 Years" at Interop New York Sept. 30-Oct. 4.]