Special Coverage Series

Network Computing

Best of Interop 2013 Winners Announced

The Best of Interop award recognizes innovation in eight tech categories, including networking, cloud, security and mobility. Winners include Arista, ExtraHop, Talari Networks and Citrix.

Interop and Network Computing are pleased to present the 2013 Best of Interop awards. For the past several weeks, our expert judges have been poring over 149 product submissions in eight categories.

I had the opportunity to read through all of the submissions, and a surprising percentage of nominees showed the kind of forward-thinking approach and innovation we love to recognize in the Best of Interop program.

Our judges worked long and hard to give our nominees fair consideration; one judge even viewed a finalist's conference presentation at 35,000 feet. But only a small number of products take the prize. After many hours of deliberation, it's my honor to present our Best of Interop winners for 2013. – Steven Hill, Lead Judge, Best of Interop 2013

Best of Interop Grand Award and Networking Winner

Arista: 7500E Data Center Switch

Category Judges:

Kurt Marko, Contributing Editor, InformationWeek and Network Computing

Eric Hanselman, Chief Analyst, 451 Research, LLC

Amidst all the buzz over cloud services, SDN and mobile networks, it's easy to forget that without a foundation of beefy switching hardware, none of those things are worth much. This year, Arista proved that big switches are back and that Moore's Law doesn't just apply to servers and smartphones. In a nice bit of investment protection, Arista built upon the same 8-slot chassis that won the BOI Infrastructure category three years ago, completely re-engineering its guts to create a switch worthy of huge networks brimming with virtual hosts.

The 7500E was the winner in both the Networking and Grand Award categories, and its specs are part of the reason: 30 Tbps backplane, or 3.84 Tbps per slot, supporting a mix of 10, 40 and 100 GbE, maxing out at 1,152, 288 and 96 ports respectively, with sub 4 μsec latency within a chassis. Addressing a complaint with the first-generation product, Arista expanded address tables and buffer space by more than an order of magnitude. Combined with its full set of L2 and L3 features, including MLAG, ECMP, and hardware accelerated VXLAN, the 7500E supports a number of cloud network designs, scaling out to over 100,000 VXLAN nodes in a two-tier architecture.
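The chassis figures quoted above hang together; as a quick back-of-the-envelope check (our arithmetic, not Arista's datasheet):

```python
# Sanity check of the 7500E chassis arithmetic from the article.
# The slot count and per-slot figures come from the article; the
# per-card breakdown below is our own division.
SLOTS = 8
PER_SLOT_TBPS = 3.84

total_tbps = SLOTS * PER_SLOT_TBPS   # ~30.7 Tbps of backplane capacity
ports_10g = 1152 // SLOTS            # 144 x 10 GbE ports per line card
ports_40g = 288 // SLOTS             # 36 x 40 GbE ports per line card
ports_100g = 96 // SLOTS             # 12 x 100 GbE ports per line card

# Aggregate front-panel bandwidth per card, in Gbps, for each
# configuration (the 3.84 Tbps slot figure is raw fabric capacity,
# which comfortably exceeds the front-panel total):
print(ports_10g * 10, ports_40g * 40, ports_100g * 100)  # 1440 1440 1200
```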

Aside from the specs, what really sets the 7500E apart from the competition is the embedded optical modules used in place of standard CFP modules in the 100 GbE cards. This means there's no need for expensive, separately packaged optics, and the result is a dramatic reduction in total cost per port. Using a 12:1 100-to-10 GbE breakout cable, Arista claims to get the total cost per 10 GbE port, including optics, down to $1,220. Because the embedded optics only support multimode fiber, the company is taking a risky bet that cost and convenience will trump distance for its data center customers, but we think Arista is right.
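The $1,220 figure implies a straightforward cost for the underlying 100 GbE port; a two-line check (our arithmetic, not Arista's published pricing):

```python
# The 12:1 breakout fans one embedded-optics 100 GbE port into twelve
# 10 GbE ports, so Arista's claimed $1,220 per 10 GbE port implies the
# cost of the whole port, optics included. Our arithmetic, not a quote.
cost_per_10g_port = 1220
breakout_ratio = 12

implied_port_cost = cost_per_10g_port * breakout_ratio
print(implied_port_cost)  # 14640 dollars per fully populated port
```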

Arista faced stiff competition this year, with a deep field of 30 entrants in the networking category and two worthy co-finalists. The NetScout nGenius 3900 modular monitoring switch is an outstanding work of hardware engineering in its own right. It can slice and dice the available 11.5 Tbps of capacity across a mix of 1, 10 and 40 GbE ports, making it an outstanding foundation for a scalable monitoring network with long-term investment protection.

But there's more to networking than just hardware, and Aryaka's Network-as-a-Service applies cloud economics and a SaaS architecture and pricing model to the world of WANs. With its high-availability infrastructure and various TCP and WAN optimization features, Aryaka's service gives businesses MPLS functionality with broadband Internet convenience. – Kurt Marko

Cloud Computing & Virtualization Winner

ExtraHop Networks: ExtraHop for Amazon Web Services

Category Judges:

Charles Babcock, Editor at Large, InformationWeek

David Linthicum, Founder, Blue Mountain Labs

ExtraHop for AWS does something that hasn't been possible before: it offers visibility into how all parts of a customer's systems are performing in the cloud. Amazon feeds customers its own list of operational statistics, but those aren't enough to tell users what they really need to know. ExtraHop applies a "virtual tap" to the network traffic inside each running virtual machine, producing a copy of the traffic through "distributed forwarders" and performing full stream reassembly and content analysis.

It knows about the first DNS request of an interaction, the authentication clearance, the database access, the middleware response and the last byte served out of storage. It can analyze the network traffic for transaction content: the events taking place as software modules talk to each other. With that analysis, an IT manager can answer the question, "What's happening in the environment right now?"

Application Inspection Triggers can isolate and highlight specific events. IT teams can define an event and its parameters, and be notified when a system exceeds those parameters. One customer, Concur Technologies, uses triggers to watch for systems that exceed a 1MB cache, which identifies applications that are running up memory use. Without ExtraHop, Concur would have to configure logging to capture the stats on each memcache server and then analyze the log data; that's cumbersome. ExtraHop does it automatically with a simple trigger.
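The trigger pattern itself is simple: watch a per-server metric and fire a notification when it crosses a defined threshold. A minimal sketch in Python, to illustrate the idea only; every name here is ours, not ExtraHop's trigger API:

```python
# Hypothetical sketch of the threshold-trigger pattern the article
# describes: flag any memcache server whose cache use exceeds the
# limit Concur watches for. Names and structure are illustrative.
THRESHOLD_BYTES = 1 * 1024 * 1024  # the 1MB cache limit

def check_trigger(server: str, cache_bytes: int, alerts: list) -> None:
    """Record an alert when a server exceeds the cache threshold."""
    if cache_bytes > THRESHOLD_BYTES:
        alerts.append(f"{server}: cache at {cache_bytes} bytes")

# Sample readings: only the second server trips the trigger.
alerts = []
for server, used in {"mc-01": 800_000, "mc-02": 2_500_000}.items():
    check_trigger(server, used, alerts)
print(alerts)  # ['mc-02: cache at 2500000 bytes']
```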

ExtraHop works across all of a customer's workloads, both on premises and in AWS, automatically discovering and classifying applications and servers. If a new server appears on the network or an old virtual machine is decommissioned, ExtraHop detects the change and adjusts accordingly. It's a real-time monitoring tool for a dynamic and changeable cloud environment. – Charles Babcock

Data Center & Storage Winner

OCZ Technology: ZD-XL SQL Accelerator

Category Judges:

Howard Marks, Chief Scientist, Deep Storage.Net

Steven Hill, Lead Judge, Best of Interop 2013

OCZ Technology takes a unique approach with its new SSD-based storage/caching product. Rather than use a generalized caching method like most other PCIe SSD options, the fourth-generation OCZ ZD-XL SQL Accelerator aims squarely at improving the performance of Microsoft SQL Server. This PCIe card offers a potent combination: a highly advanced, SQL-optimized "decision engine," lightning-fast flash memory, and wizard-based implementation software that lets database admins tweak caching variables and optimize performance for a wide variety of workloads.

The secret sauce here is a low-latency Data Path Cache Director that filters commonly requested data to flash. It works in lockstep with a Cache Analysis Engine that makes advanced, statistically optimized decisions about what data to cache. Not only does the system constantly monitor and dynamically tune current caching behavior, it also offers a rule-based cache pre-warming engine that lets administrators pre-load cache contents for specific workloads that run at scheduled times.
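The pre-warming idea boils down to mapping scheduled workloads to the data they touch and staging that data into flash before the window opens. A hypothetical sketch of that rule structure; none of these names reflect OCZ's actual interface:

```python
# Illustrative sketch of rule-based cache pre-warming: an administrator
# maps each scheduled workload to the data sets it touches, and the
# engine stages those sets into flash shortly before the window starts.
# All names are ours; this is not OCZ's configuration format.
from datetime import time

PREWARM_RULES = [
    # (workload name, window start, data sets to stage into flash)
    ("nightly-reporting", time(1, 0), ["sales_fact", "ledger"]),
    ("morning-oltp", time(8, 0), ["orders", "customers"]),
]

def tables_to_prewarm(now: time) -> list:
    """Return data sets whose workload window opens within the next hour."""
    due = []
    for _, start, tables in PREWARM_RULES:
        if start.hour == (now.hour + 1) % 24:
            due.extend(tables)
    return due

print(tables_to_prewarm(time(0, 30)))  # ['sales_fact', 'ledger']
```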

Many database tasks can be extremely storage-intensive, so for SQL Server customers it's easy to see how the ZD-XL SQL Accelerator solution goes beyond generalized caching algorithms used by many other SSDs. OCZ claims the ZD-XL SQL Accelerator improves database performance between 3 and 20 times, but as always, your actual mileage may vary. – Steven Hill

Management & Monitoring Winner

ScienceLogic: ScienceLogic EM7 v7.3

Category Judges:

Steven Hill, Lead Judge, Best of Interop 2013

Andrew Conry Murray, Editor, Network Computing

EM7 v7.3 begins with agentless auto-discovery capable of rapidly assimilating a huge array of vendor APIs, which allows it to build a detailed and highly customizable map of your network environment. From there, an administrator can get anything from a 100,000-foot view down to the granularity of specific devices within the infrastructure.

What's even more interesting is EM7's ability to build reports and visualizations for a wide variety of contexts, from small, local-office performance monitoring to global-scale carrier issues. It provides a large library of commonly needed templates, as well as the ability to craft custom HTML5 dashboards suitable for viewing on your favorite tablet or smartphone. But the big kicker ScienceLogic introduced this year in version 7.3 is the ability to visualize your resources within the cloud.

One of the key weaknesses of the cloud today is the lack of a common API, something that would dramatically simplify the integration of cloud services and provide the degree of freedom of choice that cloud advocates initially promised.

Well, just as with our friends in the hardware community, it seems that standardizing APIs for the cloud is still like herding cats, which leaves a vacuum for a company like ScienceLogic to fill. And fill it they do, with new capabilities that now extend to public services like AWS, as well as to hybrid and private cloud applications. EM7 also supports converged compute stacks like Cisco UCS, VCE Vblock and FlexPod, and provides granular visibility all the way down to the hypervisor and VM level for virtualized workloads. EM7 v7.3 is pretty much the network administrator's equivalent of Batman's utility belt, offering practically any customizable tool you need, when you need it. You can see EM7 in operation for yourself in the InteropNet NOC on the expo floor at Interop Las Vegas. – Steven Hill
