Vendorspeak Exposed!

Don't let manufacturers wow you with sensational stats, inflated claims and empty promises. We shoot down some favorite vendor come-ons.

September 22, 2003


Right off the bat, you'll notice that our fictional vendor hasn't disclosed who tested the IT Pro or who paid for that testing. Even independent testing labs can be biased in presenting results because they're getting paid to produce marketing collateral. Vendor-funded reports rarely show that company's product anywhere but on top. Beyond that bias, beware these other common traps vendors set.

1. Deceptive Eye Candy

"More than twice as many transactions per second."

Colorful performance comparison graphs can make bold statements about the capabilities of such networking products as firewalls, Web and application servers, load balancers and cryptographic accelerators, but they are rarely a true indicator of performance.

In our example, the chart clearly shows that Vendor B's Ethernet Thingy can perform more than twice as many transactions per second as its competition. Great, but how does the vendor define transaction? This is an important omission, as that definition varies not only from product to product, but also from test to test. An HTTP transaction could be defined as a single get request or a page that requires multiple get requests. The size of the data transferred is also an important piece of the equation. Many products are tuned to produce high-performance metrics with relatively small pieces of data (1 KB) but degrade dramatically as the size of the data increases.

This chart is also missing the load under which the performance results were gathered. The more processing the device must perform, the more likely its ability to process requests will degrade as load increases. We've seen products that can serve 10 users and perform like champions, but they fail to maintain performance levels when forced to serve 50 users. You'd never know it from the charts on the data sheet.
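To see why the size of the data matters so much, consider a quick back-of-the-envelope sketch in Python. Every figure here is hypothetical; the point is only that the same box produces wildly different "transactions per second" depending on what a transaction carries:

    # All figures hypothetical: a device that can move 32 MB of
    # application data per second, serving responses of various sizes.
    CAPACITY_BYTES_PER_SEC = 32_000_000

    for response_kb in (1, 10, 100):
        tps = CAPACITY_BYTES_PER_SEC / (response_kb * 1024)
        print(f"{response_kb:>3}-KB responses -> {tps:,.0f} transactions/sec")

    #   1-KB responses -> 31,250 transactions/sec
    #  10-KB responses ->  3,125 transactions/sec
    # 100-KB responses ->    312 transactions/sec

Same hardware, same second: only the definition of "transaction" changed.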

Changing the performance charts' scale can also make a product's numbers look more appealing. Consider the "Throughput Comparison" chart for Products A, B, and C. It appears that Product A has much better throughput than its competitors, but closer examination shows that the differences are actually quite small. Vendors create this optical illusion by using a scale that exaggerates the performance differences. If a scale of 0 to 1,000 Mbps were used, the difference in throughput capabilities between the products would appear minimal (and, in fact, it's less than 3 percent from best to worst). By using a scale of 850 to 900 Mbps, however, Product A appears to have much higher throughput.
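You can reproduce the illusion in a few lines. The throughput figures below are hypothetical (less than 3 percent apart, as in the text); the only thing that changes between the two panels is the y-axis:

    import matplotlib.pyplot as plt

    # Hypothetical throughput figures, less than 3 percent apart.
    products = ["A", "B", "C"]
    mbps = [898, 884, 875]

    fig, (honest, cropped) = plt.subplots(1, 2, figsize=(8, 3))
    honest.bar(products, mbps)
    honest.set_ylim(0, 1000)    # full scale: the bars look nearly identical
    honest.set_title("Scale 0-1,000 Mbps")
    cropped.bar(products, mbps)
    cropped.set_ylim(850, 900)  # cropped scale: Product A towers over the rest
    cropped.set_title("Scale 850-900 Mbps")
    plt.show()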

2. False Precision

"Can be up and running in an average of 23 minutes."You are more likely to believe this line because of a seemingly precise number like 23. Using such exact numbers tends to convey a sense of accuracy that is not always present in the underlying data. When you see such precise numbers, therefore, you should doubt the figures instead of accepting them at face value.

To illustrate, imagine the following survey regarding the product in question:

How long did it take to configure the IT Pro 853045?

  1. less than 15 minutes

  2. 15 minutes to 30 minutes

  3. 1 hour to 2 hours

  4. more than 2 hours

To come up with a single value that can be used in marketing material, the vendor then assigns an arbitrary number to represent each choice.
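Nothing on the data sheet reveals what numbers were assigned or how responses were distributed, so the figures below are hypothetical, but they show how a tidy "23 minutes" can be manufactured:

    # Hypothetical assigned values (in minutes) and response counts:
    # the data sheet reveals neither, yet both drive the "average."
    minutes = {"<15 min": 10, "15-30 min": 22, "1-2 hr": 90, ">2 hr": 130}
    responses = {"<15 min": 55, "15-30 min": 35, "1-2 hr": 8, ">2 hr": 2}

    average = sum(minutes[c] * responses[c] for c in minutes) / sum(responses.values())
    print(average)  # 23.0 -- "up and running in an average of 23 minutes"

    # Nudge one arbitrary value and the headline number moves with it:
    minutes[">2 hr"] = 240
    average = sum(minutes[c] * responses[c] for c in minutes) / sum(responses.values())
    print(average)  # 25.2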

By choosing arbitrary numbers, the vendor can average all the vague data and come up with a number such as 23. Varying the numbers used to compute the average changes the result, which can be manipulated to present exactly the picture the vendor wants. The most accurate way to generate such a statistic would be to let survey takers specify the answer values themselves, rather than have them choose from a list.

3. Throughput Without Context

"Hardware-accelerated SSL provides 32,000 transactions per second."

Throughput refers to how much data a product can process in a specific time period. The more throughput the better. But vendors often intentionally provide total throughput without referencing the size of the data used to benchmark the product. Since each product set has its own peculiarities, you must do your homework to determine whether the throughput numbers presented represent real-world traffic or sculpted patterns designed to produce performance metrics suitable for marketing material.

In the case of the IT Pro's hardware-accelerated SSL, the number of transactions per second quoted is likely to be the number of RSA operations per second that the product can execute. The number of these encryption and authentication operations per second is not directly translatable into transactions per second, as each transaction involves multiple cryptographic operations. Also curiously missing from the performance claims is the size of data being encrypted. Vendors often use 1-KB files in their performance tests to show a higher number of operations per second because bulk encryption rates decrease as the size of data being encrypted increases.
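A quick sketch shows why the distinction matters. The operation count per transaction below is hypothetical; the point is that the headline number shrinks by whatever factor each transaction really costs:

    # The data-sheet figure, taken at face value:
    rsa_ops_per_sec = 32_000

    # Hypothetical cost per SSL transaction: the handshake plus request
    # processing can consume several cryptographic operations apiece.
    crypto_ops_per_transaction = 4

    print(rsa_ops_per_sec / crypto_ops_per_transaction)
    # 8000.0 -- a quarter of the headline "transactions per second"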

Firewall performance also varies based on data size. Using large packets and small numbers of sessions produces better performance numbers than using small packets and many sessions, which more closely resembles real-world traffic. Rather than show the devices' true capacity, therefore, vendors run the tests using the less taxing setup--large packets and few sessions--to make the products appear to perform better.
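The arithmetic behind this trick is straightforward. Using hypothetical Ethernet framing overhead, the same bit rate translates into vastly different packet rates:

    # Hypothetical gigabit link; per-frame overhead assumed to be 18 bytes
    # of Ethernet header/CRC plus 20 bytes of preamble and inter-frame gap.
    LINK_BPS = 1_000_000_000
    OVERHEAD_BYTES = 38

    for payload in (64, 512, 1500):
        pps = LINK_BPS / ((payload + OVERHEAD_BYTES) * 8)
        print(f"{payload:>5}-byte payloads -> {pps:,.0f} packets/sec")

    # Small packets force roughly 15x more per-packet inspections than
    # 1,500-byte packets at the same bit rate.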

Also keep in mind that the number of rules configured on devices can dramatically affect their performance. Firewalls, IDSs and other packet-inspection products must inspect and direct packets based on a configured set of policies. The more rules or signatures (for IDSs and antivirus gateways) that a single packet must be compared against, the more work the device must do, which in turn degrades performance. Vendors will generally run performance tests with the fewest rules or policies required to show the best possible results.

4. Forgot-to-Mention-the-Configuration Syndrome

"Throughput greater than 10 Gbps"

At first glance this appears to be a valid claim, but the IT Pro 853045 is an appliance, which should immediately trigger the flashing red warning lights. There are no 10-Gbps NICs. Only in the past year have 10-Gbps switches appeared, and they are primarily used for MANs (metropolitan area networks) and as aggregation points.

So how can an appliance claim to have such unbelievable throughput? It's all in the configuration--which consisted of more than 10 IT Pro Ethernet Thingies.
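Whenever you can pry the configuration loose, a one-line sanity check restores perspective (the unit count here is hypothetical, based on the fine print's "more than 10"):

    claimed_aggregate_gbps = 10
    units_in_test_rack = 10  # hypothetical, per the fine print

    print(claimed_aggregate_gbps / units_in_test_rack)
    # 1.0 -- each appliance moves at most about 1 Gbps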

Although this is an extreme case, vendors will often portray a product's throughput and performance without specifying the configuration used to generate the data. When software or appliance vendors make such astonishing claims, be sure to inquire about the specifications of the machine on which the tests were run. A product tested on a dual P4 with 2 GB of RAM is certain to perform better than one tested on a PIII with 256 MB of RAM.

5. Statistics 101: By Any Means Necessary?

"1.2-second average response under heavy load (5,000 users)."

Sometimes it seems that marketing people have all read Darrell Huff's How to Lie With Statistics (W.W. Norton & Co., 1993), because they try every trick in the book on unwitting customers. One of the worst is the use of the average (what statisticians call the "arithmetic mean") to represent a bunch of numbers that should never have gone together in the first place.

Without any knowledge of the data points used to come up with the 1.2-second average response time cited by our data sheet, it's impossible to derive anything meaningful from the number. If the majority of the data used to arrive at this value was very small compared with real-world data sizes, the average response time from the vendor's test is useless.

Let's take a throughput example to further illustrate this deception. Vendor X makes the claim, "In independent tests across a variety of sessions, our firewall product performed signature-based inspections on an average of 31,000 packets per second." Well, this sounds fantastic, especially if other vendors typically achieve rates of only 5,000 to 10,000 packets per second. But consider the kind of data that could produce such an average.
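The vendor never shows the underlying numbers, so the set below is hypothetical, chosen only to be consistent with the claimed 31,000 figure; Python's statistics module makes the problem plain:

    from statistics import mean, median

    # Hypothetical per-session inspection rates (packets/sec), chosen
    # only to be consistent with the claimed 31,000 average.
    rates = [4_800, 5_100, 5_600, 6_200, 133_300]

    print(mean(rates))    # 31000 -- dragged skyward by the last run
    print(median(rates))  # 5600  -- what a typical session actually saw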

That last data point is what statisticians call an "outlier." Outliers are any oddball data points that lie way, way out in the tail of a skewed distribution. Setting the statistical jargon aside, it's easy to see that this one extreme data point makes the average entirely unrepresentative of the data. There are no data points at all near the claimed average of 31,000, so in this case the average does a very poor job of representing what is usual in this group of numbers.

A better, and more honest, choice for these numbers would have been the "median." If each data point is a stepping stone, the median represents the point where you are halfway across the river. In this case the median would be 5,600--the middle number in the sorted list above. As a representation of what is typical or usual in a set of data, the median is relatively unaffected by outliers and is thus a safe choice for unruly data. If Vendor X had used the median instead of the average, we would have gotten a much clearer idea of what the product could do in a realistic situation.

6. Connections vs. Transactions

"Can handle 1 million connections"

Connection rates are almost always highlighted on any networking product that operates at Layer 4 and above. While this type of testing does stress a device's ability to handle incoming connections, it is not a true indicator of how the product will perform in a real-world situation. That's because a TCP connection--or, in testing parlance, a TCP tap--refers only to the connection, not to any traffic that may pass over the connection.

The IT Pro, for example, can handle 1 million connections, but that's all--the connection is opened and immediately closed without any application-layer activity occurring. Opening a connection is one thing; processing the connection's application traffic is quite another.
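The difference is easy to see at the socket level. In this hypothetical sketch (the device address is made up), the first exchange is all most connection tests measure; the second is what your users will actually do:

    import socket

    HOST, PORT = "192.0.2.10", 80  # hypothetical device under test

    # A "connection" as many tests count it: handshake, then teardown.
    conn = socket.create_connection((HOST, PORT))
    conn.close()  # no application data ever flowed

    # A transaction makes the device do real work on that connection.
    conn = socket.create_connection((HOST, PORT))
    conn.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
    reply = conn.recv(65536)  # application traffic the device must process
    conn.close()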

Connection rates rarely match up with transaction rates, even when transactions consist of minuscule data sizes. The number of connections a product can open should never be used as an indicator of how well it will perform its intended duties. Unless the data sheet indicates that data was actually flowing across the connection--and includes the size of that data--you're better off ignoring this statistic whenever you see it.

7. Percentages

"150% better performance"

Product A claims to perform 50 percent better than Product B. Product C claims to have 150 percent the performance of Product B. Which product would you choose?

Did you say Product C because the numbers look better? In reality, both Product A and Product C perform at the same level. If we assume that Product B can perform 100 transactions per second, then consider the math:

50% faster would be 150 transactions per second
  50% of 100 = 50
  100 + 50 = 150
150% of Product B would be 150 transactions per second
  150% of 100 = 150

There is no discernible difference between the performance of Product A and Product C, except in how it is marketed to you. Don't be fooled by larger percentages in marketingspeak; always do the math yourself.

Even more important, if you see a phrase such as "150 percent better" (as in our data sheet), be sure to question that number. The marketers might have been trying to represent 150 transactions per second, but 150 percent better throughput would really be 250 transactions per second, since you'd have to add the 150 additional transactions to the original 100.
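The whole section reduces to three lines of arithmetic, using Product B's hypothetical 100 transactions per second as the baseline:

    base_tps = 100  # Product B's hypothetical rate

    print(base_tps * 1.5)  # 150.0 -- "50% faster" (Product A's claim)
    print(base_tps * 1.5)  # 150.0 -- "150% of Product B" (Product C's claim)
    print(base_tps * 2.5)  # 250.0 -- what "150% better" would actually mean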

Also be wary of statements like, "have migrated 60 percent of their applications to this exciting new platform." Without knowing the total number of applications in an organization, a number such as 60 percent is meaningless. An organization that has migrated six of its 10 applications is much different from an organization migrating 60 percent, or 600, of its 1,000 applications.

Furthermore, by mentioning its Fortune 500 customers immediately after stating that customers have migrated 60 percent of their applications, the vendor hopes you'll infer that the Fortune 500 customers have done that migration. In fact, the two statements have no relationship to each other.

Lori MacVittie is a Network Computing technology editor working in our Green Bay, Wis., labs. Write to her at [email protected].
