For many years, VMware has promulgated its own benchmarks. At times, these have been used by the larger industry, especially in the early days of benchmarking virtualized server loads. I remember tracking them when I was benchmarking servers at InfoWorld. Although we never ran VMware's benchmarks (we tended to prefer those from SPEC, which were consortium-created), we certainly heard from server vendors about their discomfort at being rated by VMware's tests. It was difficult to know whether this was just standard vendor griping or whether the benchmarks did indeed disfavor certain designs, but my sense was that it was the latter. I don't think VMware did this intentionally. Rather, it illustrates the difficulty of creating useful benchmarks that can run on multiple architectures and deliver results that can be compared fairly.