
Storagebod’s Performance Check

Every now and then I see a press release which really irritates me; all vendors have done it, and often it is when they have a dig at a competitor! The only person who is allowed to have digs at vendors is me (okay, and other users). This time, Pillar got on my nerves with an email which pointed me to this; you see, EMC have decided not to take part in the SPC for whatever reason, and NetApp decided that they were going to go ahead and benchmark the Clariion anyway. I thought it was wrong at the time and, if I'd been blogging, I'd have said so. And it's even more wrong for Pillar to be using those figures now; especially since the CX3 is 'old technology'.

Obviously, the solution is for EMC to simply take part in SPC…wrong! The solution is for the other vendors to criticise and even suggest that EMC are hiding something; that's fair game! But to benchmark EMC kit and somehow suggest those figures are fair is cheap!

So, I have a suggestion: on an annual basis, a council of users chaired by myself will sit in a pub and, fortified with several pints of Old Peculiar, we shall come up with a performance benchmark. It will change every year and will have the beauty that it reflects today's infrastructure and application design process. We will then calculate a budget; this will be decided by throwing three darts in the general vicinity of a dart board, trying to multiply them together in our addled brains and probably sticking a few zeroes on the end for good measure.
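For the literal-minded, the budgeting procedure might be sketched like this (the function name and the assumption that each dart lands on a plain single are my own inventions; the post leaves both to the darts and the beer):

```python
import random

def pub_benchmark_budget(zeroes=3):
    """Pick a storage benchmark budget the pub way (hypothetical sketch)."""
    # Throw three darts in the general vicinity of the board;
    # assume each lands on a plain single, 1 to 20.
    darts = [random.randint(1, 20) for _ in range(3)]
    # Multiply them together in our addled brains...
    budget = darts[0] * darts[1] * darts[2]
    # ...and stick a few zeroes on the end for good measure.
    return budget * 10 ** zeroes
```

With three zeroes appended, that gives a budget somewhere between £1,000 and £8,000,000, which sounds about right after several pints.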

The vendors will then have the task of specifying an array for the stated budget to perform the benchmark. After the council has sobered up, we will follow best-practice guidelines to configure the arrays. The benchmark will then be run and the winner will be declared 'King of the Hill' for the next twelve months. This will give them the right to be mean to all other array vendors!


11 Comments

  1. Chuck Hollis says:

    I love it.
    Certainly makes more sense than the current approach.
    And, yes, I do like Old Peculiar …
    — Chuck

  2. Louis Gray says:

    Please do it. We would be delighted to participate.
    However, as you might expect, the typical pattern is that the market leader first embraces the benchmark and then, once surpassed, denies that the benchmark has any relevance, or even goes so far as to deny that performance itself is relevant (or puts it way down the list).
    BlueArc has a good record of both participating in industry-approved benchmarks, and participating in customer bake-offs if needed. We have also participated in reviews from eWeek and InfoWorld, to let the editors bang on Titan. We aren’t going to make the best rap video on YouTube, but we are pretty pleased with the product and stand by what it can do.

  3. Old Peculiar? I have a preference for the Dog’s Bollocks!
    Can we use the arrays supplied for the tests to do some billing runs and save some $. Credit crunch and all, if we do an array per month and overlap it we can go a whole year with billing runs without having to pony up for hardware.

  4. I made a batch of Old Peculier at University, using a beer recipe book (also made Newcastle Amber and a batch of distinctly dodgy lager which a bloke from Bury called Derek consumed). I recommend all standards' council meetings be preceded/fortified with significant quantities of strong English bitter. It should be a prerequisite.

  5. Oh.. and I hate to be fussy, but peculiEr… 🙂

  6. Martin G says:

    I would have suggested Dogbolter…but Bass the bastards killed the Firkin beers!!

  7. John Edwards says:

    Why don’t you skip out the middle man and just throw darts at the kit and stand your pints on it?
    The one that has the least holes in it at the end of the night and still works despite all the beer spilt into it is declared winner.
    Less time spent on configuring and testing kit that way 🙂

  8. Devang says:

    Let me know when you are ready…i will join the club…

  9. Spitfire. I loved the advertising; so politically incorrect, it made the beer all the better! Remember the “No Fokker Comes Close” advert?
    So to the SPC claims…
    NetApp did the benchmark for a specific reason. EMC’s claims to performance superiority have, for a long time, not been based on any repeatable or public benchmark. They’re not alone in doing this. HP figures are notorious for being based on cache IOPS, for example. But EMC are certainly the most vociferous; the CX4 is claimed to be “up to twice as fast” as a CX3. How that helps anyone make a sensible purchasing decision based on performance criteria is beyond me. Yes, we can argue until we’re blue in the face that SPC or SpecSFS is flawed, but they’re out there, they’re repeatable, verifiable and in the public domain.
    Steve Daniel at NetApp was in charge of the CX3 testing. There was a fair amount of internal criticism about this, and he said at the time:
    “If EMC can get better performance on this equipment than NetApp did there is absolutely nothing to stop them from publishing the benchmark themselves.
    “Furthermore, the SPC requires that we make a good-faith effort to get the best possible performance on a competitor’s gear. The auditor went over our work very carefully to ensure we did. EMC was free to challenge the result. If EMC had challenged and the SPC had found that we had not demonstrated the best possible performance for the EMC gear, NetApp would have been required to withdraw the result. EMC chose not to challenge the result.
    “For both of these reasons we worked very hard to make the EMC results as strong as possible. It would have been very embarrassing to lose a challenge or to have EMC show us up by publishing a better result themselves.”
    And it wasn’t cheap either. We bought kit, ran tests over several weeks (some of these individual tests are days long), paid the SPC submission and audit fee, and employed the best people we could find — ex-EMCers — to configure the tin.
    Honestly, I’d love to have EMC do some benchmarking so we can get away from the “NetApp made it look bad” mantra. But they won’t entertain SPC, or SpecSFS, and seem more intent on developing a benchmark that they feel comfortable with. But from what I can see on Kartik’s website ( http://dotconnector.typepad.com/ ) they’re stuck.
    The reasons for not submitting SPC benchmarks, according to EMC; “In a nutshell, the SPC-1’s cache hostile workload profile reduces its utility to counting the number of spindles in an array.”
    Steve Daniel of NetApp again:
    “The benchmark is not overly cache hostile. Somewhat, but not that much. If you look at the ratio of cache size to dataset size you will see that the systems with more cache get more IOPS/disk, an indication that they are doing something right with their cache.
    “IOPS/spindle varies by a factor of 3 over the published benchmarks, which disproves the “it just counts spindles” argument.
    “There have been a couple of short-stroked publications, but the majority use all or almost all of the disks. I have no reason to believe the configurations are that unrealistic. Certainly ours was very realistic. … The SPC recently adopted a minimum capacity utilization requirement to prevent the worst short-stroking.”
    The SPC is open — anybody can join. EMC can do so, and then they can work to fix the problems they perceive with the workloads. Until then, it’s all anyone has.

  10. Martin G says:

    The point is Alex, they chose not to be part of the SPC. That is their choice.
    It leaves them open to a whole lot of FUD, but their arrays do get put through pretty exacting benchmarks on a regular basis and they tend to do okay; otherwise EMC would be getting their arrays returned to them on a regular basis. I have many issues with EMC, but performance does not tend to be one of them.

  11. EMC’s choice is theirs, you’re right. Our choice was to have something to compare with from EMC exactly because of FUD; to attempt to dispel it. Who wants to sell their products on the basis of FUD? We don’t.
    The SPC isn’t perfect. Here’s a case in point: array performance under snapshots, or with RAID-1. We tested with snapshots on RAID-6.
    Snapshots are a way of life for most storage users, and RAID-1 is wildly impractical for the majority of production use. But NetApp don’t get all hot and bothered about the fact that 3Par, or Pillar, or whoever compare their performance with ours, without their snapshots and using RAID-1.
    We might get the SPC to change or add a category for this, but until then, we’ll work with them on what’s available. EMC doesn’t have that option, because it won’t even talk with SPC, or offer an alternative.
    That’s not going to move anything along any time soon. All that’s left is that trip down the pub, a good chin wag and a curry to finish.
    On second thoughts, that’s a lot more attractive than benchmarking… Sign me up!
