
Storage Fuel Efficiency Measure

Alex McDonald is suggesting a measure for the effectiveness of storage capacity here; he has come up with the term Effective Load Factor, which he shortens to ELF and accompanies with a picture of what is either a dwarf or a gnome, but certainly not an Elf! BTW I am surprised that EMC have not picked up on this blatant error!

I am going to suggest another measure, the Megabytes Per Gigabyte; to be known as the MPG! Per Gigabyte of disk, how much data can I store? In the UK, cars often have their MPG quoted for different types of driving, so we have the Urban (City) Cycle, which generally returns really terrible MPG, and the Motorway Cycle, which returns much better MPG. So for storage, we could have the VMware MPG, the OLTP MPG, the streaming video MPG; perhaps all quoted at a standard IOPS rate with a known read/write profile.
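To make the idea concrete, here is a minimal sketch of how the MPG measure might be computed. All workload names and capacity figures below are made up for illustration; they are not real benchmark results from any vendor.

```python
# Rough sketch of the proposed "MPG" measure: megabytes of usable data
# stored per gigabyte of raw disk. The workloads and numbers here are
# purely hypothetical, standing in for the "driving cycles" in the post.

def storage_mpg(data_stored_mb, raw_capacity_gb):
    """Megabytes of data stored per gigabyte of raw capacity."""
    return data_stored_mb / raw_capacity_gb

# Hypothetical array with 10,240 GB of raw capacity:
raw_gb = 10_240
workloads = {
    "VMware": 6_500_000,           # MB stored; dedupe-friendly data
    "OLTP": 3_800_000,             # RAID and snapshot overhead bite here
    "Streaming video": 8_900_000,  # little overhead, but poor dedupe
}

for name, stored_mb in workloads.items():
    print(f"{name}: {storage_mpg(stored_mb, raw_gb):.0f} MB/GB")
```

Like the automotive figure, the number only means something once the "cycle" (workload profile, IO rate, read/write mix) is pinned down.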

  1. Hehe!
    Finding copyright free pictures is a challenge; I take what I can find.
    Perhaps we need a standardised set of data for the ELF/MPG thing. But that would be a serious challenge; imagine, as a customer, having several TB worth of data to download just to run a capacity benchmark.
    On the other hand, no customer attempts to run an SPC benchmark. This might just belong in the same category as the SPC or SpecSFS benchmarks where we (vendors) post the MPG based on an agreed set of standard data, and you as customers accept that Your Mileage May Vary.
    That might work, no?

  2. I like where you’re going with this, but would like to point out that “your mileage may vary”. Therefore, I propose we establish a sliding scale of load factors, ranging from “thin provisionable” to “only Ocarina claims to compress this”. Then let every vendor establish their own test data and procedures and duke it out in the marketplace. Oh wait, that’s exactly what we have now…
    And how exactly do you know what an elf looks like? I’ve never seen one… Hee hee!

  3. Martin G says:

    I don’t mind every vendor establishing their own test data and procedures but I would like them to
    1) Fully disclose the test data.
    2) Fully disclose the procedures and ensure that all procedures follow their published best practices.
    3) Stand by their results and practices, ensuring that any customer who follows them and finds the product not performing as expected has some kind of comeback with regards to remedial action.
    And oh yes, when a problem is found; don’t tie me up with an NDA as precondition to fixing my issue.
    As for Elves? That’d be telling but I’m sure Zilla is one in his spare time!

  4. When I discuss benchmarks with any customer, I’m very honest about it. Microsoft’s ESRP typically contains full disclosure, but it is so specific to a single application profile that it is rarely helpful outside of an Exchange deployment.
    Benchmarks like SpecSFS and SPC-1 are all about beating the benchmark.
    I always tell a customer when I throw up competitive benchmarking slides, that if I don’t have data that shows my product as the clear winner, I’m not doing my job, and I expect my competitors to do the same.
    I regularly use the automotive/truck industry as an example. In a given year, Toyota will win one award, GM will win Motor Trend, and Ford will win J.D. Power. The storage industry is the same: EMC will win something, NetApp something else, etc. Why? Maybe marketing spend has something to do with it, maybe it’s relationships.

  5. @Steven; Microsoft’s ESRP isn’t a benchmark at all, and you should *not* use it as such.
    All the ESRP submissions, including Dell/EQLs, have this statement; “The ESRP Storage program is not designed to be a benchmarking program. Its tests are not designed for achieving the maximum throughput for a given solution. Rather, they are focused on producing recommendations from vendors for the Exchange application. Therefore, the data presented in this document should not be used for direct comparisons among the solutions.”
