
Half-Empty Post

One of the EMC announcements earlier in the year got me thinking. Now, it might come as a bit of a surprise to most of you, but I’m really not a glass-half-empty kind of guy and I generally try to look at the positive things in most announcements; there was one announcement, however, which really took me aback, and almost immediately a cynical ‘point of view’ popped into my mind.

No, it wasn’t the slightly cynical rebrand of Clariion and Celerra into a ‘single’ product called VNX; that was pretty transparent and I’ve never had a real problem with the Frankenstorage, as I’m all about ease of management anyway. As far as I am concerned, you can lash half a dozen products together and stick a pretty unified interface on them; as long as I can manage the result as a single device, that’ll pretty much do.

No, it was the VMAX announcement: a doubling in performance with a software upgrade. What’s not to like, you ask? A performance upgrade for free…well, as long as you have maintenance on the array.

Now if I were a VMAX customer, I might actually be asking another question: why have I been running my VMAX at half its performance capability for the past year or so? Did I really need all the VMAX kit that I bought, and did I need to swell EMC’s coffers by quite so much when I was trying to seriously curtail my IT spend? And am I going to see a similar ‘free’ kicker in performance next year?

It does make one wonder if the VMAX was actually released earlier than intended and before it was really ready.

FAST-2 seemed to take a long time to come to fruition, and FAST-1 looked like little more than a re-brand of Symmetrix Optimizer: a place-holder as opposed to anything really revolutionary.

Was the original VMAX Enginuity release running poorly translated or emulated PowerPC code? Are these optimisations nothing more than the result of the Enginuity team finally having had the time to understand the platform and write a true native Enginuity release?

This is obviously speculation and it probably says a lot that VMAX was still performant even though it was running under-baked code. But if EMC continue to improve VMAX performance without hardware kickers, I’d be looking in their code for sleep statements, timing loops and all other kinds of tricks that programmers pull.

I’ll be watching carefully….


7 Comments

  1. I think as arrays become more and more virtualized, this will be more common. In fact, when I worked at NetApp it was very common for software upgrades to improve performance. Since ONTAP uses logic to determine the optimal placement of data on disk, engineering was always finding ways to improve that logic and boost performance. I had customers repeatedly delay hardware upgrades because of software improvements.

    Many vendors are also working to improve how their software utilizes multi-core and many-core CPUs. I’m not sure whether that is where VMAX is getting its boost from or not.

    I would expect any virtualized system to improve in performance with software updates. Virtualization is about working smarter, not harder.

    The comment about “sleep statements” made me smile though 🙂

  2. Martin – I have to ask – what’s up with all the dark Evil Machine Corp perspective you have been sporting lately? Do you need a hug? 🙂

    As Mike notes, performance is routinely improved through software optimizations. And since the standard maintenance warranty on VMAX is 2 years, 100% of the installed base are entitled to the upgrade to Enginuity 5875 at no additional charge.

    You seem to overlook the fact that VMAX was launched at 2-3x the performance of a DMX-4 at GA back in April 2009, with a scale-out architecture that redefined enterprise storage performance and scale. There were virtually zero performance complaints about VMAX before 5875, and IDC data shows that VMAX gained significant market share at the expense of both IBM and Hitachi, who offered no real competition in response. Both updated their offerings late last year, forcing their customers onto new hardware platforms to get new function in one case, and giving them no reason to move by NOT delivering existing features on their new array in the other.

    Then EMC releases a software update that increases the value of the product customers already have: FAST VP, Federated Live Migration, accelerated and simplified provisioning, and VAAI and T10 integration, all along with an unexpected performance boost to boot.

    And here you go and try to make it sound like some sort of a conspiracy.

    FWIW, the particular improvements that more than doubled the realizable back-end bandwidth were designed to mitigate the added workload of dynamically rebalancing tiers that sub-LUN FAST VP would introduce (dynamic rebalancing that is clearly a differentiating feature across all the attempts at auto-tiering, I might add). Interestingly, the benefit of these optimizations was expected to be closer to 1.5x, but we were able to squeeze out even more than originally planned, and we also found that they benefited much more than just FAST VP.

    If you are a VMAX customer, a free mid-life speed boost is hardly something to complain about – it reduces one of the perceived risks of moving to auto-tiering, and it could well extend the life of the VMAX you already have.

    And if you’re NOT a VMAX customer…you might want to ask your vendor why they keep dragging you through expensive hardware upgrades to get new capabilities (or not).

  3. Martin G says:

    Barry,
    no I don’t need a hug….but it does seem that the only way we can get you to post anything about your own products is to post something negative about them ;-).

    But a 100% performance improvement is interesting; it’s a significant enough improvement to be worth commenting on, and it is worth exploring the reasons and giving you the chance to explain where the improvements come from.

    If Microsoft announced a 100% improvement in Windows performance with no new hardware, I can pretty much guarantee that people would be asking similar questions about the initial code quality and whether someone had forgotten to take out the debugging code.

    I think ultimately, though, that the Breaking Records thing, although amusing and quite good fun, was really a hollow drum: it made a lot of noise and there was not a huge amount of substance. The VNXe is probably the only really interesting development in the lot.

    If you think I’m the only customer who harbours such thoughts/comments, you must only be visiting people who have been drinking EMC kool-aid recently.

  4. stuiesav says:

    Concur with your response, Martin – we are getting 2x performance quotes thrown at us. Why on earth, if the hardware was capable, didn’t we see this in the first instance?

    Lots of the scalability concerns that have been knocking around go away if the claims are correct and we see the uplift effectively for free.

    Barry – from your perspective, I would be really interested in understanding where the performance gains have come from, and which code has now been appropriately profiled and re-compiled to give us this level of increase.

    Of course, with the uplift to the 5875 code comes all of the remediation that we need to do to allow legacy hosts to see it. What is being done to get this problem sorted (i.e. when does “grandfathering” become a reality so that interop is not as much of a concern)?

    Thoughts / comments?

    Thanks,

    Stuart.

  5. stuiesav says:

    In fact, thinking this through further: I have had to scale my VMAXs out more just to get the things performing, so I am wondering whether EMC will give my hard-earned capex back when I get my 2x uplift and decommission the additional engines that I had to purchase? Thoughts / comments?

  6. Stuie-

    No compiler would have accomplished such gains – they came about from architectural adjustments of such scope that we held them out of the original VMAX code (5874) to ensure they didn’t destabilize the platform.

    The improvements were in back-end bandwidth – large-block sequential I/O. They came about from optimizations of the way the virtual matrix is used: by reducing the amount of inter-director communication required to deliver large-block I/O, and by increasing the probability that, when cache slots are assigned to support an I/O, they are created on the director that is managing the target drive.

    The result is that customers will realize better large-block sequential I/O performance in general, and that FAST VP customers will achieve the benefits of FAST VP promotion quicker than they would have without these optimizations.
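    (As a purely illustrative aside, and not anything resembling actual Enginuity code: the locality idea described above can be sketched with a toy model. The director count, drive-to-director mapping and 90% affinity hit rate below are all invented for illustration; the point is simply that creating a cache slot on the director that owns the target drive removes a trip across the interconnect.)

        # Toy model: director-affinity cache-slot placement vs. naive placement.
        # All numbers are hypothetical and chosen only to illustrate the effect.
        import random

        NUM_DIRECTORS = 8
        NUM_DRIVES = 240
        NUM_IOS = 100_000

        # Hypothetical static mapping of each drive to the director that manages it.
        drive_owner = {d: d % NUM_DIRECTORS for d in range(NUM_DRIVES)}

        def cross_director_ios(locality_aware, affinity_hit_rate=0.9):
            """Count I/Os whose cache slot lands on a different director than the
            one managing the target drive (each such I/O needs an interconnect hop)."""
            hops = 0
            for _ in range(NUM_IOS):
                owner = drive_owner[random.randrange(NUM_DRIVES)]
                if locality_aware and random.random() < affinity_hit_rate:
                    slot_director = owner  # slot created on the owning director
                else:
                    slot_director = random.randrange(NUM_DIRECTORS)  # any director
                if slot_director != owner:
                    hops += 1
            return hops

        print("naive placement, cross-director I/Os:         ", cross_director_ios(False))
        print("locality-aware placement, cross-director I/Os:", cross_director_ios(True))

    In this toy run, naive placement sends roughly seven out of eight I/Os across directors, while the locality-aware version cuts that to around one in ten, which is the flavour of saving being described (the real mechanism and magnitudes are, of course, EMC’s to explain).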

    As to the reduced remediation requirements, you should be discussing this with your implementation team. I cannot discuss anything publicly until it is announced.

    Martin –

    You admonish me to write about my own products…yet you’ve not mentioned anything about the two-part “FAST VP – world’s smartest storage tiering” post I did back at the launch. In it, I systematically explain the customer benefits of, and analysis behind, VMAX’s automated tiering implementation. If you haven’t read it, please do – I believe I made only comparative and contrasting remarks about competitive approaches, and even then only in general terms (with one exception, I’ll admit).

    I got no feedback on that post – it seems perhaps that people only comment when I make polarizing observations!

  7. Martin G says:

    Barry,
    I religiously read every blog that you post. And, it does appear that often only polarising observations draw comments….I find that myself but hey, it’s good to see that you are still alive. One has to check!
