
Virtually Pragmatic?

So EMC have joined the storage virtualisation party; although they are calling it federation, it is what IBM, HDS and NetApp, amongst others, call storage virtualisation. So why do this now, after warning of the dire consequences of doing so in the past?

There are probably a number of reasons. There have certainly been commercial pressures: I know of a number of RFPs from large corporates which have mandated this capability; money talks, and in an increasingly competitive market, EMC probably have to tick this feature box.

The speed of change in the spinning-rust market appears to be slowing; certainly the incessant increase in the size of hard disks is slowing, which means there may be less pressure to technically refresh the spindles, and a decoupling of the disk from the controller makes sense. EMC can protect their regular upgrade revenues at the controller level and forgo some of the spinning-rust revenues; they can more than make up for this out of maintenance revenues on the software.

But I wonder if there is a more pressing technological reason and trend that makes this a good time to do it: the rapid progress of flash into the data-centre, and how EMC can work to accelerate its adoption. It is conceivable that EMC could be looking at shipping all-flash arrays which allow a customer to continue to enjoy their existing array infrastructure and realise the investment that they have made. It is also conceivable that EMC could use a VMAX-like appliance to integrate their flash-in-server more simply with a third-party infrastructure.

I know nothing for sure, but the scale of this about-turn by EMC should not be underestimated; Barry Burke has railed against this approach to storage virtualisation for such a long time that there must be some solid reasoning to justify it in his mind.

Pragmatism or futurism, a bit of both I suspect.


  1. PaulP says:

    That’s all well and good to promote SSD-only disk systems, but as with all traditional storage systems, inserting SSDs where spinning disks normally live is a very poor use of SSD technology. Just take a look at the latency figures of these systems (measured in milliseconds) and compare them with what SSDs are capable of (microseconds); or choose IOPS, same issue. The spinning-disk limits placed on SSDs are just too great today; connecting these devices via SAS back-ends (or worse, FC) is a waste of customers’ money.

    SSDs require vendors to redesign the whole storage system. Until that happens, there is not too much to be excited about; do the math.
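    To make the “do the math” point concrete, here is a minimal sketch of the back-of-envelope arithmetic. The latency figures are illustrative orders of magnitude only (not measurements of any specific array), but they show why device latency dominates single-threaded throughput.

    ```python
    # Illustrative only: rough queue-depth-1 IOPS implied by device latency.
    # The latency values below are assumed example magnitudes, not vendor data.

    def iops_at_qd1(latency_seconds: float) -> float:
        """With one outstanding I/O, throughput is the reciprocal of latency."""
        return 1.0 / latency_seconds

    spinning_disk = iops_at_qd1(5e-3)    # ~5 ms seek + rotate -> ~200 IOPS
    array_ssd     = iops_at_qd1(0.5e-3)  # ~0.5 ms SSD behind an array controller
    raw_ssd       = iops_at_qd1(100e-6)  # ~100 us on a raw flash read path

    print(f"spinning disk:    {spinning_disk:,.0f} IOPS")
    print(f"SSD behind array: {array_ssd:,.0f} IOPS")
    print(f"raw SSD path:     {raw_ssd:,.0f} IOPS")
    ```

    On these assumed figures, the array controller path leaves a factor of several in single-stream performance on the table, which is the gap PaulP is pointing at.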

  2. PaulP – you are promoting a fallacy. Perhaps you have been biased by commercial SSD performance statistics, but here are some real numbers for your math exercise:

    Both VMAX and VNX consistently deliver sub-millisecond (0.1–0.5 ms) response time from (well-designed) enterprise SLC and/or eMLC SSDs. And for reference, response times for cache hits (DRAM) get down to 0.065 ms.

  3. Martin –

    Thanks for granting me the benefit of the doubt. Clearly FTS “checks the box” while addressing the concerns I have raised over UVM over the past 8 years – we’ve added data integrity validation AND delivered the feature with a lower impact on response time than Hitachi has managed to squeeze out.

    But you know me/us well – there’s more to FTS than simply neutralizing another set of competitive features. There’s definitely more to come (I can’t say what just yet).

    Also note that VPLEX is doing quite well against SVC, delivering high-availability, network-based virtualization that can also deliver active/active concurrent data access over distance. Neither Hitachi UVM nor SVC can deliver comparable active/active access that remains HA after a site failure.
