Storagebod

V(per)PLEXed?

So we have VPLEX and, despite some scratching of heads as to what it is, it is really quite simple:

'storage access is further decoupled from storage physicality'

And this really is nothing especially new; decoupling storage access from storage physicality has been going on for some time. Servers have been getting further and further away from their physical disks, and we have been adding abstraction layers to storage access all along. The big question is whether we need another one.

Actually, I think the additional layer is useful; the ability to present 'real LUNs' from 'storage arrays' as a single 'plexed LUN', keeping those LUNs in sync, could be genuinely useful in a number of use-cases. I can see it simplifying and transforming DR, for example; I can see it making migration a lot easier; and I can see EMC heavily leveraging their VMware investments. I've said it before and I'll say it again: ever since EMC spun VMware off, they have acted more in concert than when VMware was wholly owned.
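To make the 'plexed LUN' idea concrete, here is a toy sketch: one virtual LUN fronting two real back-end LUNs, with writes mirrored synchronously to both legs so they stay in sync and reads can be served from either. All names here are hypothetical illustrations of the concept, not EMC's implementation.

```python
class BackendLUN:
    """Stand-in for a real LUN on a physical array."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # sparse block store: block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block, b'\x00')


class PlexedLUN:
    """Virtual LUN that keeps two back-end legs in sync."""
    def __init__(self, leg_a, leg_b):
        self.legs = [leg_a, leg_b]

    def write(self, block, data):
        # Synchronous mirror: the write lands on both legs before it is
        # acknowledged, so either leg alone holds a consistent copy --
        # which is where the DR and migration wins come from.
        for leg in self.legs:
            leg.write(block, data)

    def read(self, block, preferred=0):
        # Serve reads from whichever leg is closest or healthiest.
        return self.legs[preferred].read(block)


# Migration becomes simple: mirror onto the new array, then drop the old leg.
plex = PlexedLUN(BackendLUN('old-array'), BackendLUN('new-array'))
plex.write(42, b'payload')
assert plex.read(42, preferred=0) == plex.read(42, preferred=1) == b'payload'
```

The point of the sketch is only that the host sees a single device while the physical copies are kept identical underneath it.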

Is it useful enough to warrant EMC's claim to have invented a new class of storage device?

I think I'll let the vendor bloggers rip themselves to shreds over that.  

I also think it is interesting that they have at long last decided to support third-party arrays pretty whole-heartedly; if anything, this makes it an interesting announcement for EMC. Will they sell any? Well, it's going to be an uncomfortable experience for your run-of-the-mill account manager when faced with a Storage Manager who says, 'Well, you've just re-invented SVC/USP-V etc… You told me they were rubbish, so why is yours any good?'

I think the heavy-hitters in EMC are going to be very busy supporting their account-teams.


4 Comments

  1. Barry Whyte says:

    Need to read up a bit more, but the implication in Mr Burke’s post that you still need to use the back-end array services (copies, etc.) means it’s pass-through virtualization, done by simply intercepting and forking I/O…
    We’ve seen that before… Invista… and well…
    If it does cache and use the YY (YottaYotta) IP for caching, then you can’t use underlying storage services… mixed messages.
    Mr Mellor also suggests they are using SVC to support back-end storage… hmmm… not without our qualification testing they won’t, as we won’t recognise the host type nor support it until it has been through our stringent interop team – like all supported systems with SVC…
    As I say, need more details…

  2. Han Solo says:

    So let me see if I understand this?
    a) They use SVC for non-EMC hardware support because it’s awesome? hahaha!
    b) It STILL DEPENDS on the underlying storage array for snapshots, replications etc? HAHAHA!
    c) VPLEX Local starts from $77,000 while the subscription license starts from $26,000.
    So… on TOP OF THAT I see I have to buy SRDF? TIMEFINDER? FAST? licenses for my array? HAHAHA!
    What a piece of junk.
    Surely this is worthy of the nickname of “IN-VEST-A more”.

  3. The VPLEX support matrix includes just about every block storage array sold since 2002. And if your array/host/HBA/device driver hasn’t yet been tested – let us know, and we’ll get it onto the list, usually in no more than a couple of weeks.
    SVC support was included because customers asked us to provide them with a quick and simple way to get OFF of their SVCs so that they could take advantage of VPLEX features.
    IBM support isn’t going to be necessary, because the SVCs we link into won’t be around that long 😉
    Supporting back-end storage services is a strategic decision, intended to leverage and extend the value of the arrays, rather than the SVC approach of relegating them to RBOD.
    And of course, VPLEX will add its own native services so that RBOD can be utilized, but the strategic focus is on EXTENDING the value of storage, not reducing it. VMware/HyperV/OVM will soon deliver API integration that allows the hypervisor to make requests of the storage (clone, snap, replicate, unmap unused blocks, etc.), and storage vendors will inevitably support these APIs lest they lose access to a significant subset of the market.
    The VPLEX strategy is not only to support these APIs, but to intelligently leverage these same APIs to ask the array(s) to perform tasks on its behalf. Want to clone a template without consuming disk storage? Ask the Symm to make a Snap of the template. Want to mirror a LUN to both a CLARiiON and a USP-V? VPLEX Local does the mirror. Want to stretch a LUN from New York to New Jersey? VPLEX Metro provides what no other vendor can do today – active/active access in multiple sites.
    BarryW and I have discussed frequently how SVC limits the performance of CLARiiON arrays (customers repeatedly complain that their CX runs slower behind an SVC than it does directly connected to the host). And when challenged by the customer over this, IBM’s typical response is to assure them there is no such issue with an IBM DSx000, which they will happily sell at a steep discount.
    Customers grow weary of such tactics. And VPLEX gives them an alternative that doesn’t play such silly games – and in fact ADDS value to customers’ sizable investment in storage.
    Put simply, with SVC in front of an intelligent array, 1+1=1.
    With VPLEX, 1+1=3.

  4. VPLEX pricing was quoted at list.
    While I can’t discuss specific discount percentages, I can assure you that the ASP of an entry level VPLEX Local (single engine/two director) is below the ASP of an entry level SVC (single two-node IO Group).
    Let the fun begin.
