
Shared Storage Device

I think that some people are forgetting one very important aspect of storage arrays, be they block or file: their shared nature. We can get all excited about putting SSDs (Solid State Disks) close to the server, inside the server indeed, and Fusion IO certainly are; but it breaks a number of important paradigms. And then I see comments about DAS coming back: the revenge of the SysAdmins who want control of the storage, methinks.

At the moment, I can provide a highly available service by ‘twin-tailing’ my storage: when one server dies, the other server picks it up. Indeed, if I am feeling really brave, I can allow concurrent access to my shared storage. If I’ve just stuck all my storage in my server and my server dies, I’m going to take a significant hit on my RPO. My highly performant database/application may actually be the one that I can’t afford to have go down. Okay, so I can do this with DAS (cold sweat: twin-tailed SCSI), but what about that expensive SSD in the server holding my latest transactions; you know, that multi-million pound trade?
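
As a rough illustration of what the twin-tailed arrangement buys you, here is a minimal failover sketch in Python. The device path, peer hostname and service name are all hypothetical, and a real cluster manager (Pacemaker, for instance) also handles fencing, quorum and split-brain, which this deliberately omits:

```python
import subprocess
import time

# Hypothetical names for illustration only: a twin-tailed LUN that both
# servers can see, and a peer host whose death we watch for.
SHARED_LUN = "/dev/mapper/shared_lun"
MOUNT_POINT = "/data"
PEER = "serverA"

def peer_alive(host: str) -> bool:
    """Crude liveness check: a single ping to the peer server."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=subprocess.DEVNULL) == 0

def take_over() -> None:
    """Mount the shared LUN locally and start the service here."""
    subprocess.check_call(["mount", SHARED_LUN, MOUNT_POINT])
    subprocess.check_call(["systemctl", "start", "mydb.service"])

while True:
    if not peer_alive(PEER):
        take_over()   # the surviving server picks the storage up
        break
    time.sleep(5)
```

The point is that the data itself never moves: only the ownership of the LUN does. With the storage inside the dead server, there is nothing for the survivor to take over.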

Remote replication is going to be more challenging if all my storage is in my server. Sure, I can do it, but my server is going to do the extra I/O. In a virtualised environment, where I am driving my tin a lot harder, I have fewer CPU cycles to spare for this work (this could also be a potential problem with software initiators and iSCSI).
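
To see where those extra cycles go, here is a minimal sketch of host-based synchronous replication (hypothetical DR hostname, file path and wire format; no error handling or retry): every write is done locally and then shipped to the remote site before it is acknowledged, all on the primary server's CPU. An array does this work off-host.

```python
import socket

REMOTE = ("dr-site.example.com", 7788)   # hypothetical DR replica target

def replicated_write(local_file, offset: int, data: bytes,
                     sock: socket.socket) -> None:
    """Write locally, then ship the same block remotely; the server pays twice."""
    local_file.seek(offset)
    local_file.write(data)                       # first cost: local write
    header = offset.to_bytes(8, "big") + len(data).to_bytes(4, "big")
    sock.sendall(header + data)                  # second cost: network send
    if sock.recv(3) != b"ACK":                   # synchronous: wait for the ack
        raise IOError("remote replica did not acknowledge write")

with socket.create_connection(REMOTE) as sock, \
     open("/data/db.dat", "r+b") as f:
    replicated_write(f, 4096, b"\x00" * 512, sock)
```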

Backups: do I want my primary production server doing my backup? Or do I want to snap or clone, then present the copy to a backup server to do its stuff?
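
As a sketch of the snap-then-back-up approach, here is the host-side LVM version (hypothetical volume and path names; an array-based snapshot presented to a separate backup server works the same way in principle, with even less load on production):

```python
import subprocess

# Hypothetical LVM volume names for illustration.
VG, LV, SNAP = "vg0", "data", "data_backup_snap"

def snapshot_backup() -> None:
    """Take a point-in-time snapshot, back it up read-only, then drop it."""
    subprocess.check_call(["lvcreate", "--snapshot", "--size", "1G",
                           "--name", SNAP, f"/dev/{VG}/{LV}"])
    try:
        subprocess.check_call(["mount", "-o", "ro",
                               f"/dev/{VG}/{SNAP}", "/mnt/snap"])
        # The backup tool reads the frozen snapshot, not live production data.
        subprocess.check_call(["tar", "-czf", "/backups/data.tar.gz",
                               "-C", "/mnt/snap", "."])
    finally:
        subprocess.call(["umount", "/mnt/snap"])
        subprocess.check_call(["lvremove", "-f", f"/dev/{VG}/{SNAP}"])

snapshot_backup()
```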

Dedupe: deduplication is going to be big (or small), but do I want to be throwing primary production CPU cycles at it?
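
To make that CPU cost concrete, here is a toy sketch of fixed-block deduplication (hypothetical file path; real products use variable-length chunking and persistent indexes): every block gets hashed, and that hashing is exactly the work you would rather not steal from production.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size chunking, for simplicity

def dedupe_stats(path: str) -> tuple[int, int]:
    """Hash every block of a file; return (total blocks, unique blocks)."""
    seen: set[bytes] = set()
    total = 0
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            total += 1
            # One SHA-256 per block: this is the CPU cost of dedupe.
            seen.add(hashlib.sha256(block).digest())
    return total, len(seen)

total, unique = dedupe_stats("/data/db.dat")
print(f"{total} blocks, {unique} unique -> "
      f"ratio {total / max(unique, 1):.2f}:1")
```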

And that’s just a few thoughts slung down in a couple of minutes.

I think we all have to be a bit careful and look at things closely; it’s certainly too soon to start predicting the death of the enterprise array. Solid State Disks are going to be important, but don’t start planning to replace your enterprise shared storage devices anytime soon.

More shared infrastructure, not less; that’s what I’m seeing. Shared Storage Devices are here to stay in some form or another.


3 Comments

  1. Chuck Hollis says:

    Strange global channeling going on here — I had the exact same discussion with Chris Mellor at (now) TheReg over the same issue just yesterday.
    The technology in question was FusionI/O, who must have a corker of a PR agency.
    I said — wait a minute — the same reasons people put storage into arrays will be the reason they put something like EFD in an array — they want to share it, manage it, protect it, etc. like — well — storage!
    Now, if you position the same NAND stuff as “cheaper DRAM”, sure, it makes sense to put it in the server, where it belongs.
    Strange to read the same discussion here …
    — Chuck

  2. Martin G says:

    It may be that we’ve all been brainwashed and believe that arrays are the answer.
    It may be that some of us have actually done this in the real world, tho’, and understand that it’s not all about performance; it’s about availability and all the boring things like that.
    A million IOPS is very cool, but so would be someone getting up and saying our arrays never go down, they never break, etc.

  3. Barry Whyte says:

    A million IOPS in a shared, array-based virtualized environment; but then that was IBM taking FusionIO and putting it in the SAN, not attaching it directly to a host system. There is a place for both, including some local, very fast storage, but as you say, we have SANs for a reason.
