
Defined Storage…

Listening to the ‘Speaking In Tech’ podcast got me thinking a bit more about the software-defined meme and wondering whether it is a real thing or just a load of hype; so for the time being I’ve decided to treat it as a real thing, or at least something that might become a real thing…and in time, maybe a better real thing?

So Software Defined Storage?

The role of the storage array seems to be changing at present, or arguably simplifying: the storage array is becoming the place where you store the stuff you want to persist. That may sound silly, but what I mean is that the storage array is not where you are going to process transactions. Your transactional storage will sit as close to the compute as possible, or at least that appears to be the current direction of travel.

But there is also a certain amount of discussion and debate about storage quality of service and guaranteed performance, and how we implement them.

Bod’s Thoughts

This all comes down to services, discovery and a subscription model. Storage devices will have to publish their capabilities via some kind of API; applications will use this to find what services and capabilities an array has and then subscribe to them.

So a storage device may publish its available capacity, IOPS capability and latency, but it could also publish that it has the ability to do snapshots, replication, and thick and thin allocation. It could also publish a cost associated with each of these services.
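To make that concrete, here is a rough sketch of the sort of capability document an array might publish; the field names and structure are entirely hypothetical, my own invention rather than any existing API:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class StorageCapabilities:
    """Hypothetical capability document a storage device might publish."""
    device_id: str
    free_capacity_gb: int
    max_iops: int
    typical_latency_ms: float
    features: tuple            # e.g. snapshots, replication, thin/thick allocation
    cost_per_gb_month: float   # the published cost associated with the service

def publish(caps: StorageCapabilities) -> str:
    """Serialise the document, e.g. for the device to return from a discovery API."""
    return json.dumps(asdict(caps), indent=2)

array = StorageCapabilities(
    device_id="array-01",
    free_capacity_gb=50_000,
    max_iops=200_000,
    typical_latency_ms=1.5,
    features=("snapshots", "replication", "thin-allocation", "thick-allocation"),
    cost_per_gb_month=0.08,
)
print(publish(array))
```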

Applications, application developers and support teams might make decisions at this point about which services they subscribe to; perhaps a fixed capacity and IOPS, perhaps taking the array-based snapshots but doing the replication at the application layer.
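Continuing the same made-up sketch, the subscription side might be little more than the application stating what it wants and matching that against what the arrays have published — fixed capacity and IOPS plus array snapshots, with replication deliberately left out because it is handled at the application layer:

```python
# A hypothetical catalogue of capability documents returned by a discovery API.
catalogue = [
    {"device_id": "array-01", "free_capacity_gb": 50_000, "max_iops": 200_000,
     "features": {"snapshots", "replication", "thin-allocation"}},
    {"device_id": "array-02", "free_capacity_gb": 5_000, "max_iops": 30_000,
     "features": {"thick-allocation"}},
]

# The application's subscription request: fixed capacity and IOPS, array-based
# snapshots; replication is intentionally not requested from the array because
# it will be done at the application layer.
request = {
    "capacity_gb": 2_000,
    "iops": 50_000,
    "required_features": {"snapshots"},
}

def find_candidates(catalogue, request):
    """Return the devices whose published capabilities satisfy the request."""
    return [
        dev for dev in catalogue
        if dev["free_capacity_gb"] >= request["capacity_gb"]
        and dev["max_iops"] >= request["iops"]
        and request["required_features"] <= dev["features"]
    ]

print([d["device_id"] for d in find_candidates(catalogue, request)])  # ['array-01']
```

The matching logic is trivial here; the point is that the decision about which services to take from the array sits with the application team rather than being baked into the array.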

Applications will have a lot more control over what storage they have and use; they will make decisions about whether certain data is pinned in local SSD or never gets anywhere near the local SSD, and whether it needs sequential or random access. An application might have its own RTO and RPO parameters, making decisions about which transactions can be lost and which need to be committed now.
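Again purely as an illustration of the idea (nothing here is a real interface), that per-application control might boil down to a small policy object carrying placement hints and RPO/RTO targets, which in turn drives decisions such as whether writes are committed synchronously:

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Illustrative per-dataset policy an application might declare."""
    pin_to_local_ssd: bool     # keep hot data on local SSD, or keep it away entirely
    access_pattern: str        # "sequential" or "random"
    rpo_seconds: int           # how much data loss is tolerable
    rto_seconds: int           # how quickly service must be restored

def commit_mode(policy: StoragePolicy) -> str:
    """Decide how transactions are committed, based on the declared RPO.

    A zero RPO means no transaction may be lost, so writes are committed
    synchronously; otherwise asynchronous commit within the RPO window will do.
    """
    return "synchronous" if policy.rpo_seconds == 0 else "asynchronous"

ledger = StoragePolicy(pin_to_local_ssd=True, access_pattern="random",
                       rpo_seconds=0, rto_seconds=300)
logs = StoragePolicy(pin_to_local_ssd=False, access_pattern="sequential",
                     rpo_seconds=900, rto_seconds=3600)

print(commit_mode(ledger))  # synchronous
print(commit_mode(logs))    # asynchronous
```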

And as this happens, the data centre becomes something which is managed as a whole, rather than as a set of siloed components.

I’ve probably not explained my thinking as well as I could have, but it’s a topic that I’m going to keep coming back to over the coming months.

4 Comments

  1. Martin, I fully subscribe to your general vision here. I have done a lot of thinking and concept tinkering myself over the last couple years re potential first usable steps along this path; I always come back to a three-layered approach: a metadata layer describing the capabilities of the storage infrastructure, another describing the abstracted storage requirements of the stored objects, and a mapping “patch panel” layer in between both that allows a storage admin (whose role will need to evolve to resemble that of a db admin, thank you @ianhf) to connect one to the other given the then-current capabilities of the data center. If you are thinking of persisting data objects for 100 years (or even forever, like some of our customers), you need a fairly flexible way to express characteristics of devices yet to be invented. Also, the storage admin may be either human or AI – the metadata should be rich enough to fully support that model.

  2. This is the future of Software Defined Storage – or Enterprise Storage as a Service, which needs to support differentiated storage.

    We’re looking at the libstoragemgmt APIs as a possible path to querying the storage array (http://fedoraproject.org/wiki/Features/StorageManagement)

    For more on this subject, see:
    http://www.tonian.com/wordpress/?tag=softwaredefinedstorage

  3. Lee Johns says:

    The concept is right but there will be many iterations depending on company sophistication and size. I wrote a lot about this when I was at HP defining Converged Infrastructure. It was clear then that storage was really a software application on top of industry-standard hardware. Note the acquisitions of LeftHand and IBRIX.

    There is an interesting intersection point at the moment as people realize that Converged Infrastructure, Private Clouds and Software Defined Infrastructure (including Storage) are really all one and the same thing. You want to build on standardized hardware, define the use cases with software and coalesce it all into a service that you can automate.

    It does not mean everything will have a common shape, though. Far from it; it will liberate innovation at a faster pace than ever.

    Old architectures beware!!!!!!
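The three-layer model described in the first comment — capability metadata for the infrastructure, requirement metadata for the stored objects, and a ‘patch panel’ mapping layer in between — could be sketched roughly like this; all the names and thresholds below are invented purely for illustration:

```python
# Layer 1: metadata describing what the storage infrastructure can currently do.
capabilities = {
    "tier-flash":   {"latency_ms": 1,  "durability_years": 10},
    "tier-archive": {"latency_ms": 50, "durability_years": 100},
}

# Layer 2: abstracted requirements of the stored objects.
requirements = {
    "trading-db":    {"max_latency_ms": 2,   "retain_years": 7},
    "board-minutes": {"max_latency_ms": 100, "retain_years": 100},
}

# Layer 3: the "patch panel" -- a mapping an admin (human or automated) maintains,
# connecting each object's requirements to a capability that satisfies them.
def patch_panel(requirements, capabilities):
    mapping = {}
    for obj, req in requirements.items():
        for tier, cap in capabilities.items():
            if cap["latency_ms"] <= req["max_latency_ms"] and \
               cap["durability_years"] >= req["retain_years"]:
                mapping[obj] = tier
                break
        else:
            mapping[obj] = None   # no current tier fits; a capability yet to be invented
    return mapping

print(patch_panel(requirements, capabilities))
# {'trading-db': 'tier-flash', 'board-minutes': 'tier-archive'}
```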
