
Five Years On (part 3)

So all the changes referenced in part 2, what do they mean? Are we at an inflection point?

The answer to the latter question is probably yes, but we could be at a number of inflection points: localised, vendor-specific ones as well as industry-wide ones. We probably won't know for a couple more years; only with hindsight will we be able to look back and see.

The most dramatic change that we have seen in the past five years is the coming of Flash-based storage devices; this is beginning to change our estates and what we thought was going to become the norm.

Five years ago; we were talking about general purpose, multi-tier arrays; automated tiering and provisioning, but all coming together in a single monolithic device. The multi-protocol filer model was going to become the dominant model; this was going to allow us to break down silos in the data centre and to simplify the estate.

Arrays were getting bigger, as were disks; I/O density was a real problem and generally the slowest part of any system was the back-end storage.

And then SSDs began to happen; I know that flash-based/memory-based arrays have been around for a long time, but they were very much specialist and a niche market. The arrival of the SSD, flash in a familiar form-factor at a slightly less eye-watering price, was a real change-bringer.

EMC and others scrambled to make use of this technology; treating SSDs as a faster disk tier in the existing arrays was the order of the day. Automated Storage Tiering was the must-have technology for many array manufacturers; few customers could afford to run all of their workloads on an entirely SSD-based infrastructure.

Yet if you talk to the early adopters of SSDs in these arrays; you will soon hear some horror stories; the legacy arrays simply were not architected to make best use of the SSDs in them. And arguably still aren’t; yes, they’ll run faster than your 15k spinning rust tier but you are not getting the full value from them.

I think that all the legacy array manufacturers knew that there were going to be bottlenecks and problems; the differing approaches that the vendors take almost point to this, as do the differing approaches taken within a single vendor…from using flash as a cache to utilising it simply as a faster disk, from using it as an extension of the read cache to using it as both a read and write cache.

Vendors claimed that they had the one true answer…none of them did.

This has enabled a bunch of start-ups to burgeon; where confusion reigns, there is opportunity for disruption. That, and the open-sourcing of ZFS, has created massive opportunity for smaller start-ups; the cost of entry into the market has dropped. Although if you examine many of the start-ups' offerings, they are really a familiar architecture, just aimed at a different price point and market from those of the larger storage vendors.

And we have seen a veritable snow-storm of cash, both in the form of VC money and in acquisitions, as the traditional vendors realise that they simply cannot innovate quickly enough within their own confines.

Whilst all this was going on, there has been an incredible rise in the amount of data being captured and stored. The more traditional architectures struggle; scale-up has its limits in many cases, and techniques from the HPC marketplace began to become mainstream. Scale-out architectures had begun to appear: first in the HPC market, then in the media space, and now, with the massive data demands of the traditional enterprises, we see them across the board.

Throw SSDs and scale-out together with virtualisation and you have created a perfect opportunity for everyone in the storage market to come up with new ways of fleecing (sorry, providing value to) their customers.

How do you get these newly siloed data-stores to work in a harmonious and easy-to-manage way? How do we meet the demands of businesses that are growing ever faster? Of course, we invent a new acronym, that's how…'SDS' or 'Software Defined Storage'.

Funnily enough, the whole SDS movement takes me right back to the beginning; many of my early blogs were focused on the terribleness of ECC as a tool to manage storage. Much of that was due to the frustration that it was both truly awful and trying to do too much.

It needed to be simpler; the administration tools were getting better, but the umbrella tools such as ECC just seemed to collapse under their own weight. Getting information out of them was hard work; EMC had teams devoted to writing custom reports for customers because it was so hard to get ECC to report anything useful. There was no real API, and it was easier to interrogate the database directly.

But even then it struck me that it should have been simple to code something which sat on top of the various arrays (from all vendors), queried them and pulled back useful information. Most of them already had fully featured CLIs; it should not have been beyond the wit of man to code a layer that sat above the CLIs, took simple operations such as 'allocate 10x10Gb LUNs to host x' and turned them into the appropriate array commands, no matter which array.
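To make that idea concrete, here's a rough sketch in Python of what such a layer might look like: one common operation, translated into whatever each array family's CLI expects. Every backend name, command and flag below is invented purely for illustration; none of it is real vendor syntax.

```python
"""Minimal sketch of a 'layer above the CLIs': one common operation,
translated per array family. All commands here are hypothetical."""

from abc import ABC, abstractmethod


class ArrayBackend(ABC):
    """One implementation per array family; each knows its own CLI dialect."""

    @abstractmethod
    def allocate_luns(self, host: str, count: int, size_gb: int) -> list:
        """Return the CLI commands that would carve out and map the LUNs."""


class VendorABackend(ArrayBackend):
    def allocate_luns(self, host, count, size_gb):
        # Hypothetical 'vendora-cli' dialect: create then map, two commands per LUN.
        cmds = []
        for i in range(count):
            cmds.append(f"vendora-cli create-lun --size {size_gb}GB --name {host}-lun{i}")
            cmds.append(f"vendora-cli map-lun --host {host} --lun {host}-lun{i}")
        return cmds


class VendorBBackend(ArrayBackend):
    def allocate_luns(self, host, count, size_gb):
        # A different hypothetical dialect: one command does both steps.
        return [
            f"vendorb provision --host {host} --gb {size_gb} --label {host}_{i}"
            for i in range(count)
        ]


def allocate(backend: ArrayBackend, host: str, count: int, size_gb: int):
    """The single operation an admin (or orchestration tool) actually calls."""
    for cmd in backend.allocate_luns(host, count, size_gb):
        print(cmd)  # in real life: run it, check the result, log it


# 'allocate 10x10Gb LUNs to host x', regardless of which array sits behind it
allocate(VendorABackend(), "x", count=10, size_gb=10)
allocate(VendorBBackend(), "x", count=10, size_gb=10)
```

The point isn't the code itself; it's that the operation the admin asks for stays the same while the vendor-specific translation is hidden behind it, which is roughly what SDS promises to standardise.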

I think this is the promise of SDS, and I hope the next five years will see it develop; that we see storage within a data centre becoming more standardised from a programmatic point of view.

I have hopes but I’m sure we’ll see many of the vendors trying to push their standard and we’ll probably still be in a world of storage silos and ponds…not a unified Sea of Storage.
