
In With The New

As vendors race to be better and faster and to differentiate themselves in an already busy marketplace, the real needs of the storage teams, and of the storage consumer, can be left unmet. At times it is as if the various vendors are building dragsters, calling them family saloons and hoping that nobody notices. The problems that I blogged about when I started out blogging still seem mostly unsolved.

Management

Storage management at scale is still problematic; it is still extremely hard to find a toolset that will allow a busy team to assess health, performance, supportability and capacity at a glance. Too many teams are still using spreadsheets and manually maintained records to manage their storage.

Tools which allow end-to-end management of an infrastructure from rust to silicon and all parts in-between still don’t exist; or if they do, they come with large price-tags which invariably have no real ROI or realistic implementation strategy.

As we build more silos in the storage infrastructure, getting a view of the whole estate is harder now than ever. Multi-vendor management tools are generally lacking in capability, with many vendors using subtle changes to inflict damage on the competing management tools.

Mobility

Data mobility across tiers where those tiers are spread across multiple vendors is hard; applications are generally not architected to encapsulate this functionality in their non-functional specifications. And many vendors don’t want you to be able to move data between their devices and competitors’, for obvious reasons.

But surely the most blinkered flash start-up must realise that this needs to be addressed; it is going to be an unusual company who will put all of their data onto flash.

Of course this is not just a problem for the start-ups but it could be a major barrier for adoption and is one of the hardest hurdles to overcome.

Scaling

Although we have scale-out and scale-up solutions; scaling is a problem. Yes, we can scale to what appears to be almost limitless size these days but the process of scaling brings problems. Adding additional capacity is relatively simple; rebalancing performance to effectively use that capacity is not so easy. If you don’t rebalance, you risk hotspots and even under-utilisation.

It requires careful planning and timing even with tools; it means understanding the underlying performance characteristics and requirements of your applications. And with some of the newer architectures that store metadata and de-dupe, this appears to be a challenge for vendors. Ask vendors why they are limited to a certain number of nodes; there will be sheepish shuffling of feet, and alternative methods of federating a number of arrays into one logical entity will quickly come into play.
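
The hotspot effect is easy to show with a toy model. The placement policy below (fill the emptiest node) is a deliberate simplification of my own, not any vendor’s actual algorithm, but the imbalance it produces is the same one you see in practice:

```python
# Toy model of why adding capacity without rebalancing creates hotspots.
# Assumes a naive "fill the emptiest node" placement policy; real arrays
# are far more sophisticated, but the imbalance effect is the same.

def place_new_extents(nodes, count):
    """Place each new extent on the node with the most free space."""
    for _ in range(count):
        emptiest = min(nodes, key=lambda n: n["used"])
        emptiest["used"] += 1
        emptiest["new_io"] += 1  # new extents attract the new writes

def rebalance(nodes):
    """Spread existing extents evenly across all nodes."""
    share = sum(n["used"] for n in nodes) // len(nodes)
    for n in nodes:
        n["used"] = share

def fresh_cluster():
    """Four full nodes plus one freshly added, empty node."""
    return [{"used": 1000, "new_io": 0} for _ in range(4)] + \
           [{"used": 0, "new_io": 0}]

a = fresh_cluster()            # no rebalance after adding node 5
place_new_extents(a, 500)
print([n["new_io"] for n in a])   # -> [0, 0, 0, 0, 500]: a hotspot

b = fresh_cluster()
rebalance(b)                   # rebalance first, then take new writes
place_new_extents(b, 500)
print([n["new_io"] for n in b])   # -> [100, 100, 100, 100, 100]
```

Without the rebalance, every new write lands on the new node; with it, the new load spreads evenly, which is exactly why the rebalance step cannot be skipped.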

And then mobility between arrays becomes an issue to be addressed.

Deterministic Performance

As arrays get larger, more workloads get consolidated onto a single array; without the ability to isolate workloads or guarantee performance, the risk of bad and noisy neighbours increases. Few vendors have yet grasped the nettle of QoS, and fewer developers still actually understand what their performance characteristics and requirements are.

Data Growth

Despite all efforts to curtail this, we store ever larger amounts of data. We need an industry-wide initiative to look at how we better curate and manage data. And yet if we solve the problems above, the growth issue will simply get worse…as we reduce the friction and the management overhead, we’ll simply consume more and more.

Perhaps the vendors should be concentrating on making it harder and even more expensive to store data. It might be the only way to slow down the inexorable demand for ever more storage. Still, that’s not really in their interest.

All of the above is in their interest…which makes you wonder why they are still problems.

Xpect More…

So we finally have the GA of XtremIO; not that the GA is much different from the DA in many ways, and it is still going to be pretty hard to get hold of an XtremIO if you want one. And that is of course a big *if*; do you really need an All-Flash-Array? Can you use it? Is it just going to be a sledgehammer to crack a performance nut?

Firstly, it has to be pointed out that presently, even under GA, the XtremIO array has some pretty horrible official caveats; no guaranteed non-disruptive upgrade, a lack of replication services and the like mean that this is nowhere near ready to replace the normal use case for an Enterprise array.

Add in today’s fairly limited scalability and it is obvious that this is not a VMAX replacement today. So will it be in future?

At the moment, that is pretty unclear; we’re due a tick in the tick-tock of VMAX releases; I’d say EMCWorld 2014 is going to be all about the next generation of the VMAX.

But what about the XtremIO in general?

Is it a good idea, or even a good implementation of a good idea? It’s odd because looking at the architecture…it feels like what would have happened if XIV had built an all-flash-array as opposed to the spinning-rust array that they did. Much of what we are coming to expect from a modern array from an architectural point of view is here: balanced I/O, no hot spots, no tuning, no tiering and minimal management. Yet without the aforementioned Enterprise features such as replication, it all feels…well, undercooked.

And there is still the question as to where you are going to use an All-Flash-Array; if I never see another presentation from a flash vendor that mentions VDI again, it’ll be too soon! Let’s just use our AFA arrays as super-fast boot-drives!

So where else are you going to use it? Probably in the same place that you are already using hybrid arrays…to accelerate poorly performing applications. But do you need to keep all your data on AFA, and can you tier easily between multiple array types? You see, you need a capacity tier and you need data mobility…data has to flow. The hybrids have a distinct advantage here. So what is the answer for XtremIO?

If you manage the data silos at the application layer, you might well find that you begin to lose the value of an all-flash-array; you’ll be moving data from one array to another and doing more I/Os at the host level…

I’m intrigued to see how EMC and others begin to solve this problem because we are not looking at an All-Flash data-storage environment for most people for some time.

Could we see clustering of XtremIO with other arrays? Or are we doomed to manage storage as silos forever?

I don’t see XtremIO as replacing the Enterprise storage arrays anytime soon; I think the traditional EMC sales-drone has plenty of refresh opportunity for some time.

Bearing the Standard…

At SNW Europe, I had a chance to sit down with David Dale of SNIA; we talked about various things: how SNIA becomes more relevant, how it becomes a more globally focused organisation, and the IT industry in general. And we had a chat about SNIA’s role in the development of standards, and how companies who were never interested in standards are suddenly becoming interested in them.

It appears that in the world of Software-Defined everything, vendors are beginning to realise that they need standards to make this all work. Although it is possible to write plug-ins for every vendor’s bit of kit, there is a dawning realisation amongst the more realistic that this is a bit of a fool’s errand.

So after years of dissing things like SMI-S; a certain large vendor who is trying to make a large play in the software-defined storage world is coming to the table. A company who in the past have been especially standards non-friendly…

Of course, they are busy trying to work out how to make their software-defined-storage API the de-facto standard but it is one of the more amusing things to see SMI-S dotted all over presentations from EMC.

But I think it is a good thing and important for us customers; standards to control and manage our storage are very important, and we need to do more to demand that our vendors start to play ball. Storage infrastructure is becoming increasingly complex due to the new silos that are springing up in our data-centres.

It may well be too early to predict the death of the Enterprise-Do-Everything-Array, but a vigorously supported management standard could hasten its demise; if I can manage my silos simply, automating provisioning, billing and all my admin tasks…I can start to purchase best of breed without seriously overloading my admin team.

This does somewhat beg the question: why are EMC playing in this space? Perhaps because they have more silos than any one company at the moment and they need a way of selling them all to the same customer at the same time…

EMC, the standards’ standard bearer…oh well, stranger things have happened…

Looking for the Exit

Sitting on the exhibition floor at Powering the Cloud, you sometimes wonder how many of these companies will still be here next year; the sheer volume of flash start-ups is quite frightening. I can see Pure, Tintri, Nimble, Fusion-IO and Violin all from where I am sitting. And really there is little to choose between them: all trying to do the same thing…all trying to be the next big thing, the next NetApp. And do any of them stand any chance of doing this?

There is little uniqueness about any of them; they’ll all claim technical superiority over each other, but for most this is a marketing war driven by a desire to exit. Technically the differentiation between them is slight; scratch many of them and you will find a relatively standard dual-head array running Linux with some kind of fork of ZFS. Tuned for flash? Well, NetApp have spent years trying to tune WAFL for flash and have pretty much given up; hence the purchase of Engenio and the new FlashRay products.

This is not to say that the new start-ups are bringing nothing new to the party, but we have a multitude of products, from pure flash plays to hybrid flash to flash in the server; lots of product overlap and little differentiation. Choice is good though?

So what does this mean to the end-user delegates walking the floor; well, there has to be a certain amount of wariness about the whole thing. Who do you spend your money with, who do you make a strategic investment in and who is going to be around next year?

The biggest stands here are those of Oracle, HP, Dell and NetApp: all companies who might be in the market for such an acquisition. I guess the question in the back of everyone’s mind is who they might acquire, if anyone. And will acquisition even be good for the customers of the acquired company? How many products have we seen acquired and pushed into dusty corners?

End-users are bad enough when it comes to shelfware, but the big technology companies are even worse; they acquire whole companies and turn them into shelfware.

So we need to be looking at more standards and at deploying storage in ways that are more standardised, removing the risk from taking risks.

And therein lies a problem for many start-ups: how do you disrupt without disrupting? How do you lock in without actually locking in? Perhaps some help on that question might come from an unusual place, but more on that another time.

But I’ve pretty much come to the conclusion that most of them don’t really care. It’s all about the exit; some might make it to IPO but even then most will want to be acquired. I’m not seeing a huge amount of evidence of a desire to build a sustainable and stable business.

It is as if the storage industry has lost its soul, or at least sold it.

Tape – the Death Watch..

Watching the Spectralogic announcements from afar and getting involved in a conversation about tape on Twitter has really brought home the ambivalent relationship I have with tape; it is a huge part of my professional life, but if it could be removed from my environment, I’d be more than happy.

Ragging on the tape vendors does at times feel like kicking a kitten, but ultimately tape sucks as a medium; its fundamental problem is that it is a sequential medium in a random world.

If you are happy to write your data away and only ever access it in truly predictable fashions, it is potentially fantastic; unfortunately much of business is not like this. People talk about tape as being the best possible medium for cold storage, and that is true as long as you never want to thaw large quantities quickly. If you only ever want to thaw a small amount, and in a relatively predictable manner, you’ll be fine with tape. Well, in the short term anyway.

And getting IT to look at a horizon which is more than one refresh generation away is extremely tough.

Of course, replacing tape with disk is not yet economic over the short-term views that we generally take; the cost of disk is still high when compared to tape, disk’s environmental footprint is still pretty poor when compared to tape, and from a sheer density point of view tape is still a long way ahead…even if we start to factor in upcoming technologies such as shingled disks.

So for long-term archives, disk will continue to struggle against tape…but does that mean we are doomed to live with tape for years to come? Well, SSDs are going to take 5-7 years to hit price parity with disk, which means that they are not going to hit parity with tape for some time.
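
You can sketch the parity timeline with nothing more than compound price declines. Every price and decline rate below is an illustrative assumption of mine, not market data, but the shape of the argument holds for any plausible numbers:

```python
# Back-of-envelope: years for SSD £/GB to reach parity with disk and tape.
# All starting prices and annual decline rates are illustrative assumptions.

def years_to_parity(price_a, decline_a, price_b, decline_b, max_years=30):
    """Years until medium A (dearer, but falling faster) matches medium B."""
    years = 0
    while price_a > price_b and years < max_years:
        price_a *= (1 - decline_a)  # compound annual price decline
        price_b *= (1 - decline_b)
        years += 1
    return years

# Hypothetical £/GB: SSD at £0.30 falling 40%/yr, disk £0.04 at 15%/yr,
# tape £0.01 at 5%/yr.
print(years_to_parity(0.30, 0.40, 0.04, 0.15))  # -> 6 (SSD vs disk)
print(years_to_parity(0.30, 0.40, 0.01, 0.05))  # -> 8 (SSD vs tape)
```

With those assumed figures, SSD catches disk in about six years and tape a couple of years later, which is roughly the gap described above; change the inputs and the dates move, but tape parity always lags disk parity.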

Yet I think the logical long-term replacement for tape is SSDs in some form or another; I fully expect the Facebooks and Googles of this world to start to look at ways of building mass archives on SSD in an economic fashion. They have massive data requirements, and as they grow to maturity as businesses, the age of that data is increasing…their users do very little in the way of curation, so that data is going to grow forever, and it probably has fairly random access patterns.

You don’t know when someone is going to start going through someone’s pictures, videos and timelines, so that cold data could warm pretty quickly. Having to recall it from tape is not going to be fun; the contention issues for starters, and unless you come up with ways of colocating all of an individual’s data on a single tape, a simple trawl could send a tape-robot into meltdown. Now perhaps you could do some big-data analytics and start recalling data based on timelines; employ a bunch of actuaries and recall data based on actuarial analysis.

The various news organisations already do this to a certain extent and have obits prepared for most major world figures. But this would be at another scale entirely.

So funnily enough…tape, the medium that wouldn’t die, could be kiboshed by death. And if the hyper-scale companies can come up with an economic model which replaces tape…I’ll raise a glass to good times and mourn it little.

And with that cheerful note…I’ll close.

Die Lun DIE!

I know people think that storagebods are often backward thinking and hidebound by tradition and there is some truth in that. But the reality is that we can’t afford to carry on like this; demands are such that we should grasp anything which makes our lives easier.

However we need some help both with tools and education; in fact we could do with some radical thinking as well; some moves which allow us to break with the past. In fact what I am going to suggest almost negates my previous blog entry here but not entirely.

The LUN must die, die, die…I cannot tell you how much I loathe the LUN as an abstraction now; the continued existence of the LUN offends mine eyes! Why?

Because it allows people to carry on asking for stupid things like multiple 9-gigabyte LUNs for databases and the like. When we are dealing with terabyte+ databases, this is plain stupid. It also encourages people to believe that they can do a better job of laying out an array than an automated process.

We need to move to a more service-oriented provisioning model, where we provision capacity and ask for an IOPS and latency profile appropriate to the service provision. Let the computers work it all out.
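
A minimal sketch of what such a request might look like; the service catalogue, tier names and per-TB figures below are all hypothetical, invented purely to show the shape of "ask for a profile, let the system place it":

```python
# Sketch of service-oriented provisioning: the consumer states a
# capacity/performance profile; placement is the system's problem.
# Tier names and catalogue figures are hypothetical.

from dataclasses import dataclass

@dataclass
class StorageRequest:
    capacity_tb: float
    iops: int          # sustained IOPS the service must support
    latency_ms: float  # acceptable response time

# Hypothetical catalogue: tier -> (IOPS available per TB, latency floor ms)
CATALOGUE = {
    "flash":   (10000, 1.0),
    "hybrid":  (2000, 5.0),
    "archive": (200, 20.0),
}

def choose_tier(req: StorageRequest) -> str:
    """Pick the cheapest tier that satisfies the profile
    (assumes archive < hybrid < flash in cost)."""
    for tier in ("archive", "hybrid", "flash"):
        iops_per_tb, floor_ms = CATALOGUE[tier]
        if req.iops <= iops_per_tb * req.capacity_tb and req.latency_ms >= floor_ms:
            return tier
    raise ValueError("no tier satisfies this profile")

db = StorageRequest(capacity_tb=2.0, iops=15000, latency_ms=2.0)
print(choose_tier(db))  # -> flash
```

Note that the requester never mentions a LUN, a RAID group or a spindle; they state what the service needs and the placement logic does the rest.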

This brings significant ease of management and removes what has become a fairly pointless abstraction from the world. It makes it easier to configure replication, data-protection, snaps, clones and the like. It means that growing an environment becomes simpler as well.

It would make the block world feel closer to the file world. Actually, it may even allow us to wrap a workload into something which feels like an object; a super-big object, but still an object.

We move to a world where applications can request space programmatically if required.

As we start to move away from an infrastructure which is dominated by the traditional RAID architectures; this makes more sense than the current LUN abstraction.

If I already had one of these forward-looking architectures, say XIV or 3PAR, I’d look at ways of baking this in now…this should be relatively easy for them, certainly a lot easier than for some of the more legacy architectures out there. But even long-in-the-tooth and tired architectures such as VMAX should be able to be provisioned like that.

And then what we need is vendors to push this as the standard for provisioning…yes, you can still do it the old way but it is slower and may well be less performant.

Once you’ve done that…perhaps we can have a serious look at Target Driven Zoning; if you want to move to a Software Defined Data-Centre, enhancements to existing protocols like this are absolutely key.

So I wouldn’t start from here…

We’ve had a few announcements from vendors, and various roadmaps have been put past me recently; if I had one comment, it would be that if I were designing an array or a storage product, I probably wouldn’t start from where most of them are…vendors old and new.

There appears to be a real fixation on the past; lots of architectures which are simply re-inventing what has gone before. And although I understand why; I don’t understand why.

Let’s take the legacy vendors; you can’t change things because you will break everything; you will break the existing customer scripts and legacy automation; you break processes and understanding. So, we can’t build a new architecture because it breaks everything.

I get the argument but I don’t necessarily agree with the result.

And then we have the new kids on the block who continue to build yesterday’s architecture today; so we’ll build something based on a dual-head filer, because everyone knows how to do that and they understand the architecture.

Yet again I get the argument but I really don’t agree with the result now.

I’m going to take the second first: if I wanted to buy a dual-head filer, I’d probably buy it from the leading pack. Certainly if I’m a big storage customer, it is very hard for one of the new vendors to get the price down to something attractive.

Now, you may argue that your new kit is so much better than the legacy vendors’ that it is worth the extra, but you will almost certainly break my automation and existing processes. Is it really worth that level of disruption?

The first situation with the legacy vendors is more interesting; can I take the new product and make it feel like the old stuff from a management point of view? If storage is truly software and the management layer is certainly software; I don’t see that it should be beyond the wit of developers to make your new architecture feel like the old stuff.

Okay, you might strip out some of the old legacy constructs; you might even fake them…so if a script creates a LUN utilising a legacy construct, you just fake the responses.

There are some more interesting issues around performance and monitoring but as a whole, the industry is so very poor at it; breaking this is not such a major issue.

Capacity planning and management; well, how many people really do this? It is probably the really big customers who do, but they might well be the ones who will look at leveraging new technology without a translation layer.

So if I was a vendor; I would be looking at ways to make my storage ‘plug compatible’ with what has gone before but under the covers, I’d be looking for ways to do it a whole lot better and I wouldn’t be afraid to upset some of my legacy engineering teams. I’d build a platform that I could stick personalities over.

And it’s not just about a common look and feel for the GUI; it has to be for the CLI and the APIs as well.

Make the change easy…reduce the friction…

Five Years On (part 3)

So all the changes referenced in part 2, what do they mean? Are we are at an inflection point?

The answer to the latter question is probably yes, but we could be at a number of inflection points: both localised vendor inflection points and industry-wide ones. We’ll probably not know for a couple more years; then, with hindsight, we can look back and see.

The most dramatic change that we have seen in the past five years is the coming of Flash-based storage devices; this is beginning to change our estates and what we thought was going to become the norm.

Five years ago we were talking about general-purpose, multi-tier arrays; automated tiering and provisioning, but all coming together in a single monolithic device. The multi-protocol filer model was going to become the dominant model; this was going to allow us to break down silos in the data centre and to simplify the estate.

Arrays were getting bigger, as were disks; I/O density was a real problem, and generally the slowest part of any system was the back-end storage.

And then SSDs began to happen; I know that flash-based/memory-based arrays have been around for a long time, but they were very much a specialist and niche market. The arrival of the SSD, flash in a familiar form-factor at a slightly less eye-watering price, was a real change-bringer.

EMC and others scrambled to make use of this technology; treating SSDs as a faster disk tier in the existing arrays was the order of the day. Automated Storage Tiering was the must-have technology for many array manufacturers; few customers could afford to run all of their workloads on an entirely SSD-based infrastructure.

Yet if you talk to the early adopters of SSDs in these arrays, you will soon hear some horror stories; the legacy arrays simply were not architected to make best use of the SSDs in them. And arguably they still aren’t; yes, they’ll run faster than your 15k spinning-rust tier, but you are not getting the full value from them.

I think that all the legacy array manufacturers knew that there were going to be bottlenecks and problems; the different approaches that the vendors take almost point to this, as do the different approaches taken by a single vendor…from using flash as a cache to utilising it simply as a faster disk; from using it as an extension of the read cache to using it as both a read and write cache.

Vendors claiming that they had the one true answer….none of them did.

This has enabled a bunch of start-ups to burgeon; where confusion reigns, there is opportunity for disruption. That, and the open-sourcing of ZFS, has created massive opportunity for smaller start-ups; the cost of entry into the market has dropped. Although if you examine many of the start-ups’ offerings, they are really a familiar architecture, but aimed at a different price point and market to those of the larger storage vendors.

And we have seen a veritable snow-storm of cash, both in the form of VC money and of acquisition, as the traditional vendors realise that they simply cannot innovate quickly enough within their own confines.

Whilst all this was going on, there has been an incredible rise in the amount of data being stored and captured. The more traditional architectures struggle; scale-up has its limits in many cases, and techniques from the HPC market have begun to become mainstream. Scale-out architectures appeared first in the HPC market, then in the media space; now, with the massive data demands of traditional enterprises, we see them across the board.

Throw SSDs, scale-out and virtualisation together and you have created a perfect opportunity for all in the storage market to come up with new ways of fleecing (sorry, providing value to) their customers.

How do you get these newly siloed data-stores to work in a harmonious and easy-to-manage way? How do we meet the demands of businesses that are growing ever faster? Of course, we invent a new acronym, that’s how…‘SDS’ or ‘Software Defined Storage’.

Funnily enough, the whole SDS movement takes me right back to the beginning; many of my early blogs were focused on the terribleness of ECC as a tool to manage storage. Much of that was due to the frustration that it was both truly awful and trying to do too much.

It needed to be simpler; the administration tools were getting better, but the umbrella tools such as ECC just seemed to collapse under their own weight. Getting information out of them was hard work; EMC had teams devoted to writing custom reports for customers because it was so hard to get ECC to report anything useful. There was no real API, and it was easier to interrogate the database directly.

But even then it struck me that it should have been simple to code something which sat on top of the various arrays (from all vendors), queried them and pulled back useful information. Most of them already had fully featured CLIs; it should not have been beyond the wit of man to code a layer above the CLIs that took simple operations such as ‘allocate 10x10Gb LUNs to host x’ and turned them into the appropriate array commands, no matter which array.
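
The shape of that layer is a simple adapter: one verb in, per-vendor commands out. The vendor names and CLI syntax below are entirely invented for illustration; no real array CLI is being quoted:

```python
# Sketch of the "layer above the CLIs" idea: one common verb, with
# per-vendor command generation underneath. Both vendor names and CLI
# syntaxes here are fictional, invented purely for illustration.

def allocate_luns(vendor: str, host: str, count: int, size_gb: int) -> list[str]:
    """Turn 'allocate N LUNs of size S to host X' into vendor commands."""
    if vendor == "acme":
        # Fictional CLI that creates and maps one LUN per invocation.
        return [f"acmecli lun create -size {size_gb}g -map {host}"
                for _ in range(count)]
    if vendor == "widgetstor":
        # Fictional CLI that takes the whole request in a single call.
        return [f"wscli provision --host={host} --luns={count} --gb={size_gb}"]
    raise ValueError(f"no adapter for vendor {vendor!r}")

print(allocate_luns("acme", "dbhost01", 3, 10))
print(allocate_luns("widgetstor", "dbhost01", 10, 10))
```

The point is that the caller only ever speaks the common verb; whether that becomes ten commands or one is the adapter’s business, which is exactly the promise SDS is now making.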

I think this is the promise of SDS, and I hope the next five years will see its development; that we see storage within a data-centre becoming more standardised from a programmatic point of view.

I have hopes but I’m sure we’ll see many of the vendors trying to push their standard and we’ll probably still be in a world of storage silos and ponds…not a unified Sea of Storage.

Five Years On (part 2)

Looking back over the last five years; what has changed in the storage industry?

Well, there have certainly been a few structural changes; the wannabes and the could-bes have mostly disappeared through acquisition or general collapse. The big players are still the big players: EMC, HDS, HP, IBM and NetApp still pretty much dominate the industry.

And their core products are pretty much the same at present; there’s been little revolution and a bit of evolution but the array in the data-centre today doesn’t yet feel much different from the array from five years ago.

Five years ago I was banging on about how useless ECC was and how poor storage management tools were in general. The most-used storage management tool was Excel. As it was then, so it is today: no-one has yet produced a great storage management tool to enable the management of these ever-growing estates.

Yet there has been a massive improvement in storage administration tools; anyone with a modicum of storage knowledge should be able to configure almost any array these days. Yes, you will be working at the GUI, but I could take an IBM storage admin and point them at an EMC array, and they would be able to carve it up and present storage.

Utilisation figures for storage still tend to be challenging; there is a great deal of wastage, as I have blogged about recently. Some of this is poor user behaviour, and some is poor marketing behaviour, in that there is no way to use what has been sold effectively.

So pretty much nothing has changed then?

Well…

Apart from the impact of SSD and Flash on the market; the massive number of start-ups focused on this sector…

Oh…and scale-out; Scale-Out is the new Scale-Up…Go Wide or Go Home..

Oh..then there’s virtualisation; the impact of virtualisation on the storage estate has been huge…

And then there’s that thing called Cloud which no-one can grasp and means different things to everyone..

And then there’s the impact of Amazon and their storage technologies..

And Big Data and the ever exploding growth of data collected and the ever hyperbolic hype-cycle.

So nothing’s really changed whilst everything has.

What a Waste..

Despite the rapid changes in the storage industry at the moment, it is amazing how much everything stays the same. Despite compression, dedupe and the other ways people try to reduce and manage the amount of data that they store, storage infrastructure still tends to waste many £1000s simply by being used according to the vendor’s best practice.

I spend a lot of my time with clustered file-systems of one type or another, from StorNext to GPFS to OneFS to various open-source systems, and the constant refrain comes back: you don’t want your utilisation running too high…certainly no more than 80%, or 90% if you are feeling really brave. But the thing about clustered file-systems is that they tend to be really large, and wasting 10-20% of your capacity rapidly adds up to 10s of £1000s. This is already on top of the normal data-protection overheads…
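
Pricing that ceiling out takes one line of arithmetic; the £/TB figure below is a notional assumption of mine, so substitute your own contract price:

```python
# The 80-90% "usable utilisation" ceiling, priced out.
# The £/TB figure is a notional assumption; substitute your own.

def wasted_cost(raw_tb: float, price_per_tb: float, max_util: float) -> float:
    """Cost of the capacity you paid for but are advised never to fill."""
    return round(raw_tb * price_per_tb * (1 - max_util), 2)

# A 2 PB clustered file-system at an assumed £300/TB:
print(wasted_cost(2000, 300.0, 0.80))  # -> 120000.0 (£120k idle at 80%)
print(wasted_cost(2000, 300.0, 0.90))  # -> 60000.0  (£60k idle at 90%)
```

At these assumed numbers the "safety margin" on a single large file-system is a six-figure sum; that is the capacity the vendor still charges, and licenses, in full.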

Of course, I could look at utilising thin-provisioning, but the way that we tend to use these large file-systems does not lend itself to it; dedupe and compression rarely help either.

So I sit there with storage which the vendor advises me not to use; but I’ll tell you something, if I were to suggest that they didn’t charge me for that capacity, or dropped the licensing costs for the capacity that they recommend I don’t use…well, I don’t see that happening anytime soon.

So I guess I’ll just have to factor in that I am wasting 10-20% of my storage budget on capacity that I shouldn’t use; and if I do use it, the first thing that the vendor will do when I raise a performance-related support call is to suggest that I either reduce the amount of data that I store or spend even more money with them.

It would be nice to be able to use what I buy without worrying about degrading performance if I actually use it all. 10% of that nice bit of steak you’ve just bought…don’t eat it, it’ll make you ill!