

Tape – the Death Watch..

Watching the Spectralogic announcements from afar and getting involved in a conversation about tape on Twitter has really brought home the ambivalent relationship I have with tape; it is a huge part of my professional life but if it could be removed from my environment, I’d be more than happy.

Ragging on the tape vendors does at times feel like kicking a kitten but ultimately tape sucks as a medium; its fundamental problem is that it is a sequential medium in a random world.

If you are happy to write your data away and only ever access it in truly predictable fashions; it is potentially fantastic but unfortunately much of business is not like this. People talk about tape as being the best possible medium for cold storage and that is true, as long as you never want to thaw large quantities quickly. If you only ever want to thaw a small amount and in a relatively predictable manner; you’ll be fine with tape. Well, in the short term anyway.

And getting IT to look at a horizon which is more than one refresh generation away is extremely tough.

Of course, replacing tape with disk is not yet economic over the short-term views that we generally take; the cost of disk is still high when compared to tape; disk’s environmental footprint is still pretty poor when compared to tape and from a sheer density point of view, tape still has a huge way to go…even if we start to factor in upcoming technologies such as shingled disks.

So for long-term archives; disk will continue to struggle against tape…however, does that mean we are doomed to live with tape for years to come? Well, SSDs are going to take 5-7 years to hit parity with disk prices; which means that they are not going to hit parity with tape for some time.

Yet I think the logical long-term replacement for tape at present is SSDs in some form or another; I fully expect the Facebooks and the Googles of this world to start to look at the ways of building mass archives on SSD in an economic fashion. They have massive data requirements and as they grow to maturity as businesses; the age of that data is increasing…their users do very little in the way of curation, so that data is going to grow forever and it probably has fairly random access patterns.

You don’t know when someone is going to start going through someone’s pictures, videos and timelines; so that cold data could warm pretty quickly. Having to recall it from tape is not going to be fun; the contention issues for starters, and unless you come up with ways of colocating all of an individual’s data on a single tape; a simple trawl could send a tape-robot into meltdown. Now perhaps you could do some big data analytics and start recalling data based on timelines; employ a bunch of actuaries to analyse the data and recall data based on actuarial analysis.

The various news organisations already do this to a certain extent and have obits prepared for most major world figures. But this would be at another scale entirely.
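A back-of-the-envelope calculation shows how quickly a mass recall gets ugly; every figure below is an illustrative assumption, not a vendor spec:

```python
# Back-of-the-envelope: recalling scattered objects from a tape library.
# All figures are illustrative assumptions, not vendor specifications.

def recall_hours(files, mount_s=90, seek_s=60, read_s=30, drives=4):
    """Rough wall-clock hours to recall `files` objects when each
    recall costs a mount, a seek and a read, spread over `drives`."""
    per_file = mount_s + seek_s + read_s          # seconds per recall
    return files * per_file / drives / 3600.0     # hours across drives

# One user's photo-and-timeline trawl: 2,000 objects scattered
# across the library, no colocation.
print(round(recall_hours(2000), 1))   # → 25.0 hours on 4 drives
```

Even with generous parallelism, a single trawl ties up the drives for a day; multiply that by thousands of users and the robot really does melt down.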

So funnily enough…tape, the medium that wouldn’t die, could be kiboshed by death. And if the hyper-scale companies can come up with an economic model which replaces tape…I’ll raise a glass to good times and mourn it little…

And with that cheerful note…I’ll close..


Die LUN, DIE!

I know people think that storagebods are often backward-thinking and hidebound by tradition, and there is some truth in that. But the reality is that we can’t afford to carry on like this; demands are such that we should grasp anything which makes our lives easier.

However we need some help both with tools and education; in fact we could do with some radical thinking as well; some moves which allow us to break with the past. In fact what I am going to suggest almost negates my previous blog entry here but not entirely.

The LUN must die, die, die….I cannot tell you how much I loathe the LUN as an abstraction now; the continued existence of the LUN offends mine eyes! Why?

Because it allows people to carry on asking for stupid things like multiple 9 gigabyte LUNs for databases and the like. When we are dealing with terabyte+ databases; this is plain stupid. It also encourages people to believe that they can do a better job of laying out an array than an automated process.

We need to move to a more service-oriented provisioning model; where we provision capacity and ask for an IOPS and latency profile appropriate to the service being provisioned. Let the computers work it all out.

This brings significant ease of management and removes what has become a fairly pointless abstraction from the world. It makes it easier to configure replication, data-protection, snaps, clones and the like. It means that growing an environment becomes simpler as well.

It would make the block world feel closer to the file world. Actually, it may even allow us to wrap a workload into something which feels like an object; a super-big object, but still an object.

We move to a world where applications can request space programmatically if required.
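As a sketch of what such a programmatic request might look like; the request shape and field names here are hypothetical, not any vendor’s API:

```python
# Sketch of service-oriented provisioning: the requester states
# capacity and a service profile; the array works out the layout.
# The request shape and field names are hypothetical illustrations.

def provision(capacity_gb, iops, latency_ms, service="database"):
    """Build a provisioning request; a real array would hand back an
    opaque volume handle rather than a hand-carved LUN layout."""
    return {
        "capacity_gb": capacity_gb,
        "profile": {"iops": iops, "latency_ms": latency_ms},
        "service": service,
        # No RAID group, no LUN count, no spindle layout: the
        # automated placement engine decides all of that.
    }

req = provision(capacity_gb=2048, iops=50_000, latency_ms=2)
print(req["profile"])   # → {'iops': 50000, 'latency_ms': 2}
```

The point is what is absent: nothing in the request describes how the array should lay the capacity out.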

As we start to move away from an infrastructure which is dominated by the traditional RAID architectures; this makes more sense than the current LUN abstraction.

If I already had one of these forward-looking architectures, say XIV or 3PAR; I’d look at ways of baking this in now..this should be relatively easy for them, certainly a lot easier than some of the more legacy architectures out there. But even the long-in-the-tooth and tired architectures such as VMAX should be able to be provisioned like that.

And then what we need is for vendors to push this as the standard for provisioning…yes, you can still do it the old way but it is slower and may well be less performant.

Once you’ve done that….perhaps we can have a serious look at Target Driven Zoning; if you want to move to a Software Defined Data-Centre; enhancements to the existing protocols like this are absolutely key.


So I wouldn’t start from here…

We’ve had a few announcements from vendors and various roadmaps have been put past me recently; if I had one comment, it would be that if I were designing an array or a storage product, I probably wouldn’t start from where most of them are…both vendors old and new.

There appears to be a real fixation on the past; lots of architectures which are simply re-inventing what has gone before. And although I understand why; I don’t understand why.

Let’s take the legacy vendors; you can’t change things because you will break everything; you will break the existing customer scripts and legacy automation; you break processes and understanding. So, we can’t build a new architecture because it breaks everything.

I get the argument but I don’t necessarily agree with the result.

And then we have the new kids on the block who seem to want to continue building yesterday’s architecture today; so we’ll build something based on a dual-head filer because everyone knows how to do that and they understand the architecture.

Yet again I get the argument but I really don’t agree with the result now.

I’m going to take the second first; if I wanted to buy a dual-head filer, I’d probably buy it from the leading pack. Certainly if I’m a big storage customer; it is very hard for one of the new vendors to get it down to a price that is attractive.

Now, you may argue that your new kit is so much better than the legacy vendors that it is worth the extra but you almost certainly will break my automation and existing processes. Is it really worth that level of disruption?

The first situation with the legacy vendors is more interesting; can I take the new product and make it feel like the old stuff from a management point of view? If storage is truly software and the management layer is certainly software; I don’t see that it should be beyond the wit of developers to make your new architecture feel like the old stuff.

Okay, you might strip out some of the old legacy constructs; you might even fake them…so if a script creates a LUN utilising a legacy construct; you just fake the responses.
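A minimal sketch of such a faked ‘personality’ layer; the class, method and command names here are invented for illustration, not taken from any real product:

```python
# Sketch of a legacy 'personality' layer: old scripts keep issuing
# LUN-style commands, the shim fakes the constructs they expect while
# the modern platform just allocates from a pool. Entirely hypothetical.

class Pool:
    """Stand-in for the real, modern backend."""
    def __init__(self):
        self.next_id = 0
    def allocate(self, size_gb):
        self.next_id += 1
        return f"vol{self.next_id} ({size_gb}GB)"

class LegacyPersonality:
    def __init__(self, pool):
        self.pool = pool
        self.fake_raid_groups = {}   # constructs that no longer exist

    def create_lun(self, raid_group, size_gb):
        vol = self.pool.allocate(size_gb)   # what really happens
        # Fake the response the old script expects to parse.
        self.fake_raid_groups.setdefault(raid_group, []).append(vol)
        return f"LUN created in RAID group {raid_group}: {vol}"

shim = LegacyPersonality(Pool())
print(shim.create_lun("RG_03", 9))   # old 9GB-LUN habits still 'work'
```

The script gets the answer it was written to expect; underneath, the RAID group is pure theatre.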

There are some more interesting issues around performance and monitoring but, as a whole, the industry is so very poor at these that breaking them is not such a major issue.

Capacity planning and management; well, how many people really do this? It is probably only the really big customers who do, and they might well be the ones who will look at leveraging new technology without a translation layer.

So if I was a vendor; I would be looking at ways to make my storage ‘plug compatible’ with what has gone before but under the covers, I’d be looking for ways to do it a whole lot better and I wouldn’t be afraid to upset some of my legacy engineering teams. I’d build a platform that I could stick personalities over.

And it’s not just about a common look and feel for the GUI; it has to be for the CLI and the APIs as well.

Make the change easy…reduce the friction…

Five Years On (part 3)

So, all the changes referenced in part 2, what do they mean? Are we at an inflection point?

The answer to the latter question is probably yes, but we could be at a number of inflection points; both localised vendor inflection points and industry-wide ones as well. We’ll probably not know for a couple more years and then, with hindsight, we can look back and see.

The most dramatic change that we have seen in the past five years is the coming of Flash-based storage devices; this is beginning to change our estates and what we thought was going to become the norm.

Five years ago; we were talking about general purpose, multi-tier arrays; automated tiering and provisioning, all coming together in a single monolithic device. The multi-protocol filer model was going to become the dominant model; this was going to allow us to break down silos in the data centre and to simplify the estate.

Arrays were getting bigger, as were disks; I/O density was a real problem and generally the slowest part of any system was the back-end storage.

And then SSDs began to happen; I know that flash-based/memory-based arrays have been around for a long time but they were very much specialist and a niche market. But the arrival of the SSD; flash in a familiar form-factor at a slightly less eye-watering price, was a real change-bringer.

EMC and others scrambled to make use of this technology; treating SSDs as a faster disk tier in the existing arrays was the order of the day. Automated Storage Tiering was the must-have technology for many array manufacturers; few customers could afford to run all of their workloads on an entirely SSD-based infrastructure.

Yet if you talk to the early adopters of SSDs in these arrays; you will soon hear some horror stories; the legacy arrays simply were not architected to make best use of the SSDs in them. And arguably still aren’t; yes, they’ll run faster than your 15k spinning rust tier but you are not getting the full value from them.

I think that all the legacy array manufacturers knew that there were going to be bottlenecks and problems; the different approaches that the vendors take almost point to this, as do the different approaches taken by a single vendor…from using flash as a cache to utilising it simply as a faster disk…from using it as an extension of the read cache to using it as both a read and write cache.

Vendors claiming that they had the one true answer….none of them did.

This has enabled a bunch of start-ups to burgeon; where confusion reigns, there is opportunity for disruption. That, and the open-sourcing of ZFS, has built massive opportunity for smaller start-ups; the cost of entry into the market has dropped. Although if you examine many of the start-ups’ offerings; they are really a familiar architecture, but aimed at a different price point and market compared to the larger storage vendors.

And we have seen a veritable snow-storm of cash, both in the form of VC money and acquisitions, as the traditional vendors realise that they simply cannot innovate quickly enough within their own confines.

Whilst all this was going on; there has been an incredible rise in the amount of data that is now being stored and captured. The more traditional architectures struggle; scale-up has its limits in many cases and techniques from the HPC marketplace began to become mainstream. Scale-out architectures had begun to appear; firstly in the HPC market, then in the media space and now, with the massive data demands of the traditional enterprises…we see them across the board.

Throw SSDs and Scale-Out together with Virtualisation and you have created a perfect opportunity for all in the storage market to come up with new ways of fleecing…sorry, providing value to their customers.

How do you get these newly siloed data-stores to work in a harmonious and easy-to-manage way? How do we meet the demands of businesses that are growing ever faster? Of course, we invent a new acronym, that’s how….’SDS’ or ‘Software Defined Storage’.

Funnily enough; the whole SDS movement takes me right back to the beginning; many of my early blogs were focused on the terribleness of ECC as a tool to manage storage. Much of that was due to the frustration that it was both truly awful and trying to do too much.

It needed to be simpler; the administration tools were getting better but the umbrella tools such as ECC just seemed to collapse under their own weight. Getting information out of them was hard work; EMC had teams devoted to writing custom reports for customers because it was so hard to get ECC to report anything useful. There was no real API and it was easier to interrogate the database directly.

But even then it struck me that it should have been simple to code something which sat on top of the various arrays (from all vendors); queried them and pulled back useful information. Most of them already had fully featured CLIs; it should not have been beyond the wit of man to code a layer that sat above the CLIs, taking simple operations such as ‘allocate 10x10GB LUNs to host x’ and turning them into the appropriate array commands; no matter which array.
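A toy sketch of that translation layer; the command strings below are invented stand-ins for two hypothetical vendors, not real CLI syntax:

```python
# Sketch of a thin layer above vendor CLIs: one simple operation,
# translated into each array's own commands. The command strings are
# invented stand-ins, not real vendor syntax.

TRANSLATORS = {
    "vendor_a": lambda n, gb, host: [
        f"cli_a create dev size={gb}GB" for _ in range(n)
    ] + [f"cli_a map host={host}"],
    "vendor_b": lambda n, gb, host: [
        f"cli_b mkvol -count {n} -size {gb}g -host {host}"
    ],
}

def allocate(array, n, gb, host):
    """'allocate n x gbGB LUNs to host' -> vendor-specific commands."""
    return TRANSLATORS[array](n, gb, host)

cmds = allocate("vendor_a", 10, 10, "hostx")
print(len(cmds))   # → 11: ten creates plus one mapping call
```

The caller never knows or cares which array sits underneath; that indirection is, in essence, the SDS promise.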

I think this is the promise of SDS. I hope the next five years will see the development of this; that we see storage within a data-centre becoming more standardised from a programmatic point of view.

I have hopes but I’m sure we’ll see many of the vendors trying to push their standard and we’ll probably still be in a world of storage silos and ponds…not a unified Sea of Storage.


What a Waste..

Despite the rapid changes in the storage industry at the moment, it is amazing how much everything stays the same. Despite compression, dedupe and other ways people try to reduce and manage the amount of data that they store; it still seems that storage infrastructure tends to waste many £1000s just by being used according to the vendor’s best practice.

I spend a lot of my time with clustered file-systems of one type or another; from Stornext to GPFS to OneFS to various open-source systems, and the constant refrain comes back: you don’t want your utilisation running too high…certainly no more than 80%, or if you are feeling really brave, 90%. But the thing about clustered file-systems is that they tend to be really large, and wasting 10-20% of your capacity rapidly adds up to 10s of £1000s. This is already on top of the normal data-protection overheads…
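The arithmetic is brutally simple; the per-terabyte price below is an illustrative assumption, not a quote:

```python
# The 80% utilisation rule in money terms; the price is an
# illustrative assumption, not any vendor's list price.

def wasted_cost(raw_tb, cost_per_tb_gbp, max_util=0.8):
    """Cost of capacity you paid for but are advised never to fill."""
    unusable_tb = raw_tb - raw_tb * max_util
    return unusable_tb * cost_per_tb_gbp

# A 2PB clustered file-system at a notional £300/TB, 80% ceiling:
print(wasted_cost(2000, 300))   # → 120000.0, i.e. £120k left idle
```

And that £120k is on top of the RAID or replication overhead you have already paid for.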

Of course, I could look at utilising thin-provisioning but the way that we tend to use these large file-systems does not lend itself to it; dedupe and compression rarely help either.

So I sit there with storage which the vendor will advise me not to use; but I’ll tell you something, what if I were to suggest that they didn’t charge me for that capacity? Dropped the licensing costs for the capacity that they recommend I don’t use? I don’t see that happening anytime soon.

So I guess I’ll just have to factor in that I am wasting 10-20% of my storage budget on capacity that I shouldn’t use; and if I do use it, the first thing that the vendor will do when I raise a performance-related support call is to suggest that I either reduce the amount of data that I store or spend even more money with them.

I guess it would be nice to actually be able to use what I buy without worrying about degrading performance when I use it all. 10% of that nice bit of steak you’ve just bought…don’t eat it, it’ll make you ill!

#storagebeers – September 25th – London

So as the evenings draw in; what could be nicer than a decent pint of beer with great company?

Well, this isn’t that…it’s a #storagebeers to be held in London on September 25th. There’s a few storage events around this date and we thought that it would be an ideal opportunity to bring the community together.

So if you are a storage admin, a vendor, a journo, or perhaps you work for EMC Marketing and you want to tell me why the megalaunch was awesome and not tacky…please come along.

We’ll be in the Lamb and Flag near Covent Garden from about 17:30, maybe earlier.

There is a rumour that Mr Knieriemen will be there and buying at least one drink…

Such Fun…

With EMC allegedly putting the VMAX into the capacity tier and suggesting that performance cannot be met by the traditional SAN; are we finally beginning to look at the death of the storage array?

The storage array as a shared monolithic device came about almost directly as the result of distributed computing; the necessity for a one-to-many device was not really there when the data-centre was dominated by the mainframe. And yet as computing has become ever more distributed; the storage array has begun to struggle more and more to keep up.

Magnetic spinning platters of rust have hardly increased in speed in a decade or more; their capacity has got ever bigger tho’; storage arrays have got denser and denser from a capacity point of view, yet real-world performance has just not kept pace. More and more cache has helped to hide some of this; SSDs have helped, but to what degree?
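Some rough numbers make the point; the drive figures below are approximate, illustrative assumptions rather than any datasheet:

```python
# Why density outran performance: IOPS available per stored TB.
# Drive figures are rough, illustrative assumptions.

def iops_per_tb(drive_tb, drive_iops):
    """Random I/O capability per terabyte of capacity."""
    return drive_iops / drive_tb

# A roughly 2003-era 146GB 15k drive vs a roughly 2013 4TB 7.2k drive:
old = iops_per_tb(0.146, 180)   # ~1,200 IOPS per TB
new = iops_per_tb(4.0, 80)      # ~20 IOPS per TB
print(round(old / new))         # → ~62x less I/O per stored TB
```

Every generation of bigger disk dilutes the random I/O available to each terabyte it holds; cache and SSD tiers paper over the gap, they do not close it.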

It also has not helped that the plumbing for most SANs is Fibre-channel; esoteric, expensive and ornery, the image of the storage array is not good.

Throw in the increased compute power and the ever-incessant demands for more data processing, coupled with an attitude to data-hoarding at a corporate scale which would make even the most OCD amongst us look relatively normal.

And add the potential for storage-arrays to become less reliable and more vulnerable to real data-loss as RAID becomes less and less of a viable data-protection methodology at scale.

Cost and complexity with a sense of unease about the future means that storage must change. So what are we seeing?

A rebirth in DAS? Or perhaps simply a new iteration of DAS?

From Pernix to ScaleIO to clustered file-systems such as GPFS; the heart of the new DAS is Shared-Nothing Clusters. Ex-Fusion-IO’s David Flynn appears to be doing something to pool storage attached to servers; you can bet that there will be a Flash part to all this.

We are going to have a multitude of products; interoperability issues like never before, implementation and management headaches…do you implement one of these products or many? What happens if you have to move data around between these various implementations? Will they present as a file-system today? Are they looking to replace current file-systems; I know many sys-admins who will cry if you try to take VxFS away from them.

What does data protection look like? I must say that the XIV data-protection methods which were scorned by many (me included) look very prescient at the moment (still no software XIV tho’? What gives, IBM…).

And then there is the application-specific nature of much of this storage; so many start-ups are focused on VMware and providing storage in clever ways to vSphere…when VMware’s storage roadmap looks so rich and so aimed at taking that market, is this wise?

The noise and clamour from the small and often quite frankly under-funded start-ups is becoming deafening…and I’ve yet to see a compelling product which I’d back my business on. The whole thing feels very much like the early days of the storage-array; it’s kind of fun really.

You Will be Assimilated.

So why are the small Flash vendors innovating and the big boys not? Why are they leaving them for dust? And do the big boys care?

Innovation in large companies is very hard; you have all the weight of history pressing down on you and few large companies are set up to allow their staff to really innovate. Even Google’s famous 20% time has probably not borne the fruit that one would expect.

Yet innovation does happen in large companies; they all spend a fortune on R&D; unfortunately most of that tends to go on making existing products better rather than coming up with new products.

Even when a new concept threatens to produce a new product; getting an existing sales-force to sell it…well, why would they? Why would I as a big-tin sales-droid try to push a new concept to my existing customer base? They probably don’t even want to talk about something new; it’s all about the incremental business.

I have seen plenty of concepts squashed which then pop up in new start-ups having totally failed to gain traction in the large company.

And then there are those genuinely new ideas that the large vendor has a go at implementing themselves; often with no intention of releasing their own product, they are just testing the validity of the concept.

Of course, then there is the angel funding that many larger vendors quietly carry out; if you follow the money it is not uncommon to find a large name sitting somewhere in the background.

So do the big boys really care about the innovation being driven by start-ups…I really don’t think so. Get someone else to take the risk and pick-up the ones which succeed at a later date.

Acquisition is a perfectly valid R&D and Innovation strategy. Once these smaller players start really taking chunks of revenue from the big boys…well, it’s a founder with real principles who won’t take a large exit.

Of course, seeing new companies IPO is cool but it’s rarely the end of the story.


The Landscape Is Changing

As the announcements and acquisitions which fall into the realms of Software Defined Storage (or Storage, as I like to call it) continue to come; one starts to ponder how this is all going to work, and work practically.

I think it is extremely important to remember that, firstly, you are going to need hardware to run this software on; and although this is trending towards a commodity model, there are going to be subtle differences that need accounting for. And as we move down this track, there is going to be a real focus on understanding workloads and the impact of different infrastructure and infrastructure patterns on them.

I am seeing more and more products which enable DAS to work as a shared-storage resource; removing the SAN from the infrastructure and reducing the complexity. I am going to argue that this does not necessarily remove complexity; it shifts it. In fact, it doesn’t remove the SAN at all; it just changes it.

It is not uncommon now to see storage vendor presentations that show Shared-Nothing-Cluster architectures in some form or another; often these are software and hardware ‘packaged’ solutions but as end-users start to demand the ability to deploy on their own hardware, this brings a whole new world of unknown behaviours into play.

Once vendors relinquish control of the underlying infrastructure; the software is going to have to be a lot more intelligent and the end-user implementation teams are going to have to start thinking more like the hardware teams in vendors.

For example, the East-West traffic models in your data-centre become even more important and here you might find yourself implementing low-latency storage networks; your new SAN is no longer a North-South model but Server-Server (East-West). This is something that the virtualisation guys have been dealing with for some time.

Understanding performance and failure domains; do you protect the local DAS with RAID or move to a distributed RAIN model? If you do something like aggregate the storage on your compute farm into one big pool, what is the impact if one node in the compute farm starts to come under load? Can it impact the performance of the whole pool?

Anyone who has worked with any kind of distributed storage model will tell you that a slow performing node or a failing node can have impacts which far exceed that you believe possible. At times, it can feel like the good old days of token ring where a single misconfigured interface can kill the performance for everyone. Forget about the impact of a duplicate IP address; that is nothing.
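A trivial model shows why one struggling node hurts everyone: a wide read striped across a shared-nothing pool completes only when the slowest node answers. The latencies below are made-up illustrative figures:

```python
# Why one slow node hurts the whole pool: a read striped across many
# nodes must wait for the slowest one it touches. Minimal illustrative
# model; the latency figures are invented.

def stripe_read_ms(node_latencies_ms):
    """A wide striped read completes when the last node answers."""
    return max(node_latencies_ms)

healthy = [2.0] * 16                  # 16 nodes, 2ms each
degraded = [2.0] * 15 + [200.0]       # one node under load or failing

print(stripe_read_ms(healthy))    # → 2.0 ms
print(stripe_read_ms(degraded))   # → 200.0 ms: 100x worse for everyone
```

One node at 1/100th speed drags every wide read in the pool down with it; exactly the token-ring feeling described above.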

What is the impact of the failure of a single compute/storage node? Multiple compute/storage nodes?

In the past, this has all been handled by the storage hardware vendor, pretty much invisibly to the local storage team at the implementation phase. But you will now need to make decisions about how data is protected and understand the impact of replication.

In theory, you want your data as close to the processing as you can get it; but data has weight and persistence; it will have to move. Or do you come up with a method that, in a dynamic infrastructure, identifies where data is located and spins up or moves the compute to it?

The vendors are going to have to improve their instrumentation as well; let me tell you from experience, at the moment understanding what is going on in such environments is deep magic. Also, the software’s ability to cope with the differing capabilities and vagaries of a large-scale commodity infrastructure is going to have to be a lot more robust than it is today.

Yet I see a lot of activity from vendors, open-source and closed-source; and I see a lot of interest from the large storage consumers; this all points to a large prize to be won. But I’m expecting to see a lot of people fall by the wayside.

It’s an interesting time…


From Servers to Service?

Should Enterprise Vendors consider becoming Service Providers? Rich Rogers of HDS tweeted this question recently; my initial response, like most people’s, was that they shouldn’t.

This got me thinking: why does everyone think that Enterprise Vendors shouldn’t become Service Providers? Is this a reasonable response or just a knee-jerk ‘get out of my space and stick to doing what you are good at’?

It is often suggested that you should not compete with your customers; if Enterprise Vendors move into the Service Provider space, they compete with some of their largest customers, the Service Providers and potentially all of their customers; the Enterprise IT departments.

But the Service Providers are already beginning to compete with the Enterprise Vendors, more and more of them are looking at moving to a commodity model and not buying everything from the Enterprise Vendors; larger IT departments are thinking the same. Some of this is due to cost but much of it is that they feel that they can do a better job of meeting their business requirements by engineering solutions internally.

If the Enterprise Vendors find themselves squeezed by this; is it really fair that they should stay in their little box and watch their revenues dwindle away? They can compete in different ways, they can compete by moving their own products to more of a commodity model, many are already beginning to do so; they could compete by building a Service Provider model and move into that space.

Many of the Enterprise Vendors have substantial internal IT functions; some have large services organisations; some already play in the hosting/outsourcing space. So why shouldn’t they move into the Service Provider space? Why not leverage the skills that they already have?

Yes, they would change their business model; they will have to be careful to ensure that they compete on a level playing field and take care that they are not utilising their internal influence on pricing and development to drive an unfair competitive advantage. But if they feel that they can do a better job than the existing Service Providers; driving down costs and improving capability in this space….more power to them.

If an online bookstore can do it; why shouldn’t they? I don’t fear their entry into the market, history suggests that they have made a bit of a hash of it so far…but guys fill your boots.

And potentially, it improves things for us all; as the vendors try to manage their kit at scale, as they try to maintain service availability, as they try to deploy and develop an agile service; we all get to benefit from the improvements…Service Providers, Enterprise Vendors, End-Users…everyone.