
Gherkins

I can only write from my experience and your mileage will vary somewhat, but 2014 is already beginning to get interesting from a storage point of view. And it appears to have little to do with technology, or perhaps too little technology.

Perhaps the innovation has stopped? Or perhaps we’re finally beginning to see the impact of Google/Amazon and Azure on the Enterprise market. Pricing models seem to be going out of the window as the big vendors try to work out how to defend themselves against the big Cloud players.

Historically high margins are being sacrificed in order to maintain footprint; vendors are competing against themselves internally. Commodity plays are competing with existing product sets; white-box implementations, once something they all liked to avoid and spread FUD about, are seriously on the agenda.

It won’t be a complete free-for-all, but expect to start seeing server platforms certified as target platforms for all but the highest-value storage. Engineering objections are being worked around as hardware teams transition to software development teams; those who won’t or can’t will become marginalised.

Last year I saw lip-service being paid to this trend; now I’m beginning to see this happening. A change in focus…long overdue.

If you work in the large Enterprise, it seems that you can have it your way….

And yet, I still see a place for the hardware vendor. I see a place for the vendor that has market-leading support and the engineering smarts that mean support does not cost a fortune to provide or procure.

Reducing call volumes and onsite visits while still ensuring that every call is handled and dealt with by smart people. This is becoming more and more of a differentiator for me; I don’t want okay support, I want great support.

The move to commoditisation is finally beginning….but I wonder if we are going to need new support models to at least maintain and hopefully improve the support we get today.


Storage Blues…

January is not even out yet and already we have an interesting technology market; IBM’s withdrawal from the x86 server market leads to a number of questions, both about the future of IBM and about what IBM feel the future of the market is. Could this be another market that they withdraw from only to regret it in the long term, as they did with the network market, allowing Cisco to dominate?

IBM’s piecemeal withdrawal from the hardware market, a retreat to the highlands of the legacy enterprise, will lead to questions across the board as to what the future is for any IBM hardware. I am not sure of the market acceptance of their converged compute/network/storage strategy in the form of PureSystems, their me-too ‘Block’ offering, but surely that is a dead duck now; Lenovo may continue to make the x86 components for IBM, but how committed can we feel IBM is to this? IBM appear to have completely ceded this space to their competitors; personally I’m not convinced by most of the converged offerings and their value, but to cede a market completely seems rash.

But how does this impact IBM storage?

The heart of IBM’s Storwize product set is x86-based servers; SVC especially was ‘just’ an IBM server. IBM were one of the first companies to really leverage the idea of the server as storage; Shark is and was simply a pair of RS/6000 or pSeries boxes, which allowed them to utilise and share R&D across divisions. Something which should have been an advantage and enabled them to do some clever stuff; stuff they demonstrated yet never delivered.

Now there is nothing to stop them simply sourcing the servers from others, the same as almost every other storage company in the world; it moves the Storwize product set firmly into the realms of software (it was anyway). But will IBM move Storwize to a software-only product?

There is part of me that really feels this is inevitable; it may come as a reaction to a move by a competitor, or as a move to enable a vV7000 to run as a cloud appliance. It may well end up being the only way that IBM can maintain any kind of foothold in the storage market.

No, I haven’t forgotten XIV or IBM’s Flash offerings; XIV is a solid Tier 1.5 offering but it is also a collection of servers. XIV’s issue is really scalability, and simply putting larger drives in just reduces the IOPS density. The Flash offering is as good as many, and if you want raw performance without features, it is worth considering.
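To put the IOPS-density point into numbers, here is a rough back-of-the-envelope sketch; the drive count and per-drive IOPS figure are illustrative assumptions, not XIV specifications.

```python
# Rough illustration of why bigger drives dilute IOPS density.
# All figures below are illustrative assumptions, not vendor specs.

NEARLINE_IOPS_PER_DRIVE = 75   # assumed IOPS for a 7.2k RPM drive
DRIVE_COUNT = 180              # assumed drives in the frame

def iops_per_tb(drive_capacity_tb):
    """IOPS available per TB if capacity grows but spindle count does not."""
    total_iops = DRIVE_COUNT * NEARLINE_IOPS_PER_DRIVE
    total_capacity = DRIVE_COUNT * drive_capacity_tb
    return total_iops / total_capacity

for capacity in (1, 2, 3, 4):
    print(f"{capacity} TB drives: {iops_per_tb(capacity):.1f} IOPS per TB")
# Doubling the drive size halves the IOPS available per TB stored.
```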

IBM’s GSS could be built into something which scales and has many of the ‘features’ of XIV. And in a software-only IBM Storage strategy, it could develop into a solid product if some of the dependency on specific disk controllers could be relaxed. Yet the question has to be whether IBM has time.

And yet without either a scalable NAS or an Object store, IBM have some real problems. None of which are really hardware problems, but moving away from building your base platform probably makes none of them easier to solve.

Or perhaps if they concentrate on software and services….

Already Getting Busy…

I’ve not been away, but a mixture of illness, Christmas and general lethargy has meant that I’ve not bothered with writing for a bit. But 2014 and a new year appear to be upon us and I do wonder what it is going to bring us, especially in the world of IT infrastructure.

As we ended 2013, we saw both winners and losers in the world of Flash for example; Violin crashing as they struggle to increase sales and reduce burn; yet Pure seem to be on a stellar rise and hiring like maniacs. A UK launch is imminent and they are going to be interesting to watch. All Flash Arrays are still very much niche and even companies who need them are holding off on making any big decisions.

I’ve already spoken to a hybrid vendor this year, pushing the line that their hybrid is good enough for most cases, very much tied to the virtualisation use-case. And yes, VDI was all over their powerpoints as a use-case. 2014, the year when VDI happens!!

I expect that I’ll spend time with more hybrid vendors who are playing some kind of chicken with SSD/disk ratios; how low can they go? However, I’m also seeing more KVM/OpenStack appearing on road-maps as they begin to realise that VMware might not be the only game in town.

I’m sure we’ll see more hype around hyper-convergence as attempts continue to build a new mainframe, and I shall continue to struggle to work out why anyone wants one. I like being able to scale my infrastructure in the right place; I don’t want to have to increase my compute to increase my storage and vice versa. Flexibility around compute, storage and network ratios is important.

Yet convergence of storage and compute will continue and there’s potentially some real challenge to the traditional storage technologies there. If I were building a new infrastructure today, I’d be looking hard at whether I needed a SAN at all. But I wouldn’t be going straight to a hyper-converged infrastructure; there be dragons there, I suspect.

I’ve already had my first vendor conversation where I’ve suggested that they are actually selling a software product and perhaps they should drop the hardware part; that, and asking why the hell they were touting their own REST API for cloud-like storage…if industry giants like EMC have struggled against the Amazon juggernaut, what makes them think that they are any different?

And marketing as differentiation will probably continue….especially as the traditional vendors get more defensive around their legacy products.  No-one should get rich selling disk any more but it won’t stop them all trying.


2014 – A Look Forward….

As we come to the end of another year, it is worth looking forward to see what, if anything, is going to change in the storage world next year, because this year has pretty much been a bust as far as innovation and radical new products go.

So what is going to change?

I get the feeling not a huge amount.

Storage growth is going to continue for the end-users but the vendors are going to continue to experience a plateau of revenues. As end-users, we will expect more for our money but it will be mostly more of the same.

More hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins.

Early conversations this year point to the fact that the vendors really have little idea how to price their products in this space; if software + commodity hardware = cost of enterprise array, what is in it for me? If vendors get their pricing right, this could be very disruptive, but at what cost to their own market position?
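As a crude illustration of that pricing question, here is a sketch with entirely made-up numbers; the point is the shape of the comparison, not the figures.

```python
# Entirely hypothetical numbers: the question is whether the software
# licence eats the saving made on commodity hardware.

enterprise_array_cost = 500_000        # assumed street price, fully configured
commodity_hardware_cost = 150_000      # assumed white-box servers plus drives

def customer_saving(software_licence):
    """What is left for the customer once the SDS licence is added on top."""
    return enterprise_array_cost - (commodity_hardware_cost + software_licence)

for licence in (100_000, 250_000, 350_000):
    print(f"licence {licence:>7}: saving {customer_saving(licence):>7}")
# If the licence is priced so that the totals converge, what is in it for me?
```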

We shall see more attempts to integrate storage into the whole-stacks and we’ll see more attempts to converge compute, network and storage at hardware and software levels. Most of these will be some kind of Frankenpliance and converged only in shrink-wrap.

Flash will continue to be hyped as the saviour of the data-centre but we’ll still struggle to find real value in the proposition in many places as will many investors. There is a reckoning coming. I think some of the hybrid manufacturers might do better than the All-Flash challengers.

Hopefully however the costs of commodity SSDs will keep coming down and it’ll finally allow everyone to enjoy better performance on their work-laptops!

Shingled Magnetic Recording will allow storage densities to increase and we’ll see larger-capacity drives ship, but don’t expect them to appear in mainstream arrays soon; the vibration issues and the re-write process are going to require some clever software and hardware to fully commercialise these. Still, for those of us who are interested in long-term archive disks, this is an area worth watching.
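A toy sketch of why that re-write process needs clever software: shingled tracks overlap, so changing even a few blocks in place means reading and re-streaming a whole zone. The zone and block sizes below are assumptions, not any particular drive’s geometry.

```python
# Toy model of SMR write amplification: overlapping (shingled) tracks
# mean a zone has to be rewritten sequentially to change any block in it.
# Sizes are illustrative assumptions only.

ZONE_SIZE_MB = 256
BLOCK_SIZE_KB = 4

def rewrite_amplification(changed_blocks):
    """MB physically rewritten per MB of logically changed data."""
    changed_mb = changed_blocks * BLOCK_SIZE_KB / 1024
    return ZONE_SIZE_MB / changed_mb

for blocks in (1, 64, 4096):
    print(f"{blocks} changed 4K blocks -> {rewrite_amplification(blocks):,.0f}x rewrite")
# Hence the log-structured and host-managed tricks needed to commercialise SMR.
```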

FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market and perhaps 2014 will be the year that Object storage finally takes off.

Feeling Lucky?

And here we go again, another IT systems failure at RBS; RBS appear to have been having a remarkable run of high-profile core-system failures, but I suspect that they have been rather unlucky, or at least everyone else has been lucky. Ross McEwan, the new Chief Executive of RBS, has admitted that decades of under-investment in IT systems is to blame.

Decades seems an awfully long time but may well be accurate; certainly when I started working in IT twenty-five years ago, the rot had already set in. For example, the retail bank that I started at had its core standing order system written in pounds, shillings and pence with a translation routine sitting on top of it; yet many of these systems were supposed to have been re-written as part of the millennium-bug investigations. Most of this didn’t happen; wholesale rewrites of systems decades old, with few people left who understood how they worked, were simply not a great investment. Just patch it up and move on.

RBS are not going to be the only large company sitting on a huge liability in the form of legacy applications; pretty much all of the banks are, and many others besides. Applications have been moved from one generation of mainframe to the next and they still generally work, but the people who know how they really work are long gone.

Yet this is no longer confined to mainframe operations; many of us can point at applications running on kit which is ten years or more old, on long-deprecated operating systems. Just talk to your friendly DBA about how many applications are still dependent on Oracle 8 and in some cases even earlier. Every data-centre has an application sitting in the corner doing something, but no-one knows quite what and no-one will turn it off, just in case.

Faced with ever-declining IT budgets, whether a real decline or being expected to do more with the same amount, legacy applications are getting left behind. Yes, we come across attempts to encapsulate the application in a VM and run it on the latest hardware, but that still does not fix the legacy issue.

If it ain’t broke, don’t fix it…but the thing is, most software is broken; you’ve just not yet come across the condition that breaks it. That condition may well be the untrained operator who does not know the cunning work-around that keeps an application running; work-arounds simply should not become standard operating procedure.

The question is, as we chase the new world of dynamic operations with applications churning every day, who is brave enough to argue for budget to go back and fix those things which aren’t broken? Who is going to be brave enough to argue for budget to properly decommission legacy systems, you know, those systems whose only user happens to have a C at the beginning of their job title?

Now it seems that Ross McEwan may be one who is actually being forced into taking action; is anyone else going to take action without a major failure and serious reputational damage? Or do people just feel lucky?


In With The New

As vendors race to be better, faster and to differentiate themselves in an already busy marketplace, the real needs of the storage teams, and of the storage consumer, can be left unmet. At times it is as if the various vendors are building dragsters, calling them family saloons and hoping that nobody notices. The problems that I blogged about when I started out blogging still seem mostly unsolved.

Management

Storage management at scale is still problematic; it is still extremely hard to find a toolset that will allow a busy team to be able to assess health, performance, supportability and capacity at a glance. Still too many teams are using spreadsheets and manually maintained records to manage their storage.

Tools which allow end-to-end management of an infrastructure from rust to silicon and all parts in between still don’t exist, or if they do, they come with large price-tags which invariably do not have a real ROI or a realistic implementation strategy.

As we build more silos in the storage infrastructure, getting a view of the whole estate is harder now than ever. Multi-vendor management tools are in general lacking in capability, with many vendors using subtle changes to inflict damage on competing management tools.
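To show the sort of at-a-glance view that is missing, here is a minimal sketch of the home-grown roll-up many teams end up writing instead; the collector functions are hypothetical stand-ins for whatever CLI or API each vendor actually provides.

```python
# A minimal sketch of a multi-vendor capacity roll-up, the kind of
# at-a-glance view that spreadsheets end up providing today.
# The collector functions are hypothetical stand-ins for whatever
# CLI or API each array family actually exposes.

def collect_array_a():
    # in reality: parse vendor A's CLI output or REST response
    return {"name": "array-a", "total_tb": 500, "used_tb": 410}

def collect_array_b():
    # in reality: query vendor B's management API
    return {"name": "array-b", "total_tb": 300, "used_tb": 120}

def rollup(collectors):
    total = used = 0.0
    for collect in collectors:
        stats = collect()
        total += stats["total_tb"]
        used += stats["used_tb"]
        print(f"{stats['name']}: {stats['used_tb']:.0f}/{stats['total_tb']:.0f} TB used")
    print(f"estate: {used:.0f}/{total:.0f} TB ({100 * used / total:.0f}% used)")

rollup([collect_array_a, collect_array_b])
```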

Mobility

Data mobility across tiers where those tiers are spread across multiple vendors is hard; applications are generally not currently architected to encapsulate this functionality in their non-functional specifications. And many vendors don’t want you to be able to move data between their devices and competitors for obvious reasons.

But surely the most blinkered flash start-up must realise that this needs to be addressed; it is going to be an unusual company who will put all of their data onto flash.

Of course this is not just a problem for the start-ups but it could be a major barrier for adoption and is one of the hardest hurdles to overcome.

Scaling

Although we have scale-out and scale-up solutions, scaling is a problem. Yes, we can scale to what appears to be almost limitless size these days, but the process of scaling brings problems. Adding additional capacity is relatively simple; rebalancing performance to effectively use that capacity is not so easy. If you don’t rebalance, you risk hotspots and even under-utilisation.

It requires careful planning and timing even with tools; it means understanding the underlying performance characteristics and requirements of your applications. And with some of the newer architectures that are storing metadata and de-duping, this appears to be a challenge for vendors. Ask vendors why they are limited to a certain number of nodes; there will be sheepish shuffling of feet, and alternative methods of federating a number of arrays into one logical entity will quickly come into play.

And then mobility between arrays becomes an issue to be addressed.
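To make the rebalancing point above concrete, here is a toy sketch; the node count and workload figures are assumptions purely for illustration.

```python
# Toy illustration: adding a node without rebalancing leaves the new
# node cold while the original nodes stay hot. Figures are assumptions.

EXISTING_NODES = 4
CLUSTER_IOPS = 200_000      # workload currently spread over the existing nodes

def per_node_load(rebalanced, total_nodes=EXISTING_NODES + 1):
    if rebalanced:
        # data (and therefore I/O) redistributed evenly across all nodes
        return [CLUSTER_IOPS / total_nodes] * total_nodes
    # without rebalancing, existing data and its I/O stay where they are
    return [CLUSTER_IOPS / EXISTING_NODES] * EXISTING_NODES + [0.0]

print("no rebalance:", per_node_load(False))   # hotspots plus an idle node
print("rebalanced:  ", per_node_load(True))    # even load, but data had to move
```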

Deterministic Performance

As arrays get larger, more workloads get consolidated onto a single array, and without the ability to isolate workloads or guarantee performance, the risk of bad and noisy neighbours increases. Few vendors have yet grasped the nettle of QoS, and fewer developers still actually understand what their performance characteristics and requirements are.

Data Growth

Despite all efforts to curtail this, we store ever larger amounts of data. We need an industry-wide initiative to look at how we better curate and manage data. And yet if we solve the problems above, the growth issue will simply get worse…as we reduce the friction and the management overhead, we’ll simply consume more and more.

Perhaps the vendors should be concentrating on making it harder and even more expensive to store data. It might be the only way to slow down the inexorable demand for ever more storage. Still, that’s not really in their interest.

All of the above is in their interest…makes you wonder why they are still problems.


Xpect More…

So we finally have the GA of XtremIO; not that the GA is much different from the DA in many ways, it is still going to be pretty hard to get hold of an XtremIO if you want one. And that is of course a big *if*; do you really need an All-Flash-Array? Can you use it? Is it just going to be a sledge-hammer to crack a performance nut?

Firstly, I think you have to point out that even under GA, the XtremIO array presently has some pretty horrible official caveats; no guaranteed non-disruptive upgrade, a lack of replication services and the like mean that this is nowhere near ready to replace the normal use case for an Enterprise array.

Add in today’s fairly limited scalability and it is obvious that this is not a VMAX replacement today. So will it be in future?

At the moment, that is pretty unclear; we’re due a tick in the tick-tock of VMAX releases; I’d say EMCWorld 2014 is going to be all about the next generation of the VMAX.

But what about the XtremIO in general?

Is it a good idea, or even a good implementation of a good idea? It’s odd because looking at the architecture…it feels like what would have happened if XIV had built an all-flash-array as opposed to the spinning-rust array that they have. Much of what we are coming to expect from a modern array from an architectural point of view is here: balanced I/O, no hot spots, no tuning, no tiering and minimal management. Yet without the aforementioned Enterprise features such as replication, it all feels…well, undercooked.

And there is still the question as to where you are going to use an All-Flash-Array; if I never see another presentation from a flash vendor that mentions VDI again, it’ll be too soon! Let’s just use our AFA arrays as super-fast boot-drives!

So where else are you going to use it? Probably in the same place that you are already using hybrid arrays…to accelerate poorly performing applications? But do you need to keep all your data on an AFA, and can you tier easily between multiple array types? You see, you need a capacity tier and you need data mobility…data has to flow. The hybrids have a distinct advantage here. So what is the answer for XtremIO?

If you manage the data silos at the application layer, you might well find that you begin to lose the value of an all-flash-array; you’ll be moving data from the array to another array and doing more I/Os at the host level…

I’m intrigued to see how EMC and others begin to solve this problem because we are not looking at an All-Flash data-storage environment for most people for some time.

Could we see clustering of XtremIO with other arrays? Or are we doomed to manage storage as silos forever?

I don’t see XtremIO as replacing the Enterprise storage arrays anytime soon; I think the traditional EMC sales-drone has plenty of refresh opportunity for some time.


Bearing the Standard…

At SNW Europe, I had a chance to sit down with David Dale of SNIA; we talked about various things, how SNIA becomes more relevant, how it becomes a more globally focused organisation, and the IT industry in general. And we had a chat about SNIA’s role in the development of standards, and how companies who were not interested in standards are suddenly becoming interested in them.

It appears that in the world of Software-Defined everything, vendors are beginning to realise that they need standards to make this all work. Although it is possible to write plug-ins for every vendor’s bit of kit, there is a dawning realisation amongst the more realistic that this is a bit of a fool’s errand.

So after years of dissing things like SMI-S, a certain large vendor who is trying to make a large play in the software-defined storage world is coming to the table. A company who in the past have been especially standards-unfriendly…

Of course, they are busy trying to work out how to make their software-defined-storage API the de-facto standard but it is one of the more amusing things to see SMI-S dotted all over presentations from EMC.

But I think it is a good and important point for us customers; standards to control and manage our storage are very important, and we need to do more to demand that our vendors start to play ball. Storage infrastructure is becoming increasingly complex due to the new silos that are springing up in our data-centres.

It may well be too early to predict the death of the Enterprise-Do-Everything-Array, but a vigorously supported management standard could hasten its demise; if I can manage my silos simply, automating provisioning, billing and all my admin tasks…I can start to purchase best of breed without seriously overloading my admin team.
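For what ‘playing ball’ could look like in practice, here is a minimal sketch of pulling pool capacity over SMI-S with the open-source pywbem library; the provider address, credentials and namespace are hypothetical, and real providers differ in which profiles they actually expose.

```python
# Minimal SMI-S capacity query over WBEM/CIM using the open-source
# pywbem library. Provider URL, credentials and namespace are
# hypothetical; real arrays vary in the profiles their providers expose.
import pywbem

conn = pywbem.WBEMConnection(
    "https://smis-provider.example.local:5989",   # hypothetical SMI-S provider
    ("monitor", "secret"),
    default_namespace="interop",
)

for pool in conn.EnumerateInstances("CIM_StoragePool"):
    total = int(pool["TotalManagedSpace"])        # bytes, per the CIM schema
    free = int(pool["RemainingManagedSpace"])
    if total:
        print(f"{pool['ElementName']}: {100 * (total - free) / total:.0f}% used")
```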

This does somewhat raise the question: why are EMC playing in this space? Perhaps because they have more silos than any one company at the moment and they need a way of selling them all to the same customer at the same time…

EMC, the standard’s standard bearer…oh well, stranger things have happened…


Looking for the Exit

Sitting on the exhibition floor at Powering the Cloud, you sometimes wonder how many of these companies will still be here next year; the sheer volume of flash start-ups is quite frightening. I can see Pure, Tintri, Nimble, Fusion-IO and Violin all from where I am sitting. And really there is little to choose between them. All trying to do the same thing…all trying to be the next big thing, the next NetApp. And do any of them stand any chance of doing this?

There is little uniqueness about any of them; they’ll all claim technical superiority over each other, but for most this is a marketing war driven by a desire to exit. Technically the differentiation between them is slight; scratch many of them and you will find a relatively standard dual-head array running Linux with some kind of fork of ZFS. Tuned for flash? Well, NetApp have spent years trying to tune WAFL for flash and have pretty much given up; hence the purchase of Engenio and the new Flashray products.

This is not to say that the new start-ups are bringing nothing new to the party, but we have a multitude of products, from pure flash plays to hybrid flash to flash in the server; lots of product overlap and little differentiation. Choice is good though?

So what does this mean to the end-user delegates walking the floor; well, there has to be a certain amount of wariness about the whole thing. Who do you spend your money with, who do you make a strategic investment in and who is going to be around next year?

The biggest stands here are those of Oracle, HP, Dell and NetApp, all companies who might be in the market for such an acquisition. I guess the question in the back of everyone’s mind is who might they acquire, if anyone. And will acquisition even be good for the customers of the acquired company? How many products have we seen acquired and pushed into dusty corners?

End-users are bad enough when it comes to shelfware but the big technology companies are even worse, they acquire whole companies and turn them into shelf-ware.

So we need to be looking at more standards and at ways of deploying storage so that it becomes more standardised and takes the risk out of taking risks.

And therein lies a problem for many start-ups; how do you disrupt without disrupting? How do you lock in without actually locking in? Perhaps some help on that question might come from an unusual place, but more on that another time.

But I’ve pretty much come to the conclusion that most of them don’t really care. It’s all about the exit; some might make it to IPO but even then most will want to be acquired. I’m not seeing a huge amount of evidence of a desire to build a sustainable and stable business.

It is as if the storage industry has lost its soul, or at least sold it.

Tape – the Death Watch..

Watching the Spectralogic announcements from afar and getting involved in a conversation about tape on Twitter has really brought home the ambivalent relationship I have with tape; it is a huge part of my professional life, but if it could be removed from my environment, I’d be more than happy.

Ragging on the tape vendors does at times feel like kicking a kitten, but ultimately tape sucks as a medium; its fundamental problem is that it is a sequential medium in a random world.

If you are happy to write your data away and only ever access it in truly predictable fashion, it is potentially fantastic, but unfortunately much of business is not like this. People talk about tape as being the best possible medium for cold storage and that is true, as long as you never want to thaw large quantities quickly. If you only ever want to thaw a small amount, and in a relatively predictable manner, you’ll be fine with tape. Well, in the short term anyway.

And getting IT to look at a horizon which is more than one refresh generation away is extremely tough.

Of course, replacing tape with disk is not yet economic over the short-term views that we generally take; the cost of disk is still high when compared to tape, disk’s environmental footprint is still pretty poor when compared to tape, and from a sheer density point of view, tape still has a huge way to go…even if we start to factor in upcoming technologies such as shingled disks.

So for long-term archives, disk will continue to struggle against tape…however, does that mean we are doomed to live with tape for years to come? Well, SSDs are going to take 5-7 years to hit parity with disk prices, which means that they are not going to hit parity with tape for some time.
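As a back-of-the-envelope version of that parity argument, here is a crude projection; the starting $/GB figures and annual decline rates are assumptions picked to roughly match the 5-7 year claim, not market data.

```python
# Crude price-parity projection using compound annual declines.
# Starting $/GB and decline rates are illustrative assumptions only.

ssd_per_gb, disk_per_gb, tape_per_gb = 0.35, 0.05, 0.01
ssd_decline, disk_decline, tape_decline = 0.35, 0.10, 0.10   # per year

def years_to_parity(price_a, decline_a, price_b, decline_b, limit=30):
    """Years until medium A is no dearer per GB than medium B."""
    for year in range(limit + 1):
        if price_a <= price_b:
            return year
        price_a *= 1 - decline_a
        price_b *= 1 - decline_b
    return None

print("SSD vs disk:", years_to_parity(ssd_per_gb, ssd_decline, disk_per_gb, disk_decline))
print("SSD vs tape:", years_to_parity(ssd_per_gb, ssd_decline, tape_per_gb, tape_decline))
# With these assumptions: roughly 6 years to disk parity, closer to 11 for tape.
```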

Yet I think the logical long-term replacement for tape at present is SSDs in some form or another; I fully expect the Facebooks and the Googles of this world to start looking at ways of building mass archives on SSD in an economic fashion. They have massive data requirements and, as they grow to maturity as businesses, the age of that data is increasing…their users do very little in the way of curation, so that data is going to grow forever and it probably has fairly random access patterns.

You don’t know when someone is going to start going through someone’s pictures, videos and timelines, so that cold data could warm pretty quickly. Having to recall it from tape is not going to be fun; the contention issues for starters, and unless you come up with ways of colocating all of an individual’s data on a single tape, a simple trawl could send a tape-robot into meltdown. Now perhaps you could do some big-data analytics and start recalling data based on timelines; employ a bunch of actuaries to analyse the data and recall data based on actuarial analysis.

The various news organisations already do this to a certain extent and have obits prepared for most major world figures. But this would be at another scale entirely.
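Coming back to that recall problem, here is a rough model of why colocating an individual’s data on a single tape matters; the mount, locate and streaming figures are assumptions purely for illustration, not any particular library’s specs.

```python
# Rough model of recalling one user's archive from tape when the
# objects are scattered across many cartridges. Mount, locate and
# streaming figures are illustrative assumptions, not library specs.

MOUNT_SECONDS = 90      # robot pick, load and thread
LOCATE_SECONDS = 50     # average positioning per cartridge
MB_PER_SECOND = 160     # streaming rate once positioned

def recall_minutes(total_gb, tapes_touched):
    stream = total_gb * 1024 / MB_PER_SECOND
    overhead = tapes_touched * (MOUNT_SECONDS + LOCATE_SECONDS)
    return (stream + overhead) / 60

# 20 GB of one user's photos and videos, colocated versus scattered:
print(f"on 1 tape:   {recall_minutes(20, 1):.0f} minutes")
print(f"on 50 tapes: {recall_minutes(20, 50):.0f} minutes")
# Every extra mount also ties up a drive and the robot that everyone
# else's recalls are queueing behind.
```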

So funnily enough…tape, the medium that wouldn’t die, could be kiboshed by death. And if the hyper-scale companies can come up with an economic model which replaces tape…I’ll raise a glass to good times and mourn it little…

And with that cheerful note…I’ll close..