Storagebod

Another Year In Bits…

So as another year draws to a close, it appears that everything in the storage industry is still pretty much as it was. There have been no really seismic shifts in the industry yet. Perhaps next year?

The Flash start-ups continue to make plenty of noise and fizz about their products and growth. Lots of promises about performance and consolidation opportunities; however, the focus on performance is throwing up some interesting findings. It turns out that when you start to measure performance properly, you begin to find that in many cases the assumed IOPS requirements for many workloads aren’t actually there. I know of a few companies who started down the flash route only to discover that they didn’t need anything like the IOPS they’d thought and, with a little bit of planning and understanding, they could make a little flash go an awfully long way. In fact, 15K disks would probably have done the job from a performance point of view. Performance isn’t a product and I wish some vendors would remember this.
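To make the “measure properly” point concrete, here is a minimal sketch of summarising sampled IOPS before assuming you need flash; the sample values and function are entirely hypothetical, not from any real workload:

```python
# Minimal sketch: summarise measured IOPS before sizing for flash.
# The sample values below are illustrative placeholders, not real data.

def iops_summary(samples):
    """Return average and approximate 95th-percentile IOPS."""
    ordered = sorted(samples)
    avg = sum(ordered) / len(ordered)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return avg, p95

# A workload "assumed" to need tens of thousands of IOPS often measures
# far lower once you actually sample it.
samples = [800, 1200, 950, 1100, 2500, 900, 1050, 4000, 1000, 1150]
avg, p95 = iops_summary(samples)
print(f"average: {avg:.0f} IOPS, p95: {p95} IOPS")
```

At roughly 180 IOPS per 15K spindle, even the p95 figure here is within reach of a modest shelf of disk; the point is to measure before buying.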

Object Storage still flounders with an understanding and use-case problem; the people who really need Object Storage today really do need it, but they tend to be very large players and there are not a lot of them. All of the Object Storage companies can point at some really big installs but you will rarely come across those installs yourself; there is a market, and it is growing, but not at a stellar rate at the moment.

Object Storage Gateways are becoming more common and there is certainly a growing requirement; I think as they become commonplace, and perhaps simply a feature of a NAS device, this will drive the use of Object Storage until it hits a critical mass and there is more application support for Object Storage natively. HSM and ILM may finally happen in a big way; probably not to tape but to an Object Store (although Spectra Logic are doing great work in bringing Object and Tape together).

The big arrays from the major vendors continue to attract premium costs; the addiction to high margins in this space continues. Usability and manageability have improved significantly but the premium you pay cannot really continue. I get the feeling that some vendors are simply using these arrays to fund their transition to a different model; let’s hope that this transition doesn’t take so long that they get brushed aside.

The transition to a software dominated model is causing vendors some real internal and cultural issues; they are so addicted to the current costing models that they risk alienating their customers. If software+commodity hardware turns out to be more expensive than buying a premium hardware array; customers may purchase neither and find a different way of doing things.

The cost of storage in the Cloud, both for consumers and corporates continues to fall; it continues to trend ever closer to zero as the Cloud price war continues. You have to wonder when Amazon will give it up as Google and Microsoft fight over the space. Yet for the really large users of storage, trending to zero is still too expensive for us to put stuff in the Cloud; I’m not even sure free is cheap enough yet.

The virtualisation space continues to be dominated by the reality of VMware and the promise of OpenStack. If we look at industry noise, OpenStack is going to be the big player; any event that mentions OpenStack gets booked up and sells out, but the reality is that the great majority are still looking to VMware for their virtualisation solution. OpenStack is not a direct replacement for VMware and architectural work will be needed in your data-centre and with your installed applications, but we do see VMware architectures that could be easily and more effectively replaced with OpenStack. Quite simply, though, OpenStack is still pretty hard work and hard-pushed infrastructure teams aren’t currently well positioned to take advantage of it.

And almost all virtualisation initiatives are driven by and focussed on the wrong people; the server-side is easy…the storage and especially the changes to the network are much harder and require significantly more change. It’s time for the Storage and Network folks to gang up and get their teams fully involved in virtualisation initiatives. If you are running a virtualisation initiative and you haven’t got your storage and network teams engaged, you are missing a trick.

There’s a lot bubbling in the Storage Industry but it all still feels the same currently. Every year I expect something to throw everything up in the air and it is ripe for major disruption but the dominant players still are dominant. Will the disruption be technology or perhaps it’ll be a mega-merger?

Can I take this chance to wish all my readers a Merry Christmas and a Fantastic New Year…

Stop Selling Storage

In the shower today, I thought back over a number of meetings with storage vendors I’ve had over the past couple of weeks. Almost without exception, they mentioned AWS and the other large cloud vendors as a major threat and compared their costs to them.

We’ve all seen the calculations and generally we know that for many large Enterprises the costs often favour the traditional vendors; buying at scale and at the traditionally large discounts means that we get a decent deal. Storage turns out to be free at the terabyte level and only becomes an appreciable cost once we start getting to the petascale; this is pretty much true for both the Cloud providers and the traditional vendors.
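The back-of-envelope arithmetic behind that claim can be sketched like this; every price, discount and opex figure below is an invented placeholder, not a real AWS or vendor quote:

```python
# Illustrative cost comparison only: all figures are invented placeholders.

def cloud_cost(tb, price_per_gb_month, months):
    """Pay-as-you-go cloud storage cost over the period."""
    return tb * 1024 * price_per_gb_month * months

def onprem_cost(tb, list_price_per_tb, discount, months, opex_per_tb_month):
    """Discounted capex plus running costs over the same period."""
    capex = tb * list_price_per_tb * (1 - discount)
    return capex + tb * opex_per_tb_month * months

tb = 1000  # a petabyte
print(f"cloud:   £{cloud_cost(tb, 0.02, 36):,.0f} over 3 years")
print(f"on-prem: £{onprem_cost(tb, 400, 0.6, 36, 2):,.0f} over 3 years")
```

With a large traditional discount applied, the on-prem figure undercuts the cloud one at petascale; at a few terabytes the difference is noise, which is the point above.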

But when I look round the room in a normal sales presentation or briefing, it is not uncommon for the vendor to have four or five people present, often outnumbering the customers in the room: account salesman, product salesman, account technical specialist, product technical specialist and probably a couple of hangers-on. A huge cost to the vendor and hence to me as a customer.

And then if we decide that we want to purchase the storage; we then drift into the extended procurement mode. Our procurement and finance teams will talk to the vendor teams; there may well also be legal teams and other meetings to deal with. The cost to both the vendor and the customer is enormous.

However if we go to a cloud vendor; we generally deal with a website. The cost is there; it’s displayed to all and the only discounts we get are based around volume. Now, I know that there are deals to be done with the larger cloud vendors; otherwise I wouldn’t be fielding calls from their recruitment people looking for people to work in their technical consultancy/sales teams but their sales efforts and costs are a lot less.

It seems to me that if the traditional storage vendors really want to compete with the cloud vendors, they need to change their sales model completely. This means stripping out huge amounts of the cost of sale; this means that they also need to consider how they equalise the playing field for customers both large and small; published volume discounts and reduced costs for all, especially the smaller customers. The Enterprise customers will not initially see a huge difference in their cost base but smaller customers will have greater choice and long-term it will benefit all; perhaps even some vendors.

Basically, stop selling storage: build better products, do sensible marketing and reduce the friction of acquisition.

I kind of hope that the move to storage delivered as software designed to run on commodity hardware could drive this but at the moment, I see many traditional vendors really struggling to come up with a sales and marketing strategy to support this transition.

The one who gets this right could, or should, do very well. The ones who continue with a sales model based on how they sold hardware in the past…could fail very hard.

Yes, there are customers who still like the idea of buying hardware and software in an integrated package; arguably, that’s what the cloud providers do, with serious limitations; but they will look at disaggregated models and do the cost modelling. Your prices will not sustain the serious premium you believe you deserve…so look at ways of taking out cost.

 

Scrapheap Challenge

On the way to ‘Powering the Cloud’ with Greg Ferro and Chris Evans, we got to discussing Greg’s book White Box Networking and whether there could be a whole series of books discussing White Box storage, virtualisation, servers etc and how to build a complete White Box environment.

This led me to thinking about how you would build an entire environment and how cheap it would be if you simply used eBay as your supplier/reseller. If you start looking round eBay, it is crazy how far you can make your money go: dual-processor HP G7s with 24GB for less than £1,000; a 40-port 10GbE switch for £1,500; 10GbE cards down to £60. Throw in a Supermicro 36-drive storage chassis and build a hefty storage device utilising that; you can build a substantial environment for less than £10,000 without even trying.
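As a rough tally of those prices: the server, switch and NIC figures are the ones quoted above, while the quantities and the chassis price are my own guesses purely for illustration:

```python
# Rough junk-box shopping list. Server/switch/NIC prices are from the
# text above; quantities and the chassis price are hypothetical guesses.
parts = {
    "HP G7 dual-processor server, 24GB": (4, 1000),
    "40-port 10GbE switch": (1, 1500),
    "10GbE card": (8, 60),
    "Supermicro 36-drive chassis": (1, 1200),  # guessed price
}
total = sum(qty * price for qty, price in parts.values())
print(f"junk-box environment: £{total:,}")
```

Four hosts, shared 10GbE and a hefty storage node for a shade over £7,000: comfortably under the £10,000 figure.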

I wonder how far you could go in building the necessary infrastructure for a start-up with very few compromises, and whether you could completely avoid going into the cloud at all? The thing that’s still going to hurt is the external network connectivity to the rest of the world.

But instead of ‘White Box’…perhaps it’s time for junk-box infrastructure. I don’t think it’d be any worse than quite a few existing corporate infrastructures and would probably be more up-to-date than many.

What could you build?

 

Heady Potential Eventually Means Catastrophe?

Amongst the storage cognoscenti today on Twitter, there’s been quite a discussion about EMC and HP possibly merging. Most people seem to be either negative or at best disbelieving that something like this would bring value or even happen.

But from a technology point of view, the whole thing might make a lot of sense. The storage folks like to point at overlap in the portfolios but I am not convinced that this really matters and the overlap might not be as great as people think. Or at least, the overlap might well finally kill off the weaker products; I’ll let the reader decide those products that deserve to die.

EMC are on a massive push to commoditise and move their technology onto a standard platform; software variants of all their storage platforms exist and just need infrastructure to run on. I’ve mentioned before that HP’s SL4500 range is an ideal platform for many of EMC’s software defined products.

But storage aside, the EMC Federation has a lot of value for HP. It is early days for Pivotal but I suspect Meg can see a lot of potential in it; she’ll see a bit of eBay in it and she’ll get the value of some of the stuff they are trying to do. They are still very much a start-up, albeit a well-funded one.

VMware, I would expect to continue as it is; it might throw up some questions about EVO-RAIL and HP have pointedly not produced an EVO-RAIL certified stack; despite being invited to. But to fold VMware into the main HP would be rash and would upset too many other vendors. But hey, with IBM pulling out of x86 servers and honestly, who cares about Oracle’s x86 servers; HP might have a decent run at dominating the server marketplace before Lenovo takes a massive bite out of it.

And customers? I’m not sure that they’d be too uncomfortable with a HP/EMC merger; mergers are almost certainly on the agenda and there are less attractive ones on the table.

HP need software to help them build their software-defined data-centre; OpenStack will only take them so far today. EMC need a commodity partner to help them build a hardware platform that would be trusted. An HP/EMC stack would be solid and traditional but with the potential to grow into the 3rd-platform-supporting infrastructure as customers move that way.

And they both need a way of fending off Amazon and Google; this might be the way for them to do it.

I know I’ve been talking about this more like a HP take-over of EMC and it’d be closer to a true merger; this makes it harder…true mergers always are but culturally, the companies are less dissimilar than most realise. They both need more rapid cultural change…perhaps a merger might force that on them.

Will it happen, I don’t know…would it be a disaster if it did? I don’t think so. It’d also be good for the industry; lots of hacked-off smart people would leave the new behemoth and build new companies or join some of the pretenders.

A shake up is needed…this might do it. Will the market like it? I’m not especially bothered…I don’t hold shares in either company. I just think it might make more sense than people realise. 

 

Singing the lowest note…

The problem with many discussions in IT, is that they rapidly descend into one that looks and feels like a religious debate; whereas reality is much more complex and the good IT specialist will develop their own syncretic religion and pinch bits that work from everywhere.

One of the realities for many of us working in Enterprise IT is that our houses have many rooms and must house many differing belief systems; the one true way is not a reality. And any organisation more than fifteen years old has probably built up a fair amount of incompatible dogma.

For all the pronouncements of the clouderatti; we are simply not in the position to move whole-scale to the Cloud in any of its many forms. We have applications that are simply not designed for scale-out; they are certainly not infrastructure aware and none of them are built for failure. But we also have a developer community who might be wanting to push ahead; use the language du jour and want to utilise cloud-like infrastructure, dev-ops and software defined everything.

So what do we in the infrastructure teams do? Well, we are going to have to implement multiple infrastructure patterns to cater for the demands of all our communities. But we really don’t want to bespoke everything and we certainly don’t want to lock ourselves into anything.

Many of the hyper-converged plays lock us into one technology or another; hence we are starting to look at building our own rack-converged blocks to give us lowest common denominator infrastructure that can be managed with standard tools.

Vendors with unique features are sent packing; we want to know why you are better at the 90%. Features will not sell; if I can’t source a feature or function from more than one vendor, I probably will not use it. Vendors who do not play nice with other vendors, and vendors who insist on doing it all and make this their lock-in, are not where it’s at.

On top of this infrastructure; we will start to layer on the environment to support the applications. For some applications; this will be cloudy and fluffy. We will allow a lot more developer interaction with the infrastructure; it will feel a lot closer to dev-ops.

For others, where it looks like a more traditional approach is required (think of those environments that need a robustly designed SAN or traditional fail-over clustering), we’ll be a lot more prescriptive about what can be done.

But all of these will sit on a common, reusable infrastructure that will allow us to meet the demands of the business.  This infrastructure will be able to be quickly deployed but also quickly removed and moved away from; it will not require us to train our infrastructure teams in depth to take advantage of some unique feature.

Remember to partner well with us but also with your competitors; yes, it sometimes makes for an amusing conversation about how rubbish the other guy is but we’ll also have exactly that same conversation about you.

Don’t just pay lip-service to openness; be prepared to show us evidence.

ESXi Musings…

VMware need to open-source ESXi and move on; by open-sourcing ESXi, they could start to concentrate on becoming the dominant player in the future delivery of the 3rd platform.

If they continue with the current development model with ESXi; their interactions with the OpenStack community and others will always be treated with slight suspicion. And their defensive moves with regards to VIO to try to keep the faithful happy will not stop larger players abandoning them to more open technologies.

A full open-sourcing of ESXi could bring a new burst of innovation to the product; it would allow the integration of new storage modules, for example. Some will suggest that they just need to provide a pluggable architecture, but that will inevitably leave people with the feeling that they allow preferential access to core partners such as EMC.

The reality is that we are beginning to see more and more companies running multiple virtualisation technologies. If we throw containerisation into the mix, within the next five years we will see large companies running three or four virtualisation technologies to support a mix of use-cases, and the real headache of how we manage these will begin.

I know it is slightly insane to even be talking about us having more virtualisation platforms than operating systems, but most large companies are running at least two virtualisation platforms and many are probably already at three (they just don’t realise it). This ignores those running local desktop virtualisation, by the way.

The battle for dominance is shifting up the stack as the lower layers become ‘good enough’…vendors will need to find new differentiators…

 

Death of the Salesman

Reflecting recently on the changes that I have seen in the Enterprise IT market, more specifically the Enterprise storage market, I have come to the conclusion that over the past five years or so the changes have not been so much technological but in everything around the technology and its packaging.

There appears to be significantly less selling going on and a lot more marketing. This is not necessarily a good thing; there is more reliance than ever on PowerPoint and fancy marketing routines. More gimmick than ever, more focus on the big launch and less on understanding what the customer needs.

More webinars and broadcasting of information and a lot less listening than ever from the vendors.

Yet this is hardly surprising; as the margins on Enterprise hardware slowly erode away and the commoditisation continues; it is a lot harder to justify the existence of the shiny suit.

And many sales teams are struggling with this shift; the sales managers setting targets have not yet adjusted to the new rhythms and how quickly the market can shift.

But there is a requirement for sales who understand their customers and understand the market. Sales who understand that no one solution fits all; that there is a difference between the traditional IT and the new web-scale stuff.

However, if the large vendors continue to be very target-focussed, panicking over the next quarter’s figures and setting themselves and their staff unrealistic targets, not realising that the customer now has a lot of choice in how they buy technology and from whom, then they are going to fail.

Customers themselves are struggling with some of the new paradigms and the demands that their businesses are making of them. The answers are not to be found in another webinar or another mega-launch, but perhaps in conversation.

We used to say that ears and mouth should be used in proportion; it has never been more true, nor more ignored.


Pay it back..

Linux and *BSD have completely changed the storage market; they are the core of so many storage products, allowing start-ups and established vendors to bring new products to the market more rapidly than previously possible.

Almost every vendor I talk to these days has their systems built on top of these, and then there are the vendors who are using Samba implementations for their NAS functionality. Sometimes they move on from Samba, but almost all version 1 NAS boxen are built on top of Samba.

There is a massive debt owed to the community and sometimes it is not quite as acknowledged as it should be.

So next time you have a vendor in; make sure you ask the question…how many developers do you have submitting code into the core open-source products you are using? What is your policy for improving the key stacks that you use?

Now, I probably wouldn’t reject a product/company that did not have a good answer but I’m going to give a more favourable listen to those who do.

An Opening is needed

As infrastructure companies like EMC try to move to a more software-oriented world, they are having to try different things to grab our business. A world where tin is not the differentiator, and a world where they are competing head-on with open-source, means that they are going to have to take a more open-source-like approach. Of course, they will argue that they have been moving this way with some of their products for some time, but these have tended to be outside of their key infrastructure market.

The only way I can see products like ViPR in all its forms gaining any kind of penetration will be for EMC to actually open-source it; there is quite a need for a ViPR-like product, especially in the arena of storage management, but it is far too easy for their competitors to ignore it and subtly block it. So for it to gain any kind of traction, it’ll need open-sourcing.

The same goes for ScaleIO which is competing against a number of open-source products.

But I really get the feeling that EMC are not quite ready for such a radical step; so perhaps the first step will be a commercial free-to-use licence; none of this mealy-mouthed, free-to-use-for-non-production-workloads stuff, but a proper you-can-use-this-and-put-it-into-production-at-your-own-risk type licence. If it breaks and you need support, these are the places you can get support; but if it really breaks and you *really* need to pick up the phone and talk to someone, then you need to pay.

It might be that if you want the pretty interface you need to pay, but I’m not sure about that either.

Of course, I’m not just bashing EMC; I still want IBM to take this approach with GPFS. Stop messing about: the open-source products are beginning to be good enough for much of the market, certainly outside of some core performance requirements. Ceph, for example, is really beginning to pick up some momentum, especially now that Red Hat have bought Inktank.

More and more, we are living with infrastructure and infrastructure products that are good enough. The pressure on costs continues for many of us and hence good enough will do; we are expected to deliver against tighter budgets and tighter timescales. If you can make it easier for me, for example by allowing my teams to start implementing without a huge upfront price negotiation, the long-term sale will have less friction. If you allow customers, to all intents and purposes, to use your software like open-source (because, to be frank, most companies who utilise open-source are not changing the code and couldn’t care less whether the source is available), you will find that this plays well in the long term.

The infrastructure market is changing; it becomes more of a software play every week. And software is a very different play to infrastructure hardware…



Silly Season

Yes, I’ve not been writing much recently; I am trying to work out whether I am suffering from announcement overload or just general boredom with the storage industry in general.

Hardly a day passes without my receiving an announcement from some vendor or another; every one is revolutionary and a massive step forward for the industry, or so they keep telling me. Innovation appears to be something that happens every day; we seem to be living in a golden age of invention.

Yet many conversations with peer end-users generally end up with us feeling rather confused about what innovation is actually happening.

We see an increasingly large number of vendors presenting to us an architecture that looks pretty much identical to the one we know and ‘love’ from NetApp, at a price point that is not that dissimilar to what we are paying NetApp and kin.

All-Flash Arrays are pitched, with monotonous regularity, at the cost of disk, based on dedupe and compression ratios that are often best-case and seem to assume that you are running many thousands of VDI users.
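The arithmetic these pitches rely on is simple; the prices and ratios here are hypothetical, purely to show how sensitive the claim is to the assumed reduction ratio:

```python
# Sketch of the 'effective capacity' arithmetic behind all-flash pitches.
# Prices are invented placeholders, not vendor quotes.

def effective_price_per_gb(raw_price_per_gb, reduction_ratio):
    """Vendor 'effective' price, valid only if the claimed ratio holds."""
    return raw_price_per_gb / reduction_ratio

flash_raw, disk_raw = 2.00, 0.10  # £/GB, both made up
for ratio in (5, 10, 20):
    eff = effective_price_per_gb(flash_raw, ratio)
    print(f"{ratio}:1 reduction -> flash £{eff:.2f}/GB vs disk £{disk_raw:.2f}/GB")
```

Only at the best-case 20:1 ratio (the thousands-of-VDI-users scenario) does flash reach parity with disk in this toy example; at more typical ratios the gap remains.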

The focus seems to be on VMware and virtualisation as a workload as opposed to the applications and the data. Please note that VMware is not a workload in the same way that Windows is not a workload.

Don’t get me wrong; there’s some good incremental stuff happening; I’ve seen a general improvement in code quality from some vendors after a really poor couple of years. There still needs to be work done in that area though.

But innovation; there’s not so much that we’re seeing from the traditional and new boys on the block.