
Death of the Salesman

Reflecting recently on the changes that I have seen in the Enterprise IT market, and more specifically the Enterprise storage market, I have come to the conclusion that over the past five years or so the changes have not been so much technological as everything around the technology and its packaging.

There appears to be significantly less selling going on and a lot more marketing. This is not necessarily a good thing; there is more reliance than ever on PowerPoint and fancy marketing routines. More gimmickry than ever, more focus on the big launch and less on understanding what the customer needs.

More webinars and broadcasting of information and a lot less listening than ever from the vendors.

Yet this is hardly surprising; as the margins on Enterprise hardware slowly erode away and the commoditisation continues; it is a lot harder to justify the existence of the shiny suit.

And many sales teams are struggling with this shift; the sales managers setting targets have not yet adjusted to the new rhythms and how quickly the market can shift.

But there is a requirement for sales people who understand their customers and understand the market; sales people who understand that no one solution fits all and that there is a difference between traditional IT and the new web-scale stuff.

However, if the large vendors continue to be very target focussed; panicking over the next quarter's figures and setting themselves and their staff unrealistic targets; not realising that the customer now has a lot of choice in how they buy technology and from whom, then they are going to fail.

Customers themselves are struggling with some of the new paradigms and the demands that their businesses are making of them. The answers are not to be found in another webinar or another mega-launch but perhaps in the conversation.

We used to say that ears and mouth should be used in proportion; this has never been more true, nor more ignored.




Pay it back..

Linux and *BSD have completely changed the storage market; they are the core of so many storage products, allowing start-ups and established vendors to bring new products to the market more rapidly than previously possible.

Almost every vendor I talk to these days has their systems built on top of these, and then there is the number of vendors using Samba implementations for their NAS functionality. Sometimes they move on from Samba, but almost all version 1 NAS boxen are built on top of it.

There is a massive debt owed to the community and sometimes it is not quite as acknowledged as it should be.

So next time you have a vendor in; make sure you ask the question…how many developers do you have submitting code into the core open-source products you are using? What is your policy for improving the key stacks that you use?

Now, I probably wouldn’t reject a product/company that did not have a good answer but I’m going to give a more favourable listen to those who do.

An Opening is needed

As infrastructure companies like EMC try to move to a more software-oriented world, they are having to do different things to grab our business. A world where tin is not the differentiator, and a world where they are competing head-on with open-source, means that they are going to have to take a more open-source type of approach. Of course, they will argue that they have been moving this way with some of their products for some time, but these have tended to be outside of their key infrastructure market.

The only way I can see products like ViPR in all its forms gaining any kind of penetration will be for EMC to actually open-source it; there is quite a need for a ViPR-like product, especially in the arena of storage management, but it is far too easy for their competitors to ignore it and subtly block it. So for it to gain any kind of traction, it'll need open-sourcing.

The same goes for ScaleIO, which is competing against a number of open-source products.

But I really get the feeling that EMC are not quite ready for such a radical step; so perhaps the first step will be a commercial free-to-use license; none of this mealy-mouthed free-to-use-for-non-production-workloads nonsense but a proper you-can-use-this-and-put-it-into-production-at-your-own-risk type of license. If it breaks and you need support, these are the places you can get support; but if it really breaks and you *really* need to pick up the phone and talk to someone, then you need to pay.

It might be that if you want the pretty interface you need to pay, but I'm not sure about that either.

Of course, I'm not just bashing EMC; I still want IBM to take this approach with GPFS. Stop messing about: the open-source products are beginning to be good enough for much of what we do, certainly outside of some core performance requirements. Ceph, for example, is really beginning to pick up some momentum, especially now that RedHat have bought Inktank.

More and more, we are living with infrastructure and infrastructure products that are good enough. The pressure on costs continues for many of us and hence good enough will do; we are expected to deliver against tighter budgets and tighter timescales. If you can make it easier for me, for example by allowing my teams to start implementing without a huge upfront price negotiation, the long-term sale will have less friction. If you allow customers to all intents and purposes to use your software like open-source (because, to be frank, most companies who utilise open-source are not changing the code and couldn't care less whether the source is available), you will find that this plays well in the long term.

The infrastructure market is changing; it becomes more of a software play every week. And software is a very different play to infrastructure hardware…


Silly Season

Yes, I've not been writing much recently; I am trying to work out whether I am suffering from announcement overload or just general boredom with the storage industry.

Hardly a day passes without receiving an announcement from some vendor or another; every one is revolutionary and a massive step forward for the industry, or so they keep telling me. Innovation appears to be something that is happening every day; we seem to be living in a golden age of invention.

Yet many conversations with peer end-users generally end up with us feeling rather confused about what innovation is actually happening.

We see an increasingly large number of vendors presenting to us an architecture that pretty much looks identical to the one we know and 'love' from NetApp, at a price point that is not that dissimilar to what we are paying NetApp and kin.

All-Flash Arrays are pitched with monotonous regularity at the cost of disk, based on dedupe and compression ratios that are often best-case and seem to assume that you are running many thousands of VDI users.

The focus seems to be on VMware and virtualisation as a workload as opposed to the applications and the data. Please note that VMware is not a workload in the same way that Windows is not a workload.

Don’t get me wrong; there’s some good incremental stuff happening; I’ve seen a general improvement in code quality from some vendors after a really poor couple of years. There still needs to be work done in that area though.

But innovation? There's not so much of that to be seen, from either the traditional vendors or the new boys on the block.


So EMC have finally productised Nile and given it the wonderful name of ‘Elastic Cloud Storage’; there is much to like about it and much I have been asking for…but before I talk about what I like about it, I’ll point out one thing…

Not Stretchy

It's not very Elastic; well, not when compared to the Public Cloud offerings, unless there is a very complicated finance model behind it, and even then it might not be that Elastic. One of the things that people really like about Public Cloud Storage is that they pay for what they use, and if their consumption goes down…then their costs go down.

Now EMC can probably come up with a monthly charge based on how much you are using; they certainly can do capacity on demand. And they might be able to do something with leasing to allow downscaling as well at a financial level but what they can’t easily do is take storage away on demand. So that 5 petabytes will be on premise and using space; it will also need maintaining even if it spins down to save power.

Currently EMC are stating a 9%-28% lower TCO than Public Cloud…it needs to be. And that is today; Google and Amazon are fighting a price war, so can EMC play in that space and react quickly enough? They claim that they are cheaper after the last round of price cutting, but what about after the next?
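To make the elasticity point concrete, here is a minimal sketch with entirely invented numbers (the per-terabyte rates and the 5 PB figure are assumptions for illustration, not EMC or cloud pricing): the public cloud bill tracks consumption, while the fixed on-premise estate costs the same whether it is full or half empty.

# All figures invented purely to illustrate elasticity, not real pricing.
CLOUD_RATE_PER_TB_MONTH = 25.0   # assumed $/TB/month: pay only for what you use
ONPREM_CAPACITY_TB = 5000        # the 5 petabytes sitting on the floor
ONPREM_COST_PER_TB_MONTH = 20.0  # assumed amortised purchase plus maintenance

def monthly_cost(consumed_tb):
    """Return (public_cloud_cost, on_premise_cost) for one month."""
    cloud = consumed_tb * CLOUD_RATE_PER_TB_MONTH
    onprem = ONPREM_CAPACITY_TB * ONPREM_COST_PER_TB_MONTH  # fixed, used or not
    return cloud, onprem

for used_tb in (5000, 4000, 3000):  # consumption shrinking over time
    cloud, onprem = monthly_cost(used_tb)
    print(f"{used_tb} TB used: cloud ${cloud:,.0f}/month vs on-premise ${onprem:,.0f}/month")

With these made-up rates the on-premise figure comes out about 20% lower at full capacity, roughly in line with the claimed TCO advantage; let consumption fall and only the cloud bill follows it down.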

So it’s not as Elastic as Public Cloud and this might matter…unless they are relying on the fact that storage demands never seem to go away.


I can't remember when I started writing about commodity storage and the convergence between storage and servers, be it roll-your-own or vendors starting to do something very similar; ZFS really sparked a movement that looked at storage and asked why we need big vendors like EMC, NetApp, HDS and HP, for example.

Yet there was always the thorny issue of support and for many of us, it was a bridge too far. In fact, it actually started to look more expensive than buying a supported product…and we quite liked sleeping at night.

But there were some interesting chassis out there that really started to catch our eyes and even our traditional server vendors were shipping interesting boxes. It was awfully tempting.

And so I kept nagging the traditional vendors…

Many didn’t want to play or were caught up in their traditional business. Some didn’t realise that this was something that they could do and some still don’t.


The one company who had the most to lose from a movement to commodity storage was EMC; really, this could be very bad news. There's enough 'hate' in the market for a commodity movement to get some real traction. So they bought a company that could allow commoditisation of storage at scale; I think at least some of us thought that would be the end of that, or that it would disappear down a rabbit hole to resurface as an overpriced product.

And the initial indications were that it wasn’t going to disappear but it was going to be stupidly expensive.

Also, getting EMC to talk sensibly about Scale-IO was a real struggle, but the indications were that it was a good but expensive product.


So what EMC have announced at EMC World is kind of surprising, in that it looks like they may well be willing to rip the guts out of their own market. We can argue about the pricing and the TCO model but it looks a good start; street prices and list prices have a very loose relationship. The four-year TCO they are quoting needs to drop by a bit to be really interesting.

But the packaging and the option to deploy on your own hardware (although this is going to be from a carefully controlled catalogue, I guess) is a real change for EMC. You will also notice that EMC have got into the server game; a shot across the bows of the converged players?

And don’t just expect this to be a content dump; Scale-IO can do serious I/O if you deploy SSDs.


My biggest problem with Scale-IO is that it breaks EMC; breaks them in a good way, but it's a completely different sales model. For large storage consumers, an Enterprise License Agreement with all you can eat, deployed onto your chosen commodity platform, is going to be very attractive. Now the ELA might be a big sum but as a per-terabyte cost it might not be so big; and the more you use, the cheaper it gets.
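As a back-of-the-envelope illustration of that last point (the licence fee and capacities below are invented, not EMC pricing), the effective per-terabyte cost of a flat ELA falls as you deploy more:

# Invented figures: a flat all-you-can-eat ELA divided by deployed capacity.
ELA_COST = 500_000  # assumed one-off enterprise licence fee
for deployed_tb in (500, 2_000, 10_000):
    print(f"{deployed_tb:>6} TB deployed -> ${ELA_COST / deployed_tb:,.0f} per TB")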

And Old EMC might struggle a bit with that. They’ll probably try to sell you a VMAX to sit behind your ViPR nodes.


RedHat have an opportunity now with Ceph; especially amongst those who hate EMC for being EMC. IBM could do something with GPFS. HP have a variety of products.

There are certainly smaller competitors as well.

And then there’s VMware with VSAN; which I still don’t understand!

There's an opportunity here for a number of people…they need to grasp it and compete. This isn't going to go away.



Not So Potty

Virtual Openness

I don't always agree with Trevor Pott but this piece on ServerSAN, VSAN and storage acceleration is spot on. The question of VSAN running in the kernel and the advantages that brings to performance (and indeed, I've also heard claims about reliability, support and the like over competing products) is one which has left me scratching my head and feeling very irritated.

If running VSAN in the kernel is so much better, and it almost feels that it should be, it rather raises another question: perhaps I would be better off running all my workloads on bare metal, or as close to it as I can get.

Or perhaps VMware need to allow a lot more access to the kernel, or a pluggable architecture that allows various infrastructure services to run at that level. There are a number of vendors that would welcome that move and it might actually hasten the adoption of VMware yet further, or at least take out some of the more entrenched resistance around it.

I do hope more competition in the virtualisation space will bring more openness to the VMware hypervisor stack.

And it does seem that we are moving towards data-centres which host competing virtualisation technologies; so it would be good if, at a certain level, these became more infrastructure agnostic. From a purely selfish point of view, it would be good to have the same technology presenting storage space to VMware, Hyper-V, KVM and anything else.

I would like to easily share data between systems that run on different technologies and hypervisors; if I use VSAN, I can’t do this without putting in some other technology on top.

Perhaps VMware don't really want me to have more than one hypervisor in my data-centre, in the same way that EMC would prefer that all my storage was from them…but EMC have begun to learn to live with reality and perhaps they need to encourage VMware to live in the real world as well. I certainly have use-cases that utilise bare metal for some specific tasks, but that data does find its way into virtualised environments.

Speedy Storage

There are many products that promise to speed up your centralised storage and they work very well, especially in simple use-cases. Trevor calls this Centralised Storage Acceleration (CSA); some are software products, some come with hardware devices and some are a mixture of both.

They can have a significant impact on the performance of your workloads; databases especially can benefit from them (most databases benefit more from decent DBAs and developers, however); they are a quick fix for many performance issues and remove that bottleneck which is spinning rust.

But as soon as you start to add complexity; clustering, availability and moving beyond a basic write-cache functionality…they stop being a quick-fix and become yet another system to go wrong and manage.

Fairly soon; that CSA becomes something a lot closer to a ServerSAN and you are sticking that in front of your expensive SAN infrastructure.

The one place where a CSA becomes interesting is as Cloud Storage Acceleration: a small amount of flash storage on the server, with the bulk of the data sitting in a cloud of some sort.

So what is going on?

It is unusual to have so many competing deployment models for infrastructure, yet in storage the number keeps growing.

  • Centralised Storage – the traditional NAS and SAN devices
  • Direct Attached Storage – Local disk with the application layer doing all the replication and other data management services
  • Distributed Storage – Server-SAN; think VSAN and competitors

And we can layer an acceleration infrastructure on top of those; this acceleration infrastructure could be local to the server or perhaps an appliance sitting in the ‘network’.

All of these have use-cases, and the answer may well be that to run a 'large' infrastructure you need a mixture of them all.
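As a rough way of picturing the combinations (the labels and pairings below are mine, not anyone's product taxonomy), the deployment model and the acceleration layer can be treated as two independent choices:

from itertools import product

# The three deployment models above, with the optional acceleration layer
# treated as an orthogonal choice. Purely illustrative labels.
deployment_models = ["centralised (SAN/NAS)", "direct-attached", "distributed (Server-SAN)"]
acceleration = ["none", "server-local flash cache", "network appliance cache"]

for model, accel in product(deployment_models, acceleration):
    print(f"{model:25} + acceleration: {accel}")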

Storage was supposed to get simple; we were supposed to focus on the data and on providing data services. I think people forgot that just calling something a service doesn't make it simple or make the problems go away.


Licensed To Bill

'*sigh* Another change to a licensing model, and you can bet it's not going to work out any cheaper for me' was the first thought that flickered through my mind during a presentation about GPFS 4.1 at the GPFS UG meeting in London (if you are a GPFS user in the UK, you should attend next time…probably the best UG meeting I've been at for a long time).

This started up another train of thought; in this new world of Software Defined Storage, how should the software be licensed? And how should the value be reflected?

Should we be moving to a capacity based model?

Should I get charged per terabyte of storage being ‘managed’?

Or perhaps per server that has this software defined storage presented to it?

Perhaps per socket? Per core?

But this might not work well if I’m running at hyperscale?

And if I fully embrace a programmatic provisioning model that dynamically changes the storage configuration…does any model make sense apart from some kind of flat-fee, all-you-can-eat arrangement?
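To see why no single model feels obviously right, here is a small sketch comparing what the same hypothetical estate would cost under per-terabyte, per-socket and flat-fee licensing; every rate below is invented purely for illustration:

# Invented rates: the point is how the models diverge as the estate grows,
# not what any vendor actually charges.
managed_tb = 2_000         # capacity under management
servers = 40               # servers consuming the storage
sockets_per_server = 2

costs = {
    "per-terabyte": managed_tb * 150,                      # assumed $150/TB
    "per-socket":   servers * sockets_per_server * 3_000,  # assumed $3,000/socket
    "flat fee":     250_000,                               # assumed all-you-can-eat
}
for model, cost in costs.items():
    print(f"{model:13}: ${cost:,.0f}")
# Double the capacity and the per-terabyte bill doubles; the flat fee does not.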

Chatting to a few people, it seems that no-one really has any idea what the licensing model should look like. Funnily enough, it is this sort of thing which could really derail ServerSAN and Software Defined Storage; it's not going to be a technical challenge, but if the licensing model gets too complex, hard to manage and generally too costly, it is going to fail.

Of course, inevitably someone is going to pop up and mention Open-Source…and I will simply point out that RedHat make quite a lot of money out of Open-Source; you pay for support based on some kind of model. Cost of acquisition is just a part of IT infrastructure spend.

So what is a reasonable price? Anyone?



Too Cheap To Manage…

Five years or so ago, when I started this blog, I spent much time venting my spleen at EMC and especially at the abomination that was Control-Center; a product so poor that a peer in the industry once described it as being too expensive even if it were free.

And yet the search for the perfect storage management product continues; there have been contenders along the way, yet they continue to fall short, and as the administration tools have got better and easier to use, the actual management tools have still fallen some way short of the mark.

But something else has happened, and it was only a chance conversation today that highlighted this to me: the tenuous business case that many of these products have been purchased on has collapsed. Many storage management products are bought on the business case that they will ultimately save you money by allowing you to right-size your storage estate; that they will maximise the usage of the estate you have on the floor.

Unfortunately, and it surprises me to say this, the price of enterprise storage has collapsed…seriously, although it is still obviously too expensive (I have to say that), the price of storage management products has not declined at the same rate. This means it is doubtful that I can actually reclaim enough capacity to make it worth my while putting in a tool to do so; the economics don't actually stack up.
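A quick worked example of why the reclaim-capacity business case no longer stacks up; all the figures are invented, but the shape of the problem is the point:

# Invented figures: once storage is cheap, the capacity you could reclaim
# is worth less than a capacity-priced management tool.
estate_tb = 1_000
storage_cost_per_tb = 300    # assumed post-collapse cost of enterprise storage
reclaimable_fraction = 0.10  # optimistic: 10% of the estate is orphaned or over-allocated
tool_cost_per_tb = 50        # assumed capacity-based licence for the tool

saving = estate_tb * reclaimable_fraction * storage_cost_per_tb
tool_cost = estate_tb * tool_cost_per_tb
print(f"Reclaimed capacity is worth ${saving:,.0f}; the tool costs ${tool_cost:,.0f}")
# $30,000 saved versus $50,000 spent: the economics don't stack up.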

So there has to be a whole new business case around risk mitigation, change-planning and improved agility…or the licensing model, which tends to be capacity-based in some form or another, has to be reviewed.

Do we still need good storage management tools? Yes, but they need to be focused on automation and service delivery, not simply on improving the utilisation of the estate.

Thin-provisioning, deduplication, compression and the like are already driving down these costs; they do this in ways that are easier than reclaiming orphaned storage or even under-utilised SAN ports. And as long as I am clever, I can pick up a lot of orphaned storage on refresh.

If 'Server-SAN' is a real thing, these tools are going to converge into the general management tools, giving me a whole new topic to vent about…because most of those aren't especially great either.

p.s. If you want to embarrass EMC and make them sheepish…just mention Control-Center…you'd think it had killed someone…

Buying High-End Storage

I was in the process of writing a blog about buying High-End storage…then I remembered this sketch. So, in a purely lazy blog entry, I think this sums up the experience of many storage buyers…


As we head into a month or so of breathless announcements and bonkers valuations across the industry, I think it is worth a watch…

But I do have some special audiophile SAN cables which will enhance the quality of your data if you want some! It may even embiggen it!

New Service Offering…

I like sales people and marketeers; they are often nice, genuine and good people…mostly!


I’ve got a new service to offer; if you think that you’ve invented a new product sector, a new market, a new concept…email me and we’ll arrange a call.

If you can convince me that you’ve invented a completely new concept; the call is free and I’ll even write a blog on it but I won’t pimp your product. If I call ‘Bullsh*t’, you buy me something off my Amazon wishlist and I won’t laugh at you in public!

And I'll give you a starter…if your new concept is Anything Defined Anything…it's 'Bullsh*t…total and utter crap…'!