
Big Ideas

Stretching…

So EMC have finally productised Nile and given it the wonderful name of ‘Elastic Cloud Storage’; there is much to like about it, and much that I have been asking for…but before I get to that, I’ll point out one thing…

Not Stretchy

It’s not very Elastic; well, not when compared to the Public Cloud offerings, unless there is a very complicated finance model behind it, and even then it might not be that Elastic. One of the things that people really like about Public Cloud Storage is that they pay for what they use; if their consumption goes down…then their costs go down.

Now EMC can probably come up with a monthly charge based on how much you are using; they can certainly do capacity on demand. And they might be able to do something with leasing to allow downscaling at a financial level as well, but what they can’t easily do is take storage away on demand. So that 5 petabytes will be on-premises and using space; it will also need maintaining, even if it spins down to save power.

Currently EMC are claiming a 9%-28% lower TCO than Public Cloud…it needs to be. And that is only today; Google and Amazon are fighting a price war. Can EMC play in that space and react quickly enough? They claim that they are cheaper after the last round of price cuts, but what about after the next?

So it’s not as Elastic as Public Cloud and this might matter…unless they are relying on the fact that storage demands never seem to go away.

Commodity

I can’t remember when I first started writing about commodity storage and the convergence of storage and servers, whether roll-your-own or vendors starting to do something very similar; ZFS really sparked a movement of people who looked at storage and thought, why do we need big vendors like EMC, NetApp, HDS and HP?

Yet there was always the thorny issue of support, and for many of us it was a bridge too far. In fact, it actually started to look more expensive than buying a supported product…and we quite liked sleeping at night.

But there were some interesting chassis out there that really started to catch our eyes and even our traditional server vendors were shipping interesting boxes. It was awfully tempting.

And so I kept nagging the traditional vendors…

Many didn’t want to play or were caught up in their traditional business. Some didn’t realise that this was something that they could do and some still don’t.

Acquisition

The one company with the most to lose from a movement to commodity storage was EMC; really, this could have been very bad news. There’s enough ‘hate’ in the market for a commodity movement to get some real traction. So they bought a company that could allow commoditisation of storage at scale; I think at least some of us thought that would be the end of that, or that it would disappear down a rabbit hole to resurface as an overpriced product.

And the initial indications were that it wasn’t going to disappear but it was going to be stupidly expensive.

Getting EMC to talk sensibly about ScaleIO was also a real struggle, but the indications were that it was a good but expensive product.

Today

So what EMC have announced at EMC World is kind of surprising, in that it looks like they may well be willing to rip the guts out of their own market. We can argue about the pricing and the TCO model, but it looks a good start; street prices and list prices have a very loose relationship. The four-year TCO they are quoting needs to drop by a bit to be really interesting.

But the packaging, and the option to deploy on your own hardware (although I guess this will be from a carefully controlled catalogue), is a real change for EMC. You will also notice that EMC have got into the server game; a shot across the bows of the converged players?

And don’t just expect this to be a content dump; ScaleIO can do serious I/O if you deploy SSDs.

Tomorrow

My biggest problem with ScaleIO is that it breaks EMC; breaks them in a good way, but it’s a completely different sales model. For large storage consumers, an Enterprise License Agreement with all-you-can-eat and deployment onto your chosen commodity platform is going to be very attractive. Now the ELA might be a big sum, but as a per-terabyte cost it might not be so big; and the more you use, the cheaper it gets.
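A minimal sketch of that arithmetic, with entirely hypothetical figures (these are not EMC’s numbers), shows why a flat-fee ELA rewards the heavy user:

```python
# Hypothetical: a flat annual ELA versus a capacity-based licence.
# Both figures below are invented purely to show the shape of the curve.
ELA_ANNUAL = 1_000_000   # flat all-you-can-eat fee, $/year (assumed)
PER_TB_LICENCE = 500     # capacity-based licence, $/TB/year (assumed)

for tb in (500, 2_000, 5_000, 10_000):
    print(f"{tb:>6} TB: ELA works out at ${ELA_ANNUAL / tb:,.0f}/TB "
          f"vs ${PER_TB_LICENCE}/TB capacity-licensed")
```

At small capacities the flat fee looks eye-watering; at petabyte scale it undercuts the per-terabyte model comfortably.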

And Old EMC might struggle a bit with that. They’ll probably try to sell you a VMAX to sit behind your ViPR nodes.

Competitors?

Red Hat have an opportunity now with Ceph; especially amongst those who hate EMC for being EMC. IBM could do something with GPFS. HP have a variety of products.

There are certainly smaller competitors as well.

And then there’s VMware with VSAN; which I still don’t understand!

There’s an opportunity here for a number of people…they need to grasp it and compete. This isn’t going to go away.


Not So Potty

Virtual Openness

I don’t always agree with Trevor Pott, but this piece on ServerSAN, VSAN and storage acceleration is spot on. The question of VSAN running in the kernel and the performance advantages that brings (and indeed, I’ve also heard comments about reliability, support and the like over competing products) is one which has left me scratching my head and feeling very irritated.

If running VSAN in the kernel is so much better, and it almost feels that it should be, it raises another question: perhaps I would be better off running all my workloads on bare metal, or as close to it as I can get.

Or perhaps VMware need to allow a lot more access to the kernel, or a pluggable architecture that allows various infrastructure services to run at that level. There are a number of vendors that would welcome such a move, and it might actually hasten the adoption of VMware yet further, or at least take out some of the more entrenched resistance around it.

I do hope more competition in the virtualisation space will bring more openness to the VMware hypervisor stack.

And it does seem that we are moving towards data-centres which host competing virtualisation technologies, so it would be good if, at a certain level, these became more infrastructure-agnostic. From a purely selfish point of view, it would be good to have the same technology presenting storage to VMware, Hyper-V, KVM and anything else.

I would like to share data easily between systems that run on different technologies and hypervisors; if I use VSAN, I can’t do this without layering some other technology on top.

Perhaps VMware don’t really want me to have more than one hypervisor in my data-centre, the same way that EMC would prefer that all my storage came from them…but EMC have begun to learn to live with reality, and perhaps they need to encourage VMware to live in the real world as well. I certainly have use-cases that utilise bare metal for some specific tasks, but that data does find its way into virtualised environments.

Speedy Storage

There are many products that promise to speed up your centralised storage, and they work very well, especially in simple use-cases. Trevor calls this Centralised Storage Acceleration (CSA); some are software products, some come with hardware devices and some are a mixture of both.

They can have a significant impact on the performance of your workloads; databases especially can benefit (though most databases benefit more from decent DBAs and developers); they are a quick fix for many performance issues and remove that bottleneck which is spinning rust.

But as soon as you start to add complexity: clustering, availability, anything beyond basic write-cache functionality…they stop being a quick fix and become yet another system to manage and to go wrong.

Fairly soon, that CSA becomes something a lot closer to a ServerSAN, and you are sticking it in front of your expensive SAN infrastructure.

The one place that a CSA becomes really interesting is as Cloud Storage Acceleration: a small amount of flash storage on the server, with the bulk of the data sitting in a cloud of some sort.

So what is going on?

It is unusual to have so many competing deployment models for infrastructure, but in storage the number keeps growing:

  • Centralised Storage – the traditional NAS and SAN devices
  • Direct Attached Storage – Local disk with the application layer doing all the replication and other data management services
  • Distributed Storage – Server-SAN; think VSAN and competitors

And we can layer an acceleration infrastructure on top of those; this acceleration infrastructure could be local to the server or perhaps an appliance sitting in the ‘network’.

All of these have use-cases, and the answer may well be that to run a ‘large’ infrastructure you need a mixture of them all.

Storage was supposed to get simple; we were supposed to focus on the data and on providing data services. I think people forgot that just calling something a service doesn’t make it simple or make the problems go away.


Licensed To Bill

‘*sigh* Another change to a licensing model, and you can bet it’s not going to work out any cheaper for me’ was the first thought that flickered through my mind during a presentation about GPFS 4.1 at the GPFS UG meeting in London (if you are a GPFS user in the UK, you should attend next time…it was probably the best UG meeting I’ve been to for a long time).

This started up another train of thought; in this new world of Software Defined Storage, how should the software be licensed? And how should the value be reflected?

Should we be moving to a capacity based model?

Should I get charged per terabyte of storage being ‘managed’?

Or perhaps per server that has this software defined storage presented to it?

Perhaps per socket? Per core?

But this might not work well if I’m running at hyperscale?

And if I fully embrace a programmatic provisioning model that dynamically changes the storage configuration…does any model make sense apart from some kind of flat-fee, all-you-can-eat model?
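To make the question concrete, here is a toy model with invented prices and capacities; it simply bills a year of fluctuating, programmatically provisioned capacity under the three obvious schemes:

```python
# A toy comparison of licensing models for software defined storage.
# Every price and capacity figure here is invented for illustration.
import random

random.seed(1)
monthly_tb = [random.randint(800, 1600) for _ in range(12)]  # managed TB, fluctuating

PER_TB = 40          # $/TB/month, capacity-based (assumed)
PER_SOCKET = 2_000   # $/socket/month (assumed)
SOCKETS = 64         # sockets presented with the storage (assumed)
FLAT = 60_000        # $/month, all-you-can-eat (assumed)

print(f"Capacity-based: ${sum(tb * PER_TB for tb in monthly_tb):,}/year, rebilled every month")
print(f"Per-socket:     ${12 * SOCKETS * PER_SOCKET:,}/year, stable until you scale out")
print(f"Flat fee:       ${12 * FLAT:,}/year, predictable whatever the provisioning does")
```

None of the answers is obviously right; the point is that only the flat fee survives dynamic provisioning without an argument over the bill.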

Chatting to a few people, it seems that no-one really has any idea what the licensing model should look like. Funnily enough, it is this sort of thing which could really derail ServerSAN and Software Defined Storage; the challenge is not going to be technical, but if the licensing model gets too complex, too hard to manage and generally too costly, it is going to fail.

Of course, inevitably someone is going to pop up and mention Open Source…and I will simply point out that Red Hat make quite a lot of money out of Open Source; you pay for support based on some kind of model. Cost of acquisition is just a part of IT infrastructure spend.

So what is a reasonable price? Anyone?


Too Cheap To Manage…

Five years or so ago, when I started this blog, I spent much time venting my spleen at EMC and especially the abomination that was Control-Center; a product so poor that a peer in the industry once described it as too expensive even if it were free.

And yet the search for the perfect storage management product continues; there have been contenders along the way, but they keep falling short. As the administration tools have got better and easier to use, the actual management tools have still fallen some way short of the mark.

But something else has happened, and it was only a chance conversation today that highlighted it to me: the tenuous business case on which many of these products were purchased has collapsed. Many storage management products are bought on the premise that they will ultimately save you money by allowing you to right-size your storage estate; that they will maximise the usage of the estate you have on the floor.

Unfortunately, and it surprises me to say this, the price of enterprise storage has collapsed…seriously, although it is still obviously too expensive (I have to say that). The price of storage management products has not declined at the same rate. This means it is doubtful that I can save enough capacity to make it worth putting in a tool to do so; the economics don’t actually stack up.
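A back-of-envelope calculation makes the point; every figure below is invented, but the shape is what matters:

```python
# Hypothetical economics of a capacity-licensed storage management tool.
ESTATE_TB = 2_000
STORAGE_PRICE_PER_TB = 300   # today's collapsed enterprise $/TB (assumed)
RECLAIMABLE_FRACTION = 0.10  # what right-sizing might claw back (assumed)
TOOL_PRICE_PER_TB = 25       # capacity-based tool licence, $/TB (assumed)

saving = ESTATE_TB * RECLAIMABLE_FRACTION * STORAGE_PRICE_PER_TB
tool_cost = ESTATE_TB * TOOL_PRICE_PER_TB

print(f"Capacity saving from right-sizing: ${saving:,.0f}")
print(f"Tool licence for the estate:       ${tool_cost:,.0f}")
```

Before you count deployment and the effort of chasing orphaned LUNs, the saving barely clears the licence; run the same sums with the per-terabyte storage prices of a few years ago and the tool paid for itself several times over.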

So there has to be a whole new business case around risk mitigation, change-planning and improved agility…or the licensing model, which tends to be capacity-based in some form or another, has to be reviewed.

Do we still need good storage management tools? Yes, but they need to be focused on automation and service delivery, not simply on improving the utilisation of the estate.

Thin-provisioning, deduplication, compression and the like are already driving down these costs; they do so in ways that are easier than reclaiming orphaned storage or even under-utilised SAN ports. And as long as I am clever, I can pick up a lot of orphaned storage on refresh.

If ‘Server-SAN’ is a real thing, these tools are going to converge into the general management tools, giving me a whole new topic to vent about…because most of those aren’t especially great either.

P.S. If you want to embarrass EMC and make them sheepish…just mention Control-Center…you’d think it had killed someone…

Buying High-End Storage

I was in the process of writing a blog about buying high-end storage…then I remembered this sketch. So, in a purely lazy blog entry, I think this sums up the experience of many storage buyers…


I think, as we head into a month or so of breathless announcements, bonkers valuations and industry nonsense…it is worth a watch…

But I do have some special audiophile SAN cables which will enhance the quality of your data, if you want some! They may even embiggen it!

New Service Offering…

I like sales people and marketeers; they are often nice, genuine and good people….mostly!

But..

I’ve got a new service to offer; if you think that you’ve invented a new product sector, a new market, a new concept…email me and we’ll arrange a call.

If you can convince me that you’ve invented a completely new concept; the call is free and I’ll even write a blog on it but I won’t pimp your product. If I call ‘Bullsh*t’, you buy me something off my Amazon wishlist and I won’t laugh at you in public!

And I’ll give you a starter…if your new concept is Anything Defined Anything….it’s ‘Bullsh*t…total and utter crap…’!


Fundamental…

I’m a big fan of Etherealmind and his blog; I like that it is a good mix of technical and professional advice. He’s also a good guy to spend an hour or so chatting to; he’s always generous with his time to peers, and even when he knows a lot more than you about a subject, you never really feel patronised or lectured to.

I particularly liked this blog; Greg and I are really on the same page with regards to work/life balance, but it is this paragraph that stands out:


Why am I focussed on work life ? After 25 or so years in technology, I have developed some level of mastery.  Working on different products is usually just a few days work to come up to speed on the CLI or GUI. Takes a few more weeks to understand some of the subtle tricks. Say a month to be competent, maybe two months. The harder part is refreshing my knowledge on different technologies – for example, SSL, MPLS, Proxy, HTTP, IPsec, SSL VPN. I often need to refresh my knowledge since it fades from my brain or there is some advancement. IPsec is a good example where DMVPN is a solid advancement but takes a few weeks to update the knowledge to an operational level.

Now although he is talking about networking technologies, what he says is true of storage technologies and actually pretty much all of IT these days. You should be able to become productive on most technologies in a matter of days, providing you have the fundamentals; spend your early days becoming knowledgeable about the underlying principles and avoid vendor-specific traps.

Try not to run a translation layer in your mind; too many storage admins are translating back to the first array they worked on. They try to turn hypers and metas into aggregates; they worry about fan-out ratios without understanding why they matter in some architectures and not necessarily in others.

Understanding the underlying principles means that you can evaluate new products that much quicker; you are not working out why product ‘A’ is better than product ‘B’, which often results in biases. You understand why product ‘A’ is a good fit for your requirement, or indeed why neither product is.

Instead of iSCSI bad, FC good…you will develop an idea as to the appropriate use-case for either.

You will become more useful…and you will find that you are less resistant to change; it becomes less stressful and easier to manage. Don’t become an EMC dude, become a Storagebod…Don’t become a Linux SysAdmin, become a SysAdmin.

Am I advocating generalism? To a certain extent, yes; but you can become expert within a domain without being a savant for a specific technology.

And a final bit of advice; follow Etherealmind….he talks sense for a network guy!


A Press Release From The Future…

Future-View, CA – March 2018

Evian Storage (‘Storage so Pure, it’s like a torrent of glacial water’) today announced the end of the All-Flash Array with the launch of its StupendoStore 20000, built around the HyperboleHype storage device.

Our research shows that All-Flash Arrays are slowing businesses down in their move to meet the new business paradigms brought about by computing at the quantum scale. Their architectures simply can’t keep up, storage is yet again the bottleneck, and scaling economically also seems to be beyond them. Customers have found themselves locked into an architecture which promised no more fork-lift upgrades but has delivered technology lock-in and all the agility of a dancing hippo. Forget about fork-lifts, we are talking cranes!

Fortunately, our team’s experience in delivering hybrid arrays at companies such as EMC, HDS, NetApp and others has enabled us to take advantage of the newest technology on the block while also leveraging the economics of flash and, indeed, the huge capacity and scale of magnetic disk. We know that your data should live in the right place, and although we admit that our arrays might not be as fast as the Purest arrays…I’m sure we’re not the only ones who prefer their rocket fuel with a little mixer…

Yes, this is a dig at the All-Flash players…but it doesn’t matter how great your technology is today; there will always be something newer and faster around the corner. And as a customer, it is worth remembering that the future is always closer than you think: it could be a single depreciation cycle away, a single tech-refresh away. The challenge for all vendors is delivering a sustainable model and product-set.

And no-one product will meet all your needs….no matter what the vendor tells you!

VSANity?

So VSAN is finally here in a released form; on paper, it sure looks impressive but it’s not for me.

I spend an awful lot of time looking at scale-out storage systems, looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think their product falls some way short of the mark; but then I don’t think I’m really the target market; it’s not really ready or appropriate for Media and Entertainment, or for anyone interested in hyperscale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some of the competing products are not tied; if I wanted to run a Gluster cluster which encompassed not just VMware but also Xen, bare metal and anything else, I could. And there might be some excellent reasons why I would want to: I’d transcode on bare-metal machines, for example, but might present out on virtualised application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where the heavy lifting is better done on bare metal.

I think VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regard to the inter-server communication.
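The spindle arithmetic is crude but telling; the sketch below assumes roughly 150 random IOPS per 7.2k nearline drive, a common rule of thumb, and deliberately ignores the controller, network and replication overheads where the saturation just mentioned would actually bite:

```python
# Raw spindle arithmetic for the chassis mentioned above.
IOPS_PER_SPINDLE = 150  # rough rule of thumb for a 7.2k nearline disk

for name, disks in (("35-disk VSAN server", 35),
                    ("HP ProLiant SL4540", 60),
                    ("72-bay SuperMicro", 72)):
    print(f"{name}: {disks} spindles, ~{disks * IOPS_PER_SPINDLE:,} raw IOPS")
```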

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but a software-converged stack, whereas VCE and Nutanix converge onto hardware as well. And yes, VMware is currently at the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; although I’m not sure what the impact of unbalanced clusters would be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.


Disrupt?

So you’ve founded a new storage business; you’ve got a great idea and you want to disrupt the market? Good for you…but you want to maintain the same-old margins as the old crew?

So you build it around commodity hardware; the same commodity hardware that I can buy off the shelf, basically the same disks that I can pick up from PC World or order from my preferred enterprise tin-shifter.

You tell me that you are lean and mean? You don’t have huge sales overheads, no huge marketing budget and no legacy code to maintain?

You tell me that it’s all about the software but you still want to clothe it in hardware.

And then you tell me it’s cheaper than the stuff that I buy from my current vendor? How much cheaper? 20%, 30%, 40%, 50%??

Then I do the calculations; your cost base and your BoM are much lower, and you are actually making more money per terabyte than the big old company you used to work for?
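Here is that calculation as a back-of-envelope; every figure is invented, but it shows how a headline 30% saving can coexist with a fatter margin per terabyte:

```python
# Invented figures: a 'disruptive' vendor undercutting an incumbent by 30%
# while still making more gross margin per terabyte.
incumbent_price, incumbent_cost = 1_000, 500  # $/TB: price and heavy cost base (assumed)
disruptor_price = incumbent_price * 0.70      # the headline 30% saving
disruptor_cost = 150                          # commodity BoM, lean organisation (assumed)

print(f"Incumbent margin: ${incumbent_price - incumbent_cost:,}/TB")
print(f"Disruptor margin: ${disruptor_price - disruptor_cost:,.0f}/TB")
```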

But hey, I’m still saving money, so that’s okay….

Of course, then I dig a bit more…what about support? Your support organisation is tiny; I do my due diligence, and can you really hit your response times?

But you’ve got a really great feature? How great? I’ve not seen a single vendor come up with a feature so awesome and so unique that no-one manages to copy it…and few that aren’t already in a lab somewhere.

In a race to the bottom; you are still too greedy. You still believe that customers are stupid and will accept being ripped off.

If you were truly disruptive…you’d work out a way of articulating the value of your software without clothing it in hardware. You’d work with me on getting it onto commodity hardware, and no, I’m not talking about some no-name white-box; you’d work with me on getting it onto my preferred vendor’s kit, be it HP, Dell, Lenovo, Oracle or whoever else…

For hardware issues, I could utilise the economies of scale and the leverage I have with my tin-shifter; you wouldn’t have to set up a maintenance function or sub-contract it to some third party who will inevitably let us both down.

And as for software support; well, you could concentrate on that…

You’d help me be truly disruptive…and ultimately we’d both be successful…