
March, 2014:

A two question RFP….

Is it easy?

Is it cheap?

Pretty much, these are the only two questions that interest me when talking to a vendor these days; after years of worrying about technology, it has all boiled down to those two questions. Of course, if I were to produce an RFx document with simply those two questions, I’d probably be out of a job fairly swiftly.

But those two questions are not really that simple to answer for many vendors.

Is it easy? How simply can I get your product to meet my requirements and business need? My business need may be to provide massive capacity; it could be to support many thousands of VMs; it could be to provide sub-millisecond latency. This all needs to be simple.

It doesn’t matter if you provide me with the richest feature-set, the simplest GUI or backwards compatibility with ENIAC if it is going to take a cast of thousands to do it. Yet still vendors struggle to answer the questions posed, and you often get a response to a question you didn’t ask but the vendor wanted to answer.

Is it cheap? This question is even more complicated, as vendors like to try to hide all kinds of things, but I can tell you: if you are not upfront with your costs and you start presenting me with surprises, this is not good.

Of course, features like deduplication and compression make capacity costs even more opaque, but we are beginning to head towards the idea that capacity is free and performance costs. And as capacity becomes cheaper, the real value of primary storage dedupe and compression for your non-active set, which sits on SATA and the like, begins to diminish.
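To see why, here’s a back-of-the-envelope sketch (a minimal Python example; the prices and reduction ratios are invented purely for illustration):

```python
# Back-of-the-envelope: effective cost per usable TB once data
# reduction is applied. Prices and ratios are made up for illustration.

def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per logical TB after dedupe/compression."""
    return raw_cost_per_tb / reduction_ratio

# On expensive flash, a 4:1 reduction saves serious money...
flash = effective_cost_per_tb(2000.0, 4.0)   # 500.0 per usable TB

# ...but on cheap SATA the same 4:1 ratio saves far less in
# absolute terms, which is why its value diminishes there.
sata = effective_cost_per_tb(100.0, 4.0)     # 25.0 per usable TB

print(f"flash: {flash:.0f}/TB, SATA: {sata:.0f}/TB")
```

The same ratio that knocks 1,500 off a TB of flash only knocks 75 off a TB of SATA; the cheaper the raw capacity, the less a reduction feature is actually worth.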

So just make it easy, just make it cheap and make my costs predictable.

Be honest, be up-front and answer the damn questions….

A Press Release From The Future…

Future-View, CA – March 2018

Evian Storage – storage so Pure it’s like a torrent of glacial water – today announced the end of the All-Flash Array with the launch of its StupendoStore 20000, built around the HyperboleHype storage device.

Our research shows that All-Flash Arrays are slowing businesses down in their move to meet the new business paradigms brought about by computing at the quantum scale. Their architectures simply can’t keep up; storage is yet again the bottleneck, and scaling economically also seems to be beyond them. Customers have found themselves locked into an architecture which promised no more fork-lift upgrades but has delivered technology lock-in and all the agility of a dancing hippo. Forget about fork-lifts, we are talking cranes!

Fortunately, our team’s experience in delivering hybrid arrays at such companies as EMC, HDS, NetApp and other vendors has enabled us to take advantage of the newest technology on the block while also leveraging the economies of flash and the huge capacity and scale of magnetic disk; we know that your data should live in the right place, and although we admit that our arrays might not be as fast as the Purest arrays…I’m sure we’re not the only ones who prefer their rocket fuel with a little mixer…

Yes, this is a dig at the All-Flash players…but it doesn’t matter how great your technology is today; there will always be something newer and faster round the corner. And as a customer, it is worth remembering that the future is always closer than you think. It could be only a single depreciation cycle away, a single tech-refresh away. The challenge for all vendors is delivering a sustainable model and product-set.

And no one product will meet all your needs…no matter what the vendor tells you!

Chop Their Fingers Off!

This is a very good piece on FAST-VP on VMAX: well written, with some good advice in it, but it sums up almost everything that is wrong with VMAX today. VMAX has too many nerd-knobs, and so people think they should fiddle and try to out-do the machine.

And hence they probably make a right old mess; FAST-VP ends up not working quite as well as it should, so people tend to fiddle even more, and the next thing you know, you are trying to manage your VMAX the way you would have managed an old-school Symm.

I think it is time that EMC and their users seriously consider breaking away from the past; the old-school nerd-knob fettling needs to stop. I know that is why storage admins get paid the big bucks, but I do wonder if we might be better off paying them to stop.

I long for the day when we see a VMAX managed without worrying about what the internal engines are doing; when we set various performance parameters and let the array sort it out; when we pay for performance and capacity without worrying how the system gets to it.
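To make the wish concrete, here is a purely hypothetical sketch of what declaring intent, rather than twiddling knobs, could look like; none of these names or parameters resemble the actual VMAX or FAST-VP interface:

```python
# Hypothetical policy-driven management: the admin declares a target,
# the array decides placement. This does NOT reflect any real VMAX API.

from dataclasses import dataclass

@dataclass
class PerformancePolicy:
    name: str
    max_latency_ms: float  # the target the array must meet
    capacity_tb: int       # what we are paying for

def place(policy: PerformancePolicy) -> str:
    """The array, not the admin, picks a tier mix for the policy."""
    if policy.max_latency_ms < 2.0:
        return "flash-heavy mix"
    if policy.max_latency_ms < 10.0:
        return "FC/flash blend"
    return "mostly SATA"

gold = PerformancePolicy("gold", max_latency_ms=1.0, capacity_tb=50)
print(f"{gold.name}: {place(gold)}")  # gold: flash-heavy mix
```

The admin’s job stops at stating the latency and capacity they are paying for; which engines, tiers and RAID schemes deliver it is the array’s problem.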

There is at least one amusing piece of advice in the article though; although it is well argued and there appears to be good reason for it, you should still keep the FC tier on RAID-1 mirrored disks…Nothing really changes in the world of Symm!

VSANity?

So VSAN is finally here in released form; on paper it sure looks impressive, but it’s not for me.

I spend an awful lot of time looking at scale-out storage systems, looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think their product falls some way short of the mark; but then, I don’t think I’m really the target market, and it’s not really ready or appropriate for Media and Entertainment or anyone interested in HyperScale.

But even so, I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it has tied VSAN to VMware in a way that some competing products are not tied; if I want to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I can. And there might be some excellent reasons why I would want to do so; I might transcode on bare-metal machines, for example, but present out on VM-ed application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting would be better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regards to the inter-server communication.
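The spindle arithmetic is simple enough (a quick sketch; the 150 IOPS per spindle is an assumed rule-of-thumb figure, and real numbers vary with drive type and workload):

```python
# Rough aggregate raw IOPS by chassis size. 150 IOPS/spindle is an
# assumed rule-of-thumb figure, not a measured one.

IOPS_PER_SPINDLE = 150

for chassis, disks in [("VSAN per-node limit", 35),
                       ("HP ProLiant SL4540", 60),
                       ("SuperMicro 72-bay", 72)]:
    print(f"{chassis}: {disks} disks -> ~{disks * IOPS_PER_SPINDLE:,} raw IOPS")

# VSAN per-node limit: 35 disks -> ~5,250 raw IOPS
# HP ProLiant SL4540: 60 disks -> ~9,000 raw IOPS
# SuperMicro 72-bay: 72 disks -> ~10,800 raw IOPS
```

Roughly double the spindles per node means roughly double the raw IOPS and capacity before the network becomes the constraint.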

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but it is a software-converged stack, whereas VCE and Nutanix converge onto hardware as well. And yes, VMware is currently the core of all of this.

I actually prefer the VMware-only approach in many ways, as I think I could scale compute and storage separately within some boundaries; but I’m not sure what the impact of unbalanced clusters will be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it validates the approaches being taken by a number of other companies…I just wish it were more flexible and open.