
February 2015

Dead Flesh…

If in doubt, rebrand… have IBM completely run out of ideas with their storage offerings? The Spectrum rebrand feels like the last throw of the dice, and it demonstrates the problems that they currently have.

In fact, it is not all of their storage offerings that have been rebranded but, it appears, only the software ones. DS8K, for example, is missing from the line-up, but perhaps Spectrum Zombie, the Storage Array that Will Not Die, was a step too far. We do, however, have Spectrum Virtualise; this is currently a hardware offering in the form of SVC, but is it going to morph into a software offering? There is little reason why it shouldn’t.

But there are also products missing from the Spectrum family: the hardware XIV, the Vxxxx series and the ESS GPFS appliance. Are we going to see IBM exit these products over time? It feels like the clock is ticking on them.

The DS8K is probably a safe product because of its mainframe support, but users of the rest of the portfolio are going to be nervous.

Why have IBM managed to mess up their storage portfolio so completely? There are still massive gaps in it after all this time: object storage, scalable NAS and indeed an ordinary workaday NAS of their own.

The products they have are generally good; I’ve been a fan of SVC for a long time, a GPFS advocate and a TSM bigot. Products that really work!

I feel sorry for the folks who develop them; they have been let down again and again by their product marketing; the problem isn’t the products!

Brownie points for anyone who gets the reference in the title…

A fool and his money…

And the madness continues…

DON’T BUY THIS CRAP! Give your money to charity or burn it as a piece of performance art! But don’t buy this crap!

This makes me so annoyed! Do something useful with your money… please!

http://www.geek.com/chips/this-ethernet-cable-costs-10000-1615326/

Interesting Question?

Are AFAs (all-flash arrays) ready for legacy Enterprise workloads? The latest little spat between EMC and HP bloggers asked that question.

But it’s not really an interesting question; a more interesting one is why I would put traditional Enterprise workloads on an AFA in the first place. Why even bother?

More and more, I’m coming across people who are asking precisely that question and struggling to come up with an answer. Yes, an AFA makes a workload run faster, but what does that gain me? The answer varies enormously with the application type and where the application’s bottlenecks are; if you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array suits you better and gives you pretty much all the benefits of flash at a fraction of the cost.

Ask what the impact would be of running batch jobs, often the foundation of many legacy workloads, in half the time, and the response you get is frequently a ‘So what?’ As long as the workload runs within its window, that is all anyone cares about.
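A trivial sketch makes the point; the window and runtimes below are made-up assumptions for illustration, not figures from any real workload:

    # Illustrative only: halving a batch runtime buys nothing if the
    # job already fits its window. All figures here are assumptions.
    window_hours = 8.0    # assumed overnight batch window
    runtime_hours = 5.0   # assumed runtime on a hybrid/disk array

    for label, runtime in [("hybrid array", runtime_hours),
                           ("all-flash", runtime_hours / 2)]:
        verdict = "fits the window" if runtime <= window_hours else "overruns"
        print(f"{label}: {runtime:.1f}h of {window_hours:.1f}h window -> {verdict}")
    # Both fit; the business outcome is identical either way.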

If all your latency is the human in front of the screen, the differences in response times from your storage become pretty insignificant.
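Some rough arithmetic illustrates this; again, the numbers below are assumptions chosen for the sketch, not measurements from any real system:

    # Back-of-the-envelope: storage latency as a share of a human-facing
    # transaction. All numbers are illustrative assumptions.
    human_time_s = 2.0         # assumed think-time and screen rendering
    ios_per_transaction = 20   # assumed storage I/Os behind one click

    for name, io_latency_s in [("hybrid (~5 ms/IO)", 0.005),
                               ("all-flash (~0.5 ms/IO)", 0.0005)]:
        total = human_time_s + ios_per_transaction * io_latency_s
        print(f"{name}: {total:.3f}s end-to-end")
    # hybrid: 2.100s; all-flash: 2.010s.
    # ~0.09s difference on a ~2s interaction: the user will never notice.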

AFAs only really make sense as you move away from a legacy application infrastructure: where you are architecting applications differently, moving many of the traditional capabilities of an Enterprise infrastructure up the stack and into the application. Who cares whether the AFA can handle replication, consistency groups and other such capabilities when the application takes care of them?

Yes, I can point to some traditional applications that would benefit from a massive amount of flash, but these tend to be snowflake applications, and they could almost certainly do with a re-write.

I’d like to see more vendors being honest about the use-cases for their arrays; more working in a consultative manner and fewer trying to shift as much tin as possible. But that is much harder to achieve and requires a level of understanding beyond most tin-shifters.