
Punish the Pundits!!

A day rarely goes by without someone declaring one technology or another is dead…and rarely a year goes by without someone declaring this is the year of whatever product they happen to be pimping or in favour of.

And yet, you can oft find dead technologies in rude health and rarely does it actually turn out to be the year of the product it is supposed to be the year of.

It turns out that pundits (including me) really have little idea what technology is going to die or fly. And that is what makes the industry fun and interesting.

The storage industry is especially good for this; SAN is dead, DAS lives, NAS is obsolete, Object is the future, iSCSI will never work, Scale Up, Scale Out…

We know nothing…

The only thing we do know is that data volumes will keep getting bigger and we need somewhere to put it all.

In the past three months, I’ve seen technologies in what everyone would have you believe are innovation-free zones that have made me stop and think ‘But I thought that was going to die….’

Yes we have far too many start-ups in some parts of the industry; far too many people have arrived at where they thought the puck was going to be.

A few people seem to be skating round where the puck was.

And there are a few people who have picked up the puck, stuck it in their pocket and hidden it.

So my prediction for the next eighteen months…

‘Bumpy….with the chance of sinkholes!’

My advice…

‘Don’t listen to the pundits, we know nothing….we just love the shinies!!’

Scale-Out of Two?

One of the things I have been lamenting about for some time with many vendors is the lack of a truly credible alternative to EMC’s Isilon product in the Scale-Out NAS space. There are some technologies out there that could compete but they just seem to fall at the last hurdle; there are also technologies that are packaged to look like Scale-Out but are kludges and general hotch-potches.

So EMC have pretty much had it their own way in this space and they know it!

But yesterday, a company finally came out of stealth to announce a product that might be the alternative to Isilon that I and others have been looking for.

That company is Qumulo; they claim to have developed the first Data-Aware Scale-Out NAS. To be honest, the first bit, ‘Data-Aware’, sounds a bit like marketing fluff but Scale-Out NAS…that hits the spot. Why would Qumulo be any more interesting than the other attempts in the space? Well, they are based out of Seattle and were founded by a bunch of ex-Isilon folks, so they have credibility. I think they understand that the core of any scale-out product is scale-out; it has to be designed that way from the start.

I also think that they understand that any scale-out system needs to be easy to manage; the command and control options need to be robust and simple. Many storage administrators love the Isilon because it is simple to manage but there are still things that it doesn’t do so well; ACL management is a particular bugbear of many, especially those of us who have to work in mixed NFS/SMB environments (OSX/Windows/Linux).

If we go to the marketing tag-line, ‘Data Aware’; this seems to be somewhat equivalent to the Insight-IQ offering from Isilon but baked into the core product set. I have mentioned here and also to the Isilon guys that I believe that Insight-IQ should be free and a standard offering; generally, by the time that a customer needs access to Insight-IQ, it’s because there’s a problem open with support.

But if I start to think about my environment: when we are dealing with complex workflows for a particular asset, it would be useful to follow that asset and see what systems touch it and where the bottle-necks are; perhaps the storage where the asset lives might well be the best place to do that. It might not be that the storage is the problem but it is the one common environment for an asset. So I am prepared to be convinced that ‘Data Aware’ is more than marketing; it needs to be properly useful and simple for me to produce meaningful reports, however.

Qumulo have made the sensible decision that from day one, a customer has the option of deploying on their own commodity hardware or of purchasing an appliance from Qumulo. I’ll have to see the costs and build our own TCO model; let’s hope that for once it will actually be more cost-effective to use my own commodity hardware and that I won’t have to pay some opt-out tax that makes it more expensive.
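The sort of back-of-an-envelope TCO model I have in mind is nothing clever; a minimal sketch, with every figure a made-up placeholder rather than a real Qumulo (or anyone else’s) price:

```python
# Rough 5-year TCO comparison: vendor appliance vs build-your-own
# commodity hardware for scale-out NAS. All figures are hypothetical
# placeholders for illustration, not real vendor pricing.

def tco(capex, annual_support, annual_ops, years=5):
    """Total cost of ownership: up-front capex plus recurring costs."""
    return capex + years * (annual_support + annual_ops)

# Assumed: appliance costs more up front and in support, but is
# cheaper to operate; commodity kit is the reverse.
appliance = tco(capex=500_000, annual_support=75_000, annual_ops=20_000)
commodity = tco(capex=300_000, annual_support=60_000, annual_ops=45_000)

print(f"Appliance 5-year TCO: {appliance:,}")   # 975,000
print(f"Commodity 5-year TCO: {commodity:,}")   # 825,000
print(f"Opt-out tax if appliance wins anyway: {appliance - commodity:,}")
```

The interesting part is never the arithmetic; it’s whether the software licence for the commodity option is priced so that the appliance always comes out cheaper anyway.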

It makes a change to see a product that meets a need today…I know plenty of people who will be genuinely interested in seeing a true competitor to EMC Isilon. I think even the guys still at Isilon are interested; it pushes them on as well.

I look forward to talking to Qumulo in the future.

Stupid name tho’!!

Flash in a pan?

The Tech Report have been running an ‘SSD Endurance Experiment’ utilising consumer SSDs to see how long they last and what their ‘real world’ endurance really is. It seems that pretty much all of the drives are very good and last longer than their manufacturers state; a fairly unusual state of affairs, that!! Something in IT that does better than it says on the can.

The winner is the Samsung 840 Pro, which managed more than 2.4PB of writes before it died!

This is great news for consumers but there are some gotchas; it seems that when most drives finally fail, they fail hard and leave your data inaccessible; some of the drives’ software happily reports them as healthy right up until the day they fail.

A lot of people assume that when SSDs fail, having reached their end of life for writes, the data on them will still be readable; it seems that this is not the case with the majority of drives. You are going to need decent backups.

What does this mean for the flash array market? Well, in general it appears to be pretty good news and that those vendors who are using consumer-grade SSD are pretty much vindicated. But…it does show that managing and monitoring the SSDs in those arrays is going to be key. Software as per usual is going to be king!
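The monitoring itself doesn’t need to be complicated. A minimal sketch of the kind of wear tracking an array (or server fleet) should do is below; the thresholds and the rated-endurance figure are illustrative assumptions, and a real implementation would pull the actual bytes-written counter from SMART attributes or the NVMe health log rather than take it as a parameter:

```python
# Sketch of consumer-SSD wear monitoring. The endurance experiment
# suggests drives often exceed their rating but can then fail hard
# with no warning - so act on the rating, don't trust the drive's
# own 'healthy' status. Thresholds here are illustrative guesses.

def wear_status(bytes_written, rated_endurance_bytes,
                warn=0.7, replace=0.9):
    """Classify a drive by the fraction of rated write endurance used."""
    used = bytes_written / rated_endurance_bytes
    if used >= replace:
        return "replace"
    if used >= warn:
        return "warn"
    return "ok"

# Hypothetical drive rated for 600TB written (TBW):
rated = 600 * 10**12
print(wear_status(100 * 10**12, rated))   # ok
print(wear_status(450 * 10**12, rated))   # warn
print(wear_status(580 * 10**12, rated))   # replace
```

The point being that you retire the drive proactively on wear, while the data is still readable, rather than waiting for it to brick itself.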

A much larger scale test needs to be done before we can be 100% certain and it’d be good if some of the array vendors were to release their experiences around the life of consumer drives that they are using in their arrays.

Still, if I were running a large server estate and looking at putting SSDs into it, I would now think twice before forking out a huge amount of cash on eMLC and would be looking at the higher-end consumer drives.



A Continuum of Theft…

Apologies, this is a bit rambling but I needed to get some ideas down…and it’s my blog so I’ll ramble if I want to!!

We’ve been talking about Cloud in one form or another for many years now; this current iteration of utility computing that has come to be known as Cloud might actually be a thing. And yet, for all of the talk and all of the noise; traditional IT does seem to rumble on.

Some analysts will have you believe that we have entered an era of bimodal computing; traditional IT and the new agile movement. Traditional IT that cannot change fast enough to meet today’s business needs and this new marvellous agile computing that is changing the world and business.

It seems that the way to move forward is to abandon the old and go headlong into the new. We’ll just stop doing that and start doing this; it’s all very easy. But we have a problem, we don’t live in a bimodal world; we don’t live in a world of absolutes and there is certainly no one solution that fits all.

And this change involves people; most people, even technologists, don’t really like change, even if we accept that change is necessary. Change brings opportunity but it is also dangerous and uncomfortable. I don’t think that the analysts often take account of the fact that organisations really run on people and not machines.

Actually, I’ll take back what I said; many people do enjoy change but they like it at a measured rate. This is important to embrace and understand; it’ll allow us to build a model that does work and to take things forward, a model that doesn’t require massive leaps of faith.

We absolutely need those daredevils who are going to come up with ideas that have the potential to change the world; the test-pilots, the explorers, the people with a vision for change. Few organisations can sustain themselves with just those people, not over any long period; they make mistakes, they crash, their luck runs out and they never finish anything!

What organisations really need are people who are capable of taking on the new ideas and making them the new normal but without sacrificing the current stability of the services currently provided. These people are not blockers; they are your implementers, finishers and they are the core of your organisation.

Then you need people to run the new normal now that it has become boring. Every now and then, you need to give them a poke and hopefully one of them will throw their hands up in horror and decide that they fancy taking a leap off a cliff; they can run round to the start of the cycle and help bring in the next iteration of technology. I think there’s huge value in joining these folks up with those at the start of the process.

IT tends to be somewhat cyclical; you only have to listen to the greybeards talking about mainframe to realise this. The only question in my mind is how much faster we can get the cycles to go. It’s not bimodal; I know some think it is, but it’s probably a lot more graduated than that.

Some people will live all their careers in one stage of the cycle or another; a few will live at the extremes but many of us will move between phases as we feel enthused or otherwise.

Friday Doom

More and more people seem to think that we are moving to some kind of bimodal storage environment where all your active data sits on AFA and everything else in an object store.

Or as I like to think of it; your data comes rushing in as an unruly torrent and becomes becalmed in a big data swamp which stinks up the place; it then sits and rots for many years, eventually becoming the fuel that you run your business on and leads to the destruction of the planet due to targeted advertising of tat that people simply must have!

So just say No to Flash and No to Object Storage!

What Year Is This?

I had hoped we’d moved beyond the SPC-1 benchmarketing but it appears not. If you read Hu’s blog, you will find that the VSP G1000 is

the clear leader in storage performance against the leading all flash storage arrays!

But when you look at the list, there are so many flash arrays missing from it that it is hardly worth bothering with. No Pure, no Solidfire, no Violin and obviously no EMC (obviously because they don’t play the SPC game). Now, I haven’t asked the absentees whether they intend to bother with the SPC benchmarketing exercise; I suspect most don’t intend to at the moment as they are too busy trying to improve and iterate their products.

So what we end up with is a pretty meaningless list.

Is it useful to know when your array’s performance falls off a cliff? Yes, it probably is, but you might be better trying to get your vendor to sign up to some performance guarantees as opposed to relying on a benchmark that currently appears to have little value.

I wish we could move away from benchmarketing, magic quadrants and the ‘woo’ that surrounds the storage market. I suspect we won’t anytime soon.

#storagebeers – London March 11th

So I know it’s a bit late notice; however, after a conversation in a pub last week it was suggested that we need another #storagebeers event, and a pretty good opportunity has presented itself.

It is Cloud Expo next week (March 11th – March 12th) at the Exhell Centre. So for those of you interested, we have decided that on the evening of March 11th; there will be a #storagebeers. It is possible that there will be a number of vendors in attendance and there might be a possibility of the odd sponsored beer.

Normal #storagebeers rules are in force; anyone is welcome. All normal vendor hostilities are suspended for the evening..gentle banter and ribbing is allowed tho’!

But we don’t want to force people to schlep out to the soul-less halls of despair that are the environs of the Excel, and those of us attending might want to escape.

So the venue is ‘The Counting House’ near Bank for about 6pm; some of us will be grabbing food between 8 and 9pm.

Please come along…

Dead Flesh…

If in doubt rebrand…have IBM completely run out of ideas with their storage offerings? The Spectrum rebrand of their storage offerings feels like the last throw of the dice. And it demonstrates the problems that they currently have.

In fact, it is not all of their storage offerings but appears to be just the software offerings. DS8K for example is missing from the line-up, but perhaps Spectrum Zombie – the Storage Array that Will Not Die – was a step too far. We do however have Spectrum Virtualise; this is a hardware offering in the form of SVC currently, but is this going to morph into a software offering? There is little reason why it shouldn’t.

But there are also products such as the hardware XIV, the Vxxxx series and the ESS GPFS appliance that are missing from the Spectrum family. Are we going to see IBM exit these products over time? It feels like the clock is ticking on them.

The DS8K is probably a safe product because of the mainframe support but users of the rest of them are going to be nervous.

How have IBM managed to so completely mess up their storage portfolio? There are still massive gaps in it after all this time: Object Storage, Scalable NAS and indeed an ordinary workaday NAS of their own.

The products they have are generally good; I’ve been a fan of SVC for a long time, a GPFS advocate and a TSM bigot. Products that really work!

I feel sorry for the folks who develop them; they have been let down again and again by their product marketing; the problem isn’t the products!

Brownie points for anyone who gets the reference in the title..


A fool and his money….

And the madness continues…

DON’T BUY THIS CRAP! Give your money to charity or burn it as a piece of performance art! But don’t buy this crap!

This makes me so annoyed! Do something useful with your money….please!

Interesting Question?

Are AFAs ready for legacy Enterprise Workloads? The latest little spat between EMC and HP bloggers asked that question.

But it’s not really an interesting question; a more interesting question is why would I put traditional Enterprise workloads on an AFA? Why even bother?

More and more I’m coming across people who are asking precisely that question and struggling to come up with an answer. Yes, an AFA makes a workload run faster but what does that gain me? It really is very variable across application type and where the application bottle-necks are; if you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array will suit you better and you will gain pretty much all the benefits of flash at a fraction of the cost.

The response often received when asking what the impact would be of being able to run batch jobs (often the foundation of many legacy workloads) in half the time is a ‘So what?’ As long as the workload runs in the window, that is all anyone cares about.

If all your latency is the human in front of the screen; the differences in response times from your storage become pretty insignificant.

AFAs only really make sense as you move away from a legacy application infrastructure; where you are architecting applications differently, moving many of the traditional capabilities of an Enterprise infrastructure up the stack and into the application. Who cares if the AFA can handle replication, consistency groups and other such capabilities when that is taken care of by the application?

Yes, I can point to some traditional applications that will benefit from a massive amount of flash but these tend to be snowflake applications and they could almost certainly do with a re-write.

I’d like to see more vendors be honest about the use-cases for their arrays; more vendors working in a consultative manner and less trying to shift as much tin as possible. But that is much harder to achieve and requires a level of understanding beyond most tin-shifters.