
Scale-Out of Two?

One of the things I have been lamenting about for some time with many vendors is that there has been a lack of a truly credible alternative to EMC’s Isilon product in the Scale-Out NAS space. There are some technologies out there that could compete but they just seem to fall at the last hurdle; there are also technologies that are packaged to look like Scale-Out but are kludges and general hotchpotches.

So EMC have pretty much had it their own way in this space and they know it!

But yesterday, finally a company came out of Stealth to announce a product that might finally be the alternative to Isilon that I and others have been looking for.

That company is Qumulo; they claim to have developed the first Data-Aware Scale-Out NAS. To be honest, that first bit, ‘Data-Aware’, sounds a bit like marketing fluff but Scale-Out NAS…that hits the spot. Why would Qumulo be any more interesting than the other attempts in the space? Well, they are based out of Seattle and were founded by a bunch of ex-Isilon folks, so they have credibility. I think they understand that the core of any scale-out product is scale-out; it has to be designed that way from the start.

I also think that they understand that any scale-out system needs to be easy to manage; the command and control options need to be robust and simple. Many storage administrators love the Isilon because it is simple to manage but there are still things that it doesn’t do so well; ACL management is a particular bugbear of many, especially those of us who have to work in mixed NFS/SMB environments (OSX/Windows/Linux).

If we go to the marketing tag-line, ‘Data Aware’; this seems to be somewhat equivalent to the Insight-IQ offering from Isilon but baked into the core product set. I have mentioned here and also to the Isilon guys that I believe that Insight-IQ should be free and a standard offering; generally, by the time that a customer needs access to Insight-IQ, it’s because there’s a problem open with support.

But if I start to think about my environment: when we are dealing with complex workflows for a particular asset, it would be useful to follow that asset, see which systems touch it and where the bottlenecks are, and perhaps the storage where the asset lives might well be the best place to do that. It might not be that the storage is the problem, but it is the one common environment for an asset. So I am prepared to be convinced that ‘Data Aware’ is more than marketing; it needs to be properly useful and simple for me to produce meaningful reports, however.

Qumulo have made the sensible decision that from day one a customer has the option of deploying on their own commodity hardware or purchasing an appliance from Qumulo. I’ll have to see the costs and build our own TCO model; let’s hope that for once it will actually be more cost-effective to use my own commodity hardware and not have to pay some opt-out tax that makes it more expensive.
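For what it’s worth, a first cut of that TCO model doesn’t need to be clever. A back-of-an-envelope sketch along these lines, where every figure is a placeholder rather than anyone’s actual pricing, is enough to show where an opt-out tax would start to bite:

```python
# Naive 5-year TCO sketch for scale-out NAS: DIY commodity + software licence
# versus a vendor appliance. Every number here is a made-up placeholder;
# substitute real quotes, and add admin effort, floor space and refresh costs.

def five_year_tco(capex_per_tb, software_per_tb_per_year, support_pct_of_capex,
                  power_cooling_per_tb_per_year, usable_tb, years=5):
    capex = capex_per_tb * usable_tb
    opex = (software_per_tb_per_year + power_cooling_per_tb_per_year) * usable_tb * years
    support = capex * support_pct_of_capex * years
    return capex + opex + support

USABLE_TB = 500  # hypothetical footprint

diy = five_year_tco(capex_per_tb=150, software_per_tb_per_year=60,
                    support_pct_of_capex=0.0, power_cooling_per_tb_per_year=25,
                    usable_tb=USABLE_TB)
appliance = five_year_tco(capex_per_tb=400, software_per_tb_per_year=0,
                          support_pct_of_capex=0.15, power_cooling_per_tb_per_year=25,
                          usable_tb=USABLE_TB)

print(f"DIY commodity + software: {diy:,.0f} over 5 years")
print(f"Vendor appliance:         {appliance:,.0f} over 5 years")
```

The interesting bit is rarely the arithmetic; it’s whether the software licence on the DIY route is priced to cancel out the cheap hardware.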

It makes a change to see a product that meets a need today…I know plenty of people who will be genuinely interested in seeing a true competitor to EMC Isilon. I think even the guys still at Isilon are interested; it pushes them on as well.

I look forward to talking to Qumulo in the future.

Stupid name tho’!!

Flash in a pan?

The Tech Report have been running an ‘SSD Endurance Experiment’ utilising consumer SSDs to see how long they last and what their ‘real world’ endurance really is. It seems that pretty much all of the drives are very good and last longer than their manufacturers state; a fairly unusual state of affairs, that!! Something in IT that does better than it says on the can.

The winner is the Samsung 840 Pro, which manages more than 2.4PB of data before it dies!

This is great news for consumers but there are some gotchas; it seems that most drives, when they finally fail, fail hard and leave your data inaccessible, and some of the drives’ software happily states they are healthy right up until the day they fail.

A lot of people assume that when SSDs reach their write end-of-life, the data on them will still be readable; it seems that this might not be the case with the majority of drives. You are going to need decent backups.

What does this mean for the flash array market? Well, in general it appears to be pretty good news and that those vendors who are using consumer-grade SSD are pretty much vindicated. But…it does show that managing and monitoring the SSDs in those arrays is going to be key. Software as per usual is going to be king!
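To make that a bit more concrete, here’s a minimal monitoring sketch, assuming smartmontools is installed and that your drives expose one of the wear attributes listed; the attribute names vary by vendor, so treat the list as illustrative rather than definitive:

```python
# Poll SMART wear attributes on an SSD so you get a warning before the drive
# hits its write limit, rather than trusting a "healthy" status that holds
# right up until the day it fails. Requires smartmontools (smartctl).
import subprocess

# Vendor-specific attribute names; adjust for the drives you actually run.
WEAR_ATTRS = {"Wear_Leveling_Count", "Media_Wearout_Indicator",
              "Percent_Lifetime_Remain", "SSD_Life_Left"}

def wear_report(device="/dev/sda", warn_below=10):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows look like: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH ...
        if len(fields) >= 6 and fields[1] in WEAR_ATTRS:
            name, value = fields[1], int(fields[3])  # normalised value, 100 = new
            status = "REPLACE SOON" if value <= warn_below else "ok"
            print(f"{device} {name}: normalised value {value} ({status})")

if __name__ == "__main__":
    wear_report("/dev/sda")
```

The array vendors will be doing something far more sophisticated than this under the covers, but the principle is the same: watch the wear counters, don’t wait for the drive to tell you it’s dead.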

A much larger-scale test needs to be done before we can be 100% certain, and it’d be good if some of the array vendors were to release their experiences around the life of the consumer drives that they are using in their arrays.

Still, if I were running a large server estate and looking at putting SSDs in it, I would now think twice before forking out a huge amount of cash on eMLC and would be looking at the higher-end consumer drives.


Friday Doom

More and more people seem to think that we are moving to some kind of bimodal storage environment where all your active data sits on an AFA and everything else sits in an object store.

Or as I like to think of it; your data comes rushing in as an unruly torrent and becomes becalmed in a big data swamp which stinks up the place; it then sits and rots for many years, eventually becoming the fuel that you run your business on and leads to the destruction of the planet due to targeted advertising of tat that people simply must have!

So just say No to Flash and No to Object Storage!

What Year Is This?

I had hoped we’d moved beyond the SPC-1 benchmarketing but it appears not. If you read Hu’s blog; you will find that the VSP G1000 is

the clear leader in storage performance against the leading all flash storage arrays!

But when you look at the list, there are so many flash arrays missing that it is hardly worth bothering with. No Pure, no SolidFire, no Violin and obviously no EMC (obviously because they don’t play the SPC game). Now, I haven’t spoken to the absentees about whether they intend to bother with the SPC benchmarketing exercise; I suspect most don’t intend to at the moment as they are too busy trying to improve and iterate their products.

So what we end up with is a pretty meaningless list.

Is it useful to know when your array’s performance falls off a cliff? Yes, it probably is, but you might be better off trying to get your vendor to sign up to some performance guarantees as opposed to relying on a benchmark that currently appears to have little value.

I wish we could move away from benchmarketing, magic quadrants and the ‘woo’ that surrounds the storage market. I suspect we won’t anytime soon.

Dead Flesh…

If in doubt, rebrand…have IBM completely run out of ideas with their storage offerings? The Spectrum rebrand of their storage offerings feels like the last throw of the dice, and it demonstrates the problems that they currently have.

In fact, it is not all of their storage offerings but appears to be just the software offerings; the DS8K, for example, is missing from the line-up, but perhaps Spectrum Zombie, the Storage Array that Will Not Die, was a step too far. We do, however, have Spectrum Virtualise; this is currently a hardware offering in the form of SVC, but is this going to morph into a software offering? There is little reason why it shouldn’t.

But there are also products such as the hardware XIV, the Vxxxx series and the ESS GPFS appliance that are missing from the Spectrum family. Are we going to see IBM exit these products over time? It feels like the clock is ticking on them.

The DS8K is probably a safe product because of the mainframe support but users of the rest of them are going to be nervous.

Why have IBM managed to completely mess up their storage portfolio? There are still massive gaps in it after all this time; Object Storage, Scalable NAS and indeed an ordinary workaday NAS of their own.

The products they have are generally good; I’ve been a fan of SVC for a long time, a GPFS advocate and a TSM bigot. Products that really work!

I feel sorry for the folks who develop them; they have been let down again and again by their product marketing; the problem isn’t the products!

Brownie points for anyone who gets the reference in the title..

 

Interesting Question?

Are AFAs ready for legacy Enterprise Workloads? The latest little spat between EMC and HP bloggers asked that question.

But it’s not really an interesting question; a more interesting question is why would I put traditional Enterprise workloads on an AFA? Why even bother?

More and more, I’m coming across people who are asking precisely that question and struggling to come up with an answer. Yes, an AFA makes a workload run faster, but what does that gain me? It really is very variable across application types and depends on where the application bottlenecks are; if you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array suits you better and you will gain pretty much all the benefits of flash at a fraction of the cost.

Ask what the impact would be of running batch jobs, often the foundation of many legacy workloads, in half the time and the response you often get is ‘So what?’ As long as the workload runs in the window, that is all anyone cares about.

If all your latency is the human in front of the screen, the differences in response times from your storage become pretty insignificant.
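A back-of-an-envelope illustration, with entirely made-up numbers, of why the storage barely registers once a person is in the loop:

```python
# Rough share of an interactive transaction attributable to storage latency.
# All of these figures are assumptions for illustration, not measurements.
human_think_time_ms = 3000.0   # user reading and typing between requests
app_and_network_ms = 40.0      # app server work plus network round trips
ios_per_transaction = 20       # storage I/Os behind one user action

for label, storage_latency_ms in [("hybrid array", 5.0), ("all-flash array", 0.5)]:
    storage_ms = ios_per_transaction * storage_latency_ms
    total_ms = human_think_time_ms + app_and_network_ms + storage_ms
    share = 100 * storage_ms / total_ms
    print(f"{label:>15}: {total_ms:.0f} ms end-to-end, storage is {share:.1f}% of it")
```

On those assumed numbers, the all-flash box shaves the transaction from roughly 3.14 seconds to 3.05 seconds; nobody at a keyboard will ever notice.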

AFAs only really make sense as you move away from a legacy application infrastructure: where you are architecting applications differently, moving many of the traditional capabilities of an Enterprise infrastructure up the stack and into the application. Who cares if the AFA can handle replication, consistency groups and other such capabilities when that is taken care of by the application?

Yes, I can point to some traditional applications that will benefit from a massive amount of flash but these tend to be snowflake applications and they could almost certainly do with a re-write.

I’d like to see more vendors be honest about the use-cases for their arrays; more vendors working in a consultative manner and less trying to shift as much tin as possible. But that is much harder to achieve and requires a level of understanding beyond most tin-shifters.

Happy New Year

Or is it April already? I really cannot tell from this post.

So I am going to kickstart a new product; AudioNAS – sounds expensive because it is!

I have dealt with very many complaints and issues from the creative types who are my user-base, but never have they complained that one storage system sounds better than another. They have never asked for better-quality HDMI cables, better-quality USB or even better-quality Ethernet cables because their current ones just don’t render their work sufficiently well.

But perhaps there is a need for an AudioNAS that allows you to get more from your files…improving the bits so that they sound better. Look, believe what you want to believe, but if the storage system impacts the sound of the files being stored there, there are horrible implications…because it means it is changing the data, and that would be bad.

‘Sorry, Mr Audiophile….the storage improved your medical files and has smoothed out the fact that you are allergic to penicillin’

We call such improvements data corruption…this is bad!

But I’ll take your money for my new AudioNAS…

Another Year In Bits…

So as another year draws to a close, it appears that everything in the storage industry is still pretty much as it was. There have been no really seismic shifts in the industry yet. Perhaps next year?

The Flash start-ups still continue to make plenty of noise and fizz about their products and growth. Lots of promises about performance and consolidation opportunities; however, the focus on performance is throwing up some interesting stuff. It turns out that when you start to measure performance properly, you begin to find that in many cases the assumed IOPS requirements for many workloads aren’t actually there. I know of a few companies who have started down the flash route only to discover that they didn’t need anything like the IOPS that they’d thought, and that with a little bit of planning and understanding they could make a little flash go an awful long way. In fact, 15K disks would probably have done the job from a performance point of view. Performance isn’t a product and I wish some vendors would remember this.
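As a rough illustration (the workload figures and the per-spindle rule of thumb below are assumptions, not measurements from any real estate), the arithmetic for ‘would spinning disk have done?’ is not exactly taxing:

```python
# Sanity check: how many 15K RPM spindles would cover a measured peak workload?
# All inputs are assumed figures for illustration only.
measured_peak_iops = 6_000      # what the monitoring actually showed
read_ratio = 0.7                # read/write mix
raid_write_penalty = 2          # e.g. RAID-10 write penalty
iops_per_15k_disk = 180         # common rule-of-thumb for a 15K RPM drive

backend_iops = (measured_peak_iops * read_ratio +
                measured_peak_iops * (1 - read_ratio) * raid_write_penalty)
spindles_needed = backend_iops / iops_per_15k_disk
print(f"Backend IOPS: {backend_iops:,.0f} -> roughly {spindles_needed:.0f} x 15K spindles")
```

On those assumed numbers you are looking at a few shelves of 15K drives, or a very small amount of flash in a hybrid array, rather than an all-flash estate.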

Object Storage still flounders with an understanding and use-case problem; the people who really need Object Storage right now really do need it, but they tend to be really large players and there are not a lot of them. All of the Object Storage companies can point at some really big installs, but you will rarely come across those installs; there is a market, and it is growing, but not at a stellar rate at the moment.

Object Storage Gateways are becoming more common and there is certainly a growing requirement; I think that as they become common, and perhaps simply a feature of a NAS device, this will drive the use of Object Storage until it hits a critical mass and there is more application support for Object Storage natively. HSM and ILM may finally happen in a big way; probably not to tape but to an Object Store (although Spectralogic are doing great work in bringing Object and Tape together).

The big arrays from the major vendors continue to attract premium costs; the addiction to high margins in this space continues. Usability and manageability have improved significantly, but the premium you pay cannot really continue. I get the feeling that some vendors are simply using these arrays to fund their transition to a different model; let’s hope that this transition doesn’t take so long that they get brushed aside.

The transition to a software-dominated model is causing vendors some real internal and cultural issues; they are so addicted to the current costing models that they risk alienating their customers. If software plus commodity hardware turns out to be more expensive than buying a premium hardware array, customers may purchase neither and find a different way of doing things.

The cost of storage in the Cloud, for both consumers and corporates, continues to fall; it trends ever closer to zero as the Cloud price war continues. You have to wonder when Amazon will give it up as Google and Microsoft fight over the space. Yet for the really large users of storage, trending to zero is still too expensive for us to put stuff in the Cloud; I’m not even sure free is cheap enough yet.

The virtualisation space continues to be dominated by the reality of VMware and the promise of OpenStack. If we look at industry noise, OpenStack is going to be the big player; any event that mentions OpenStack gets booked up and sells out, but the reality is that the great majority are still looking to VMware for their virtualisation solution. OpenStack is not a direct replacement for VMware, and architectural work will be needed in your data-centre and with your installed applications, but we do see VMware architectures that could easily and more effectively be replaced with OpenStack. Quite simply, though, OpenStack is still pretty hard work and hard-pushed infrastructure teams aren’t currently well positioned to take advantage of it.

And almost all virtualisation initiatives are driven by and focussed on the wrong people; the server side is easy…the storage and especially the changes to the network are much harder and require significantly more change. It’s time for the Storage and Network folks to gang up and get their teams fully involved in virtualisation initiatives. If you are running a virtualisation initiative and you haven’t got your storage and network teams engaged, you are missing a trick.

There’s a lot bubbling in the Storage Industry but it all still feels the same currently. Every year I expect something to throw everything up in the air, because the industry is ripe for major disruption, but the dominant players are still dominant. Will the disruption be a technology or perhaps a mega-merger?

Can I take this chance to wish all my readers a Merry Christmas and a Fantastic New Year…

Stop Selling Storage

In the shower today, I thought back over a number of meetings with storage vendors I’ve had over the past couple of weeks. Almost without exception, they mentioned AWS and the other large cloud vendors as a major threat and compared their costs to them.

We’ve all seen the calculations and generally we know that for many large Enterprises the costs often favour the traditional vendors; buying at scale and at the traditionally large discounts means that we get a decent deal. Storage turns out to be free at the terabyte level and only becomes an appreciable cost once we start getting to petascale; this is pretty much true for both the Cloud providers and the traditional vendors.

But when I look around the room in a normal sales presentation or briefing, it is not uncommon for the vendor to have four or five people present, often outnumbering the customers in the room: account salesman, product salesman, account technical specialist, product technical specialist and probably a couple of hangers-on. A huge cost to the vendor and hence to me as a customer.

And then if we decide that we want to purchase the storage; we then drift into the extended procurement mode. Our procurement and finance teams will talk to the vendor teams; there may well also be legal teams and other meetings to deal with. The cost to both the vendor and the customer is enormous.

However, if we go to a cloud vendor, we generally deal with a website. The cost is there, displayed to all, and the only discounts we get are based around volume. Now, I know that there are deals to be done with the larger cloud vendors (otherwise I wouldn’t be fielding calls from their recruitment people looking for people to work in their technical consultancy and sales teams), but their sales efforts and costs are a lot less.

It seems to me that if the traditional storage vendors really want to compete with the cloud vendors, they need to change their sales model completely. This means stripping out huge amounts of the cost of sale; it also means that they need to consider how they level the playing field for customers both large and small: published volume discounts and reduced costs for all, especially the smaller customers. The Enterprise customers will not initially see a huge difference in their cost base, but smaller customers will have greater choice and, long-term, it will benefit everyone; perhaps even some vendors.

Basically, stop selling storage; build better products, market them sensibly and reduce the friction of acquisition.

I kind of hope that the move to storage delivered as software designed to run on commodity hardware could drive this but at the moment, I see many traditional vendors really struggling to come up with a sales and marketing strategy to support this transition.

The one who gets this right could, or should, do very well. The ones who continue with a sales model based on how they sold hardware in the past…could fail very hard.

Yes, there are customers who still like the idea of buying hardware and software in an integrated package (arguably, that’s what the cloud providers do, with serious limitations), but they will look at disaggregated models and do the cost modelling. Your prices should not attract some of the serious premium that you believe you deserve…so look at ways of taking out cost.


Done

Could VMAX3 possibly be the last incarnation of the Symmetrix that ships?

As an Enterprise Array, it feels done; there is little left to do. Arguably this has been the case for some time, but the missing feature for VMAX had always been ease of use and simplicity. The little foibles such as the Rule of 17, Hypers, Metas and BCVs vs Clones all added to the mystique and complexity, and led to many storage admins believing that we were some kind of special priesthood.

The latest version of VMAX and the rebrand of Enginuity into HyperMax remove much of this and it finally feels like a modern array…as easy to configure and run as any array from their competitors.

And with this ease of use, it feels like the VMAX is done as an Enterprise Array…there is little more to add. As a block array, it is feature complete.

The new NAS functionality will need building upon but apart from this…it’s done.

So this leaves EMC with VNX and VMAX: two products that are very close in features and functionality, one that is cheap and one that is still expensive. So VMAX’s only key differentiator is cost…a Stella Artois of the storage world.

I can’t help but feel that VNX should have a relatively short future, but perhaps EMC will continue to gouge the market with the eye-watering costs that VMAX still attracts. A few years ago, I thought the Clariion team might win out over the Symm team; now I tend to believe that eventually the Symm will win out.

But as it stands, VMAX3 is the best enterprise array that EMC have shipped; arguably, it should also be the last enterprise array that they ship. The next VMAX version should just be software running on either your own hardware or perhaps a common commodity platform that EMC ship with the option of running the storage personality of your choice. And at that point, it will become increasingly hard to justify the extra costs that the ‘Enterprise’ array attracts.

This model is radically different to the way they sell today…so moving them into a group with the BURA folks makes sense; those folks are used to selling software and understand that it is a different model…well, some of them do.

EMC continue to try to re-shape themselves and are desperately trying to change their image; I can see a lot of pain for them over the next few years especially as they move out of the Tucci era.

Could they fail?

Absolutely, but we live in a world where it is conceivable that any one of the big IT vendors could fail in the next five years. I don’t think I remember a time when they all looked so vulnerable, but as their traditional products move to a state of ‘doneness’, they are all thrashing around looking for the next thing.

And hopefully they won’t get away with simply rebranding the old as new…but they will continue to try.