Storagebod


2016 and Beyond…

Predictions are a mug’s game; the trick is to keep them as non-specific as possible and not to name names. Here are mine!

What is the future for storage in the Enterprise? 2016 is going to pan out to be an ‘interesting’ year; there are company integrations and mergers to complete, with more to come so I hear; cascading acquisitions seem likely as well.

There will be IPOs; they will be ‘interesting’! People are looking for exits, especially from the flash market: a market that looks increasingly crowded, with little to really tell between the players.

Every storage vendor is going to struggle to maintain growth; technology changes mean that, just to maintain current revenues, twice as much capacity will probably have to be shipped. Yet data-efficiency improvements, from thin-provisioning to compression to dedupe, mean that customers are storing more data on less capacity.

Add in the normal year-on-year decline in the price of storage and this is a very challenging market to be in.
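As a back-of-the-envelope illustration of that squeeze, here is a quick sketch; all the numbers are invented, purely to show how efficiency ratios and price erosion compound against a vendor’s top line.

```python
# Illustrative sketch with invented numbers: efficiency gains and price
# erosion compound against a storage vendor's revenue, even while the
# customer's logical data keeps growing.

def revenue(logical_tb, price_per_tb, efficiency_ratio):
    """Revenue from the raw capacity a customer must actually buy.

    efficiency_ratio: logical TB stored per raw TB shipped
    (thin-provisioning + compression + dedupe combined).
    """
    raw_tb_needed = logical_tb / efficiency_ratio
    return raw_tb_needed * price_per_tb

# Year 1: 1,000 TB of logical data, no efficiency features, $500/TB.
year1 = revenue(1_000, 500, 1.0)   # -> 500000.0

# Year 2: data grows 30%, but 2:1 efficiency lands and prices fall 20%.
year2 = revenue(1_300, 400, 2.0)   # -> 260000.0

print(year1, year2)  # 30% data growth, yet revenue nearly halves
```

Even with healthy data growth, the vendor ships less raw capacity at a lower price per terabyte; hence the challenge.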

Larger storage customers are becoming more mercurial about what they buy; storage administration has got so easy that changing storage vendors is not the big deal it used to be. The primary value these days of having some dedicated storage bods is that they should be pretty comfortable with any storage put in front of them.

As much as vendors like to think that we all get very excited by their latest bell or whistle, I’m afraid that we don’t any more. Does it make my job easier? Can I continue to do more with less or, at best, with the same?

Data volumes do continue to grow but the amount of traditional primary data growth has slowed somewhat in my experience.

Data from instrumentation is a real growth area, but much of it is transitory: collect, analyse, archive/delete. And as people start to see an ever-increasing amount of money flowing to companies like Splunk, expect some sharp intakes of breath.

Object Storage will continue to under-perform, but probably less so. S3 will continue its rise as the protocol/API of choice for native object. Many file-stores will become object at the back-end, but with traditional SMB/NFS front-ends. However, sync-and-share will make inroads into the enterprise space proper; products like Dropbox Enterprise will have an impact there.

Vendors will continue to wash their products in ‘Software Defined’ colours; customers will remain unimpressed. Open-source storage offerings will grow and cause more challenges in the market. Some vendors might decide to open-source some of their products; expect at least one large company to take this route and be accused of abandonware. And watch everyone try to change their strategy to match this.

An interesting year for many…so with that, I shall be off and wrap presents!

May you all have a Happy Christmas, a prosperous New Year and may your bits never rot!!

Waffle to burn?

NetApp have finally bitten the bullet and bought an AFA vendor, plumping for the technology-driven Solidfire as opposed to some of the marketing-driven competitors in the space.

At less than a billion dollars, it appears to be a very good deal for NetApp; and, with an ever-decreasing number of suitors, it is a good deal for Solidfire too, avoiding the long march to IPO.

Obviously the whole deal will be painted as complementary to NetApp’s current product set, but many will hope that Solidfire will, in the long term, supplant the long-in-the-tooth OnTap. NetApp need to swallow their pride and move on from the past.

It can’t do this immediately; Solidfire needs work and it is not yet a solution for unstructured data. But putting data-services on top of it should not be a massive task, as long as that is what NetApp decide to do and they don’t try to integrate it with OnTap. NetApp can’t afford another decade of engineering faff! Funnily enough though, FC is seen as a relative weak point for Solidfire; where have we heard that before?

This could be as big a deal for them as EMC’s acquisition of Data General in 1999; the Clariion business brought some great engineers and a business that turned into a cash-cow for them. It allowed them to move into a different space and gave them options; it probably saved the company whilst they were messing up the Symmetrix line.

And whilst EMC/Dell are integrating themselves; NetApp have a decent opportunity to steal a march on their arch-rivals; especially if they take a light touch and continue to allow Solidfire to act like an engineering-led start-up.

I still have my doubts whether a storage-focused behemoth can actually survive long-term as data-centres change and buying behaviours change. But for the time being, NetApp have an interesting product again.

Interesting times for friends at both companies…

p.s. Anyone want to buy a pair of Solidfire socks?

Object Lessons?

I was hoping that one of the things that I might be able to write about after HPE Discover was that HPE finally had a great solution for Scale-Out storage; either NAS or Object.

There had been hints that something was coming; yes, HPE had done work with Cleversafe and Scality for Object Storage, but the hints were that they were doing something of their own. And with IBM having taken Cleversafe into their loving bosom, HPE are the only big player without their own object platform.

It turns out, however, that HPE’s big announcement was their ongoing partnership with Scality; now Scality is a good object platform, but there are bits that need work, as is the case with Cleversafe and the others.

I don’t think that I am the only one left disappointed by the announcement, and I’m not the only person who was thinking: why didn’t they just buy Scality?

Are HPE still thinking of doing their own thing? Well, it’s gone very quiet; there are some sheepish-looking people about and some annoyed HPErs wondering when they will get their story straight.

As with HPE’s Cloud strategy, confusion seems to reign.

If there is any take-away from the first HPE Discover, it is that HPE are discovering slowly, and the map being revealed has more in common with the Mappa Mundi than an Ordnance Survey map: vaguely right, bits missing and centred on the wrong thing.

Dude – You’re Getting An EMC

Just a few thoughts on the Dell/EMC takeover/merger or whatever you want to call it. 

  1. In a world where IT companies have been busy splitting themselves up (think HP, Symantec, and IBM divesting its server business), it seems a brave move to build a new IT behemoth. 
  2. However, some of the restructuring already announced hints at a potential split in how Dell do business, with Dell Enterprise to be run out of Hopkinton, using EMC’s Enterprise smarts in this space.
  3. Dell have struggled to build a genuine storage brand since they and EMC went their separate ways; arguably their acquisitions have under-performed.
  4. VMware is already under attack from various technologies. VMware under the control of a hardware server vendor would have been a problem a decade ago, but might be less so now that people have more choices for virtualising both Heritage applications and Cloud-Scale ones. VMware absolutely now have to get their container strategy right.
  5. EMC can really get to grips with how to build their hyper-converged appliances and get access to Dell’s supply chain. 
  6. That EMC have been picked up by a hardware vendor just shows how hard it is to transition from a hardware company to a software company. 
  7. A spell in purdah seems necessary for any IT company trying to transition its business model. Meeting the demands of the market seems to really hamper innovation and change; EMC were so driven by the reporting cycle that it drove very poor behaviours.
  8. All those EMC guys who transitioned away from using Dell laptops to various MacBooks…oh dear!
  9. I doubt this is yet a done deal and expect more twists and turns! But good luck to all my friends working at both companies; may it be better!

Overcoming Objections

My friend Enrico is a massive fan of Object Storage, whereas for a long time I’ve had the reputation of being somewhat sceptical, feeling the whole thing has been rather overhyped. The hype started with EMC’s Atmos launch and continued from there.

The problem with Object Storage has been the lack of support from application vendors, especially in the space that I work in. And development teams, especially those working in organisations with large numbers of heritage applications, have been very slow to embrace it. Most just want to work with standard filesystems.

And so we saw the birth of the cloud-gateway: devices that sat in front of the object-stores and presented them in a more familiar manner. Yet the way these were often licensed simply added cost and negated the low cost of the object store; they also added complexity into an environment.
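To make the gateway idea concrete, here is a minimal sketch (not any real product; all the names are invented): a flat object store sits at the back, and a thin translation layer presents it through a familiar file-style read/write interface. A dict stands in for the object store.

```python
# Minimal illustration of a cloud-gateway: applications that only speak
# "files" talk to the Gateway, which maps paths onto keys in a flat
# object store. The dict is a stand-in for, say, an S3 bucket.

class ObjectStore:
    """Flat key/value store; keys may contain '/' but there are
    no real directories, as in most object stores."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class Gateway:
    """Translates file paths into object keys so file-based
    applications can use the object store unchanged."""
    def __init__(self, store, bucket_prefix="shared"):
        self.store = store
        self.prefix = bucket_prefix

    def _key(self, path):
        # /reports/q1.csv -> shared/reports/q1.csv
        return f"{self.prefix}{path}"

    def write_file(self, path, data):
        self.store.put(self._key(path), data)

    def read_file(self, path):
        return self.store.get(self._key(path))


store = ObjectStore()
gw = Gateway(store)
gw.write_file("/reports/q1.csv", b"revenue,500000\n")
print(gw.read_file("/reports/q1.csv"))
```

The translation itself is trivial; the cost and complexity the post complains about come from licensing the box in the middle, not from the mapping.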

The Object Storage vendors were slow to acknowledge the issue and really wanted you to use the API to access the storage; some of the larger vendors didn’t want their Object Storage to cannibalise their NAS revenues and were even slower to acknowledge it.

So it seemed that Object Storage was really going to be confined to the world of cloud-scale and cloud-native applications. 

But this now seems to be changing rapidly; robust NFS implementations from the Object Storage vendors are becoming significantly more common. SMB implementations still seem to be rather patchy, but once they become more robust, I can see Object Storage becoming the standard for file-serving applications. 

Will we see API-driven ‘file access’ become the universal method for interacting with file storage? Not for some time; but having the choice, and realising that it is not an all-or-nothing scenario, will begin to ease friction in this space.

A Slight Return

I intend to start updating here again occasionally as the itches begin to build up again and I feel the need to scratch. There’s a lot going on in the industry and a massive amount of confusion about where it’s heading at the moment.

I’m having interesting conversations with industry figures; many of them are as confused privately as they are sure publicly. Few seem to know exactly how this all plays out, and not just the storage guys.

I had a conversation a couple of days ago that put the electricity supply model for compute back on the radar; the technology enablers are beginning to line up to make this much more feasible but is the will/desire there? This debate will carry on until we wake up and realise that it’s all changed again.

Flash and trash is still fascinating; vendors are still playing games with pricing and comparisons that make little sense. Valuations are out of control (maybe), and yet quite possibly we can see the time when flash does become the standard, as prices continue to fall and storage requirements continue to soar.

And lastly, a big thanks to all those who have offered support, prayers and kind thoughts to me over the past few months. It does help; watching people you love go through chemo isn’t fun, but it does help reset your priorities a bit.

Scale-Out of Two?

One of the things I have been lamenting about for some time with many vendors is the lack of a truly credible alternative to EMC’s Isilon product in the Scale-Out NAS space. There are some technologies out there that could compete, but they just seem to fall at the last hurdle; there are also technologies packaged to look like Scale-Out that are kludges and general hotch-potches.

So EMC have pretty much had it their own way in this space, and they know it!

But yesterday, finally a company came out of Stealth to announce a product that might finally be the alternative to Isilon that I and others have been looking for.

That company is Qumulo; they claim to have developed the first Data-Aware Scale-Out NAS. To be honest, the first bit, ‘Data-Aware’, sounds like marketing fluff, but Scale-Out NAS…that hits the spot. Why would Qumulo be any more interesting than the other attempts in the space? Well, they are based out of Seattle and founded by a bunch of ex-Isilon folks, so they have credibility. I think they understand that the core of any scale-out product is scale-out; it has to be designed that way from the start.

I also think that they understand that any scale-out system needs to be easy to manage; the command and control options need to be robust and simple. Many storage administrators love the Isilon because it is simple to manage but there are still things that it doesn’t do so well; ACL management is a particular bugbear of many, especially those of us who have to work in mixed NFS/SMB environments (OSX/Windows/Linux).

If we come to the marketing tag-line, ‘Data Aware’: this seems to be somewhat equivalent to the Insight-IQ offering from Isilon, but baked into the core product set. I have mentioned here, and also to the Isilon guys, that I believe Insight-IQ should be free and a standard offering; generally, by the time a customer needs access to Insight-IQ, it’s because there’s a problem open with support.

But if I think about my environment: when we are dealing with complex workflows for a particular asset, it would be useful to follow that asset, see what systems touch it and where the bottlenecks are, and the storage where the asset lives might well be the best place to do that. It might not be that the storage is the problem, but it is the one common environment for an asset. So I am prepared to be convinced that ‘Data Aware’ is more than marketing; it needs to be properly useful and simple for me to produce meaningful reports, however.
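A hypothetical sketch of what that sort of report could look like (everything here is invented for illustration; this is not Qumulo’s actual feature): if the storage logged every system that touched an asset and for how long, finding the bottleneck in a workflow becomes a simple aggregation.

```python
# Invented example of asset-centric "data aware" reporting: given a log
# of which systems touched an asset and the time each spent, rank the
# systems to find the workflow bottleneck.

from collections import defaultdict

def bottlenecks(touch_log):
    """touch_log: list of (asset, system, seconds_spent) tuples.
    Returns total seconds per system, slowest system first."""
    totals = defaultdict(float)
    for _asset, system, seconds in touch_log:
        totals[system] += seconds
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# A made-up media workflow for one asset:
log = [
    ("promo.mov", "ingest", 40.0),
    ("promo.mov", "transcode", 310.0),
    ("promo.mov", "qc", 55.0),
]
print(bottlenecks(log))  # transcode dominates this particular workflow
```

The point is not the code, which is trivial, but where it runs: the storage is the one place every system in the workflow already visits.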

Qumulo have made the sensible decision that, from day one, a customer has the option of deploying on their own commodity hardware or purchasing an appliance from Qumulo. I’ll have to see the costs and build our own TCO model; let’s hope that for once it will actually be more cost-effective to use my own commodity hardware and not have to pay some opt-out tax that makes it more expensive.

It makes a change to see a product that meets a need today…I know plenty of people who will be genuinely interested in seeing a true competitor to EMC Isilon. I think even the guys still at Isilon are interested; it pushes them on as well.

I look forward to talking to Qumulo in the future.

Stupid name tho’!!

Flash in a pan?

The Tech Report have been running an ‘SSD Endurance Experiment’ utilising consumer SSDs to see how long they last and what their ‘real world’ endurance really is. It seems that pretty much all of the drives are very good and last longer than their manufacturers state; a fairly unusual state of affairs, that! Something in IT that does better than it says on the can.

The winner is the Samsung 840 Pro, which manages more than 2.4 PB of writes before it dies!
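To put 2.4 PB in perspective, a rough back-of-the-envelope sketch; the 50 GB/day host write rate is my own assumption, not a figure from the test.

```python
# Rough arithmetic: how many years 2.4 PB of write endurance lasts at an
# assumed host write rate. The 50 GB/day figure is hypothetical and on
# the heavy side for a consumer workload.

PB = 10**15
GB = 10**9

def endurance_years(endurance_bytes, bytes_written_per_day):
    days = endurance_bytes / bytes_written_per_day
    return days / 365

years = endurance_years(2.4 * PB, 50 * GB)
print(round(years, 1))  # -> 131.5, decades beyond the drive's useful life
```

Under assumptions like these, write endurance is simply not the thing that kills a consumer drive first.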

This is great news for consumers, but there are some gotchas: it seems that when most drives finally fail, they fail hard and leave your data inaccessible; some of the drives’ software happily states they are healthy right up until the day they die.

A lot of people assume that when SSDs reach their end of life for writes, the data on them will still be readable; it seems that this is not the case with the majority of drives. You are going to need decent backups.

What does this mean for the flash-array market? Well, in general it appears to be pretty good news, and those vendors using consumer-grade SSDs are pretty much vindicated. But it does show that managing and monitoring the SSDs in those arrays is going to be key. Software, as per usual, is going to be king!

A much larger-scale test needs to be done before we can be 100% certain, and it would be good if some of the array vendors released their experiences of the life of the consumer drives they are using in their arrays.

Still, if I were running a large server estate and looking at putting SSDs into it, I would now think twice before forking out a huge amount of cash on eMLC and would be looking at the higher-end consumer drives.

A Continuum of Theft…

Apologies, this is a bit rambling but I needed to get some ideas down…and it’s my blog so I’ll ramble if I want to!!

We’ve been talking about Cloud in one form or another for many years now; this current iteration of utility computing that has come to be known as Cloud might actually be a thing. And yet, for all of the talk and all of the noise; traditional IT does seem to rumble on.

Some analysts will have you believe that we have entered an era of bimodal computing; traditional IT and the new agile movement. Traditional IT that cannot change fast enough to meet today’s business needs and this new marvellous agile computing that is changing the world and business.

It seems that the way to move forward is to abandon the old and go headlong into the new. We’ll just stop doing that and start doing this; it’s all very easy. But we have a problem, we don’t live in a bimodal world; we don’t live in a world of absolutes and there is certainly no one solution that fits all.

And this change involves people; most people, even technologists don’t really like change, even if we accept that change is necessary. Change brings opportunity but it is also dangerous and uncomfortable. I don’t think that the analysts often take account of the fact that organisations really run on people and not machines.

Actually, I’ll take that back; many people do enjoy change, but they like it at a measured rate. This is important to embrace and understand; it will allow us to build a model that does work and take things forward, a model that doesn’t require massive leaps of faith.

We absolutely need those daredevils who are going to come up with ideas that have the potential to change the world: the test-pilots, the explorers, the people with a vision for change. Few organisations can sustain themselves with just those people, not over any long period; they make mistakes, they crash, their luck runs out and they never finish anything!

What organisations really need are people who are capable of taking on the new ideas and making them the new normal but without sacrificing the current stability of the services currently provided. These people are not blockers; they are your implementers, finishers and they are the core of your organisation.

Then you need people to run the new normal once it has become boring. Every now and then you need to give them a poke, and hopefully one of them will throw their hands up in horror and decide they fancy taking a leap off a cliff; they can run round to the start of the cycle and help bring in the next iteration of technology. I think there’s huge value in joining these folks up with those at the start of the process.

IT tends to be somewhat cyclical; you only have to listen to the greybeards talking about mainframes to realise this. The only question in my mind is how much faster we can get the cycles to go. It’s not bimodal; I know some think it is trimodal; it’s probably a lot more graduated than that.

Some people will live all their careers in one stage of the cycle or another; a few will live at the extremes but many of us will move between phases as we feel enthused or otherwise.

What Year Is This?

I had hoped we’d moved beyond the SPC-1 benchmarketing, but it appears not. If you read Hu’s blog, you will find that the VSP G1000 is

the clear leader in storage performance against the leading all flash storage arrays!

But when you look at the list, there are so many flash arrays missing that it is hardly worth bothering with. No Pure, no Solidfire, no Violin and obviously no EMC (obviously, because they don’t play the SPC game). Now, I haven’t asked the absentees whether they intend to bother with the SPC benchmarketing exercise; I suspect most don’t intend to at the moment, as they are too busy trying to improve and iterate their products.

So what we end up with is a pretty meaningless list.

Is it useful to know when your array’s performance falls off a cliff? Yes, it probably is; but you might be better off trying to get your vendor to sign up to some performance guarantees, as opposed to relying on a benchmark that currently appears to have little value.

I wish we could move away from benchmarketing, magic quadrants and the ‘woo’ that surrounds the storage market. I suspect we won’t anytime soon.