

Time to Shop?

So have you taken my advice and started to build your own storage arrays? I certainly hope so; it’s a lot of fun and could really save your organisation some money.

Still, I hope that you’ve bought some spares, thoroughly checked the code, made sure that all your components work together and tested the edge-cases.

You’ve sorted out your supply chain; made sure that you can get to your data-centre at all hours and told your boss that you are happy to be phoned up and shouted at until something broken is fixed.

But you’ve saved your company a bunch of money!

And it was totally worth doing!? You might have to hire another person or two to help out but they’ll be geeks too and you’ll have great fun. You’ll be competing with Google, Amazon and the like on your team’s engineering chops.

Of course, there are a lot of nay-sayers who say you can’t do this and you can’t do it at scale. They are both right and wrong.

This is a thought-process that a lot of us working in larger-scale enterprises go through regularly. Every now and then we’ll run the exercise and work out that the costs don’t stack up over the four or five year technology life-cycle we tend to work in. Certainly when you start to factor in the cost of support and the risk profile that a Business is willing to sign up to, you just don’t save enough at Enterprise scale. It might be different if you are Google or Amazon and working at a completely different scale. Building your own works at the small scale and the hyper-scale.

But the big vendors are very worried about this trend; you might simply move wholesale into Public Cloud, and if you go to one of the big Cloud companies who do their own engineering, they are going to see very little of your money. I’m finally beginning to see this fear reflected in the traditional vendors’ pricing; their four-year costs are getting closer to the cost of building it out yourself, excluding the ongoing costs.

Vendors who supply both software-only and appliance versions of their infrastructure generally price their appliance versions at a lower price point than software-only with bring-your-own-hardware. This allows them to maintain revenue at the expense of pure margin; the big analysts tend to report on revenue and market-share as opposed to raw profitability; it seems to be more important to be the biggest and not necessarily the most profitable.

Buying whitebox is not a game that most Enterprises are in yet; whitebox almost takes you to a world where servers are consumables like pen and paper, which is how the hyperscalers work. In the Enterprise, we might not be too far off this; if Enterprises start to change their depreciation cycles for compute tiers, it is entirely possible that the software layer becomes the real capital investment and not the tin.

Dell, HPE and the like simply become the equivalent of a Ryman’s catalogue.


Tooling Up?

As the role of the storage admin changes; the toolset will change and we will need new tools that assist us. Modelling tools that enable us to work with our workloads are generally very expensive and often have rather dubious ROI models that allow us to justify them to our management. 

And you rarely need them, but if you are heading into a refresh cycle for example and you don’t necessarily trust the vendors to mark their own homework…a tool is useful, especially one that you can model your own workload on.

So this tool from the newly merged Load Dynamix and Virtual Instruments looks interesting; I’ve not had a play yet and am not sure of the limitations but free is hard to beat…

Workload Central 

Hopefully more vendors and the community will get involved and add more potential data sources…

Pestilential but Persistent!

There is no doubt that the role of the Storage Admin has changed; technology has moved on and the business has changed but the role still exists in one form or another.

You just have to look at the number of vendors out there jockeying for position; the existing big boys, the new kids on the block, the objectionable ones and the ones you simply want to file. There’s more choice, more decisions and more chance to make mistakes than ever before.

The day-to-day role of the Storage Admin; zoning, allocating LUNs, swearing at arcane settings, updating Excel spreadsheets and convincing people that it is all ‘Dark Magic’; that’s still there but much of it has got easier. I expect any modern storage device to be easily manageable on a day-to-day basis; I expect the GUI to be intuitive; I expect the CLI or API to be logical and I’d hope the nomenclature used by most players would be common.

The Storage Admin does more day-to-day and does it quicker; the estates are growing ever larger but the number of Storage Admins is not increasing in line. But that part of the role still exists and could be done by a converged Infrastructure team, and often is.

So why do people keep insisting the role is dead? 

I think because they focus on the day-to-day LUN monkey stuff and that can be done by anyone. 

I’m looking at things differently; I want people who understand business requirements who then turn these into technical requirements who can then talk to vendors and sort the wheat from the chaff. People who can filter bullshit; the crap that flies from all sides; the unreal marketing and unreal demands of the Business.

People who look at complex systems and can break them down quickly; who understand different types of application interaction, who understand the difference between IOPS, latency and throughput (there’s a rough sketch of how those relate below).

People who are prepared to ask pertinent and sometimes awkward questions; who look to challenge and change the status-quo. 
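On that IOPS/latency/throughput point, the relationship is simple enough to sketch in a few lines of Python; the numbers here are entirely made up for illustration and aren’t from any particular array or workload.

```python
# Rough relationships between IOPS, throughput and latency.
# All figures are illustrative, not from any real array.

block_size_kb = 8        # typical-ish OLTP I/O size
iops = 50_000            # I/O operations per second the workload drives
outstanding_io = 16      # concurrent I/Os in flight (queue depth)

# Throughput is just IOPS multiplied by the I/O size.
throughput_mb_s = iops * block_size_kb / 1024

# Little's Law: average latency ~= concurrency / completion rate.
avg_latency_ms = outstanding_io / iops * 1000

print(f"Throughput: {throughput_mb_s:.0f} MB/s")    # ~391 MB/s
print(f"Average latency: {avg_latency_ms:.2f} ms")  # ~0.32 ms
```

Same IOPS figure, double the block size and the throughput doubles while latency, for a fixed queue depth, is untouched; which is exactly why quoting any one of the three in isolation tells you very little.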

In any large IT infrastructure organisation, there are two teams who can generally look at their systems, make significant inferences about the health and effectiveness of the infrastructure, and make a real difference to it. They are often the two teams who are the most lambasted; one is the network team and the other the storage team. They are the two teams who are changing the fastest whilst maintaining a legacy infrastructure and keeping the lights on.

The Server Admin role has hardly changed…even virtualisation has had little impact on it; the Storage and Network teams are changing rapidly, with many embracing Software-Defined whilst the Industry is still trying to decide what Software-Defined actually is.

Many are already pretty DevOps in nature; they just don’t call it that, but just try managing the rapid expansion in scale without a DevOps-type approach.

I think many in the industry seem to want to kill off the Storage specialist role; yet it is more needed than ever and is becoming more key…you probably just won’t call them LUN Monkeys any more…they’ve evolved!

But we persist…

Technology Live and a Little More…

Last week, I was at A3 Communications’ Technology Live event; it’s a smaller event where a group of journalists, bloggers and analysts are briefed by three or four companies. Good fun, a chance for awkward questions to be asked and generally good-humoured banter.

It is a chance for some of the smaller and lesser-known companies, some pretty much just unveiling from stealth, to get their message across without the hype and hyperbole of the larger events you sometimes associate with the business.

Companies like Scale Computing and their converged platform probably deserve to be much better known; targeted at the SMB and smaller user whose IT department is one person who actually has another proper job, they quietly get on with things without press releases about yet further funding rounds and gazillion-dollar valuations. It is one of the few times I’ve had a converged platform demonstrated where I’ve thought ‘well, that makes sense for their target market’ as opposed to ‘shiny lights…but where’s the substance?’.

DDN are much larger and better known than Scale Computing but probably not as well-known as they should be; their HPC roots are allowing them to play in the scale-out and big data space. They’ve taken massive strides in hiding some of the complexity of their products; what was really a bit of an engineer’s product now has some polish that really lends itself to the Enterprise. If you are looking at tiering from primary storage to a secondary object storage tier, I think you must have a look.

Tarmin have been around for ages with their Gridbank Data-Defined Storage; it’s a really interesting concept but one for which I still struggle to find the use-case that will really drive it forward. A Swiss-Army knife of a product that might be lacking the one blade that would make it compelling; I feel it’ll just need too much work to integrate into most application environments, and I also have concerns about how easy it is to get out of if you decided it was no longer the platform for you.

We also had OpenIO, who are another Object Storage vendor in what is an increasingly crowded space; new to the game and building on top of an open-source product. You pay for the support and not the product; obviously, it’s a model that has worked well for some in the past but I feel you really need some critical mass before it becomes viable. And there are many alternatives out there now, but it did look nice; hexagons instead of circles. It is also really easy to get up and running quickly; install Vagrant if you haven’t already, run a couple of commands, and you can quickly have an object store up and running. With Swift and S3 compatibility, it could be a nice entry point for developers to play with.
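To give a flavour of the developer experience rather than anything definitive: because it speaks S3, the usual client tooling should just work. A minimal Python/boto3 sketch is below; the endpoint URL and credentials are placeholders for whatever your own install reports, not OpenIO defaults.

```python
import boto3

# Point a bog-standard S3 client at the local object store's S3 gateway.
# Endpoint and credentials below are placeholders; substitute the values
# your own installation gives you.
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:6007",
    aws_access_key_id="demo",
    aws_secret_access_key="demo-secret",
)

s3.create_bucket(Bucket="playground")
s3.put_object(Bucket="playground", Key="hello.txt", Body=b"hello, object world")
obj = s3.get_object(Bucket="playground", Key="hello.txt")
print(obj["Body"].read().decode())
```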

Earlier in the month, I was at BVE for my day job. I chatted to a few vendors but I really want to call out what I think is a perfect example of a company who are successfully building a business out of doing something extremely well in a well-defined niche. Object Matrix who are based in Wales do Object-Storage for media applications; they have spent a lot of time integrating with products like Avid and GrassValley, really understanding the business that they are in and building a successful company without mega-investments. And they are really nice people….who unfortunately support one of the weaker sides in the Six Nations ;-). 

There are many companies like those mentioned above who are doing great jobs for their customers but aren’t getting the recognition, because they don’t play in the ‘glamour’ end of the market; I suspect some of them will still be around years after the Unicorns have turned out to be pit-ponies…

Perhaps you work for one; if so…get in touch, I’d love to hear from you. 


Something’s missing?

Yes, I know…I’m getting very lazy about blogging; I’m still not sure if the industry is boring me or simply exasperating me so much that I cannot be bothered to vent my spleen any more. I suspect that it is a bit of both! This should be an interesting year for the industry with the mergers, takeovers and companies simply thrashing around trying to reinvent themselves. So apart from life still being somewhat stressful, I amuse myself trying to get my home-office perfectly set-up. I might even put up pictures once I have done so!!

Anyway, the recent announcements from companies large and small around All Flash Arrays have temporarily pricked me awake; hopefully at some point soon, the All Flash Array Announcement will no longer be a thing, it’ll just be another array announcement. Flash will eventually subsume rotational rust as the primary storage medium of choice for all workloads; well, until the next big thing comes along. Opinion as to when this will happen varies from pundit to purveyor but it is going to happen.

That time is not here though and perhaps it is still worth considering the best use of our storage capacity and how to get the most from it. And it seems that some vendors don’t really want to help us poor customers in this space.

If you ship an AFA variant of an existing array and you add new features that aren’t supported on the existing variant, or vice versa, across all tiers of storage, be it flash or rotational rust, I want good architectural reasons as to why you can’t do so. Compression, for example, works very well on both traditional disk and flash; in-line deduplication is harder, so you might get a pass on the latter but not the former. If you want to try to convince me that your expensive Flash tier is actually as cheap as the traditional tier you also ship, you are going to have to work extra hard to do so when competing with vendors who can actually enable features across all of their tiers.

I shall leave it to the reader’s imagination as to which vendor might be attempting to play this game.

2016 and Beyond…

Predictions are a mug’s game…the trick is to keep them as non-specific as possible and not name names…here are mine!

What is the future for storage in the Enterprise? 2016 is going to pan out to be an ‘interesting’ year; there are company integrations and mergers to complete, with more to come so I hear; cascading acquisitions seem likely as well.

There will be IPOs; they will be ‘interesting’! People are looking for exits, especially from the flash market. A market that looks increasingly crowded, with little to really tell between the players.

Every storage vendor is going to struggle to maintain growth; technology changes mean that, just to maintain current revenues, it is likely that twice as much capacity is going to have to be shipped. Yet data efficiency improvements, from thin-provisioning to compression to dedupe, mean that customers are storing more data on less capacity.

Add in the normal year-on-year decline in the price of storage and this is a very challenging place to be.
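To put some crude, entirely made-up numbers on that squeeze (every figure below is an assumption for illustration, not data from any vendor):

```python
# Entirely illustrative figures: how much customer data has to grow for a
# vendor's revenue simply to stand still over a refresh cycle.

price_per_tb = 1000      # $ per shipped TB today (made up)
price_decline = 0.5      # price roughly halves over the cycle
efficiency_gain = 1.5    # extra logical:physical from thin provisioning,
                         # compression and dedupe

data_tb = 1000           # customer's logical data today
capacity_tb = data_tb    # assume it's stored roughly 1:1 today
revenue = capacity_tb * price_per_tb                         # $1.0m

# To hold revenue flat at the new price, shipped capacity must double...
capacity_needed = revenue / (price_per_tb * price_decline)   # 2,000 TB
# ...and better efficiency means that capacity soaks up even more data.
data_needed = capacity_needed * efficiency_gain              # 3,000 TB

print(f"Data must grow ~{data_needed / data_tb:.1f}x just to stand still")
```

On those made-up numbers, the customer’s data has to roughly triple just for the vendor to tread water; change the assumptions and the multiple moves, but the direction of travel doesn’t.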

Larger storage customers are becoming more mercurial about what they buy; storage administration has got so easy that changing storage vendors is not the big deal it used to be. The primary value these days of having some dedicated storage bods is that they should be pretty comfortable with any storage put in front of them.

As much as vendors like to think that we all get very excited by their latest bell or whistle, I’m afraid that we don’t any more. Does it make my job easier; can I continue to do more with less or, at best, the same?

Data volumes do continue to grow but the amount of traditional primary data growth has slowed somewhat in my experience.

Data from instrumentation is a real growth area but much of this is transitory: collect, analyse, archive/delete…and as people start to see an ever-increasing amount of money flowing to companies like Splunk, expect some sharp intakes of breath.

Object Storage will continue to under-perform but probably less so. S3 will continue its rise as the protocol/API of choice for native object. Many file-stores will become object at the back-end but with traditional SMB/NFS front-ends. However, sync and share will make inroads formally into the enterprise space; products like Dropbox Enterprise will have an impact there.

Vendors will continue to wash their products in ‘Software Defined’ colours; customers will remain unimpressed. Open-source storage offerings will grow and cause more challenges in the market. Some vendors might decide to open-source some of their products; expect at least one large company to take this route and be accused of abandonware. And watch everyone try to change their strategy to match this.

An interesting year for many…so with that, I shall be off and wrap presents!

May you all have a Happy Christmas, a prosperous New Year and may your bits never rot!!

Waffle to burn?

NetApp have finally bitten the bullet and bought an AFA vendor; plumping for the technology-driven Solidfire as opposed to some of the marketing-driven competitors in the space.

At less than a billion dollars; it appears to be a very good deal for NetApp and perhaps with an ever decreasing number of suitors, it is a good deal for Solidfire and avoids the long march to IPO.

Obviously the whole deal will be painted as complementary to NetApp’s current product set but many will hope that Solidfire will, long-term, supplant the long-in-the-tooth OnTap. NetApp need to swallow their pride and move on from the past.

It can’t do this immediately; it needs work and it is not yet a solution for unstructured data. But putting data-services on top of it should not be a massive task, as long as that is what NetApp decide to do and they don’t decide to try to integrate it with OnTap. NetApp can’t afford another decade of engineering faff! Funnily enough though, FC is seen as a relative weak point for Solidfire; where have we heard that before?

This could be as big a deal for them as EMC’s acquisition of Data General in 1999; the Clariion business brought some great engineers and a business that turned into a cash-cow for them. It allowed them to move into a different space and gave them options; it probably saved the company whilst they were messing up the Symmetrix line.

And whilst EMC/Dell are integrating themselves; NetApp have a decent opportunity to steal a march on their arch-rivals; especially if they take a light touch and continue to allow Solidfire to act like an engineering-led start-up.

I still have my doubts whether a storage-focused behemoth can actually survive long-term as data-centres change and buying behaviours change. But for the time being, NetApp have an interesting product again.

Interesting times for friends at both companies…

p.s anyone want to buy a pair of Solidfire socks?

Object Lessons?

I was hoping that one of the things that I might be able to write about after HPE Discover was that HPE finally had a great solution for Scale-Out storage; either NAS or Object.

There had been hints that something was coming; yes, HPE had done work with Cleversafe and Scality for Object Storage but the hints were that they were doing something of their own. And with IBM having taken Cleversafe into their loving bosom, HPE are the only big player without their own object platform.

Turns out however that HPE’s big announcement was their ongoing partnership with Scality; now Scality is a good object platform but there are bits that need work as is the case with Cleversafe and the others.

I don’t think that I am the only one left disappointed by the announcement, and I’m not the only person who was thinking…why didn’t they just buy Scality?

Are HPE still thinking of doing their own thing? Well, it’s gone very quiet and there’s some sheepish looking people about and some annoyed HPErs wondering when they will get their story straight.

Like HPE’s Cloud strategy; confusion seems to reign.

If there is any take-away from the first HPE Discover…it seems that HPE are discovering slowly and the map that is being revealed has more in common with the Mappa Mundi than an Ordnance Survey map…vaguely right, bits missing and centred on the wrong thing.

Overcoming Objections

My friend Enrico is a massive fan of Object Storage whereas for a long time, I’ve had the reputation of being somewhat sceptical; feeling the whole thing has been somewhat overhyped. The hype started with EMC’s Atmos launch and continued from there. 

The problem with Object Storage has been the lack of support from application vendors especially in the space that I work in. And development teams, especially those working in organisations with large numbers of heritage applications have been very slow to embrace it.  Most just want to work with standard filesystems.

And so we saw the birth of the cloud-gateway; devices that sat in front of the object-stores and presented them in a more familiar manner. Yet often the way these were licensed simply added cost and negated the low cost of the object store; they also added complexity into the environment.

The Object Storage vendors were slow to acknowledge the issue and really wanted you to use the API to access the storage; some of the larger vendors really didn’t want their Object Storage to cannibalise their NAS revenues and were even slower to do so.

So it seemed that Object Storage was really going to be confined to the world of cloud-scale and cloud-native applications. 

But this now seems to be rapidly changing; robust NFS implementations from the Object Storage vendors are becoming significantly more common; SMB implementations still seem to be rather patchy but once they become more robust, I see Object Storage becoming the standard for file-serving applications. 

Will we see API-driven ‘file access’ become the universal method for interacting with file storage? Not for some time, but having the choice and realising that it is not an all-or-nothing scenario will begin to ease friction in this space.


Punish the Pundits!!

A day rarely goes by without someone declaring one technology or another is dead…and rarely a year goes by without someone declaring this is the year of whatever product they happen to be pimping or in favour of.

And yet, you can oft find dead technologies in rude health and rarely does it actually turn out to be the year of the product it is supposed to be the year of.

It turns out that pundits (including me) really have little idea what technology is going to die or fly. And that is what makes the industry fun and interesting.

The storage industry is especially good for this; SAN is dead, DAS lives, NAS is obsolete, Object is the future, iSCSI will never work, Scale Up, Scale Out…

We know nothing…

The only thing we do know is that data volumes will keep getting bigger and we need somewhere to put it all.

In the past three months, I’ve seen technologies in what everyone would have you believe are innovation-free zones that have made me stop and think ‘But I thought that was going to die…’

Yes we have far too many start-ups in some parts of the industry; far too many people have arrived at where they thought the puck was going to be.

A few people seem to be skating round where the puck was.

And there’s a few people who have picked up the puck, stuck it in their pocket and hidden it.

So my prediction for the next eighteen months…

‘Bumpy….with the chance of sinkholes!’

My advice…

‘Don’t listen to the pundits, we know nothing….we just love the shinies!!’