
May 2016

More Vintage Stuff

Recently I’ve been spending time thinking about what DevOps really means to my teams and to me. A lot of reading has been done and a lot of pondering of the navel.

The most important conclusion that I have come to is that the DevOps movement is nothing new; the second is that it can mean pretty much whatever you want it to, and hence there is no single right way to do it, though there might well be horribly wrong ways to do it.

As a virtual greybeard, I started my IT career in the mainframe world as a PL/1 programmer, but I also did some mainframe systems programming. As an application programmer, I was expected to do the deployment, support the deployment and be involved with the application from cradle to undeath.

As systems programmers, we scripted and automated in a variety of languages; we extended and expanded the functionality of system programs; user exits were, and are, an incredibly powerful tool for the systems programmer. VSAM and VTAM were the software-defined storage and networking of their time.

We plagiarised and shared scripts; mostly internally, but at times scripts would also make their way round the community via the contractor transmission method.

Many DevOps engineers would look at how we worked and find it instantly familiar…although the rigorous change control and formalised processes might freak them out a bit.

So as per usual, the wheel has been re-invented and re-branded.

I’ve boiled DevOps and the idea of the Site Reliability Engineering function down in my mind to the following –

Fix Bad Stuff
Stop Bad Stuff Happening
Do Good Stuff
Make Good Stuff easier to do
Keep Developing Skills

It turns out that my teams are already pretty much working in this way; some folks spend more time dealing with the Bad Stuff and some spend more time dealing with the Good Stuff.

DevOps could be a great way to work; you might find that you are already on this journey. Don’t believe anyone who tells you that it is new and revolutionary.

It’s not!

Time to Shop?

So have you taken my advice and started to build your own storage arrays? I certainly hope so; it’s a lot of fun and could really save your organisation some money.

Still, I hope that you’ve bought some spares, thoroughly checked the code, made sure that all your components work together and tested out the edge-cases.

You’ve sorted out your supply chain, made sure that you can get to your data-centre at all hours and told your boss that you are happy to be phoned up and shouted at until whatever has broken is fixed.

But you’ve saved your company a bunch of money!

And it was totally worth doing!? You might have to hire another person or two to help out, but they’ll be geeks too and you’ll have great fun. You’ll be competing with Google, Amazon et al. with your team’s engineering chops.

Of course, there are a lot of nay-sayers who say you can’t do this and you can’t do it at scale. They are both right and wrong.

This is a thought-process that a lot of us working in the larger-scale enterprises go through a lot. Every now and then we’ll go through the exercise and work out that the costs don’t stack up over the four or five year technology life-cycle we tend to work in. Certainly when you start to factor in the cost of support and the risk profile that a Business is willing to sign up to, you just don’t save enough at Enterprise scale. It might be different if you are Google or Amazon, working at a completely different scale; building your own works at the small scale and at hyper-scale.

But the big vendors are very worried about this trend; you might simply move wholesale into Public Cloud, and if you go to one of the big Cloud companies who do their own engineering, they are going to see very little of your money. I’m finally beginning to see this fear reflected in the traditional vendors’ pricing; their four-year costs are getting closer to the cost of building it out yourself, excluding the ongoing costs.

Vendors who supply both software-only and appliance versions of their infrastructure generally price the appliance versions at a lower point than software-only with bring-your-own-hardware. This allows them to maintain revenue at the expense of pure margin; the big analysts tend to report on revenue and market-share as opposed to raw profitability, and it seems to be more important to be the biggest rather than the most profitable.

Buying whitebox is not a game that most Enterprises are in yet; whitebox almost takes you to a world where servers are consumables like pen and paper, which is how the hyperscalers work. In the Enterprise, we might not be too far off this; if Enterprises start to change their depreciation cycles for compute tiers, it is entirely possible that the software layer becomes the real capital investment and not the tin.

Dell, HPE and the like simply become the equivalent of a Ryman’s catalogue.


Time to Build?

Any half-way competent storage administrator or systems administrator should be able to build a storage array themselves these days. It’s never really been easier, and building yourself a dual-head filer that does both block and network-attached storage should be a doddle for anyone with a bit of knowledge, a bit of time and some reasonable google-fu skills. I built a block-storage array using an old PC, a couple of HBAs and Linux about five years ago; it was an interesting little project that could present LUNs via FC/iSCSI and file-shares via SMB and NFS. It couldn’t do Object, but if I were doing it again today, it would.

It was only a single-head device, but it was good enough to use as a target to play about with FC and generally support my home devices. I only recently switched it off because I’m not running FC at home any more.
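For anyone curious what the block half of that sort of home-build looks like today, here is a minimal sketch of exposing a spare disk as an iSCSI LUN using the stock Linux LIO target and targetcli, driven from a small Python script; the device path, IQNs and volume name are made up for illustration and not anything from my original box.

#!/usr/bin/env python3
# Minimal sketch: expose a spare disk as an iSCSI LUN via the Linux LIO target.
# Assumes targetcli is installed and the script runs as root; the device path,
# IQNs and volume name below are purely illustrative.
import subprocess

def targetcli(cmd):
    """Run a one-shot targetcli command, failing loudly on error."""
    subprocess.run(["targetcli"] + cmd.split(), check=True)

DISK = "/dev/sdb"                                  # the spare disk in the old PC
TARGET_IQN = "iqn.2016-05.local.homelab:array1"    # name of our target
CLIENT_IQN = "iqn.2016-05.local.homelab:client1"   # the one initiator we allow in

targetcli(f"/backstores/block create name=vol0 dev={DISK}")                # register the disk
targetcli(f"/iscsi create {TARGET_IQN}")                                   # create the target
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/luns create /backstores/block/vol0")  # map it as LUN 0
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/acls create {CLIENT_IQN}")            # let the client see it
targetcli("saveconfig")                                                    # persist across reboots

Point an initiator at it and you have the core of a block array; the file-sharing side is just Samba and an NFS server layered on top.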

But if I could build a storage array five years ago, you can do so today. I am not that good a storage/server guy; I’m a tinkerer and dilettante. You are probably much more competent than me.

Another factor that makes it easier is that FC is slowly going away; progress is slow, but iSCSI is making headway for those who really need block, and 10 GbE is coming down in price. I’m also interested to see whether some of the proposed intermediate Ethernet speeds have an impact in this space; many data-centres are not yet 10 GbE and there is still quite a cost differential. 1 GbE is not really good enough for a data-centre storage network, but 5 GbE and maybe even 2.5 GbE might be good enough in some cases. And as FC goes away, building your own storage endpoints becomes a lot simpler.

Throw in commodity flash with one of the ‘new’ file-systems and you have a pretty decent storage array at a cost per terabyte that is very attractive. Your cost of acquisition is pretty low, you’ll learn a whole lot and you’ll be positioned nicely for the Infrastructure as Code tsunami.
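If one of those ‘new’ file-systems is something like ZFS, the file side of such a box comes down to a handful of commands; here is a rough sketch, assuming ZFS on Linux and a couple of spare SSDs (device, pool and dataset names invented), scripted in the same Python style as above:

#!/usr/bin/env python3
# Rough sketch: a mirrored ZFS pool on two commodity SSDs with an NFS export.
# Assumes ZFS on Linux is installed; device, pool and dataset names are
# illustrative only.
import subprocess

def run(args):
    subprocess.run(args, check=True)

SSDS = ["/dev/nvme0n1", "/dev/nvme1n1"]   # commodity flash, mirrored for safety

# Mirrored pool with lz4 compression switched on from the start.
run(["zpool", "create", "-O", "compression=lz4", "tank", "mirror"] + SSDS)
# A dataset exported over NFS via ZFS's built-in sharing...
run(["zfs", "create", "tank/share"])
run(["zfs", "set", "sharenfs=on", "tank/share"])
# ...and SMB can be layered on with Samba or the sharesmb property.

Snapshots, checksumming and compression come along for free, which is a big part of why the cost per usable terabyte looks so attractive.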

If you do a great job, you might even be able to spin yourself out as a new flash start-up. Your technology will be very similar to that of a number of start-ups out there.

So why are you sitting here? Why are you still raising POs against the three- or four-letter vendors?

Imagine never having to speak to them again, what a perfect world.

Tooling Up?

As the role of the storage admin changes, the toolset will change and we will need new tools to assist us. Modelling tools that let us work with our own workloads are generally very expensive and often come with rather dubious ROI models to justify them to our management.

And you rarely need them, but if, for example, you are heading into a refresh cycle and you don’t necessarily trust the vendors to mark their own homework, such a tool is useful, especially one that you can model your own workload with.

So this tool from the newly merged Load Dynamix and Virtual Instruments looks interesting; I’ve not had a play yet and I’m not sure of the limitations, but free is hard to beat.

Workload Central 

Hopefully more vendors and the community will get involved and add more potential data sources.