

More Vintage Stuff

Recently I’ve been spending time thinking about what DevOps really means to my teams and to me. A lot of reading has been done and a lot of pondering of the navel.

The most important conclusion that I have come to is that the DevOps movement is nothing new; the second conclusion I have come to is that it can mean pretty much whatever you want it to, and hence there is no right way to do it, though there might well be horribly wrong ways to do it.

As a virtual greybeard; I started my IT career in the mainframe world as a PL/1 programmer but I also did some mainframe systems programming. As an application programmer, I was expected to do the deployment, support the deployment and be involved with the application from cradle to undeath.

As a systems programmer, we scripted and automated in a variety of languages; we extended and expanded functionality of system programs; user exits were/are an incredibly powerful tool for the systems programmer. VSAM and VTAM – the software-defined storage and networking of their time.

We plagiarised and shared scripts; mostly internally but also, at times, scripts would make their way round the community via the contractor transmission method.

Many DevOps engineers would look at how we worked and find it instantly familiar…although the rigorous change control and formalised processes might freak them out a bit.

So as per usual, the wheel has been re-invented and re-branded.

I’ve boiled DevOps and the idea of the Site Reliability Engineering function down in my mind to the following –

Fix Bad Stuff
Stop Bad Stuff Happening
Do Good Stuff
Make Good Stuff easier to do
Keep Developing Skills

It turns out that my teams are already pretty much working in this way; some folks spend more time dealing with the Bad Stuff and some spend more time dealing with the Good Stuff.

DevOps could be a great way to work; you might find that you already are on this journey and don’t believe anyone who tells you that it is new and revolutionary.

It’s not!

Time to Shop?

So have you taken my advice and started to build your own storage arrays? I certainly hope so; it’s a lot of fun and could really save your organisation some money.

Still, I hope that you’ve bought some spares, thoroughly checked the code, made sure that all your components work together and checked out the edge-cases.

You’ve sorted out your supply chain; made sure that you can get to your data-centre at all hours and told your boss that you are happy to be phoned up and shouted at until something broken is fixed.

But you’ve saved your company a bunch of money!

And it was totally worth doing!? You might have to hire another person or two to help out, but they’ll be geeks too and you’ll have great fun. You’ll be competing with Google, Amazon et al on your team’s engineering chops.

Of course, there are a lot of nay-sayers who say you can’t do this and you can’t do it at scale. They are both right and wrong.

This is a thought-process that many of us working in larger-scale enterprises go through regularly. Every now and then we’ll go through the exercise and work out that the costs don’t stack up over the four or five year technology life-cycle we tend to work in. Certainly when you start to factor in the cost of support and the potential risk profile that a Business is willing to sign up to; you just don’t save enough at Enterprise scale. It might be different if you are Google or Amazon and working at a completely different scale. Building your own works at the small scale and the hyper-scale.

But the big vendors are very worried about this trend; you might simply move wholesale into Public Cloud, and then they are going to see very little of your money if you go to one of the big Cloud companies who do their own engineering. I’m finally beginning to see this fear being reflected in the traditional vendors’ pricing; their four-year costs are getting closer to the cost of building it out yourself, excluding the ongoing costs.

Vendors who supply both software-only and appliance versions of their infrastructure generally price their appliance versions at a lower price point than software-only with bring-your-own-hardware. This allows them to maintain revenue at the expense of pure margin; the big analysts tend to report on revenue and market-share as opposed to raw profitability; it seems to be more important to be the biggest and not necessarily the most profitable.

Buying whitebox is not a game that most Enterprises are in yet; whitebox almost takes you to a world where servers are consumables like pen and paper; this is how the hyperscalers work. In the Enterprise, we might not be too far off this; if Enterprises start to change their depreciation cycles for compute tiers, it is entirely possible that the software layer becomes the real capital investment and not the tin.

Dell, HPE and the likes simply become the equivalent of a Ryman’s catalogue.

Time to Build?

Any half-way competent storage administrator or systems administrator should be able to build a storage array themselves these days. It’s never really been easier, and building yourself a dual-head filer that does both block and network-attached storage should be a doddle for anyone with a bit of knowledge, a bit of time and some reasonable google-fu skills. I built a block-storage array using an old PC, a couple of HBAs and Linux about five years ago; it was an interesting little project, and it could present LUNs via FC/iSCSI and file-shares via SMB and NFS. It couldn’t do Object but if I was doing it again today, it would.

And it was only a single-head device, but it was good enough to use as a target to play about with FC and generally support my home devices. I only recently switched it off because I’m not running FC at home any more.
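
To give a flavour of what’s involved, here’s a minimal sketch of the sort of scripting behind a DIY block-and-file target; it assumes a Linux box with the LIO target tools (targetcli) and an NFS server installed, and every name, path, IQN and subnet below is made up purely for illustration.

# A rough sketch of scripting up a DIY block-and-file target on Linux.
# Assumes targetcli (LIO) and an NFS server are already installed; every
# name, path, IQN and subnet here is illustrative, not a recommendation.
import subprocess

def run(cmd):
    # Run a shell command and fail loudly if it doesn't work.
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Block: back a LUN with a file and present it over iSCSI via the LIO target.
run("targetcli /backstores/fileio create name=lun0 file_or_dev=/srv/lun0.img size=100G")
run("targetcli /iscsi create iqn.2016-01.org.example.diyarray:target0")
run("targetcli /iscsi/iqn.2016-01.org.example.diyarray:target0/tpg1/luns create /backstores/fileio/lun0")
run("targetcli saveconfig")

# File: export a directory over NFS to the lab subnet.
run("exportfs -o rw,sync 192.168.0.0/24:/srv/share")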

But if I could build a storage array five years ago; you can do so today. I am not that good a storage/server guy; I’m a tinkerer and dilettante. You are probably much more competent than me.

Another factor that makes it easier is that FC is slowly going away; it’s slow progress, but iSCSI is making headway for those who really need block and 10 GbE is coming down in price. I’m also interested to see whether some of the proposed intermediate speeds of Ethernet have an impact in this space; many data-centres are not yet 10 GbE and there is still quite a cost differential, and while 1 GbE is not really good enough for a data-centre storage network, 5 GbE and maybe even 2.5 GbE might be good enough in some cases. And as FC goes away, building your own storage endpoints becomes a lot simpler.

Throw in commodity flash with one of the ‘new’ file-systems and you have a pretty decent storage array at a cost per terabyte that is very attractive. Your cost of acquisition is pretty low, you’ll learn a whole lot and be positioned nicely for the Infrastructure as Code tsunami.

If you do a great job, you might even be able to spin yourself out as a new flash start-up. Your technology will be very similar to that of a number of start-ups out there.

So why are you sitting here, why are you still raising POs against the three or four letter name vendors?

Imagine never having to speak to them again, what a perfect world.

Pestilential but Persistent!

There is no doubt that the role of the Storage Admin has changed; technology has moved on and the business has changed but the role still exists in one form or another.

You just have to look at the number of vendors out there jockeying for position; the existing big boys, the new kids on the block, the objectionable ones and the ones you simply want to file. There’s more choice, more decisions and more chance to make mistakes than ever before.

The day-to-day role of the Storage Admin; zoning, allocating LUNs, swearing at arcane settings, updating Excel spreadsheets and convincing people that it is all ‘Dark Magic’; that’s still there but much of it has got easier. I expect any modern storage device to be easily manageable on a day-to-day basis; I expect the GUI to be intuitive; I expect the CLI or API to be logical and I hope for the nomenclature used by most players to be common.

The Storage Admin does more day-to-day and does it quicker; the estates are growing ever larger but the number of Storage Admins is not increasing in line. But that part of the role still exists and could be done by a converged Infrastructure team, and often is.

So why do people keep insisting the role is dead? 

I think because they focus on the day-to-day LUN monkey stuff and that can be done by anyone. 

I’m looking at things differently; I want people who understand business requirements, can turn these into technical requirements, and can then talk to vendors and sort the wheat from the chaff. People who can filter bullshit; the crap that flies from all sides, the unreal marketing and the unreal demands of the Business.

People who can look at complex systems and break them down quickly; who understand different types of application interaction, who understand the difference between IOPS, latency and throughput.

People who are prepared to ask pertinent and sometimes awkward questions; who look to challenge and change the status-quo. 

In any large IT infrastructure organisation, there are two teams who can generally look at their systems and make significant inferences about the health and effectiveness of the infrastructure, and make a real difference to it. They are often the two teams who are the most lambasted; one is the network team and the other the storage team. They are the two teams who are changing the fastest whilst maintaining a legacy infrastructure and keeping the lights on.

The Server Admin role has hardly changed…even virtualisation has had little impact on it; the Storage and Network teams are changing rapidly, with many embracing Software-Defined whilst the Industry is still trying to decide what Software-Defined is.

Many are already pretty DevOps in nature; they just don’t call it that, but try managing the rapid expansion in scale without a DevOps-type approach.

Many in the industry seem to want to kill off the Storage specialist role; yet it is more needed than ever and is becoming a lot more key…you probably just won’t call them LUN Monkeys any more…they’ve evolved!

But we persist…

Reality is persistent

I see quite a few posts about this storage or that storage…how it is going to change everything or has changed everything. And yet, I see little real evidence that storage usage is really changing for many. So why is this?

Let’s take on some of the received wisdom that seems to be percolating around. 

Object Storage can displace block and file?

It depends; replacing block with object is somewhat hard. You can’t really get the performance out of it; you will struggle with the APIs especially to drive performance for random operations and partial updates.

Replacing file with object is somewhat easier; most unstructured data could happily be stored as object, and it is. It’s an object called a file. I wonder how many applications, even those using S3 APIs, treat Object Storage as anything other than a file-store; how many use some of the extended metadata capabilities?
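
To illustrate the point, a small sketch (assuming boto3 and entirely made-up bucket and key names): most applications stop at the first call and never touch the metadata that makes an object more than a file.

# Assumes boto3 and made-up bucket/key names; purely illustrative.
import boto3

s3 = boto3.client("s3")

# "An object called a file": dump some bytes under a key and forget about it.
with open("q4.csv", "rb") as f:
    s3.put_object(Bucket="my-archive", Key="reports/2015/q4.csv", Body=f)

# Actually using the object model: attach user-defined metadata at write time
# and read it back later without fetching the data itself.
with open("q4.csv", "rb") as f:
    s3.put_object(
        Bucket="my-archive",
        Key="reports/2015/q4.csv",
        Body=f,
        Metadata={"department": "finance", "retention": "7y", "source": "erp-extract"},
    )
head = s3.head_object(Bucket="my-archive", Key="reports/2015/q4.csv")
print(head["Metadata"])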

In many organisations; what we want is cheaper block and file. If we can fake this by putting a gateway device in front of Object Storage; that’s what we will do. The Object vendors have woken up to this and that is what they are doing. 

But if a vendor can do ‘native’ file with some of the availability advantages of a well-written erasure coding scheme at a compelling price point, we won’t care.
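
The arithmetic behind that price point is simple enough; here is a back-of-the-envelope sketch, with scheme parameters that are purely illustrative.

# Back-of-the-envelope: raw capacity needed per usable TB. Scheme parameters
# are illustrative; real products differ in layout and failure handling.
def raw_per_usable(data_chunks, parity_chunks):
    # A k+m erasure code stores k data chunks plus m parity chunks.
    return (data_chunks + parity_chunks) / data_chunks

print("3-way replication :", 3.0)                    # 3.00x raw per usable TB
print("8+3 erasure code  :", raw_per_usable(8, 3))   # 1.375x, survives 3 losses
print("10+4 erasure code :", raw_per_usable(10, 4))  # 1.40x, survives 4 losses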

And when I can boot from Object Storage..call me.   

All new developers are Object Storage aficionados?

I’m afraid that, from my limited sample size, this is rarely the case. Most seem to want to interact with file-systems or databases for their persistence layer. Now, the nature of the databases that they want to interact with is changing, with more becoming comfortable with NoSQL databases.

Most applications just don’t produce enough data to warrant any kind of persistence layer that requires Object or even any kind of persistence layer at all.  

Developers rarely care about what their storage is; they just want it to be there and work according to their needs. 

Technology X will replace Technology Y

Only if Technology Y does not continue to develop and only if Technology X has a really good economic advantage. I do see a time when NAND could replace rotational rust for all primary storage but for secondary and tertiary storage; we might still be a way off. 

It also turns out that many people have a really over-inflated idea of how many IOPS their applications need; there appears to be a real machismo about claiming that you need thousands of IOPS…when our monitoring shows that someone could write with a quill pen and still fulfil the requirement. Latency does turn out to be important; when you do your 10 IOPS, you want them to be quick.
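
If you want to put rough numbers on those three terms, a deliberately naive sketch like the one below is enough to start the conversation; it is single-threaded at queue depth one, the file name and sizes are arbitrary, and real benchmarking tools do this far better.

# Deliberately naive: one thread, queue depth 1, 4K writes to a scratch file,
# with fsync so the page cache doesn't flatter the numbers too much.
import os
import time

PATH, BLOCK, COUNT = "scratch.bin", 4096, 1000
buf = os.urandom(BLOCK)

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT)
latencies = []
start = time.perf_counter()
for _ in range(COUNT):
    t0 = time.perf_counter()
    os.write(fd, buf)
    os.fsync(fd)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"IOPS:        {COUNT / elapsed:,.0f}")
print(f"Throughput:  {COUNT * BLOCK / elapsed / 1e6:,.1f} MB/s")
print(f"Avg latency: {sum(latencies) / COUNT * 1000:.2f} ms")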

Storage is either free or really cheap?

An individual gigabyte is basically free; a thousand of these is pretty cheap but a billion gigabytes is starting to get a little pricey.

A Terabyte is not a lot of storage? 

In my real life, I get to see a lot of people who request a terabyte of storage for a server because, hell, even their laptop has this amount of storage. But for many servers, a terabyte is a huge amount of storage…many applications just don’t have this level of requirement for persistent data. A terabyte is still a really large database for many applications; unless the application developers haven’t bothered to write a clean-up process.

Software-Defined is Cheaper? 

Buy a calculator and factor in your true costs. Work out what compromises you might have to make and then work out what that is worth to you. 
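
The shape of that calculation matters more than any particular answer; here is a sketch where every figure is a placeholder to be replaced with your own numbers over your own life-cycle.

# Every figure below is a placeholder; the point is the shape of the sum,
# not the answer. Plug in your own numbers over your own life-cycle.
YEARS = 5

build = {
    "hardware": 200_000,          # whitebox servers, drives, NICs
    "software": 50_000 * YEARS,   # SDS licences or support subscriptions
    "spares":   30_000,           # shelf spares you now have to hold yourself
    "people":   80_000 * YEARS,   # the slice of engineering time to run it
}
buy = {
    "array":    450_000,          # vendor array, after discount
    "support":  45_000 * YEARS,   # maintenance contract
    "people":   20_000 * YEARS,   # far less care and feeding
}

print("Build:", sum(build.values()))
print("Buy:  ", sum(buy.values()))
# ...and that still leaves out the compromises and the risk profile the
# Business will actually sign up to.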

Google/Amazon do it, so we can too?

You could but is it really your core business? Don’t try to compete with the web-scale companies unless you are one..focus on providing your business with the service it requires. 

Storage Administration is dead?

It changed, and you should change too, but there is still a role for people who want to manage the persistent data-layer in all its forms. It’s no longer storage…it’s persistence.

Mine is the only reality?

I really hope not…

Skating with Cerberus

I imagine there was a sharp intake of breath as Microsoft announced SQL Server for Linux, and then a checking of dates. And yet it makes perfect sense; a very sensible strategic move for Microsoft.

My question, and I know I’m not the only person asking this, is: what is the future of Windows in the data-centre? If SQL Server runs well on Linux, there is a vanishingly small number of workloads that I would want to run on Windows Server in a data-centre. Yes, there are a lot of third-party applications that run on Windows and this is going to continue for many years, but I do really wonder if Microsoft’s heart is in the Windows Server business.

Microsoft appear to have decided that their future is in Cloud, not the Enterprise DC. I mean, it’s always been questionable whether anyone sane would run Exchange themselves, and now you don’t have to; Office 365 takes care of that for you.

A lot of people like Azure, and sure, Microsoft would prefer you to run your cloud apps in Azure; but if you want to run them elsewhere, they would still like to make money out of you. SQL Server on Linux will remove some of the friction for deployment in the Cloud.

SQL Server running on Linux also allows them to compete with Oracle in those data-centres where Windows is simply a grudging presence; there are certainly those who will have you believe that SQL Server is not Enterprise, but many of those comments have been driven by the stigma of Windows. I work with DBAs who do both; for most workloads, SQL Server and Oracle are equally good.

So what’s left for Microsoft to do?

Well, if Microsoft announce AD Services running on Linux; you’ll really know that their heart is no longer in the Windows Data-centre.

2016 and Beyond…

Predictions are a mug’s game…the trick is to keep them as non-specific as possible and not name names…here are mine!

What is the future for storage in the Enterprise? 2016 is going to pan out to be an ‘interesting’ year; there are company integrations and mergers to complete, with more to come so I hear, and cascading acquisitions seem likely as well.

There will be IPOs; they will be ‘interesting’! People are looking for exits, especially from the flash market; a market that looks increasingly crowded, with little to really tell between the players.

Every storage vendor is going to struggle to maintain growth; technology changes mean that, just to maintain current revenues, twice as much capacity is likely going to have to be shipped. Yet data-efficiency improvements, from thin-provisioning to compression to dedupe, mean that customers are storing more data on less capacity.

Add in the normal year-on-year decline in the price of storage and this is a very challenging place to be.

Larger storage customers are becoming more mercurial about what they buy; storage administration has got so easy that changing storage vendors is not the big deal it used to be. The primary value these days of having some dedicated storage bods is that they should be pretty comfortable with any storage put in front of them.

As much as vendors like to think that we all get very excited by their latest bell or whistle, I’m afraid that we don’t any more. Does it make my job easier; can I continue to do more with less or, best case, the same?

Data volumes do continue to grow but the amount of traditional primary data growth has slowed somewhat in my experience.

Data from instrumentation is a real growth area, but much of this is transitory; collect, analyse, archive/delete…and as people start to see an ever increasing amount of money flowing to companies like Splunk, expect some sharp intakes of breath.

Object Storage will continue to under-perform but probably less so. S3 will continue its rise as the protocol/API of choice for native object. Many file-stores will become object at the back-end but with traditional SMB/NFS front-ends. However, sync and share will make inroads formally into the enterprise space; products like Dropbox Enterprise will have an impact there.

Vendors will continue to wash their products in ‘Software Defined’ colours; customers will remain unimpressed. Open-source storage offerings will grow and cause more challenges in the market. Some vendors might decide to open-source some of their products; expect at least one large company to take this route and be accused of abandonware. And watch everyone try to change their strategy to match this.

An interesting year for many…so with that, I shall be off and wrap presents!

May you all have a Happy Christmas, a prosperous New Year and may your bits never rot!!

Waffle to burn?

NetApp have finally bitten the bullet and bought an AFA vendor, plumping for the technology-driven Solidfire as opposed to some of the marketing-driven competitors in the space.

At less than a billion dollars, it appears to be a very good deal for NetApp; and perhaps, with an ever decreasing number of suitors, it is a good deal for Solidfire too, avoiding the long march to IPO.

Obviously the whole deal will be painted as complementary to NetApp’s current product set, but many will hope that Solidfire will, in the long term, supplant the long-in-the-tooth OnTap. NetApp need to swallow their pride and move on from the past.

It can’t do this immediately; it needs work and it is not yet a solution for unstructured data. But putting data-services on top of it should not be a massive task, as long as that is what NetApp decide to do and they don’t try to integrate it with OnTap. NetApp can’t afford another decade of engineering faff! Funnily enough though, FC is seen as a relative weak-point for Solidfire; where have we heard that before?

This could be as big a deal for them as EMC’s acquisition of Data General in 1999; the Clariion business brought some great engineers and a business that turned into a cash-cow for them. It allowed them to move into a different space and gave them options; it probably saved the company whilst they were messing up the Symmetrix line.

And whilst EMC/Dell are integrating themselves; NetApp have a decent opportunity to steal a march on their arch-rivals; especially if they take a light touch and continue to allow Solidfire to act like an engineering-led start-up.

I still have my doubts whether a storage-focused behemoth can actually survive long-term as data-centres change and buying behaviours change. But for the time being, NetApp have an interesting product again.

Interesting times for friends at both companies…

p.s anyone want to buy a pair of Solidfire socks?

Object Lessons?

I was hoping that one of the things that I might be able to write about after HPE Discover was that HPE finally had a great solution for Scale-Out storage; either NAS or Object.

There had been hints that something was coming; yes, HPE had done work with Cleversafe and Scality for Object Storage, but the hints were that they were doing something of their own. And with IBM having taken Cleversafe into their loving bosom, HPE are the only big player without their own object platform.

Turns out, however, that HPE’s big announcement was their ongoing partnership with Scality; now, Scality is a good object platform, but there are bits that need work, as is the case with Cleversafe and the others.

I don’t think that I am the only one left disappointed by the announcement, and not the only person who was thinking…why didn’t they just buy Scality?

Are HPE still thinking of doing their own thing? Well, it’s gone very quiet and there’s some sheepish looking people about and some annoyed HPErs wondering when they will get their story straight.

Like HPE’s Cloud strategy; confusion seems to reign.

If there is any take-away from the first HPE Discover…it seems that HPE are discovering slowly, and the map that is being revealed has more in common with the Mappa Mundi than an Ordnance Survey map…vaguely right, bits missing and centred on the wrong thing.

Dude – You’re Getting An EMC

Just a few thoughts on the Dell/EMC takeover/merger or whatever you want to call it. 

  1. In a world where IT companies have been busy splitting themselves up; think HP, Symantec, and IBM divesting from the server business…it seems a brave move to build a new IT behemoth.
  2. However; some of the restructuring already announced hints at a potential split in how Dell do business. Dell Enterprise to be run out of Hopkinton and using EMC’s Enterprise smarts in this space.
  3. Dell have struggled to build a genuine storage brand since Dell and EMC went their separate ways; arguably their acquisitions have under-performed.
  4. VMware is already under attack from various technologies – VMware under the control of a hardware server vendor would have been a problem a decade ago but might be less so now, as people have more choices for both virtualising Heritage applications and Cloud-Scale. VMware absolutely now have to get their container strategy right.
  5. EMC can really get to grips with how to build their hyper-converged appliances and get access to Dell’s supply chain. 
  6. That EMC have been picked up by a hardware vendor just shows how hard it is to transition from a hardware company to a software company. 
  7. A spell in purdah seems necessary for any IT company trying to transition their business model. Meeting the demands of the market seems to really hamper innovation and change. EMC were so driven by a reporting cycle, it drove very poor behaviours.
  8. All those EMC guys who transitioned away from using Dell laptops to various MacBooks…oh dear!
  9. I doubt this is yet a done deal and expect more twists and turns! But good luck to all my friends working at both companies! May it be better!