
Data Centres

Virtually Pragmatic?

So EMC have joined the storage virtualisation party; although they are calling it federation, it is what IBM, HDS and NetApp, amongst others, call storage virtualisation. So why do this now, after warning of dire consequences of doing so in the past?

There are probably a number of reasons. There have certainly been commercial pressures: I know of a number of RFPs which have gone out from large corporates mandating this capability. Money talks, and in an increasingly competitive market EMC probably have to tick this feature box.

The speed of change in the spinning-rust market appears to be slowing; certainly the incessant increase in the size of hard disks is slowing, which means there may be less pressure to technically refresh the spindles, so decoupling the disk from the controller makes sense. EMC can protect their regular upgrade revenues at the controller level and forgo some of the spinning-rust revenues; they can more than make up for this out of maintenance revenues on the software.

But I wonder if there is a more pressing technological reason and trend which makes this a good time to do it: the rapid progress of flash into the data centre and how EMC can work to accelerate its adoption. It is conceivable that EMC could be looking at shipping all-flash arrays which allow a customer to continue to enjoy their existing array infrastructure and realise the investment that they have made. It is also conceivable that EMC could use a VMAX-like appliance to integrate their flash-in-server more simply with a third-party infrastructure.

I know nothing for sure, but the size of this about-turn from EMC should not be underestimated; Barry Burke has railed against this approach to storage virtualisation for so long that there must be some solid reasoning to justify it in his mind.

Pragmatism or futurism, a bit of both I suspect.

Local Storage, Cloud Access

Just as we have seen a number of gateways which allow you to access public cloud storage in a more familiar way, making it appear local to your servers, we are beginning to see services and products which do the opposite.

To say that these turn your storage into cloud storage is probably a bit of a stretch, but what they do is allow your storage to be accessed by a multitude of devices wherever they happen to be. They bring the convenience of Dropbox but with a more comfortable feeling of security, because the data is stored on your storage. Whether this is actually any more secure will be entirely down to your own security and access policies.

I’ve already blogged about Teamdrive and I’ll be blogging about it again, and also about the Storage Connector from Oxygen Cloud, in the near future. I must say that some of the ideas and the support for Enterprise storage from the folks at Oxygen Cloud look very interesting.

I do wonder when or if we’ll see Dropbox offer something similar themselves; Dropbox, with its growing software ecosystem, would be very attractive with the ability to self-host. It would possibly give some of the larger storage vendors something to consider.

These new products do bring some interesting challenges which will need to be addressed; you can bet that your users will start to install these on their PCs, both at work and at home. The boundaries between corporate data and personal data will become ever more blurred; much as I hate it, the issue of rights management is going to become more important. Forget the issue of USB drives being lost; you could well find that entire corporate shares are exposed.

But your data any time, any place is going to become more and more important; convenience is going to trump security again and again. I am becoming more and more reliant on cloudy storage in my life, but for me it is a knowing transition; I suspect many others are simply not aware of what they are doing.

This is not a reason simply to stop them, but a reason to look at offering these services to them and also to educate. The offerings are coming thick and fast, and the options are getting more diverse and interesting. The transition to storage infrastructure as software has really opened things up. Smaller players can start to make an impact; let’s hope that the elephants can dance.

Sticky Servers

I read the announcements from HP around their Gen8 servers with some interest and increasing amusement. Now HP are an intrinsically funny company, but it isn’t that which is amusing me; it’s the whole server industry and an interesting trend.

The Intel server industry was built on the back of the ‘PC Compatible’ desktop, where you could buy a PC from pretty much any vendor, run MS-DOS and run the same application anywhere. They all looked the same, and if you could maintain one, you could maintain any of them.

Along came the PC Server and it was pretty much the same thing; if you could maintain Server Brand X, you could maintain Server Brand Y. And so it pootled along until blade servers came along and muddied the water a bit, but it wasn’t so hard.

If you wanted to migrate between server vendors, it wasn’t rocket science; if you wanted to move from Compaq to Dell to IBM, it was not a big deal to be honest. Although sometimes the way people carried on, you would have thought you were moving from gas-powered computers to electric computers to computers with their own nuclear reactors in.

And then along came Cisco with UCS, and the Intel server got bells, whistles and fancy pants. All in the name of ‘Ease of Use and Management’; it’s all fancy interfaces and APIs; new things to learn and all slightly non-standard.

And now HP follow along with Gen8; it’s all going to be slightly non-standard and continue to drift away from the original whitebox server. The rest of the vendors are all moving the same way: how do I make sure that my customers remain loyal and sticky?

It’s all going to get increasingly hard to migrate between server vendors without major rethinking and retraining. Perhaps this is all going to accelerate the journey to the public cloud, because I don’t want to have to care about any of that!

And as a storage guy, I can’t help but laugh!  Welcome to our world!

Dear Santa – 2011

Dear Santa,

it’s that time of year again when I write to you on behalf of the storage community and beyond. 2011 promised much but delivered less than hoped; the financial crisis throughout the world has put a damper on the party and there are some gloomy faces around. But as we know, the world will always need more storage, so what do we need delivered in 2012?

Firstly, what we don’t need is Elementary Marketing Crud from the Effluent Management Cabal; perhaps this was a last grasp at a disappearing childhood as they realise that they need to be a grown-up company.

What I would like to see is some more serious discussion about what ‘Big Data’ is and what it means, both from a Business point of view and from a social responsibility point of view. I would like to see EMC and the rest get behind efforts to use data for good; for example, the efforts to review all drug-trial data ever produced to build a proper evidence-based regime for the use and prescription of drugs, especially for children, who often just get treated as small adults. This is just one example of how we can use data for good.

There are so many places where ‘Big Data’ can be used beyond the simple analysis of Business activities that it really could change the world. Many areas of science, from Climate Research to Particle Physics, generate huge amounts of data that need analysing and archiving for future analysis; we can look at this as being a gift to the world.

And Santa, it can also be used to optimise your route around the world, I’m sure it is getting more complicated and in these days of increasing costs, even you must be looking at ways of being more efficient.

Flying through clouds on Christmas Night, please remember us down below who are still trying to work out what Cloud is and what it means; there are those who feel that this is not important, but there are others who worry about there being no solid definition. There are also plenty of C-level IT execs who are currently losing sleep over what Cloud in any form means to them and their teams.

So perhaps what is needed is less spin, more clarity and leadership. More honesty from vendors and users: stop calling products and projects ‘Cloud’; focus on delivery and benefits. A focus on deliverables would remove much of the fear around the area.

Like your warehouses at this time of year, our storage systems are full and there is an ever-increasing demand for space. It does not slow down and, unlike yours, our storage systems never really empty. New tools for data and storage management allowing quick and easy classification of data are a real requirement, along with standards-based application integration for Object storage; de facto standards are okay, and perhaps you could get some of the vendors to stop being precious about ‘Not Invented Here’.

I would like to see the price of 10GbE come down substantially but also I would like to see the rapid introduction of even faster networks; I am throwing around huge amounts of data and the faster I can do it, the better. A few years ago, I was very positive about FCoE; now I am less so, certainly within a 10 GbE network it offers very little but faster networks might make me more positive about it again.

SSDs have changed my desktop experience but I want that level of performance from all of my storage; I’ve got impatient and I want my data *NOW*. Can you ask the vendors to improve their implementation of SSDs in Enterprise Arrays and obviously drive down the cost as well? I want my data as fast as the network can supply it and even faster if possible; local caching and other techniques might help.

But most of all Santa, I would like a quiet Christmas where nothing breaks and my teams get to catch up on some rest and spend time with their families. The next two years’ roadmap for delivery is relentless and time to catch our breath may be in short supply.

Merry Christmas,

Storagebod


Trading Commodities

‘Would you trust your business on a storage array built from commodity hardware?’, to paraphrase a remark which came up in a meeting today. This comment took me aback, as we were discussing another array which is also built from commodity hardware, although the questioner seemed blissfully unaware of that. I left the meeting feeling a little perturbed and put out, with something nagging going round my head.

The comment is not an uncommon one to be honest, but does it really mean anything at all? And then it hit me:

‘We risk all of our business on commodity hardware all the time; what the hell do you think those servers are?’

Most of the time, they will be clustered to fail over in a very similar manner to a commodity dual-head storage device. And as we virtualise more and more services, the impact of a failed server or server chassis is very similar to the impact of the failure of a head in a dual-head storage array.

So would I trust my business to commodity hardware? Well, I don’t think we’ve got that much choice these days, do you? Be it storage or servers, it’s getting to be pretty much the same thing!

Toxic IT?

Often we find ourselves talking about Legacy IT, especially when we are discussing the move to ‘Cloud’. What do we do with the legacy? At CloudCamp London, Simon Wardley suggested that we need to start calling Legacy IT ‘Toxic IT’. And the more I think about it, the more I agree, and not just when we are talking about the move to Cloud.

Many organisations have Legacy IT; oh, the accounts system; that’s legacy; the HR systems, they are legacy; that really important system which we rely on and is foundational, that’s legacy. What? These are all key systems, how have they become legacy? And the longer we leave them, the more toxic they become.

Businesses and especially Enterprises run on legacy systems; whilst we rush to the new and roll out exciting services built on the new Cloud, we leave the legacy behind. And so they moulder and rot, eventually becoming toxic.

All of us working in Enterprise IT can probably point to at least one key service which is running on infrastructure (hardware and software) that is long out of support. Services which may well be responsible for a large proportion of the company’s revenue, or for the company’s survival. These services have been around since the company was founded; that’s why I talk about them being foundational.

But why are they left behind? Maybe it’s because it is easier to ask for budget for new stuff that brings new value and markets to a company. What is the return on investment of maintaining your accounts system? It’s not going to add to your bottom line, is it? Still, if your accounts system collapses, you won’t even know what your bottom line is.

So, that Legacy IT; it’s rapidly becoming toxic and it is going to cost you more and more to clean up that toxic pile the longer you leave it.

I think it’s time for many Enterprises to run an Honesty Commission where they ask their IT teams to identify all systems which are becoming toxic and commit to cleaning them up. Just because it hasn’t broken yet does not mean that it is not broken! Just because your services are not showing symptoms of toxicity, it does not mean that they are not slowly breaking down. Many poisons work rather slowly.

Yes, you might decide that you are going to move it to the Cloud, or you might just commit to maintaining it properly.

Glistening Gluster

There seems to be more and more stuff appearing about Gluster; there’s a really nice article about Rolling Your Own Fail-Over SAN Cluster with Thin Provisioning, Deduplication and Compression using Ubuntu, which just goes to show how far you can go with the DIY approach to building your own storage devices.

Please note that this article utilises iSCSI for its SAN connectivity, but there’s no reason why you shouldn’t do a little more work and support FC as well; I daresay that putting together FCoE is not beyond the realms of possibility.
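
For a flavour of the DIY approach, here is a minimal sketch of just the Gluster part of such a build: probing a second peer, then creating and starting a two-way replicated volume. This is my own illustration rather than the article’s actual recipe; the hostnames, brick path and volume name are invented placeholders, glusterfs-server is assumed to be installed on both Ubuntu nodes, and the iSCSI (or FC) export layer sits on top of this and is not shown.

    #!/usr/bin/env python3
    """Minimal sketch: bring up a two-node replicated GlusterFS volume of the
    kind a DIY fail-over SAN build starts from. All names below are invented
    placeholders; run as root from the first node."""

    import subprocess

    PEERS = ["gluster1", "gluster2"]   # assumed hostnames of the two nodes
    BRICK = "/data/brick1/gv0"         # assumed brick directory on each node
    VOLUME = "gv0"                     # assumed volume name


    def gluster(*args: str) -> None:
        """Echo and run a gluster CLI command, failing loudly on error."""
        cmd = ["gluster", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)


    # Pull the second node into the trusted storage pool.
    gluster("peer", "probe", PEERS[1])

    # Create a two-way replicated volume with one brick per node, then start it.
    # (Recent Gluster releases may prompt about split-brain risk on replica 2.)
    bricks = [f"{peer}:{BRICK}" for peer in PEERS]
    gluster("volume", "create", VOLUME, "replica", "2", *bricks)
    gluster("volume", "start", VOLUME)

    # Clients can then mount it natively, for example:
    #   mount -t glusterfs gluster1:/gv0 /mnt/gv0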

I’d also suggest that people have a look at the 3.3 beta to see what is coming down the line.

And I am certainly not suggesting that you should run your mission-critical business applications on it, but it really goes to show how far we’ve moved; premium features are beginning to turn up in open-source systems.

A threat to the existing Storage Cabal? Not yet but for the more adventurous of you, there is a huge amount of potential.


Presumptuous Thinking

A couple of emails floated into my inbox recently which brought home to me how long the journey is going to be for many companies as they try to move to a service-oriented delivery model for IT. I think many are going to be flailing around for some years to come as they try to make sense of ‘new’ paradigms; and this impacts not just the IT function but well beyond it.

The technological changes are important, but actually much could be achieved without changing technologies massively. All that is required is a change of mindset.

Pretty much all traditional IT is delivered on a presumption-based delivery model; everything is procured and provisioned based on presumption.

A project will look at its requirements and talk to the IT delivery teams; both teams often make the presumption that both sides know what they are talking about, and a number of presumptions are made about the infrastructure which is required. An infrastructure is procured and provisioned, and this often becomes a substantial part of the project costs; it is also something which is set in stone and cannot change.

I don’t know about you, but if you look at the accuracy of these presumptions, I suspect you will find massive over-provisioning and hence that the costs of many projects are overstated. Or sometimes it is the other way round; but examining most IT estates (even those heavily virtualised), there is still lots of spare capacity.

However, you will find that once the business unit funding the project has been allocated the infrastructure, they are loath to let it go. Why should we let the other guy get his project cheap? And once a project is closed, it is often extremely hard to retrospectively return money to it.

Of course, this is nonsense and it is all money which is leaving the Business, but business units are often parochial and do not take the wider picture into account. This is even more true when costs are being looked at; you don’t want to let the other guy look more efficient by letting them take advantage of your profligacy. It is politically more astute to ensure that everyone is over-provisioning and that everyone is equally inefficient!

In IT, we make this even easier by allowing an almost too transparent view into our provisioning practices. Rate cards for individual infrastructure components may seem like a great idea, but they encourage all kinds of bad practice.

‘My application is really important, it must sit on Tier 1’ has often led to a Tier 1 deployment far in excess of what is really required. However, if you are caught moving a workload to a lesser tier, all kinds of hell can break out; we’d paid for that tier and we are jolly well going to use it.

‘My budget is a little tight, perhaps I can get away with it sitting on a lower tier or with not provisioning enough disk’; I’ve seen this happen on the grounds that by the time the application is live and the project closed, it becomes an IT Support problem. The project team has moved on and it’s not their problem.

The presumption model is broken and leads to dissatisfaction both in the IT teams and the Business teams. In fact it is probably a major factor in the overwhelming view that IT is too expensive.

The consumption model is what we need to move to, but this means some fundamental changes in how Business Leaders and IT Leaders think about IT. If you want to retain a private IT infrastructure, and many do, you almost have to take a ‘build it and they will come’ approach; your Service Provider competitors already do this, their model is based entirely on it.

You need to think about your IT department as a Business; however, you have an advantage over the external competitor or at least you should.

  • You should know your Holding company’s Business.
  • You only have to break even and cover your costs, you do not need to make a profit and any profit you do make should be ploughed straight back into your business. This could be in the form of R&D to make yourself more efficient and effective or it could be on infrastructure enhancement but you do not have to return anything to your shareholders apart from better service.
  • You should have no conflicting service demands; there should be no suspicion that another company is getting a better deal or better service. You can focus! You can be transparent.

When I talk about transparency, beware of component-level rate cards; you should have service rate cards based on units consumed, not units presumed to be allocated. In order to do this, you will need a dynamic infrastructure that will grow to service the whole. It would be nice if the infrastructure could shrink with reduced demand, but realistically that will be harder. However, many vendors are now savvy to this and can provision burst capacity with a usage-based model; just beware of the small print.
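
To make the presumed-versus-consumed distinction concrete, here is a trivial, purely illustrative sketch; the rate and the capacity figures are invented for the example and are not drawn from any real rate card.

    """Illustrative only: contrast a presumption-based charge (pay for what
    was allocated up front) with a consumption-based charge (pay for what was
    actually used). Rate and figures are invented."""

    RATE_PER_TB_MONTH = 150.0  # hypothetical service rate, per TB per month


    def presumption_charge(allocated_tb: float) -> float:
        """The business unit pays for everything provisioned at project time."""
        return allocated_tb * RATE_PER_TB_MONTH


    def consumption_charge(used_tb: float) -> float:
        """The business unit pays only for capacity actually consumed."""
        return used_tb * RATE_PER_TB_MONTH


    if __name__ == "__main__":
        allocated_tb, used_tb = 100.0, 35.0  # a typically over-provisioned project
        print(f"Presumed charge: {presumption_charge(allocated_tb):,.2f} per month")
        print(f"Consumed charge: {consumption_charge(used_tb):,.2f} per month")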

There might be ways of using redundant capacity such as DR and development capacity to service peaks but this needs to be approached with caution.

And there is the Holy Grail of public Cloud-Bursting, but most sensible experts believe that this is currently not really viable except for the most trivial workloads.

If you have a really bursty workload, this might be a case when you do negotiate with the Business for some over-provisioning or pre-emptable workloads. Or you could consider that this is an appropriate workload for the Public Cloud, let the Service Provider take the investment risk in this case.

But stop basing IT on presumption and focus on consumption.


Cloud-Bailing

Chuck Hollis pondered whether using a remote DR facility could ever be considered ‘Cloud-Bursting’, and much conversation ensued along the lines that ‘Cloud-Bursting’ is a marketing thing which currently doesn’t exist and won’t exist until applications can be architected which automagically scale and move themselves to utilise capacity wherever else it may be. I am paraphrasing some much brighter people than me who know a lot more about Cloud than a mere Storagebod, but that’s kind of the message I took away.

Anyway, what Chuck was pondering is not exactly new; for decades we have moved workloads about, sometimes moving them temporarily to a DR site or second site to free up capacity for a transient capacity demand or whilst waiting for a capacity upgrade. Mainframe houses are/were pretty well versed in this, shifting workloads at peak times and then bringing them back once the crisis has passed.

It’s not really bursting; bursting is something which just happens and is dynamic, immediate and exciting. This sort of workload management is somewhat akin to bailing out a boat or perhaps transferring a liquid from a now too small container whilst you either stem the flow or get a bigger container.

And yes, you could use the public Cloud as your temporary container, but you could also use your DR site, or perhaps your development kit…but to ‘Cloud-Wash’ it as ‘Cloud-Bursting’ is probably pushing things a little far. So perhaps ‘Cloud-Bailing’, simply on the grounds that it brings to mind something which is not especially elegant and a little haphazard.

Or perhaps we could just call it Workload Management and consider it a general discipline which could be applied equally to Cloud and to more traditional IT?

White Box Data Centre

As the traditional IT vendors try to build infrastructure stacks and package them at a premium cost, part of me is beginning to wonder whether this makes as much sense as it appears. Obviously it makes sense for the vendor, who is trying to maximise their profits, but for customers?

I read tweet after tweet from various of the stack vendors about how they are winning service-provider business, but does this show a remarkable lack of imagination and technical chops from the service providers? Is it time for more service providers and larger enterprises, who are in the market for many thousands or tens of thousands of servers, to look at cutting out the middleman and working with the ODMs directly?

We already have companies like Google and Facebook working with the ODMs; the ODMs are hence gaining experience in dealing with end customers and with a more application-centric view. Might this not start to become a threat to the more traditional vendors? And might we see them start to work with companies which have smaller-scale requirements? Large service providers of all kinds might find the option of servers which fit their requirements exactly very attractive.

HP, for example, are offering what is basically a data centre in a shipping container, but might this be something that the ODMs are better placed to build and provision for the customer who has such a requirement?

And if Big Data is a real thing, ODMs might be very well placed to attack this market. White-box storage appliances utilising open-source software such as Ceph, Lustre and Gluster, for example? Networking appliances built on commodity hardware?

Now, I don’t think this is an overnight change in the market and it’s probably not an immediate threat to the traditional vendors but I can see an interesting tension between the ODMs and the traditional vendors developing.

As an end-user of IT; it’s all good, choice is good both in vendors and architectural principles.