Storagebod

Big Ideas

Big Answers Need Big Data?

At an SNW briefing session today, X-IO (Xiotech) talked a lot of sense about Big Data; in fact it was almost the most sense that I have heard spoken about Big Data in a long time. The fact is that most Big Data isn’t really that big and the data-sets are not huge; there are exceptions, but most of the data-sets that companies will actually use can be measured in a few terabytes, not the tens or hundreds of terabytes that the big storage vendors want to talk about.

Take sentiment data derived from social networking: these are not necessarily big data sets. A tweet, for example, is 140 characters, so roughly 140 bytes; a terabyte is 1,099,511,627,776 bytes. We can store a lot of tweets in a terabyte, and within that data there is a lot of information that can be extracted.
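To put a rough number on that, here is a back-of-the-envelope calculation; a sketch only, assuming one byte per character and ignoring any metadata or storage overhead:

```python
# Back-of-the-envelope: how many 140-byte tweets fit in a terabyte?
# Assumes one byte per character and no metadata or storage overhead.
TWEET_BYTES = 140
TERABYTE = 2 ** 40          # 1,099,511,627,776 bytes (binary terabyte)

tweets_per_tb = TERABYTE // TWEET_BYTES
print(f"{tweets_per_tb:,} tweets per terabyte")   # roughly 7.85 billion
```

Even allowing for realistic metadata overhead, the point stands: years of sentiment data fit comfortably inside a handful of terabytes.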

In fact, there are probably some Big Answers in that not-so-Big Data, but we need to get rid of the noise; in order to do this, we need to be able to process the data differently and directly. The most important thing that storage can do is vanish and become invisible: allow data processing to be carried out in the most natural way, without requiring work-arounds which hide the deficiencies of the storage.

If your storage vendor spends all their time talking about the bigness of data, then perhaps they are the wrong vendor.

Wellies!

I was watching the iPhone 5 announcement with a sinking feeling; I am at the stage where I am thinking about upgrading my phone, have been considering coming back to Apple, and I really wanted Apple to smash the ball over the pavilion and into the car-park (no baseball metaphors for me). But they didn’t; it’s a perfectly decent upgrade but nothing which has made my mind up for me.

I am now in the position of considering another Android phone, an iPhone or even the Lumia 920, and there’s little to choose between them; I don’t especially want any of them, they’ll all do the job. I just want someone to do something new in the smartphone market, but perhaps there’s nothing new to do.

And so this brings me onto storage; we are in the same place with general purpose corporate storage: you could choose EMC, NetApp, HDS, HP or even IBM for your general purpose environment and it’d do the job. Even price-wise, once you have been through the interminable negotiations, there is little between them. As for TCO, you choose the model which supports your decision; you can make it look as good or as bad as you want. There’s not even a really disruptive entry to the market; yes, Nexenta are getting some traction but there’s no big market swing.

I don’t get the feeling that there is a big desire for change in this space. The big boys are packaging their boring storage with servers and networking and trying to make it look interesting and revolutionary. It’s not.

And yet, there are more storage start-ups than ever before, but they are all focused around some very specific niches, and we are seeing these niches becoming mainstream or gaining mainstream attention.

SSD and flash-accelerated devices aimed at the virtualisation market: there’s a proliferation of these appearing from players large and small. These are generally aimed at VMware environments; once I see them appearing for Hyper-V and other rivals, then I’ll believe that VMware is really being challenged in the virtualisation space.

Scalable bulk storage, be it Object or traditional file protocols: we see more and more players in this space. And there’s no real feeling of a winner or a dominant player; this is especially true in the Object space, where the lack of a standard, or even the perceived lack of one, is hampering adoption by many who would really be the logical customers.

And then there is the real growth, where the exciting stuff is happening: the likes of Dropbox, Evernote and others. Here it is all about the application and API access. This is kind of odd: people seem willing to build applications, services and apps around these proprietary protocols in a way that they feel unwilling to do with the Object Storage vendors. Selling an infrastructure product is hard; selling an infrastructure product masquerading as a useful app…maybe that is the way to go.

It is funny that some of the most significant changes in the way that we will do infrastructure and related services in the future are being driven from completely non-traditional spaces…but this kind of brings me back round to mobile phones: Nokia didn’t start as a mobile company and, who knows, perhaps it’ll go back to making rubber boots again.

Start-Ups Galore

Recently it seems that there are more storage start-ups than ever before; be it flash-based storage, object storage, storage aimed at virtual environments, cloud storage, storage as software or storage appliances, it seems that every day more and more press releases announcing yet another innovation in the storage space hit my inbox.

How many of these are truly innovative? Not so many, I guess, but it seems that the storage start-up industry is in rude health. The barrier to entry into the market has significantly dropped, and the introduction of commodity-based hardware and software has really changed things.

And yet we still see the doom merchants predicting the end of the storage administrator; to be fair, a few years ago I might have been in agreement, but the sheer diversity of storage infrastructures, big data growth and just general growth leads me to feel that the storage administrator role still has life. Yes, the role will change and evolve, much as storage has evolved, and it may become more virtualisation-focused, but there will still be storage specialists and there will probably be as many as ever.

I am going to do my bit to ensure that the role of the ‘Storage Bod’ continues and encourage the diversity which will drive more complexity; I am a judge for the Tech Trailblazers awards, so if you are a new storage start-up and your product can further drive the complexity into the storage environment, you should enter. But if your product is really simple, just works and makes lives easier, please don’t bother….we want the environment to stay complex and a black-art.

Of course I am probably in the minority and some of the judges will be looking for more sensible things, so I guess start-ups with products both complex and simple should probably enter. There are some good prizes, some great sponsors and excellent judges (well, better qualified than me anyway).

As I say, the barrier to entry to the market seems to have fallen somewhat, but some extra cash and help is always handy.

Patience is a Virtue?

Or is patience just an acceptance of latency and friction? A criticism oft made of today’s generation is that they expect everything now and that this is a bad thing; but is it really?

If a bottle of fine wine could mature in an instant and be as good as a ’61, would this be a bad thing? If you could produce a Michelin-quality meal in a microwave, would it be a bad thing?

Yes, today we do have to accept that such things take time but is it really a virtue? Is there anything wrong with aspiring to do things quicker whilst maintaining quality?

We should not just accept that latency and friction in process is inevitable; we should work to try to remove them from the way that we work.

For example, change management is considered to be a necessary ITIL process, but does it have to be the lengthy bureaucratic process that it is? If your infrastructure is dynamic, surely your change process should be dynamic too? If you are installing a new server, should you have to raise a change:

1) to rack and stack
2) to configure the network
3) to install the operating system
4) to present the storage
5) to add the new server to the monitoring solution etc, etc

Each of these is an individual change raised by a separate team. Or should you be able to do this all programmatically? Now obviously, in a traditional data-centre some of these require physical work, but once the server has been physically commissioned, there is nothing there which should not be able to be done programmatically and pretty much automatically.
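As a sketch of what ‘programmatically and pretty much automatically’ might look like, the steps above could be driven from a single orchestration script. Every function name below is a hypothetical placeholder standing in for whatever network, OS-deployment, storage and monitoring APIs an organisation actually has; it is not any particular vendor’s interface.

```python
# A minimal sketch of commissioning a server as one automated change,
# rather than several separate changes raised by separate teams.
# Every function here is a hypothetical placeholder for a real API call
# (network config, OS deployment, storage presentation, monitoring).

def configure_network(hostname: str, vlan: int) -> None:
    print(f"[net] {hostname}: assigned to VLAN {vlan}")

def install_os(hostname: str, image: str) -> None:
    print(f"[os] {hostname}: deploying image '{image}'")

def present_storage(hostname: str, size_gb: int) -> None:
    print(f"[storage] {hostname}: presented {size_gb} GB LUN")

def add_to_monitoring(hostname: str) -> None:
    print(f"[monitoring] {hostname}: registered")

def commission_server(hostname: str) -> None:
    """Run every post-rack step as a single, auditable change."""
    configure_network(hostname, vlan=100)
    install_os(hostname, image="standard-linux-build")
    present_storage(hostname, size_gb=500)
    add_to_monitoring(hostname)

if __name__ == "__main__":
    commission_server("new-server-01")
```

The point is not the particular functions but the shape: one change record, one automated run, one audit trail, rather than a relay race of tickets between teams.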

And so it goes for many of the traditional IT processes; they introduce friction and latency to reduce the risk of the IT department smacking into a wall. This is often deeply resented by the Business, who simply want to get their services up and running; it is also resented by the people who follow the processes, and then it is thrown away in an emergency (which happens more often than you would possibly expect 😉).

This is not a rant against ITIL; it was a tool for a more sedate time, but in a time when patience is no longer really a virtue, do we need a better way? Or perhaps something like an IT Infrastructure API?

Don’t throw away the rule-book but replace it with something better.

P.S. Patience was actually my grandmother; she had her vices but we loved her very much.

Not Special

As we grow up, there are various times in our lives when we realise that we are not as special as we always believed we were, or certainly that we are less important than we thought. This can be the arrival of a younger sibling or the birth of a child; these sorts of events can affect us greatly, and the feelings resulting from them can be quite painful, but it is all a necessary part of growing, learning who we are and changing our perspective.

And as it is for people, so it is for Businesses and Business functions; the moment you believe that you are special as a matter of right, something is going to come along and disrupt that centre.

Internal IT functions have for a long time believed that they are special; we all know that they are not. But so do many Businesses and other functions; I’ve lost track of the number of times that someone has tried to convince me that they don’t have to follow a process because they are special. And yet we find ourselves kow-towing to that attitude all the time; internally and externally we find ourselves making exceptions to rules…whether it is the Mega-Corporation which does not want to pay tax or the Senior Manager who believes that they should not have to follow the internal IT policy.

However, I do believe that we should embrace difference; the department that wants to work differently because it supports their processes should be supported. You change the rules but don’t make exceptions; if the rules don’t work, don’t ignore them but change them. And at times, don’t be afraid to tear up the rule book and come up with a completely new set of rules; pick up that ball and run with it.

I look around at the moment and I see so many people and companies trying to put in exceptions and workarounds to fit their business models and activities, trying to foreclose on the potential disruption that is coming…believing that they are special; from banking to broadcast, when they might be better off tearing up their play-book and starting again.

No-one believed that you could win a major football tournament without strikers; Spain showed that you can…you just have to play differently.

Meltdown

The recent RBS systems meltdown and the rumoured reasons for it are a salutary reminder to us all of how reliant we are on the continued availability of core IT systems; these systems are pretty much essential to modern life. Yet arguably the corporations that run them have become incredibly cavalier and negligent; their maintenance and long-term sustainability, even in supposedly heavily regulated sectors such as Banking, is woeful.

There is an ‘It Ain’t Broke, So Don’t Fix It’ mentality that has led to systems that are unbelievably complex and tightly coupled; this is especially true of those early adopters of IT technologies such as the Banking sector.

I spent my early IT years working for a retail bank in the UK, and even twenty years ago this mentality was prevalent and dangerous; code that no-one understood sat at the core of systems, and the wrappers written to try to hide the ancient code meant that you needed to be half-coder, half-historian to stand a chance of working out exactly what it did.

If we add another twenty years to this, twenty years of rapid change in which we have seen the rise of the Internet, 24-hour access to information and services, mobile computing and a financial collapse, you have almost a perfect storm. Rapidly changing technology coupled with intense pressure on costs has led to under-investment in core infrastructure whilst the Business chases the new. Experience has oft been replaced with expedience.

There is simply no easy Business Case that justifies the re-writing and redevelopment of your core legacy applications, even if you still understand them; well, there wasn’t until last week. If you don’t do this, and if you don’t start to understand your core infrastructure and applications, you might well find yourself in the same position that the guys at RBS have.

Systems that have become too complex and are hacked together to do things that they were never supposed to do; systems which, if I’m being generous, were developed in the 80s but more likely the 70s, trying to cope with the demands of the 24-hour generation; systems which are carrying out more and more processing in real time and yet are, at their heart, batch systems.

If we continue down this route, there will be more failures and yet more questions to be answered. Dealing with legacy should no longer be ‘It Ain’t Broke, So Don’t Fix It’ but ‘It Probably Is Broke, You Just Don’t Know It…Yet!’ Look at your Business: if it has changed out of all recognition, if your processes and products no longer resemble those of twenty years ago, it is unlikely that IT systems designed twenty years ago are fit for purpose. And if you’ve stuck twenty years’ worth of sticking plaster on them to try and make them fit for purpose, it’s going to hurt when you try to remove the sticking plaster.

This is not a religious argument about Cloud, Distributed Systems or Mainframe, but one about understanding the importance of IT to your Business and investing in it appropriately.

IT may not be your Business but IT makes your Business…you probably wouldn’t leave your offices to fall into disrepair, patching over the cracks until they fall down; don’t do the same to your IT.

The Last of the Dinosaurs?

Chris ‘The Storage Architect’ Evans and I were having a Twitter conversation during the EMC keynote where they announced the VMAX 40K; Chris was watching the live-stream and I was watching the Chelsea Flower Show. From Chris’ comments, I think that I got the better deal.

But we got to talking about the relevance of the VMAX and the whole bigger is better thing. Every refresh, the VMAX just gets bigger and bigger, more spindles and more capacity. Of course EMC are not the only company guilty of the bigger is better hubris.

VMAX and the like are the ‘Big Iron’ of the storage world; they are the choice of the lazy architect. The infrastructure patterns that they support are incredibly well understood and text-book, but do they really support Cloud-like infrastructures going forward?

Now, there is no doubt in my mind that you could implement something which resembles a cloud, or let’s say a virtual data-centre, based on VMAX and its competitors. Certainly if you were a Service Provider with aspirations to move into the space, it’s an accelerated on-ramp to a new business model.

Yet just because you can, does that mean you should? EMC have done a huge amount of work to make it attractive: an API to enable you to programmatically deploy and manage storage allows portals to be built to encourage a self-service model. Perhaps you believe that this will allow light-touch administration and the end of the storage administrator.

And then Chris and I started to talk about some of the realities; change control on a box of this size is going to be horrendous. In your own data-centre, co-ordination is going to be horrible, but as a service provider? Well, that’s going to be some interesting terms and conditions.

Then there is migration: in your own environment, to migrate a petabyte array in a year means migrating roughly 20 terabytes a week. Depending on your workload, year-ends, quarter-ends and known peaks, your window for migrations could be quite small. And depending on how you do it, it is not necessarily non-service-impacting; mirroring at the host level means significantly increasing your host workload.
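The arithmetic behind that figure is simple enough; a sketch only, assuming 1 PB = 1,024 TB and a full year of usable migration time, which change freezes and peak periods will eat into:

```python
# Rough migration-rate arithmetic for draining a petabyte array in a year.
# Assumes 1 PB = 1,024 TB and 52 usable weeks; freezes and peaks reduce this.
ARRAY_TB = 1024        # one petabyte expressed in terabytes
WEEKS = 52

print(f"{ARRAY_TB / WEEKS:.1f} TB per week")          # ~19.7 TB/week, roughly 20

# Knock out quarter-end and year-end freezes and the weekly rate climbs:
usable_weeks = 52 - 8                                  # assumed 8 frozen weeks
print(f"{ARRAY_TB / usable_weeks:.1f} TB per week with 8 frozen weeks")
```

Every week you cannot migrate pushes the required rate, and therefore the workload impact, higher.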

As a service provider, you have to know a lot about workloads that you don’t really influence and don’t necessarily understand. As a service provider customer, you have to have a lot of faith in your service provider. When you are talking about massively shared pieces of infrastructure, this becomes yet more problematic. You are going to have to reserve capacity and capability to support migration; if you find yourself overcommitting on performance, i.e. you assume that peaks don’t all happen at once, you have to understand the workload impact of migration.

I am just not convinced that these massively monolithic arrays are entirely sensible; you can certainly provide secure multi-tenancy, but can you prevent behaviours from impacting the availability and performance of your data? And can you do it in all circumstances, such as code-level changes and migrations?

And if you’ve ever seen the back-out plan for a failed Enginuity upgrade…well, the last time I saw one, it was terrifying.

I guess the phrase ‘Eggs and Baskets’ comes to mind; yet we still believe that bigger is better when we talk about arrays.

I think we need to have some serious discussion about optimum array sizes to cope with exceptions and when things go wrong, and then some discussion about the migration conundrum. Currently I’m thinking that a petabyte is as large as I want to go, and as for the number of hosts/virtual hosts attached, I’m not sure. Although it might be better to think about the number of services an array supports and what can co-exist, both performance-wise and availability-window-wise.

No, the role of the Storage Admin is far from dead; it’s just become about administering and managing services as opposed to LUNs. Yet, the long-term future of the Big Iron array is limited for most people.

If you as an architect continue to architect all your solutions around Big Iron storage…you could be limiting your own future and the future of your company.

And you know what? I think EMC know this…but they don’t want to scare the horses!

Big Data Values for All?

The jury is probably still out on the real value of ‘Big Data’ and what it will mean to our lives; whether it is a power for good or ill, or even whether it is a power for anything at all, is probably still up for debate. But there is one thing which is probably true: ‘Big Data’ will change data-processing for the better.

At present, the prevailing wisdom is that if you have data to store, you should store it in a relational database; but the ‘new’ data-processing techniques which ‘Big Data’ brings to the party change this, or at least seriously question this wisdom.

I know many applications that currently store their data in relational databases and could possibly benefit from a change of focus; these are often log-oriented applications which use only one or two tables to store their data, and often the indexes needed to enable fast processing are larger than the data stored.
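As a trivial illustration of the sort of processing such log-oriented data actually needs, a single streaming pass over flat files often answers the question directly, with no tables or indexes at all. This is a sketch only; the log file name and the field layout are assumed for illustration rather than taken from any particular application.

```python
# A minimal sketch: answering an aggregate question over log data directly,
# without loading it into a relational database first.
# Assumes a simple space-separated log line of the form:
#   2012-09-14T10:32:01 host42 ERROR something went wrong
from collections import Counter
from pathlib import Path

def errors_per_host(log_path: str) -> Counter:
    """Count ERROR lines per host with a single streaming pass over the file."""
    counts: Counter = Counter()
    with Path(log_path).open() as handle:
        for line in handle:
            fields = line.split()
            if len(fields) >= 3 and fields[2] == "ERROR":
                counts[fields[1]] += 1
    return counts

if __name__ == "__main__":
    for host, total in errors_per_host("app.log").most_common(10):
        print(host, total)
```

Scale the same pattern out across many files and many machines and you have the essence of the ‘Big Data’ processing techniques in question; no schema, no index maintenance, and no database licence.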

So even if you have no ‘Big Data’, you may find that you have more candidates than you realise for ‘Big Data’ processing techniques…and I suspect this is what really scares our friends at Oracle. For too long now, serious data processing has required serious relational databases, and that road took us into the realms of Oracle, increasing costs and infrastructure complexity.

The problem is that re-writes show little immediate business value and the investment will take two or three years to pay off; it is this that your RDBMS account manager is counting on. Yet as soon as you start to factor in maintenance, upgrade and recurring costs, this should be an economic no-brainer for the IT Manager with foresight.

 

 

No Pain, No Gain?

I always reserve the right to change my mind, and I am almost at the stage where I have changed my mind on blocks/stacks or whatever you want to call them, and for non-technical and non-TCO-related reasons.

I think that, in general, componentised and commodity-based stacks make huge sense; whether you are building out a private or a public infrastructure, a building-block approach is the only really scalable and sustainable approach. And I wrote internal design documents detailing this approach eight or nine years ago; I know I’m not the only one, and we didn’t call it cloud…we called it governance and sensible.

But where I have changed my opinion is on the pre-integrated vendor stacks; I still think that they are an expensive way of achieving a standardised approach to deploying infrastructure, and I have not changed from that view.

However, I think that this cost may well be the important catalyst for change; if you can convince a CFO/CEO/CIO/CTO etc. that this cost is actually an investment, but that to see a return on the investment you need to re-organise and change the culture of IT, it might well be worth paying.

If you can convince them that without the cultural change, they will fail….you might have done us all a favour. If it doesn’t hurt, it probably won’t work. If it is too easy to write things off when it’s tough…it’ll be too easy to fall back into the rut.

So EMC, VCE, Cisco, IBM, NetApp, HP etc….make it eye-wateringly expensive but very compelling please. Of course, once we’ve made the hard yards, we reserve the right to go and do the infrastructure right and cheap as well.

Archicultural….

It seems the more that I consider the architectural and technical challenges and changes facing the Corporate IT world, the more I come back to the cultural issues which exist within many IT departments, and the more strongly I feel that this is where the work really needs to be done.

Unfortunately it is pretty hard to buy a culture from a vendor, even though I suspect that if Chuck could work out exactly how to do so, we’d have a product from EMC called V-CLT (or is that VMware?); so building a culture is going to have to be an internal thing, and that means it is going to be tough.

Too often the route into IT Management means either promoting excellent techies into management or sometimes promoting people into positions where they can do no more harm, as opposed to moving people into positions which suit them and their personalities. I am sure that we can all think of examples of both; this is especially true in end-user organisations, as the career paths are less varied than those in vendor organisations. Vendor organisations have sales, marketing and other avenues for progression; they also have the traditional IT paths as well.

But all IT organisations are suffering from cultures which neither scale nor are sustainable in the long term. There needs to be a long-term shift which ensures that training and development cover more than just technical skills; there needs to be a move away from a hero culture that sees staff at all levels of an organisation regularly halving their hourly rates by working longer than their contracted hours, not taking leave and forgetting that you ‘Work to Live’.

Careers need to be thought of as more than the fastest route to the top, and when people find their natural level, this does not mean that they stop being valuable members of an organisation. Work on developing people horizontally (and you with the dirty mind can stop sniggering); I think that there is something relatively unhealthy about managers who have worked their way up through a team and only ever worked in that one team. Horizontal moves have immense value; I have learnt such a lot in the past couple of years running a test team as well as a storage team.

Horizontal moves will help to break down some of the siloed mentality; even if you do not believe in DevOps, moving people between these two disciplines even on secondment must have value.

If you have a graduate scheme in place, the natural roles that most graduates gravitate to are in development; make sure that they have a placement in an Operations/Infrastructure team. They will learn so much.

And if you work in management, you are doing a pretty hard job; make it easier on yourself by standing on the shoulders of giants and actually studying the art of management and leadership. Most get to management by being good at something; being good at that something does not mean you know anything about management.