
Singularly Selfish….

Despite the ever-shifting sands that are most large IT departments/divisions/directorates, the underlying structure has not changed for many years. Indeed, most IT departments look very similar to the IT departments of the mainframe era and are mostly analogues of the past. And the longer something stays the same, the longer it stays the same; the resistance to change will cause most organisations to simply spring back to their previous form.

Yet we are talking about new paradigms and new delivery models; so how do we stop everything simply springing back into the previous form?

Transformational change is required but can most organisations actually achieve this? Only you can answer that question for your own organisation but I would suggest looking around you and thinking about the changes that you can make. It might be worth being a little bit selfish and asking yourself these questions:

How do I have more fun and go home happy pretty much every day? How do I make my job more satisfying?

I think until you can answer those questions and decide on your destination, all the technological changes in the world are probably going to make little difference to the delivery of the IT Service. I think ultimately a new service model will make your life a lot easier and a lot more fun…in fact, the positive impact on you may well be greater than the impact on the Business.

Yes, we spend a lot of time talking about technological and organisational change but take it back to you and work on being happier.

Simple Scalability

As more and more organisations move into petascale environments, driven by big data, the explosion of unstructured data, day-to-day growth and generally poor data management, the ability to manage at scale is becoming increasingly important.

Now from a vendor point of view, there has been a focus on getting to scale but that is less than half the story; if it is hard to implement and manage that scale, hard-pressed data-management teams are going to start looking elsewhere.  Managing at scale needs to be as easy and seamless as managing a single array or filer.

Implementation and expansion need to be quick and painless; the inability to expand with little effort is a major show-stopper for many scalable implementations. I need to be able to add capacity to my systems to support I/O or data growth but it needs to be transparent and non-disruptive; it needs to be automatic in its optimisation; quite frankly, no-one has the time to re-layout a multi-petabyte environment manually, with the almost inevitable disruption that brings.
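To make that concrete, here is a toy sketch of my own (not any particular vendor’s implementation) of why automatic optimisation matters: with a placement scheme such as consistent hashing, adding a node relocates only a small fraction of the data instead of forcing a manual re-layout of the lot.

```python
# A toy consistent-hash ring: adding a node should relocate only a small
# fraction of objects, which is the property you want when expanding a
# multi-petabyte system without a manual re-layout.
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual points on the ring to even out the load.
        self._points = sorted(
            (_hash(f"{n}-{v}"), n) for n in nodes for v in range(vnodes)
        )
        self._keys = [p for p, _ in self._points]

    def node_for(self, obj: str) -> str:
        # Walk clockwise to the first virtual point at or after the object's hash.
        i = bisect(self._keys, _hash(obj)) % len(self._points)
        return self._points[i][1]

objects = [f"file-{i}" for i in range(100_000)]
before = Ring(["node1", "node2", "node3"])
after = Ring(["node1", "node2", "node3", "node4"])  # capacity added

moved = sum(before.node_for(o) != after.node_for(o) for o in objects)
print(f"{moved / len(objects):.1%} of objects relocate")
```

Run that and roughly a quarter of the objects move onto the new node; the other three-quarters stay exactly where they are and nothing needs taking offline. That is the sort of behaviour I mean by transparent, non-disruptive expansion.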

Petascale computing almost always comes with a 24x7x365 availability requirement; Big Data analysis often involves long-running jobs.

But these huge environments bring other challenges as well; you will have large files, small files, tiny files, spread throughout your systems; access characteristics are different, some will be random, some will be sequential and in some cases, you might find both. Some files will have a single user and some will have hundreds of users. However, the data-management team will want to manage these all in a consistent and seamless manner; yet again, they will want to do this with a minimum amount of intervention.

Let’s think about the impact of a self-service environment where teams can throw up new environments; the data-management team will have little control over the files and types of files that these applications create. The provisioning tool may ask questions such as ‘will you produce large files or small files?’ but in an agile environment, the answer given yesterday may not reflect the reality of the code written today.

This all leads us to a key requirement and feature for anyone who wants to sell Petascale data-management and storage tools: ‘Simple Scalability’. Yes, it is important that it is fast but it is equally important that it is simple to support and manage throughout its life-cycle.

Let’s not kid ourselves: as we move to petascale and beyond, these environments are going to have life-spans which far outstretch those of our current SAN environments, because the practical realities of migrating petabytes of data stored in a single system and accessed by many services are going to drive this.

So the next time you are benchmarketing a system, ask yourself: is it really practical or is it just a ‘My Dad is bigger than your Dad’ playground argument?

 

A Star Configuring…..

So in the latest little wing-ding between EMC and NetApp over who can do the fastest lap, I do wonder if they miss the point somewhat. Benchmarks unfortunately generally focus on one thing, who can do ‘x’ faster than the competition; this is especially true of storage benchmarks, which seem to throw up all kinds of marketing monstrosities.

The problem with this is that life is not often that simple and performance is just one factor when purchasing storage.

I do wonder if the benchmarking industry could do with taking a leaf out of Top Gear’s book and having ‘A Star Configuring An Extortionately Priced Array’; we could get a random star who has a book/film/album to promote and get them to configure an array to carry out a specific task.

The measure would then be not only how well the array runs but also how long it takes them to get it to first I/O.

And I can see a whole series: ‘A Star Configuring An Extortionately Priced Private Cloud’ or perhaps ‘A Star Configuring A Reasonably Priced Public Cloud (just make sure that they’ve read the small print)’.

 

Mad Science Experiments

Whilst NetApp and EMC scrap over meaningless willy-waving, the rest of the world tries to get on with real work and real-world problems. Yet again, I find myself sat in a meeting where the requirement is for a grow-forever, delete-nothing archive but with ‘instant retrieval’ and a really simple user interface so that a non-technical user can cope. In fact, they would really like the archive just to appear as a drive letter on their desktop and to be able to just drag files back.

There is part of me that fancies doing a mad science experiment using a bizarre mix of an Avere NAS accelerator and NFS-exporting an entire tape library using LTFS. I do wonder if it’d work…build a tape-based, Dropbox-like solution. Perhaps Avere fancy building a proof-of-concept system and taking it to NAB or something; I think it might get some attention.
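Purely as a back-of-an-envelope sketch of the idea (the paths and code below are mine, nothing to do with Avere’s actual product): the accelerator part is essentially a read-through cache sitting on fast disk in front of a painfully slow backing store, which here would be the LTFS-mounted tape library.

```python
# A toy read-through cache in front of a slow backing store: the rough shape
# of an accelerator fronting an NFS-exported, LTFS-mounted tape library.
# Both paths are illustrative assumptions, not real mounts.
import shutil
from pathlib import Path

TAPE = Path("/mnt/ltfs")            # slow: a read may trigger a tape load and seek
CACHE = Path("/var/cache/tapefs")   # fast local disk sitting in front of it

def fetch(relative: str) -> Path:
    """Return a local path for a file, copying from tape only on a cache miss."""
    cached = CACHE / relative
    if cached.exists():
        return cached                      # cache hit: no tape movement at all
    cached.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(TAPE / relative, cached)  # cache miss: pay the tape penalty once
    return cached

if __name__ == "__main__":
    print(fetch("projects/archive/edit-2011-03.mov"))
```

The user just sees files appearing under a drive letter; the first drag-back is slow while the tape loads, every subsequent one comes off the cache.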

Of course we’ll probably do something sensible and more conventional but a tape-cloud…that’d be kind of fun!!

 

Fear of Failure

I’ve just finished Steve Jobs: The Exclusive Biography; it may not be a complete warts-and-all biography, and I suspect there are more tales to tell, but it does give you an insight into the man and his drives. Well worth reading, and it does have some relevance to what is going on in the world of Enterprise IT.

Firstly, Steve’s insistence on controlling the stack, both hardware and software, has a lot of resonance with VCE’s vBlock, Oracle’s Exadata and many of the other stack plays in play at the moment. The control of the complete user experience has worked wonders for many and people have certainly been willing to pay a premium in the home. If you are not an inveterate tinkerer like me, it certainly makes sense. I find it interesting that amongst most of the inveterate tinkerers, the laptop of choice is a MacBook of some sort; if it’s closed already, you might as well get the best-engineered laptop you can.

But does this have resonance for the corporate IT department? Actually, I think where vBlock et al make the most sense at the moment is for the small-to-medium Enterprise which doesn’t have the economy of scale for server specialists, network specialists and storage specialists, just a small team of generalists focused on providing IT in general. And if you don’t have a huge investment in legacy, it makes sense…actually, at that point it makes sense in my opinion to simply deploy into the public Cloud. You have less cultural change and resistance to deal with in a smaller company.

I think the second take-away from the book is the almost breath-taking arrogance of the man; a man who believed that he knew better than his customers or at least never asked them what they really wanted.

But it’s not just vision, it’s also hard graft mixed with agility; Apple are known to prototype, refine, prototype, throw it away and then prototype again until they get something which really works. Too often, we don’t do this; we prototype, it kind of works, we put it into production, blame the users, refine it a little bit, blame the users a bit more and forget about the fact that prototyping really means that we should have been prepared to fail at this stage.

We are simply too scared to fail and so we fail a lot and in public (and inflict our failures on the public).

Innovate At Your Peril

This article was linked to by some of the usual EMC suspects; a fluff and puff piece about Private Cloud with the normal warnings about security in the Public Cloud. It is this section of the article which I find especially disturbing both in tone and message…

I’ll leave you with what has become my favorite story and it was told at CIO 100: Apparently, two engineers at a pharmaceutical company had to complete a critical project quickly and bid it out to IT. IT came back with a massive cost and a timeline in months. The engineers instead used their credit cards to use cloud services and completed the project in a few weeks and won an award for cost savings. The day after winning the award, both were terminated for violating the firm’s security policy as the project, which was ultra-secret, hadn’t been adequately secured.

I can almost imagine the gleeful smile of the teller of the tale, perhaps the CIO involved, as he recounted that story. Now I think there should have been several different actions taken, none of which would have led to the dismissal of two obviously talented and thoughtful engineers.

1) The CIO should have been hauled up and made to explain why his team could not provide the services that the engineers needed in a cost-effective and timely manner. He put them in the position that, to do their job properly, they had to bend the rules. In fact, he should be the person losing his job; as a result of his inability to provide service, the company has had to terminate two valuable employees.

2) The team which looks after security should have been asked to look at the project and what the engineers had done; make a proper security assessment and work with them to ensure that such projects could be delivered in the Public Cloud in a secure manner. Proper procedures and guidelines should be put in place to support innovation.

But instead, a vengeful IT department decided that the best thing to do was to shut down anyone innovating in their space.

And if anyone thinks that the large pharmaceuticals are not using public cloud, you should probably think again. They are, regularly and I suspect securely; or perhaps it’s not 100% secure but the opportunity for quicker delivery is worth the risk.

Security is an issue but don’t let vendors and IT departments use it to block innovation and keep their castle intact. Security needs to move on from ‘No!’ to ‘How can we help you achieve your goals!?’; a bit like IT departments in general.

500 Not Out! How Did This Happen?

Another milestone for the Storagebod blog; this is my five hundredth post!  I should write something really meaningful but I’m not sure I can think of anything, so I thought I’d just put down a number of short ideas which might get developed more into full blogs.

Enterprise IT – Enterprise IT is a meaningless term; it is insulting and disparaging of anyone else who is not an Enterprise. If you run a business and your IT infrastructure is core to the continuation of your business; you need IT which is reliable, scalable and all those other good things. You can of course leave off paying the premium for what people call ‘Enterprise’!

RFPs – Request For Pain. RFPs generally exist for one reason in IT; that is to give a bunch of vendors a kicking. In the storage world, you probably have little reason to change vendor but you might as well kick the crap out of your incumbent for laughs. The result of nearly all RFPs is driven by politics and not technical reasons; it is probably better for your sanity if you acknowledge that up front. If a customer really wants to change, they will.

It’s a PC Plus World; the reality is that if you’ve got a desktop, you will probably keep a desktop. Don’t expect this to change any time soon. Yes, you will probably be able to get access to some services via an alternative device but I suspect that most desktop users will stay just that.  You will see more mobile devices about but we all know that it’s a pose and it gives us something to do in tedious meetings.

Big Data; use Big Data to make better decisions, don’t use it as an excuse to dive into analysis paralysis. If it has all the characteristics of a duck, it probably is a duck…you don’t need to decode its genome before you serve it with orange sauce.

Cloud; it’s a way of delivering service but it’s not the only way of delivering service. If you find yourself getting religious about how you deliver a service as opposed to delivering the service…take a holiday and get some perspective.

Internal Service Providers; you only have one customer to focus on. This is your biggest strength and weakness.

IT Management; take the chance if possible to manage a team outside of your technical experience. You learn to manage, delegate and trust your team; you focus on managing and not trying to do two jobs. You can always go back to managing in your technical discipline but you will bring new insight and ideas.

Work/Life Balance; you will die, this is inevitable. Make sure that the people you love remember you for the right reasons and not for times you weren’t there.

So that’s post 500 done…here’s to the next 500!!

 

More Data

One of the problems with Big Data is that it is Big; this may seem obvious but to many people it’s not. When they hear Big Data, what they end up thinking is that it is simply More Data; so they engineer a solution based upon that premise.

What they don’t realise is that they are not simply dealing with More Data; they are dealing with Big Data! So for example, I know of one initiative which currently captures, let’s say, 50,000 data points today, and someone has decided that it might be better if it captured 5,000,000 data points. Now, the solution to this is simply to throw bigger hardware at it and not to re-engineer the underlying database.

Yes, there’s going to be some tweaking and work done on the indices but ultimately, it will be the same database. Now, this is not to say that this will not work but will it work when the 5,000,000 inevitably becomes 50,000,000 data points? It is simply not enough to extrapolate performance from your existing solutions but how much of capacity planning is simply that?
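Some purely illustrative arithmetic, not a benchmark of any real database: a full table scan grows linearly with the number of data points while an indexed lookup grows roughly logarithmically, which is why a schema that just about ‘works’ at 5,000,000 points can fall apart at 50,000,000.

```python
# Back-of-the-envelope scaling: how a full scan and a B-tree-style indexed
# lookup grow as 50,000 data points become 5,000,000 and then 50,000,000.
import math

baseline = 50_000
for n in (50_000, 5_000_000, 50_000_000):
    scan = n / baseline                          # scans grow linearly with volume
    lookup = math.log2(n) / math.log2(baseline)  # indexed lookups grow ~logarithmically
    print(f"{n:>11,} points: scans ~{scan:,.0f}x the baseline, lookups ~{lookup:.1f}x")
```

At 50,000,000 points the scans cost a thousand times the baseline while the indexed lookups are still under twice it; anything in the workload that quietly relies on a scan is what falls over, and that is exactly what a simple extrapolation from today’s system will miss.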

If you are already in the position that you are looking at More Data; it will probably become Big Data before you know it and if you haven’t already engineered for it, you are going to have a legacy, dare I say it ‘Toxic’ situation in short order.

Everything is changing; don’t get left behind.

Big != More; Big == Different.

Think Big, Think Different.

 

Toxic IT?

Often we find ourselves talking about Legacy IT, especially when we are discussing the move to ‘Cloud’. What do we do with the legacy? At CloudCamp London, Simon Wardley suggested that we need to start calling Legacy IT ‘Toxic IT’. And the more I think about it, the more I agree, but not just when we are talking about the move to Cloud.

Many organisations have Legacy IT; oh, the accounts system; that’s legacy; the HR systems, they are legacy; that really important system which we rely on and is foundational, that’s legacy. What? These are all key systems, how have they become legacy? And the longer we leave them, the more toxic they become.

Businesses, and especially Enterprises, run on legacy systems; whilst we rush to the new and roll out exciting services built on the new Cloud, we leave the legacy behind. And so they moulder and rot, eventually becoming toxic.

All of us working in Enterprise IT can probably point to at least one key service which is running on infrastructure (hardware and software) that is long out of support. Services which may well be responsible for a large proportion of the company’s revenue or even the company’s survival. These services have been around since the company was founded; that’s why I talk about them being foundational.

But why are they left behind? Maybe it’s because it is easier to ask for budget for new stuff that brings new value and markets to a company. What is the return on investment of maintaining your accounts systems? It’s not going to add to your bottom line, is it? Still, if your accounts system collapses, you won’t even know what your bottom line is.

So, that Legacy IT; it’s rapidly becoming toxic and it is going to cost you more and more to clean up that toxic pile the longer you leave it.

I think it’s time for many Enterprises to run an Honesty Commission where they ask their IT teams to identify all the systems which are becoming toxic and commit to cleaning them up. Just because it hasn’t broken yet, it does not mean that it is not broken! Just because your services are not showing symptoms of toxicity, it does not mean that they are not slowly breaking down. Many poisons work rather slowly.

Yes, you might decide that you are going to move it to the Cloud but you might just commit to maintaining it properly.

The DAS Alternative?

There’s a lot of discussion about the resurgence of DAS and alternatives to SAN and NAS, whether these be virtual appliances, clustered storage, object or just plain old direct-attached disk; all of these are seen as ways to replace expensive network storage, be it SAN or NAS attached.

But is this actually important or even especially new? It is certainly the case that the software vendors such as Microsoft and Oracle would like a piece of the action but we also have new players coming in via the virtualisation space.

Personally, I see it just as another evolution in the realm of Networked Shared Storage: SAN, NAS and Clustered Storage. The clustered storage will generally be built around commodity disk but not exclusively so; it may be accessed in a variety of ways: clustered file-systems such as StorNext, GPFS and Gluster will all provide block-level access, but you can also throw in object technologies such as Caringo, and you may still decide to access via the traditional NAS protocols.

There are certainly some interesting possibilities where block and file access could be provided to the same data; build yourself a storage-cluster and add in client nodes which see the storage as ‘local filesystems’ but also have remote access via NAS/CIFS or even object.
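As a trivial sketch of that ‘same data, two paths’ idea (both mount points below are made up for illustration, not real configuration): a client node writes through the cluster-local mount and the same file is read straight back over the NAS export.

```python
# Write a file via the cluster-local mount on a client node, then read the
# same data back through an NFS mount of the export. Both paths are assumed.
import os

CLUSTER_MOUNT = "/gpfs/projects"       # seen as a 'local filesystem' on the client node
NFS_MOUNT = "/mnt/projects-over-nfs"   # the same data re-exported over NAS protocols

name = "dual-access-test.txt"
with open(os.path.join(CLUSTER_MOUNT, name), "w") as f:
    f.write("written via the cluster mount\n")

with open(os.path.join(NFS_MOUNT, name)) as f:
    print(f.read())  # the same bytes, reached via the NAS path
```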

But is this really a resurgence of DAS? Not really; it’s still networked storage, just different. Existing SAN infrastructures can be leveraged to provide access to the physical bits (and the rise of SSDs means storage is no longer all rust!). We simply have a new (actually old) tool in the box.

And it just reflects what we already know; Storage is Software…