
April, 2012:

Politics, Practicality, Principles and Pragmatism

Many IT infrastructure decisions are made for reasons which have little to do with the capability of the technologies; very few are made with due consideration of investment returns and long-term costs, and even fewer are revisited with the light of truth shone upon them.

So is it a wonder that any IT infrastructure works at all?

Well, not really; as we have moved into a pretty much homogenised environment where everything is interchangeable and pretty much everything is good enough, the decisions are going to be made for reasons other than technology.

Many decisions are made simply on the grounds that more of the same is the path of least resistance. You have already learnt to love what you hated about a product and you are comfortable with it. You might have grown close to the account team; they know all your favourite restaurants and sporting events; why change? And change is costly.

Of course, then you get the obverse; you have learnt to hate what you loved and the account team has grown far too comfortable. Perhaps there’s been a change in account manager or simply you decide that you’ve spent too much money with a company. Of course at this point, you suddenly find that what you have been paying is far too much and the incumbent slashes their costs to keep the account. But you’ve had enough and you decide to change.

Then you get the principled decision; the decision which could be based on the belief that open-source is the right thing to do or perhaps you believe the security through obscurity myth. Sometimes these look like technological decisions but they are really nothing to do with technology in general.

So have we moved to a market where the technology is pretty much irrelevant and why?

I think that we have and for a pretty good reason: you can’t manage what you can’t measure and, quite simply, we are still lousy at measuring what we do and what it means. It means that all decisions have to be made for reasons which often have dubious links with reality.

For all the discussion about metering and service-based IT, I don’t believe that we are anywhere near it. Internal metering tools are often so expensive and invasive to implement that we don’t bother.

And what is worse, we are often working in environments which do not really care; who really cares if solution ‘X’ is cheaper over five years than solution ‘Y’, as long as solution ‘Y’ is cheaper today? Tomorrow can look after itself; tomorrow is another budget year.

So not only is measurement not easy; perhaps we simply don’t care?

Perhaps the only option is just to carry on doing what we think is as right as possible in the context in which we work?

 

Thinking Architecturally

If you start an architecture with a shopping list of technologies that must be used, that architecture will be compromised. However, this does not mean that you start working without an appreciation of the possible; obviously you need to be aware of hard limits, such as the speed of light, and other real constraints.

But currently I see a trend from many, both vendors and users, trying to fix round-hole problems with square-shaped blocks. Not enough time is spent on the problem definition and truly understanding the problem; your existing tools may not be sufficient and although it may feel that it is more expensive to implement something new, at times it might be cheaper in the long-term to implement something right.

Also be aware of falling into the trap of implementing a feature just because you’ve made the mistake of purchasing something that does not fit your problem definition. If you’ve been sold something that you can’t use effectively, you have a couple of options: suck it up and learn from the experience, or shout and holler at your vendor/partner for selling you something which is merely shelf-ware. In my experience, the latter is often ultimately pointless and simply results in the vendor promising you some other product which you put on a shelf and never use. Use the experience to move away from architecting to utilise a feature and towards architecting to solve a problem.

This does not mean that you simply purchase a new system/technology for every problem; governance has a role, but I would suggest that governance should be applied after the initial high-level architecture. I like to think of it like the more traditional bricks-and-mortar architecture: the architect relies on a whole bunch of technical people to fulfil their vision and bring it to reality. At times these technical people will tell the architect that the architect is a complete moron; sometimes the architect will agree and sometimes the architect will work with the technical teams to come up with something innovative and new.

But in general the architect does not start their design with a specific make of brick in mind. Neither should an IT architect.

Into the Pit

Well, it seems that mobile has really come of age and the standard sysadmin tool of SSH really doesn’t cut it any more; anyone who has suffered the frustration of a dropped connection in the middle of doing something or just shutting their laptop lid by mistake when docking is going to love MOSH.

MOSH is a replacement for SSH which supports roaming and intermittent connections; actually, you still need SSH to make the initial connection, but once the connection is made, it is handed over to the mosh-server. There are many cool things about MOSH: firstly, it doesn’t run as a daemon and, in fact, you don’t even need to get your friendly admin to install it for you. You can happily run it from your own home directory; of course, I would suggest that you do get your friendly admin to install it for everyone and themselves!

On laggy connections, MOSH does not wait for the server to respond before displaying what you’ve typed; it’s a bit reminiscent of using the old mainframe terminals but much, much nicer. On an unreliable connection, MOSH will underline outstanding actions so that you should never get lost. This even works with VIM and other full-screen editors; it will be a bit of a mind-f**k at first but you’ll soon get used to it.
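The speculative-echo idea can be sketched in a few lines; this is a toy model of the concept as I understand it, not MOSH’s actual code (MOSH is far more involved, with a proper state-synchronisation protocol underneath). Keystrokes are shown immediately and marked as pending until the server acknowledges them.

```python
# Toy sketch of MOSH-style speculative local echo (illustrative only).
class SpeculativeTerminal:
    def __init__(self):
        self.confirmed = ""   # text the server has acknowledged
        self.pending = ""     # locally echoed, not yet acknowledged

    def keypress(self, ch):
        # Display immediately instead of waiting a round-trip.
        self.pending += ch

    def server_ack(self, n):
        # Server confirms the first n pending characters.
        self.confirmed += self.pending[:n]
        self.pending = self.pending[n:]

    def render(self):
        # Pending text is underlined (bracketed here) so you never get lost.
        return self.confirmed + (f"[{self.pending}]" if self.pending else "")

term = SpeculativeTerminal()
for ch in "ls -l":
    term.keypress(ch)
print(term.render())   # [ls -l]  -- everything still pending
term.server_ack(3)
print(term.render())   # ls [-l]  -- "ls " confirmed, "-l" still pending
```

On a reliable link the acknowledgements arrive almost instantly and the brackets (underlines, in MOSH) barely appear; on a flaky link you can keep typing regardless.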

It is still missing some SSH functionality but it appears to be coming on quickly.

MOSH is cool but there’s a catch: there’s no Windows client yet, though I’m sure someone will get round to it. And there are no mobile clients yet as far as I can tell; it is crying out for an Android and iOS client to become truly awesome.

But give it a go; I think that it’ll eventually become my default remote client…

MOSH can be found here

Designed to Fail

Randy Bias has written an interesting piece here on the impact of complexity on reliability and availability; as you build more complex systems, it becomes harder and harder to engineer in multiple 9’s availability. I read the piece with a smile on my face, especially the references to storage; sitting with an array flat on its arse and already thinking about the DAS vs SAN argument for availability.

How many people design highly-available systems with no single points of failure until it hits the storage array? Multiple servers with fail-over capability, multiple network paths and multiple SAN connections; that’s pretty much standard but multiple arrays to support availability? It rarely happens. And to be honest, arrays don’t fall over that often, so people don’t tend to even consider it until it happens to them.

An array outage is a massive headache though; when an array goes bad, it is normally something fairly catastrophic and you are looking at a prolonged outage but often not so prolonged that anyone invokes DR. There are reasons for not invoking DR, most of them around the fact that few people have true confidence in their ability to run in DR and even fewer have confidence that they can get back out of DR, but that’s a subject for another blog.

I have sat in a number of discussions over the years where the concept of building a redundant array of storage arrays has been discussed, i.e. stripe at the array level as opposed to the disk level. Of course, rebuild times become interesting but it does remove the array as a single point of failure.
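The idea in toy form: stripe data across two imaginary data arrays with a third holding XOR parity, so any single array can fail and be rebuilt from the survivors. This is purely an illustration of the concept, not any vendor’s implementation, and it cheerfully ignores the rebuild-time problem mentioned above.

```python
from functools import reduce

def xor(*blocks):
    # Byte-wise XOR of equal-length blocks; XOR parity is its own inverse.
    return bytes(reduce(lambda x, y: x ^ y, t) for t in zip(*blocks))

data = b"HELLOWORLD"
array1 = data[0::2]              # even-offset bytes land on array 1
array2 = data[1::2]              # odd-offset bytes land on array 2
parity = xor(array1, array2)     # third array holds the parity

# Array 2 goes flat on its arse; rebuild its contents from the survivors.
rebuilt = xor(array1, parity)
assert rebuilt == array2
```

Exactly the RAID-5 trick, one level up: the maths is trivial, which is why the interesting arguments are always about rebuild windows and metadata, not parity.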

But then there are the XIVs, Isilons and other clustered storage products which are arguably extremely similar to this concept; data is striped across multiple nodes. I won’t get into the argument about implementations but it does feel to me that this is really the way that storage arrays need to go. Scale-out ticks many boxes but does bring challenges with regards to metadata and the like.

Of course, you could just go down the route of running a clustered file-system on the servers and DAS but this does mean that they are going to have to cope with striping, parity and the likes. Still, with what I have seen in various roadmaps, I’m not betting against this as an approach either.

The monolithic storage array will continue for some time but ultimately, a more loosely coupled and more failure tolerant storage infrastructure will probably be in all our futures.

And I suppose I better find out if that engineer has resuscitated our array yet.

 

Fashionably Late

Like Royalty, IBM have turned up late to what is arguably their own party with their PureSystems launch today. IBM, the company which invented converged systems in the form of the mainframe, have finally got round to launching their own infrastructure stack product. But have they turned up too late and is everyone already tucking into the buffet and ignoring the late-comer?

For all the bluster and talk about the ability to have Power and x86 in the same frame, and dare I whisper mainframe, this is really an answer to the vBlock, FlexPod, Matrix et al. IBM can wrap it and clothe it, but this is a stack and, if pushed, they will admit this.

But when I first had the pitch a few months ago; I must admit, despite the ‘so what’ reaction, I was impressed with what appears to be a lot of thought and detail from an infrastructure engineering point of view. It looks pretty good as slide-ware.

Still, the question is: is it any better than the competitors? Well, even if you treat it as a pure x86 infrastructure ‘stack in a rack’, it certainly appears to be more flexible than some of the competitors. You have choices as to which hypervisor it’ll support, for starters. It appears to be more polished and less bodged together from a hardware point of view.

But at the end of the day, it is what it is, and what is going to be really important is whether it can really deliver the management efficiencies and improve IT’s effectiveness. And that, as with all its competitors, is still a question to which there is not yet a solid answer.

As a product, it looks at least as good as the rest…as an answer? The workings are still being worked upon.

Reality for Scality

You know that I have somewhat mixed feelings about Object Storage; there is part of me which really believes that it is the future of scalable storage but there is another part of me that lives in the real world. This is the world where application vendors and developers are currently unwilling to rewrite their applications to support Object Storage; certainly whilst there is no shipping product supporting an agreed standard. And there are a whole bunch of applications which simply are not going to be re-written any time soon.

So for all their disadvantages; we’ll be stuck with POSIX filesystems for some time; developers understand how to code to them and applications retain a level of storage independence. You wouldn’t write an application to be reliant on a particular POSIX filesystem implementation so why would you tie yourself to an Object Store?
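The coupling difference can be shown in a few lines. The POSIX calls below run unchanged on any compliant filesystem; the application neither knows nor cares which one. The `ObjectStore` class is a hypothetical API sketched purely for contrast: in the absence of an agreed standard, each object store exposes its own put/get interface, so coding to one ties you to it.

```python
import os
import tempfile

# Filesystem-independent: identical code on ext3, XFS, NFS, ...
path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("quarterly numbers")
with open(path) as f:
    assert f.read() == "quarterly numbers"

# Hypothetical, vendor-specific object interface (illustrative only).
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, bucket, key, data, metadata=None):
        self._objects[(bucket, key)] = (data, metadata or {})

    def get(self, bucket, key):
        return self._objects[(bucket, key)][0]

store = ObjectStore()
store.put("reports", "2012/q1", b"quarterly numbers")
assert store.get("reports", "2012/q1") == b"quarterly numbers"
```

The first half is why POSIX persists; the second half is why, until the object-store interfaces converge on a standard, a filer front-end is an easier sell.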

I was pleased to see this announcement from the guys at Scality; they are good guys and their product looks sound but no matter how sound your product is, you have to try to make it easier for yourself and your customers. Turning their product into a super-scalable filer is certainly a way to remove some of the barriers to adoption but will it be enough?

Of course, then there are the file-systems which are beginning to go the other way; a realisation that if you are already storing some kind of metadata, it might not be a huge stretch to turn it into a form of object store.

File-systems, especially those of the clustered kind and Object Storage seem to be converging rapidly. I think that this is only for the good.

 

Your Life, Their Product

So whilst the UK was recovering from over-indulging in chocolate eggs; across the Atlantic, Facebook were splashing out $1 Billion on Instagram. And still the world continued to spin and orbit the Sun. So what does this mean to us all; there will be a lot of soul searching and discussion but ultimately this just continues to productise your life and your experience.

I watched Google’s Project Glass video prior to the Facebook announcement and was thinking: if Google were to buy someone like Instagram, the anonymity of the crowd would be gone; the glasses could identify the person you were looking at immediately. Of course, Facebook could do the same thing and create their own Project Social Glass. You would no longer be able to sit in a coffee shop quietly unrecognised; you would be instantly identifiable. Would you be entirely comfortable with that? I know I wouldn’t be.

There are times in our lives when we just want to be alone and not identified; to remove that opportunity and to have the constant feeling that you are being watched will change our natures. In allowing our lives to be productised, we may lose something which is essential to our well-being; Facebook is arguably already removing the right to make mistakes and the ability to forget.

Could it remove the right to be anonymous? Are we heading towards the perfect storm which shatters our illusions of privacy? For even if it is an illusion, it is an important one.

We have to be very careful as to where this road takes us.

Of course Facebook could have just spent $1 Billion on an app to make crap photos look like they were taken 40 years ago.

 

The Steve Ratio

I’ve seen a few blogs recently about management and especially man management in IT but before I begin, I’ll explain the title! In my career, I have had four managers called Steve; two have been good, one indifferent and one whom words cannot describe, not when I know that there are some ladies who read this. And actually, the ratio of 2:1:1 is pretty much representative of the managers I have had so far.

So what makes a good manager? Well, for me, the most important quality of the two good Steves was that they allowed me to make decisions that I was comfortable with, and when I made a mistake, they would sit down with me and work through the mistake and then let me fix it. Let’s break that down:

1) They trusted me….good managers trust!

2) They let me make mistakes….good managers do not second guess and do not blame!

3) They gave me guidance but not the answers! Good managers show the way but don’t carry you!

4) They trusted me….good managers don’t let mistakes destroy trust!

Now, with my current number of direct reports, around thirty (I lose count), I have no choice but to follow those principles; to do any more would rapidly hasten my descent into insanity, but I owe those two Steves a great debt.

For me, man management is the great under-rated skill in IT; they aren’t always the most senior people and sometimes they aren’t the greatest techies but they ensure your organisation works and that you can do your job.

It is ironic that so many people say that the road to Cloud requires a cultural change; perhaps this should also include a change in attitude to man management. Perhaps we can change the Steve Ratio and make it 3:1:0; no arse-holes and a majority of good managers.

Do you need a desktop?

Work provides me with a laptop which spends most of its time locked to my desk. It’s quite a nice business laptop but really I can’t be bothered to carry it around. On occasion, when I’m working from home and realise that I am going to need access to some of our corporate applications which require VPN access, it’ll come home with me, but mostly not.

To be quite honest, even my MBA doesn’t travel that much, up and down the stairs is about as far as it goes. It is quite the nicest and most practical laptop that I’ve ever owned but I think we are getting close to the stage where a tablet can do almost everything that I need where-ever I am.

I was thinking as I was working today whether what I was doing required the traditional desktop experience and could I simply use my iPad as the access device instead. The answer is mostly yes, almost all the applications that I use are generic enough that there are good enough replacements on the iPad or they are accessed by a web interface anyway.

There are a few blockers tho’ at present:

1) At present I can’t get my iPad onto the corporate wireless; this means that I can’t access a number of key applications due to ‘security’ restrictions, but I can access email, which appears to be our preferred file delivery/transfer mechanism.

2) I need a real keyboard to type on, there is a limit to how much I am prepared to type on a screen keyboard. I could overcome this relatively easily by bringing a bluetooth keyboard in.

3) Wired Ethernet is a necessity when working in some of our data centres or secure areas.

4) Unfortunately, I struggle without PowerPoint and Visio; I can cope without Word, and Excel is a little more problematic but manageable. Keynote is nice but it makes a real mess of rendering PowerPoint in my experience.

5) Working on an external display is often a much nicer experience than using the tablet screen, even tho’ the retina display is wonderful. But I have both the HDMI and VGA dongles, which get round this. But I wish that Apple could find a way to put a mini-DisplayPort on the iPad, as using the adapters means that I lose any chance of using a USB device. Not important most of the time, but very useful for transferring files from cameras and other devices.

But then I started thinking some more, perhaps I don’t really need a tablet either for work. Perhaps a smartphone which I dock would do? What we could do with is a standard dock for all mobile devices which charges, displays on an external screen and allows input from a standard keyboard/mouse.

Planes, trains, hotels and the like could simply provide a dock and you would end up carrying even less. At that point a device the size of a Samsung Note or Kindle Fire becomes a very interesting proposition.

And yet, I still expect to keep my PC desktop for some time….why? It’s still the best serious gaming platform out there. But for almost everything else I could probably manage with a mobile device.

Price is Right?

As the unit cost of storage continues to trend to zero, and that is even with the current premium being charged due to last year’s floods, how do we truly measure the cost of storage and its provision?

Now, many of you are now thinking ‘zero’? Really?

And my answer would be that many of the enterprise class arrays are down to a few dollars per gigabyte over five years; certainly for SATA and NL-SAS. So the cost of storing data on spinning rust is heading towards zero; well certainly the unit cost is trending this way.
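To put some numbers on it: the array price and usable capacity below are made-up figures for illustration, not anyone’s quote, but the arithmetic is the whole argument.

```python
# Back-of-the-envelope unit cost of enterprise storage.
# All figures below are assumptions for illustration only.
array_price = 250_000      # total cost over five years, in dollars
usable_tb = 100            # usable capacity in terabytes
years = 5

cost_per_gb = array_price / (usable_tb * 1024)
cost_per_gb_per_year = cost_per_gb / years

print(f"${cost_per_gb:.2f}/GB over {years} years")   # $2.44/GB
print(f"${cost_per_gb_per_year:.2f}/GB/year")        # $0.49/GB/year
```

A few dollars per gigabyte over the life of the array; pennies per gigabyte per year. The unit cost rounds to nothing; it is only the total, as the estate balloons, that anyone notices.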

Of course, as we store ever more data, the total cost of storage continues to rise as the increase in the amount of data we store outstrips the price decline. Dedupe and the likes are a temporary fix which may mean that you can stop buying for a period of time, but inevitably you will need to start increasing your storage estate at some point.

So what are you going to do? I don’t think that we have a great solution at the moment and current technologies are sticking plasters with a limited life. Our storage estates are becoming like landfill; we can sift for value but ultimately it just hangs around and smells.

It is a fact of life that data management is a discipline much ignored; lots of people point at the Cloud as a solution but we are simply shifting the problem and at some point that’ll come unstuck as well. Cloud appliances will eventually become seen as a false economy; fixing the wrong problem.

Storage has simply become too cheap!

Which is a kind of odd statement for Storagebod to make….