
Sticky Servers

I read the announcements from HP around their Gen 8 servers with some interest and increasing amusement. Now HP are an intrinsically funny company but it isn’t that which is amusing me; it’s the whole server industry and an interesting trend.

The Intel server industry was built on the back of the ‘PC Compatible’ desktop, where you could buy a PC from pretty much any vendor, install MS-DOS and run the same applications anywhere. They all looked the same and if you could maintain one, you could maintain any of them.

Along came the PC Server and it was pretty much the same thing; if you could maintain Server Brand X, you could maintain Server Brand Y. And so it pootled along until blade-servers came along and muddied the water a bit, but it wasn’t so hard.

If you wanted to migrate between server vendors, it wasn’t rocket science; if you wanted to move from Compaq to Dell to IBM, it was not a big deal to be honest. Although sometimes the way people carried on, you would have thought you were moving from gas-powered computers to electric computers to computers with their own nuclear reactors in.

And then along came Cisco with UCS, and the Intel server got bells, whistles and fancy pants. All in the name of ‘Ease of Use and Management’; it’s all fancy interfaces and APIs; new things to learn and all slightly non-standard.

And now HP follow along with Gen-8; it’s all going to be slightly non-standard and continue to drift away from the original whitebox server. The rest of the vendors are all moving this way, each asking themselves: how do I make sure that customers remain loyal and sticky?

It’s all going to get increasingly hard to migrate between server vendors without major rethinks and retrains. Perhaps this is all going to accelerate the journey to the public cloud because I don’t want to care about that!

And as a storage guy, I can’t help but laugh!  Welcome to our world!

Refactoring the Future….

Most of my career has been spent in infrastructure and I guess infrastructure-related technologies are what really interest me professionally. But I have spent time working as a developer, leading a development team managing the transition to agile development, and more recently, as well as running a storage team, I also manage an Integration and Test team; this has hopefully led me to become more rounded professionally. And I think we as Infrastructure techies have a lot to learn from our colleagues in development; they’ve probably got more to learn from us but we can learn something from them.

Recently I’ve been wondering if Infrastructure could be treated more as an application; something that is more dynamic and less static than it is traditionally. Can an Infrastructure be Agile? And can it be managed in an Agile manner? The received wisdom is almost certainly no! More outages are caused by change in infrastructure than by any other cause; ‘if it ain’t broke, don’t fix it’ is not actually a bad way to go. But is that necessarily the case?

One of the big changes that Cloud brings should be that applications are engineered to cater for failure and change; an application should be able to move, replicate, grow, shrink and be resilient in spite of infrastructure.

This could mean that we deploy, change and improve infrastructure a lot more rapidly; refresh becomes a constant task and not something which needs a special project, tuning and maintenance become routine and not scary. We have an infrastructure that is being constantly refactored in the same way that good developers refactor code; we fix kludges and workarounds, no longer living with them because we are fearful of the consequences of changing them.

Yet we also need discipline not to change for change’s sake; anyone who has run an Agile team will tell you about the importance of ego-less developers (an oxymoron if ever there was one), but you can run into the problem where more time is spent refactoring than adding value. I am not convinced that we will have this problem initially in Infrastructure; the culture of ‘if it ain’t broke, don’t fix it’ is heavily ingrained.

Refactoring is a small part of Agile but it is something that we can learn from in our Infrastructure domains. However there is a big problem; vendors don’t really like change and they throw all kinds of obstacles in your way, like interoperability matrices and the like. They really don’t like the idea of incremental change; they want big change where they can bill big tickets and big services.

I think that the vendors and service providers who are going to win big are those who can deal with incremental change and improvements; those who are reliant on big changes and refresh cycles may well struggle as we move to a more dynamic model. I include internal IT suppliers in this as well; the big change is a cultural one: accepting that change is constant and good.

Your Data, Your Responsibility

I’ve been thinking recently about the post-PC era and what it really means; for some people it means the end of the desktop and the traditional PC, but I think that this is slightly wrong-headed. For me, the post-PC era means my content, anywhere and at any time.

Access to data is more important than anything but you might still use a traditional desktop to do heavy-lifting and manipulation; for example, tablets are great but for many tasks, I would still want to use a traditional keyboard, mouse and big screen. But when I’m away from base, I still want access to my data and perhaps do some lightweight manipulation.

So the post-PC world is moving us away from a single tethered end-point device to a multitude of devices, some mobile and some fixed. The applications we use on these devices may be different, in both scope and function but the data will be common and accessible everywhere.

This will bring challenges to us as individuals and as businesses: where do we store that data and how do we protect it, ensuring both that it is stored securely and that it remains available? The recent Megaupload closure has already led some people to question the long-term viability of cloud storage. What happens if the site you store your data on is suddenly shut down?

Question where you are putting your data; if it becomes obvious that a site has a slightly dubious reputation, then perhaps you should ask yourself whether you want to rely on its availability. But even if it is a site which has the highest reputation, ‘Shit Does Happen’; so you probably want to ensure that you have multiple copies stored in multiple places.

But also be aware of the underlying service; if both of your Cloud storage providers are reselling storage from the same underlying Cloud provider, question again.
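The advice above can be sketched in a few lines; the backends here are hypothetical in-memory stand-ins for independent storage providers, not any real service, but the principle is the same at any scale: verify your copies and assume any one provider can vanish.

```python
import hashlib

def replicate(data: bytes, backends):
    """Store a copy of `data` on every backend, keyed by its checksum."""
    digest = hashlib.sha256(data).hexdigest()
    for backend in backends:
        backend[digest] = data
    return digest

def still_available(digest, backends):
    # The data survives as long as at least one backend still holds it.
    return any(digest in backend for backend in backends)

# Two genuinely independent providers, modelled as plain dicts.
provider_a, provider_b = {}, {}
digest = replicate(b"family photos", [provider_a, provider_b])

provider_a.clear()  # one provider is shut down overnight, Megaupload-style
print(still_available(digest, [provider_a, provider_b]))  # -> True
```

Note that if `provider_a` and `provider_b` were secretly the same store underneath, clearing one would empty both; that is exactly the resold-storage trap.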

Your data, your responsibility…

 

Whatever happened to Object Storage?

We have heard a lot about Object Storage but how much impact has it really had on the storage market so far? EMC make lots of noise about Atmos for sure but I hear conflicting stories on the take-up; NetApp bought Bycast and I hear a deafening silence; HDS have HCP and seem to be doing okay in some niche markets; Dell have their DX platform; and there are many smaller players.

But where is it being deployed? Niche markets like medical and legal, yes, but general deployment? I hear of people putting Object Storage behind NAS gateways and using it as a cheaper NAS, but is that not missing the point? If you are just using NAS to dump files as objects into an Object Store, you are not taking advantage of the metadata which is the real advantage of Object Storage, and you continue to build systems which are file-system centric. And if you really want a cheaper NAS, there might be better ways to do it.
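To illustrate the point, here is a minimal sketch of the difference; the `ObjectStore` class is a hypothetical in-memory toy, not any vendor’s API, but it shows what a NAS-gateway front end throws away: the ability to find objects by their metadata rather than by a path.

```python
class ObjectStore:
    """A toy object store: every object carries its own metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # The metadata travels with the object itself; a file dumped
        # through a NAS gateway keeps only its path and timestamps.
        self._objects[key] = {"data": data, "metadata": metadata or {}}

    def get(self, key):
        return self._objects[key]["data"]

    def query(self, **criteria):
        # Retrieve objects by what they are, not by where they live.
        return [key for key, obj in self._objects.items()
                if all(obj["metadata"].get(k) == v for k, v in criteria.items())]

store = ObjectStore()
store.put("scan-001", b"<image>", metadata={"patient": "A123", "modality": "MRI"})
store.put("scan-002", b"<image>", metadata={"patient": "A123", "modality": "CT"})

# An object-aware application asks a question no file path can answer:
print(store.query(patient="A123", modality="MRI"))  # -> ['scan-001']
```

A file-system client can only ask ‘what is at /scans/scan-001?’; an object-aware one can ask ‘which objects are MRI scans for patient A123?’ — which is the sale to make to developers, not storage managers.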

For Object Storage to take off, we need a suite of applications and APIs which are object-centric; we need a big education effort around Object Storage, aimed not at the storage community but at the development and data community.

Object Storage is currently being sold to the wrong people; don’t sell it to Storage Managers, we’ll manage it when there is a demand for it but we are probably not the right people to go out and educate people about it. Yes, we are interested in it but developers never listen to us anyway.

I hear Storage Managers saying ‘we’d be interested in implementing an Object Storage solution but we don’t know what we’d use it for’; this isn’t that surprising as most Storage Managers are not developers or that application-centric.

If you don’t change your approach, if you don’t educate users about the advantages, if you continue to focus on the infrastructure, then we’ll be asking this question again and again. Object Storage changes infrastructure but it is probably more akin to a middleware sale than an infrastructure sale.

 

Happy New Year

Hope everyone had a nice break and is ready to get back into the swing of things; 2012 is upon us and those of us living in London look forward to a summer of travel chaos and ever-increasing levels of hyperbole. It is the year of both the London Olympics and the Queen’s Diamond Jubilee, so a great time to visit London and probably a great time to be living elsewhere.

Next week sees the Dell Storage Forum in London and the first #storagebeers of the year. Dell has had a year now to get their storage portfolio in order and 2012 must be the year that they begin to see their acquisitions deliver; yet even that might not be enough, and we need to see some innovation and road-maps presented. From Exanet to Compellent via EqualLogic, there is enough product and I am looking forward to seeing how it gets woven into a strategy.

Yet Dell are not the only company who need to start weaving a strategy, arguably with the exception of EMC, this is the year when everyone needs to start drawing the weft and clothing their products with strategy and coherence.

And it is not just the vendors who need to get their strategies in order; this is very much the case for the end-user as well. Too much product and too much fluff still proliferate in many end-user organisations, often due to a confusion between flexibility and choice.

From Cloud to Data Analytics, there has been a lot of playing with these technologies, but many organisations need to move beyond this and into delivering on investment and results. As in every year, there is lots to do and, as in every year, there might be too much to do. Start stripping away the fluff and delivering.

 

Dear Santa – 2011

Dear Santa,

it’s that time of year again when I write to you on behalf of the storage community and beyond. 2011 promised much but delivered less than hoped; the financial crisis throughout the world has put a damper on the party and there are some gloomy faces around. But as we know, the world will always need more storage, so what do we need delivered in 2012?

Firstly, what we don’t need is Elementary Marketing Crud from the Effluent Management Cabal; perhaps this was a last grasp at a disappearing childhood as they realise that they need to be a grown-up company.

What I would like to see is some more serious discussion about what ‘Big Data’ is and what it means, both from a business point of view and from a social-responsibility point of view. I would like to see EMC and the rest get behind efforts to use data for good; for example, the efforts to review all drug-trial data ever produced to build a proper evidence-based regime for the use and prescription of drugs, especially for children, who often just get treated as small adults. This is just one example of how we can use data for good.

There are so many places where ‘Big Data’ can be used beyond the simple analysis of business activities that it is something which really could change the world. Many areas of science, from Climate Research to Particle Physics, generate huge amounts of data that need analysing and archiving for future analysis; we can look at this as being a gift to the world.

And Santa, it can also be used to optimise your route around the world; I’m sure it is getting more complicated and, in these days of increasing costs, even you must be looking at ways of being more efficient.

Flying through clouds on Christmas Night, please remember us down below who are still trying to work out what Cloud is and what it means; there are those who feel that this is not important but there are others who worry about there being no solid definition. There are also plenty of C-level IT execs who are currently losing sleep over what Cloud in any form means to them and their teams.

So perhaps what is needed is less spin, more clarity and leadership. More honesty from vendors and users: stop calling products and projects ‘Cloud’; focus on delivery and benefits. A focus on deliverables would remove much of the fear around the area.

Like your warehouses at this time of year, our storage systems are full and there is an ever-increasing demand for space. It does not slow down and, unlike yours, our storage systems never really empty. New tools for data and storage management allowing quick and easy classification of data are a real requirement, along with standards-based application integration for Object Storage; de-facto standards are okay, and perhaps you could get some of the vendors to stop being precious about ‘Not Invented Here’.

I would like to see the price of 10GbE come down substantially but also I would like to see the rapid introduction of even faster networks; I am throwing around huge amounts of data and the faster I can do it, the better. A few years ago, I was very positive about FCoE; now I am less so, certainly within a 10 GbE network it offers very little but faster networks might make me more positive about it again.

SSDs have changed my desktop experience but I want that level of performance from all of my storage; I’ve got impatient and I want my data *NOW*. Can you ask the vendors to improve their implementation of SSDs in Enterprise Arrays and obviously drive down the cost as well? I want my data as fast as the network can supply it and even faster if possible; local caching and other techniques might help.

But most of all Santa, I would like a quiet Christmas where nothing breaks and my teams get to catch up on some rest and spend time with their families. The next two years’ roadmap for delivery is relentless and time to catch our breath may be in short supply.

Merry Christmas,

Storagebod

 

2011 – A Vendor Retrospective….

So, we’re winding down to Christmas and looking forward to spending time with our families, so I guess it’s time for me to do a couple of Christmas blog entries. It’s been a funny year really, a lot has happened in the world of technology but nothing really has changed in my opinion; there’s certainly some interesting tremors and fore-shadowing though.

HP started the year in a mess and finish the year in a mess; they got themselves into a bigger mess in the middle of the year but appear to have pulled themselves back from the brink of the abyss. I can still hear the pebbles bouncing off the walls of the abyss as HP scramble, but I think they’ll be okay. 3Par is going to turn into a huge win for them.

EMC started the year with a Big Bang of nothing announcements and some fairly childish marketing, but their ‘Big Data’ meme appears to be building up a head of steam. Isilon appears to be doing great for them and although EMC still don’t appear to understand some of the verticals that they now play in, they seem to understand that they don’t and are generally letting the Isilon guys get on with it. Yes, they’ve lost a few people but that’s always the case. Their JV with Cisco? I hear mixed reviews; I think that they are doing well in the Service Provider space but less well in the other verticals; still, they are certainly marketing well to partner organisations.

HDS still struggle around message but they seem to be getting better at selling stuff and are going aggressively after business. Much of this seems to be by ‘ripping the arse’ out of prices, but a newly hungry and aggressive HDS is not such a bad thing. I still think that they are not quite sure how to sell outside of their comfort zone but some of the arrogance has gone.

IBM: Incoherent Basic Marketing. There’s a huge opportunity for IBM and yet they seem to be confused. They do have a vision and they do have technology, but they do seem to struggle with the bit in the middle. And they never seem to finish a product; so much feels half-done.

NetApp bought Engenio; a great buy but have they confused themselves? Revenues appear to be plateauing and from my anecdotal evidence, adoption of OnTap 8 is slow. I think in hindsight that some within NetApp may agree that OnTap 8 shipped too early and it was a ‘release anything’ type move; OnTap 8.1 is really OnTap 8.

Oracle ‘bought’ Pillar and still have no storage story. Larry should bite the bullet and buy NetApp; much as that might upset some of  my friends at NetApp.

I started the year with great hopes for Dell and I finish the year with some great hopes for Dell but they need to move fast with a sober HP on the horizon. HP could shut them out.

Elsewhere in the industry, pure-play SSD start-ups seem to be hot and there’s a lot of new players in that space. There’s going to be more in that space as people start to treat SSDs as a new class of storage as opposed to simply faster spinning rust. I do worry at the focus on VMware by some of these start-ups and their exposure to VMware doing something which impacts the start-up’s model and technology. Design with virtualisation in mind but ensure that you are agile enough to dodge the slings and arrows of misfortune.

One thing which has saddened me over the past eighteen months is the fall off in blog entries by some of the more notable bloggers. I know you are busy guys but is an entry every other week or so too much to ask? I miss reading some of you!! Hey, I even miss some of the heated spats in the comments.

 

Forecast is Cloudy

So, you’ve built your Cloud, or perhaps at least built the case for the use of Cloud: private, public or hybrid. So what do you do next? What are those next steps?

Perhaps you’ve even got some ‘Lighthouse Deployments’, some early projects running in the Cloud? How do you take your Cloud mainstream? Or do you just sit around and congratulate yourselves for the next six months?

Actually, what have you implemented? Have you perhaps built a vBlock infrastructure? A FlexPod? So what applications are you running in this Cloud? Low-hanging fruit such as email is popular, but have you really done Cloud at this point? All you have done is moved an application onto a virtualised infrastructure.

Now there are some significant benefits in taking existing workloads and virtualising them but is that enough?

It seems that a great deal of focus on the journey to Cloud is still on the underlying infrastructure and a very low-level of infrastructure at that. Putting Big Data to one side, where are the applications and workloads which can be genuinely said to be Cloud?

If we continue to simply take existing workloads and just shift them into a more dynamic infrastructure; we risk missing the transformative possibilities of Cloud. We miss the possibility of using a new generation of applications to redefine process and business.

The beauty of the PC was not just that it brought computing power to the desktop but that it brought new applications with it; from the Office applications that we all love to hate to the browser we all love to hate, it has transformed the way we do business and our lives.

The infrastructure to support Cloud is mostly there or within reach but the applications which define Cloud and change business and lives; these are mostly missing, certainly if you move outside of the social applications and Big Data.

And even migrating existing workloads to the Cloud is often a simple lift and shift with little redevelopment of the workload to take advantage of the possibilities or even the realities of Cloud. Workloads are not designed and architected to take advantage of elastic resource which might vanish at any moment.
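As a small sketch of what ‘architected for vanishing resources’ means, here is a hedged example; the flaky endpoint is simulated, but the retry-with-backoff shape is exactly the kind of thing a lift-and-shift workload almost never has.

```python
import time

def call_with_retry(operation, attempts=5, base_delay=0.01):
    """Call an operation against a resource that may vanish mid-request,
    retrying with exponential backoff instead of treating failure as fatal."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulate an elastic instance that disappears twice, then comes back.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("instance vanished")
    return "ok"

result = call_with_retry(flaky_service)
print(result)  # -> ok
```

A traditional workload treats the first `ConnectionError` as an outage and pages someone; a Cloud-native one assumes the instance was always going to vanish and carries on.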

The plumbing is done; now it’s time to heat the house.

Stacks…

Over the past eighteen months or so, I have pretty much managed to avoid the vendor roadmap discussion despite many attempts by vendors to draw me in, and I have found that it really does not hamper my decision-making. Knowing short-term futures and what is about to be announced in the next six months is often useful; the longer term, less so.

Roadmaps change so much, and often features that are promised never arrive, so it is not worth designing systems and infrastructures around them; promised game-changing features rarely turn out to be so, and real game-changers can often be completely unexpected.

I am going to say something rather surprising here and those who know me will probably be calling for the men in white coats: vBlock is game-changing and will continue to change the game. But it is not because of what it is; it is because of what it makes people do. It actually encourages people to start thinking about the whole stack and how it integrates; VCE (amongst others) will of course suggest that once you do this, the best thing is to buy that whole stack from a single vendor. It might be…it might not; your mileage will vary.

But what it should do is drive the idea of standardisation and governance, and it actually allows you to ask some other interesting questions which almost go against the very idea of an integrated stack.

If a vendor turns up and says ‘My software is marvellous but you must run it on *my stack*’, then if you have your own stack you are in a better position to ask why this is so. ‘Why must I run it on your expensive and overpriced stack? I have built my stack and it runs just fine. Is the value of your integration and certification really worth that much?’

Your stack might be VCE, it might be something completely different and something that you have defined but you really do need one now. vBlock is game-changing because of the thought process it should drive…

 

Solutions and Specialists…

Solutions are great, a vendor turns up and sells a turn-key solution, it’s a marvellous world and everything just works; it’s all certified and lovely. There is a single supplier to kick and trouble-shoot the problem. Or at least that’s the theory….

But what happens when the supplier can’t fix the problem? Who do you turn to then? Funnily enough, that’s the situation I’m in today. A turn-key media solution which we’ve been kept at arm’s length from for years has developed issues, and now who does the customer turn to when they are not getting good answers from the solutions vendor? That’s right, our little team of specialists.

A couple of hours of investigation and a meeting with the vendor, where we exposed what we see as issues by treating the solution as just another piece of infrastructure, were enlightening to both our internal customer and the solutions vendor. There will be no arm’s length going forward; solutions still need specialists.

You need specialists to ensure that the wool is not being pulled over your eyes; you need specialists to ask the right questions and to know when the answers are not good enough. Too often you will find that the solutions vendor themselves have little clue about the underlying infrastructure, focusing on the application whilst using the hardware as a nice little revenue lift. This is fine until you hit problems.

If you are buying integrated solutions and stacks; make sure that they are integrated and that the solutions provider can actually support the stack. Don’t be afraid to dig into what is being provided as part of the stack/solution and keep some specialists around.