
The Reptile House

I was fortunate enough to spend an hour or so with Amitabh Srivastava of EMC; Amitabh is responsible for the Advanced Software division at EMC and is one of the principal architects behind ViPR. It was an open discussion about the inspiration behind ViPR and where storage needs to go. And we certainly tried to avoid the ‘Software Defined’ meme.

Amitabh is not a storage guy; in fact his previous role with Microsoft sticks him firmly in the compute/server camp, but it was his experience in building out the Azure Cloud offering which brought him an appreciation of the problems that storage and data face going forward. He has some pretty funny stories about how the Azure Cloud came about and the learning experience it was; how he came to realise that this storage stuff was pretty interesting and more complex than just allocating some space.

Building dynamic compute environments is pretty much a solved problem; you have a choice of solutions and fairly mature ones. Dynamic networks are well on the way to being solved.

But building a dynamic and agile storage environment is hard and it’s not a solved problem yet. Storage, and more importantly the data it holds, has gravity or, as I like to think of it, long-term persistence. Compute resource can be scaled up and down; data rarely has the idea of scaling down and generally hangs around. Data analytics just means that our end-users are going to hug data for longer. So you’ve got this heavy and growing thing…it’s not agile but there needs to be some way of making it appear more agile.

You can easily move compute workloads and it’s relatively simple to change your network configuration to reflect these movements, but moving large quantities of data around is a non-trivial thing to do…well, at speed anyway.

Large Enterprise Storage environments are heterogeneous environments, and dual-supplier strategies are common; sometimes to keep vendors honest but often because there is an acceptance that different arrays have different capabilities and use-cases. Three or four years ago, I thought we were heading towards general-purpose storage arrays; we now have more niche and siloed capabilities than ever before. Driven by developments in all-flash arrays, commodity hardware and new business requirements, the environment is getting more complex and not simpler.

Storage teams need a way of managing these heterogeneous environments in a common and converged manner.

And everyone is trying to do things better, cheaper and faster; operational budgets remain pretty flat, headcounts are frozen or shrinking. Anecdotally, talking to my peers; arrays are hanging around longer, refresh cycles have lengthened somewhat.

EMC’s ViPR is an attempt to solve some of these problems.

Can you lay a new access protocol on top of already existing and persistent data? Can you make it so that you don’t have to migrate many petabytes of data to enable a new protocol? And can you ensure that your existing applications and new applications can use the same data without a massive rewrite? Can you enable your legacy infrastructure to support new technologies?

The access protocol in this case is Object; for some people Object Storage is religion…all storage should be object, so why the hell would you want some kind of translation layer? But unfortunately, life is never that simple; if you have a lot of legacy applications running and generating useful data, you probably want to protect your investment and continue to run those applications, but you might want to mine that data using newer applications.

This is heresy to many but reflects today’s reality; if you were starting with a green-field, all your data might live in an object-store but migrating a large existing estate to an object-store is just not realistic as a short term proposition.

ViPR enables your existing file-storage to be accessible as both file and object. Amitabh also mentioned block but I struggle to see how you would be able to treat a raw block device as an object in any meaningful manner. Perhaps that’s a future conversation.
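To make that a little more concrete, here is a rough sketch in Python of the idea of laying an object interface over files that already sit on a NAS export; none of the names, paths or calls below come from ViPR, they are purely illustrative of the concept.

```python
# Conceptual sketch only: a hypothetical gateway that exposes the same
# underlying file as both a POSIX path and an object-style key.
# Nothing here is ViPR's actual API; names and paths are invented.

class FileObjectGateway:
    """Maps object keys onto files that already live on a NAS export."""

    def __init__(self, export_root):
        self.export_root = export_root

    def get_object(self, bucket, key):
        # The 'object' is just the existing file, read in place;
        # no data is copied or migrated into a separate object store.
        with open(f"{self.export_root}/{bucket}/{key}", "rb") as f:
            return f.read()


gateway = FileObjectGateway("/mnt/nas_export")  # hypothetical export

# Legacy application: carries on using the file path it always has.
with open("/mnt/nas_export/assets/clip001.mxf", "rb") as f:
    legacy_view = f.read()

# New application: reaches the same bytes through an object-style call.
object_view = gateway.get_object("assets", "clip001.mxf")

assert legacy_view == object_view  # same data, two access protocols
```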

But in the world of media and entertainment, I could see this capability being useful; in fact I can see it enabling some workflows to work more efficiently, so an asset can be acquired and edited in the traditional manner and then ‘move’ into play-out as an object with rich metadata, without moving around the storage environment.

Amitabh also discussed the possibility of being able to present your existing storage as HDFS, allowing analytics to be carried out on data in place without moving it. I can see this being appealing, but issues around performance, locking and the like remain challenging.

But ultimately moving to an era where data persists but is accessible in appropriate ways without copying, ingesting and simply buying more and more storage is very appealing. I don’t believe that there will ever be one true protocol; so multi-protocol access to your data is key. And even in a world where everything becomes objects, there will almost certainly be competing APIs and command-sets.

The more real part of ViPR (when I say real, I mean it is the piece I can see huge need for today) is the abstraction of the control-plane, making it look and work the same for all the arrays that you manage. Yet after the abomination that is Control Center, can we trust EMC to make Storage Management easy, consistent and scalable? Amitabh has heard all the stories about Control Center, so let’s hope he’s learnt from our pain!

The jury doesn’t even really have any hard evidence to go on yet but the vision makes sense.

EMC have committed to openness around ViPR as well; I asked the question…what if someone implements your APIs and makes a better ViPR than ViPR? Amitabh was remarkably relaxed about that; they aren’t going to mess about with APIs for competitive advantage and if someone does a better job than them, then that someone deserves to win. They obviously believe that they are the best; if we move to a pluggable and modular storage architecture, where it is easy to drop in replacements without disruption, they had better be the best.

A whole ecosystem could be built around ViPR; EMC believe that if they get it right, it could be the on-ramp for many developers to build tools around it. They are actively looking for developers and start-ups to work with ViPR.

Instead of writing tools to manage a specific array, it should be possible to write tools that manage all of the storage in the data-centre. Obviously this is reliant on either EMC or the other storage vendors implementing the plug-ins to enable ViPR to manage a specific array.
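As a sketch of what that plug-in model might look like (and it is only a sketch; the interfaces below are invented for illustration and are not ViPR’s real APIs), the point is that the management tool codes against one abstraction and the vendor-specific translation lives in the driver:

```python
# A minimal sketch of the plug-in idea, assuming a hypothetical common
# control-plane interface; none of these names are ViPR's actual API.

from abc import ABC, abstractmethod


class ArrayDriver(ABC):
    """What a management tool codes against, regardless of vendor."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        ...

    @abstractmethod
    def list_volumes(self) -> list[str]:
        ...


class VendorXDriver(ArrayDriver):
    """One vendor's plug-in: translates the common calls into that
    array's own CLI or REST interface (omitted here)."""

    def create_volume(self, name: str, size_gb: int) -> str:
        # ...vendor-specific calls would go here...
        return f"vendorx-{name}"

    def list_volumes(self) -> list[str]:
        return []


def provision(drivers: list[ArrayDriver], name: str, size_gb: int) -> list[str]:
    # One tool, one loop, every array in the data-centre.
    return [d.create_volume(name, size_gb) for d in drivers]
```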

Will the other storage vendors enable ViPR to manage their arrays and hence increase the value of ViPR? Or will it be left to EMC to do it? Well, at launch, NetApp is already there. I didn’t have time to drill into which versions of OnTap however, and this is where life could get tricky; the ViPR control layer will need to keep up with the releases from the various vendors. But as more and more storage vendors are looking at how their storage integrates with the various virtualisation stacks, consistent and early publication of their control functionality becomes key. EMC can use this as enablement for ViPR.

If I were a start-up, for example, ViPR could enable me to fast-track the management capability of my new device. I could concentrate on the storage functionality and capability of the device and not on the peripheral management functionality.

So it’s all pretty interesting stuff but it’s certainly not a foregone conclusion that this will succeed, and it relies on other vendors coming to play. It is something that we need; we need the tools that will enable us to manage at scale, keeping our operational costs down and not having to rip and replace.

How will the other vendors react? I have a horrible suspicion that we’ll just end up with a mess of competing attempts and it will come down to the vendor who ships the widest range of support for third party devices. But before you dismiss this as just another attempt from EMC to own your storage infrastructure; if a software vendor had shipped/announced something similar, would you dismiss it quite so quickly? ViPR’s biggest strength and weakness is……EMC!

EMC have to prove their commitment to openness and that may mean that in the short term they do things that seriously assist their competitors, at some cost to their own business. I think that they need to almost treat ViPR like they did VMware; at one point, it was almost more common to see a VMware and NetApp joint pitch than one involving EMC.

Oh, they also have to ship a GA product. And probably turn a tanker around. And win hearts and minds, show that they have changed…

Finally, let’s forget about Software Defined Anything; let’s forget about trying to redefine existing terms; it doesn’t have to be called anything…we are just looking for Better Storage Management and Capability. Hang your hat on that…

 

More Thoughts On Change…

This started as a response to comments on my previous blog but seemed to grow into something which felt like a blog entry in its own right. And it allowed me to rethink a few things and crystallise some ideas.

Enterprise Storage is done; that sounds like a rash statement, how can a technology ever be done? So I had better explain what I mean. Pretty much all the functionality that you might expect to be put into a storage array has been done, and it is now done by pretty much every vendor.

Data Protection – yep, all arrays have this.

Clones, Snaps – yep, all arrays have this and everyone has caught up with the market-leader.

Replication – yep, everyone does this but interestingly enough, I begin to see this abstracted away from the array.

Data Reduction – mostly, dedupe and compression are on almost every array; slightly differing implementations, some architectural limitations showing.

Tiering – mostly, yet again varying implementations but fairly comparable.

And of course, there is performance and capacity. This is good enough for most traditional Enterprise scenarios; if you find yourself requiring something more, you might be better off looking at non-traditional Enterprise storage: Scale-Out for capacity and All-Flash for performance. Now, the traditional Enterprise vendors are having a good go at hacking in this functionality but there is a certain amount of round pegs, square holes and big hammers going on.

So the problem for the Enterprise Storage vendors, as their arrays head towards functionality completeness, is how they compete. Do we end up in a race to the bottom? And what is the impact of this? Although their technology still has value, its differentiation is very hard to quantify. It has become commodity.

And as we hit functionality completeness, it is more likely that open-source technologies will ‘catch up’; then you end up competing with free. How does one compete with free?

You don’t ignore it for starters and you don’t pretend that free can’t compete on quality; that did not work out so well for some of the major server vendors as Linux ate into their install base. But you can look at how Red Hat compete with free: they compete on service and support.

You no longer compete on functionality; CentOS has pretty much the same functionality as Red Hat. You have to compete differently.

But firstly you have to look at what you are selling; the Enterprise Storage vendors are selling software running on what is basically commodity hardware. Commodity should not be taken as some kind of second-rate thing; it really means that we’ve hit a point where it is pretty standard and there is little differentiation.

Yet this does not necessarily mean cheap; diamonds are a commodity. However, customers can see this and they can compare your price for the commodity hardware that your software runs on against the spot-price of that hardware on the open market.

In fact if you were open and honest, you might well split out the licensing costs of your software and the cost of the commodity hardware?

This is the very model that Nexenta use. Nexenta publish an HSL (hardware support list) of components that they have tested NexentaStor on; there are individual components and also complete servers. This enables customers to white-box if they want or to leverage existing server support contracts. If you go off-piste, they won’t necessarily turn you away but there will be a discussion. The discussion may result in something new going onto the support list; it may end up finding out that something definitively does not work.

We also have VSAs popping up in one form or another; these piggy-back on the VMware HCL generally.

So is it really a stretch to suggest that the Enterprise Storage vendors might take it a stage further; a fairly loose hardware support list that allows you to run the storage personality of your choice on the hardware of your choice?

I suspect that there are a number of vendors who are already considering this; they might well be waiting for someone to break formation first. There’s quite a few of them who already have; they don’t talk about it but there are some hyper-scale customers who are already running storage personalities on their own hardware. If you’ve built a hyper-scale data-centre based around a standard build of rack, server etc; you might not want a non-standard bit of kit messing up your design.

If we get some kind of standardisation in the control-plane APIs; the real money to be made will be in the storage management and automation software. The technologies which will allow me to use a completely commoditised Enterprise Storage Stack are going to be the ones that are interesting.

Well, at least until we break away from an array-based storage paradigm; another change which will eventually come.

 

 

5 Minutes

One of the frustrations when dealing with vendors is actually getting real availability figures for their kit; you will get generalisations, like it is designed to be 99.999% available or perhaps 99.9999% available. But what do those figures really mean to you and how significant are they?

Well, 99.999% available equates to a bit over 5 minutes of downtime over a year and 99.9999% to a bit over 30 seconds. And in the scheme of things, that sounds pretty good.
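The arithmetic is easy enough to check for yourself:

```python
# Downtime allowed per year at a given availability percentage.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (99.999, 99.9999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% -> {downtime_min:.2f} minutes "
          f"({downtime_min * 60:.0f} seconds) of downtime a year")

# 99.999%  -> roughly 5.3 minutes a year
# 99.9999% -> roughly 32 seconds a year
```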

However, these are design criteria and aims; what are the real-world figures? Vendors, you will find, are very coy about this; in fact, every presentation I have had with regard to availability has been under very strict NDA and sometimes not even notes are allowed to be taken. Presentations are never allowed to be taken away.

Yet, there’s a funny thing…I’ve never known a presentation where the design criteria were not met or indeed significantly exceeded. So why are the vendors so coy about their figures? I have never been entirely sure; it may be that their ‘mid-range’ arrays display very similar real-world availability figures to their more ‘Enterprise’ arrays…or it might be that once you have real-world availability figures, you might start to ask some harder questions.

Sample size: raw availability figures are not especially useful if you don’t know the sample size. Availability figures are almost always quoted as an average and, unless you’ve got a really bad design, more arrays can skew the figures.

Sample characteristics: I’ve known vendors, when backed into a corner to provide figures, do some really sneaky things; for example, they may provide figures for a specific model and software release. This is often done to hide a bad release. You should always try to ask for the figures for the entire life of a product; this will allow you to judge the quality of the code. If possible, ask for a breakdown on a month-by-month basis annotated with the code-release schedule.

There are many tricks that vendors try to pull to hide causes of downtime and non-availability, but instead of focusing on the availability figures, as a customer it is sometimes better to ask different, specific questions.

What is the longest outage that you have suffered on one of your arrays? What was the root cause? How much data loss was sustained? Did the customer have to invoke disaster recovery or any recovery procedures? What is the average length of outage on an array that has gone down?

Do not believe a vendor when they tell you that they don’t have these figures and information closely and easily to hand. They do and if they don’t; they are pretty negligent about their QC and analytics. Surely they don’t just use all their Big Data capability to crunch marketing stats? Scrub that, they probably do.

Another nasty thing that vendors are in the habit of doing is forcing customers to not disclose to other customers that they have had issues and what they were. And of course we all comply and never discuss such things.

So 5 minutes…it’s about long enough to ask some awkward questions.

Doctors in the Clouds

At the recent London Cloud Camp; there was a lot of discussion about DevOps on the UnPanel; as the discussion went on, I was expecting the stage to be stormed by some of the older members in the audience. Certainly some of the tweets and the back-channel conversations which were going on were expressing some incredulity at some of the statements from the panel.

Then over beer and pizza; there were a few conversations about the subject and I had a great chat with Florian Otel who for a man who tries to position HP as a Cloud Company is actually a reasonable and sane guy (although he does have the slightly morose Scandinavian thing down pat but that might just be because he works for HP). The conversation batted around the subject a bit until I hit an analogy for DevOps that I liked and over the past twenty-four hours, I have knocked it around a bit more in my head. And although it doesn’t quite work, I can use it as the basis for an illustration.

Firstly, I am not anti-DevOps at all; the whole DevOps movement reminds me of when I was a fresh-faced mainframe developer; we were expected to know an awful lot about our environment and infrastructure. We also tended to interact with and configure our infrastructure through code; EXITS of many forms were part of our life.

DevOps however is never going to kill the IT department (note: when did the IT department become exclusively linked with IT Operations?) and you are always going to have specialists who are required to make and fix things.

So here goes; it is a very simple process to instantiate a human being really. The inputs are well known and it’s a repeatable process. This rather simple process however instantiates a complicated thing which can go wrong in many ways.

When it goes wrong, often the first port of call is your GP; they will poke and prod, ask questions and the good GP will listen and treat the person as a person. They will fix many problems and you go away happy and cured. But most GPs actually have only a rather superficial knowledge of everything that can go wrong; this is fine, as many problems are rather trivial. It is important however that the GP knows the limits of their knowledge and knows when to send the patient to a specialist.

The specialist is a rather different beast; what they generally see is a component that needs fixing; they often have lousy bedside manners and will do drastic things to get things working again. They know their domain really well and you really wouldn’t want to be without them. However to be honest, are they a really good investment? If a GP can treat 80% of the cases that they are faced with, why bother with the specialists? Because having people drop dead for something that could be treated is not especially acceptable.

As Cloud and Dynamic Infrastructures make it easier to throw up new systems with complicated interactions with other systems; unforeseeable consequences may become more frequent, your General Practitioner might be able to fix 80% of the problems with a magic white-pill or tweak here or there….but when your system is about to collapse in a heap, you might be quite thankful that you still have your component specialists who make it work again. Yes, they might be grumpy and miserable; their bedside manner might suck but you will be grateful that they are there.

Yes, they might work for your service provider but the IT Ops guys aren’t going away; in fact, you DevOps have taken away a lot of the drudgery of the Ops role. And when the phone rings, we know it is going to be something interesting and not just an ingrown toe-nail.

Of course the really good specialist also knows when the problem presented is not their speciality and pass it on. And then there is the IT Diagnostician; they are grumpy Vicodin addicts and really not very nice!

Love to VMAX

It may come as a surprise to some people, especially those reading this, but I quite like the Symmetrix (VMAX) as a platform. Sure, it is long in the tooth or, in marketing speak, ‘a mature platform’ and it is a bit arcane at times; Symmetrix administrators seem at times to want to talk a different language and use acronyms when a simple word might do, but it’s rock solid.

Unfortunately EMC see it as a cash-cow and have pushed it when at times other products in their own portfolio would have suited a customer better. This means that many resent it and like to hold it up as an example of all that is wrong with EMC. I certainly have done in the past.

And it might end up being the most undersold and under-rated product; the product pushed into the marketing niche that is Enterprise Storage. Yet it could be so much more.

I think that there is a huge market for it in the data-centres of the future; more so than EMC’s other ‘legacy’ array in VNX. For many years, I thought that EMC should drop the Symmetrix and build up the Clariion (VNX) but I see now that I was wrong; EMC need to drop the Clariion and shrink down the Symmetrix. They need to produce a lower-end Symmetrix which can scale-out block much in the way that Isilon can scale out file. Actually a smaller Isilon would be a good idea too; a three node cluster that could fit into three or four U; presenting 20-40 terabytes.

In fact, for those customers who want it, perhaps a true VNX replacement utilising the virtual versions of the Symmetrix and Isilon might be the way to go, but only if there is a seamless way to scale out.

I guess this will never happen apart from in the labs of the mad hackers because EMC will continue to price the Symmetrix at a premium price…which is a pity really.

Defined Storage…

Listening to the ‘Speaking In Tech’ podcast got me thinking a bit more about the software-defined meme and wondering if it is a real thing as opposed to a load of hype; so for the time being I’ve decided to treat it as a real thing or at least that it might become a real thing…and in time, maybe a better real thing?

So Software Defined Storage?

The role of the storage array seems to be changing at present or arguably simplifying; the storage array is becoming where you store stuff which you want to persist. And that may sound silly but basically what I mean is that the storage array is not where you are going to process transactions. Your transactional storage will be as close to the compute as possible or at least this appears to be the current direction of travel.

But there is also a certain amount of discussion and debate about storage quality of service, guaranteed performance and how we implement it.

Bod’s Thoughts

This all comes down to services, discovery and a subscription model. Storage devices will have to publish their capabilities via some kind of API; applications will use this to find what services and capabilities an array has and then subscribe to them.

So a storage device may publish available capacity, IOPS capability and latency, but it could also publish that it has the ability to do snapshots, replication, and thick and thin allocation. It could also publish a cost associated with all of this.

Applications, application developers and support teams might make decisions at this point about which services they subscribe to; perhaps a fixed capacity and IOPS, perhaps take the array-based snapshots but do the replication at an application layer.

Applications will have a lot more control over what storage they have and use; they will make decisions about whether certain data is pinned in local SSD or never gets anywhere near the local SSD, and whether it needs sequential storage or random access. An application might have its RTO and RPO parameters, making decisions about which transactions can be lost and which need to be committed now.
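To illustrate, here is the sort of thing I have in mind, sketched in Python; the catalogue fields, capability names and the subscribe() call are all invented for illustration rather than being any existing API:

```python
# A rough sketch of discovery and subscription: devices publish what they
# can do and what it costs; applications pick the service that fits.
# All field names and values here are hypothetical.

storage_catalogue = [
    {
        "device": "array-a",
        "capacity_gb": 50_000,
        "max_iops": 200_000,
        "latency_ms": 1.0,
        "capabilities": {"snapshots", "replication", "thin"},
        "cost_per_gb_month": 0.30,
    },
    {
        "device": "array-b",
        "capacity_gb": 500_000,
        "max_iops": 20_000,
        "latency_ms": 8.0,
        "capabilities": {"snapshots", "thick"},
        "cost_per_gb_month": 0.05,
    },
]


def subscribe(required_iops, wanted_capabilities):
    """Pick the cheapest published service that meets the application's needs."""
    candidates = [
        d for d in storage_catalogue
        if d["max_iops"] >= required_iops
        and set(wanted_capabilities) <= d["capabilities"]
    ]
    return min(candidates, key=lambda d: d["cost_per_gb_month"]) if candidates else None


# An application that takes array-based snapshots but replicates itself:
print(subscribe(required_iops=15_000, wanted_capabilities=["snapshots"]))
```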

And when this happens, the data-centre becomes something which is managed as a whole, as opposed to a collection of siloed components.

I’ve probably not explained my thinking as well as I could do but I think it’s a topic that I’m going to keep coming back to over the months.

 

 

 

Enterprising Marketing

I love it when Chuck invents new market segments, ‘Entry-Level Enterprise Storage Arrays’ appears to be his latest one; he’s a genius when he comes up with these terms. And it is always a space where EMC have a new offering.

But is it a real segment or just m-architecture? Actually, the whole Enterprise Storage Array thing is getting a bit old and I am not sure whether it has any real meaning any more and it is all rather disparaging to the customer. You need Enterprise, you don’t need Enterprise…you need 99.999% availability, you only need 99.99% availability.

As a customer, I need 100% availability; I need my applications to be available when I need them. Now, this may mean that I actually only need them to be available an hour a month but during that hour I need them to be 100% available.

So what I look for from vendors is the way that they mitigate failure and understand my problems, but I don’t think the term ‘Enterprise Storage’ brings much value to the game; especially when it is constantly being misused and appropriated by the m-architecture consultants.

But I do think it is time for some serious discussions about storage architectures; dual-head, scale-up architectures vs multiple-head, scale-out architectures vs RAIN architectures; understanding the failure modes and behaviours is probably much more important than the marketing terms which surround them.

EMC have offerings in all of those spaces; all at different cost points but there is one thing I can guarantee, the ‘Enterprise’ ones are the most expensive.

There is also a case for looking at the architecture as a whole; too many times I have come across the thinking that what we need to do is make our storage really available, when the biggest cause of outage is application failure. Fix the most broken thing first; if your application is down because it’s poorly written or architected, no amount of Enterprise anything is going to fix it. Another $2000 per terabyte is money you need to invest elsewhere.

Flash is dead but still no tiers?

Flash is dead; it’s an interim technology with no future and yet it continues to be a hot topic and technology. I suppose I really ought to qualify the statement: Flash will be dead in the next 5-10 years and I’m really thinking about the use of Flash in the data-centre.

Flash is important as it is the most significant improvement in storage performance since the introduction of the RAMAC in 1956; disks really have not improved that much and although we have had various kickers which have allowed us to improve capacity, at the end of the day they are mechanical devices and are limited.

15k RPM disks are pretty much as fast as you are going to get and although there have been attempts to build faster-spinning stuff, reliability, power and heat have really curtailed these developments.

But we now have a storage device which is much faster and has very different characteristics to disk and as such, this introduces a different dynamic to the market. At first, the major vendors tried to treat Flash as just another type of disk; then various start-ups questioned that and suggested that it would be better to design a new array from the ground-up and treat Flash as something new.

What if they are both wrong?

Storage tiering has always been something that has had lip-service paid to it, but no-one has ever really done it with a great deal of success. And when all you had was spinning rust, the benefits were less realisable, it was hard work and vendors did not make it easy. They certainly wanted to encourage you to use their more expensive Tier 1 disk, and moving data around was hard.

But then Flash came along, with an eye-watering price-point; the vendors wanted to sell you Flash but even they understood that this was a hard sell at the sort of prices they wanted to charge. So Storage Tiering became hot again; we have the traditional arrays with Flash in them and the ability to automatically move data around the array. This appears to work with varying degrees of success but there are architectural issues which mean you never get the complete performance benefit of Flash.

And then we have the start-ups who are designing devices which are Flash only; tuned for optimal performance and with none of the compromises which hamper the more traditional vendors. Unfortunately, this means building silos of fast storage and everything ends up sitting on this still expensive resource. When challenged about this, the general response you get from the start-ups is that tiering is too hard and just stick everything on their arrays. Well obviously they would say that.

I come back to my original statement: Flash is an interim technology and will be replaced in the next 5-10 years with something faster and better. It seems likely that spinning rust will hang around for longer, so we are heading to a world where we have storage devices with radically different performance characteristics; we have a data explosion and putting everything on a single tier is becoming less feasible and sensible.

We need a tiering technology that sits outside of the actual arrays; so that the arrays can be built optimally to support whatever storage technology comes along. Where would such a technology live? Hypervisor? Operating System? Appliance? File-System? Application?

I would prefer to see it live in the application and have applications handle the life of their data correctly but that’ll never happen. So it’ll probably have to live in the infrastructure layer and ideally it would handle a heterogeneous multi-vendor storage environment; it may well break the traditional storage concepts of a LUN and other sacred cows.
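As a toy illustration of what a policy engine living outside the arrays might look like, here is a sketch; the tiers, thresholds and data structures are entirely invented and the hard parts (actually moving the data, handling LUNs and file-systems) are deliberately left out.

```python
# A toy sketch of tiering policy living outside the arrays: the engine
# looks at how recently data was accessed and decides which tier it
# belongs on. Tiers, thresholds and the data structure are all invented.

import time

TIER_POLICY = [
    ("flash",      7 * 86400),    # touched in the last week
    ("fast-disk", 90 * 86400),    # touched in the last quarter
    ("slow-disk", None),          # everything else
]


def choose_tier(last_access_epoch, now=None):
    age = (now or time.time()) - last_access_epoch
    for tier, max_age in TIER_POLICY:
        if max_age is None or age <= max_age:
            return tier


def plan_moves(datasets):
    """datasets: iterable of (name, current_tier, last_access_epoch);
    returns the moves for whatever array or appliance actually shifts the data."""
    return [
        (name, current, target)
        for name, current, last_access in datasets
        if (target := choose_tier(last_access)) != current
    ]
```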

But in order to support a storage environment that is going to look very different, or at least should look very different, we need someone to come along and start again. There are various stop-gap solutions in the storage virtualisation space but these still enforce many of the traditional tropes of today’s storage.

I can see many vendors reading this and muttering ‘HSM, it’s just too hard!’ Yes it is hard but we can only ignore it for so long. Flash was an opportunity to do something; mostly squandered now but you’ve got five years or so to fix it.

The way I look at it; that’s two refresh cycles; it’s going to become an RFP question soon.

 

 

 

 

Software Sucks!

Every now and then, I write a blog article that could probably get me sued, sacked or both; this started off as one of those and has been heavily edited so as to avoid naming names…

Software Quality Sucks; the ‘Release Early, Release Often’ meme appears to have permeated into every level of the IT stack; from the buggy applications to the foundational infrastructure, it appears that it is acceptable to foist beta quality code on your customers as a stable release.

Running a test team for the past few years has been eye-opening; by the time my team gets hands on your code…there should be no P1s and very few P2s, but the amount of fundamentally broken code that has made it to us is scary.

And also running an infrastructure team, this goes beyond scary and heads into the realms of terror. Just to make things nice and frightening, every now and then I ‘like’ to search vendor patch/bug databases for terms like ‘data corruption’, ‘data loss’ and other such cheery phrases; don’t do this if you want to sleep well at night.

Recently I have come across such wonderful phenomena as a performance-monitoring tool which slows your system down the longer it runs; clocks that drift for no explicable reason and can lock out authentication; reboots which can take hours; non-disruptive upgrades which are only non-disruptive if run at a quiet time; errors that you should ignore most of the time but which sometimes might be real; files that disappear on renaming; updates replacing an update which makes a severity 1 problem worse…even installing fixes seems to be fraught with risk.

Obviously no-one in their right mind ever takes a new vendor code release straight into production; certainly your sanity needs questioning if you put a new product which has had less than two years’ GA into production. Yet often the demands are that we do so.

But it does leave me wondering, has software quality really got worse? It certainly feels that it has. So what are the possible reasons, especially in the realms of infrastructure?

Complexity? Yes, infrastructure devices are trying to do more; nowhere is this more obvious than in the realm of storage, where both capabilities and integration points have multiplied significantly. It is no longer enough to support the FC protocol; you must support SMB, NFS, iSCSI and integration points with VMware and Hyper-V. And with VMware on pretty much a 12-month refresh cycle, it is getting tougher for vendors and users to decide which version to settle on.

The Internet? How could this cause a reduction in software quality? Actually, the Internet as a distribution method has made it a lot easier and cheaper to release fixes; before, if you had a serious bug, you would find yourself having to distribute physical media and often, in the case of infrastructure, mobilising a force of engineers to upgrade software. This cost money, took time and generally you did not want to do it; it was a big hassle. Now, you send out an advisory notice with a link and let your customers get on with it.

End-users? We are a lot more accepting of poor quality code; we are used to patching everything from our PC to our Consoles to our Cameras to our TVs; especially, those of us who work in IT and find it relatively easy to do so.

Perhaps it is time to start a ‘Slow Software Movement’ which focuses on delivering things right first time?

More Musings on Consumers and Cloud

I must stop looking at Kickstarter, I will end up bankrupt; but it does give some insight into what is going to drive the Cloud and Big Data from a consumer point of view.

From Ubi to Ouya to Memeto; these are all new consumer devices that are going to further drive our reliance on the Cloud and Cloud Services. The latter is a life-logging device that could drive the amount of storage that we require through the roof; they believe that an individual could be generating 1.5TB of data per annum, so they really don’t need a huge number of customers to be generating multiple petabytes of data per year. And if they are successful, more devices will appear, some offering higher resolution, some potentially starting to offer video….
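Back-of-the-envelope, the numbers stack up quickly at that stated 1.5TB per user per year:

```python
# Rough arithmetic: petabytes per year at 1.5TB per user per annum.

TB_PER_USER_PER_YEAR = 1.5

for users in (1_000, 10_000, 100_000):
    petabytes = users * TB_PER_USER_PER_YEAR / 1_000
    print(f"{users:>7,} users -> {petabytes:,.1f} PB per year")

# 1,000 users   ->   1.5 PB a year
# 10,000 users  ->  15.0 PB a year
# 100,000 users -> 150.0 PB a year
```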

Many people will look at such services and wonder why anyone would be interested but it really doesn’t matter…these services are going to make today’s services look frugal in their use of storage.

And then there is the growth of voice-recognition services, where the recognition has been moved to the Cloud, causing a massive increase in compute requirements.

Throw in non-linear viewing of media and services like Catch-Up TV and we have almost a perfect storm…