Storagebod

Cloud

Keep On Syncing…Safely..

Edward Snowden’s revelations about the activities of the various Western security organisations have been no surprise and yet also a wake-up call to how the landscape of our own personal data security has changed.

Multiple devices and increased mobility have meant that we have looked for ways to ensure that we have access to our data wherever and whenever; gone are the days when even the average household had a single computing device, and it is increasingly uncommon to find a homogeneous household in terms of manufacturer or operating system.

It is now fairly common to find Windows, OSX, Android, iOS and even Linux devices all within a single house; throw in digital cameras and smart TVs and it is no wonder that sharing data in a secure fashion has become more and more complex for the average person.

So file-syncing and sharing products such as Dropbox, Box, SkyDrive and GoogleDrive are a pretty much inevitable consequence, and if you are anything like me, you have a selection of these, some free and some paid for; but pretty much all of them are insecure, some terribly so.

Of course it would be nice if the operating system manufacturers could agree on a standard which included encryption of data in flight and at rest with a simple and easy-to-use key-sharing mechanism. Yet even with this, we would probably not trust it any more, but it might at least provide us with an initial level of defence.

I have started to look at ways of adding encryption to the various cloud services I use; in the past, I made fairly heavy use of TrueCrypt, but it is not especially seamless and can be clunky. However, this approach is becoming more feasible as apps such as Cryptonite and DiskDecipher appear for mobile devices.

Recently I started to play with BoxCryptor and EncFS; BoxCryptor seems nice and easy to use, certainly on the desktop. It supports multiple cloud providers, although the free version only supports one; if you want to encrypt several cloud stores, you will have to pay. There are alternatives such as Cloudfogger, but development of BoxCryptor seems to be ongoing.
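The underlying idea behind all of these tools is the same: encrypt locally before anything reaches the sync folder, so the provider only ever stores ciphertext. As a minimal sketch of that principle (not how BoxCryptor or EncFS actually work internally), here is a hypothetical Python example using the cryptography library’s Fernet primitive; the folder paths and key-file location are purely illustrative.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

KEY_FILE = Path("vault.key")             # hypothetical key location - keep it OUT of the sync folder
PLAIN_DIR = Path("Documents/private")    # illustrative source folder
SYNC_DIR = Path("Dropbox/encrypted")     # illustrative sync folder


def load_or_create_key() -> bytes:
    """Reuse an existing key so previously encrypted files remain readable."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key


def encrypt_to_sync_folder() -> None:
    """Write an encrypted copy of every private file into the synced folder."""
    fernet = Fernet(load_or_create_key())
    SYNC_DIR.mkdir(parents=True, exist_ok=True)
    for source in PLAIN_DIR.rglob("*"):
        if source.is_file():
            target = SYNC_DIR / (source.name + ".enc")
            target.write_bytes(fernet.encrypt(source.read_bytes()))


if __name__ == "__main__":
    encrypt_to_sync_folder()
```

Decrypting is the mirror image with fernet.decrypt(); the important design point is that the key never lives in the synced folder, so leaking the folder leaks nothing useful.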

And then perhaps there is the option of building your own ‘sync and share’ service; Transporter recently kickstarted successfully and looks good; Plug is in the process of kickstarting.

Synology Devices have Cloud Station; QNAP have myQNAPcloud.

Or you can go fully do-it-yourself and use ownCloud.

In the Enterprise you have a multitude of options as well, but the point is this: you do not need to store your stuff in the Cloud in an insecure manner. You have lots of choices now, from keeping it local to using a cloud service provider; encryption is still not as user-friendly as it could be, but it has got easier.

You can protect your data; you probably should…

 

 

From Servers to Service?

Should Enterprise Vendors consider becoming Service Providers? Rich Rogers of HDS tweeted this question recently, and my initial response was a fairly dismissive one.

This got me thinking: why does everyone assume that Enterprise Vendors shouldn’t become Service Providers? Is this a reasonable response, or just a knee-jerk ‘get out of my space and stick to doing what you are good at’?

It is often suggested that you should not compete with your customers; if Enterprise Vendors move into the Service Provider space, they compete with some of their largest customers, the Service Providers, and potentially with all of their customers, the Enterprise IT departments.

But the Service Providers are already beginning to compete with the Enterprise Vendors; more and more of them are looking at moving to a commodity model and not buying everything from the Enterprise Vendors, and larger IT departments are thinking the same. Some of this is due to cost, but much of it is a feeling that they can do a better job of meeting their business requirements by engineering solutions internally.

If the Enterprise Vendors find themselves squeezed by this, is it really fair that they should stay in their little box and watch their revenues dwindle away? They can compete in different ways: by moving their own products to more of a commodity model, as many are already beginning to do, or by building a Service Provider model and moving into that space.

Many of the Enterprise Vendors have substantial internal IT functions; some have large services organisations; some already play in the hosting/outsourcing space.  So why shouldn’t they move into the Service Provider space? Why not leverage the skills that they already have?

Yes, this changes their business model; they will have to ensure that they compete on a level playing field and take care not to use their internal influence on pricing and development to gain an unfair competitive advantage. But if they feel that they can do a better job than the existing Service Providers, driving down costs and improving capability in this space…more power to them.

If an online bookstore can do it, why shouldn’t they? I don’t fear their entry into the market; history suggests that they have made a bit of a hash of it so far…but guys, fill your boots.

And potentially it improves things for us all; as the vendors try to manage their kit at scale, maintain service availability, and deploy and develop an agile service, we all get to benefit from the improvements…Service Providers, Enterprise Vendors, end-users…everyone.

 

The Reptile House

I was fortunate enough to spend an hour or so with Amitabh Srivastava of EMC; Amitabh is responsible for the Advanced Software division in EMC and one of the principal architects behind ViPR. It was an open discussion about the inspiration behind ViPR and where storage needs to go. And we certainly tried to avoid the ‘Software Defined’ meme.

Amitabh is not a storage guy; in fact his previous role with Microsoft puts him firmly in the compute/server camp, but it was his experience in building out the Azure Cloud offering which brought him an appreciation of the problems that storage and data face going forward. He has some pretty funny stories about how the Azure Cloud came about and what a learning experience it was; how he came to realise that this storage stuff was pretty interesting and more complex than just allocating some space.

Building dynamic compute environments is pretty much a solved problem; you have a choice of solutions and fairly mature ones. Dynamic networks are well on the way to being solved.

But building a dynamic and agile storage environment is hard and it’s not a solved problem yet. Storage, and more importantly the data it holds, has gravity or, as I like to think of it, long-term persistence. Compute resource can be scaled up and down; data rarely scales down and generally hangs around. Data analytics just means that our end-users are going to hug data for longer. So you’ve got this heavy and growing thing…it’s not agile, but there needs to be some way of making it appear more agile.

You can easily move compute workloads and it’s relatively simple to change your network configuration to reflect these movements, but moving large quantities of data around is a non-trivial thing to do…well, at speed anyway.

Large Enterprise storage environments are heterogeneous environments; dual-supplier strategies are common, sometimes to keep vendors honest but often because there is an acceptance that different arrays have different capabilities and use-cases. Three or four years ago, I thought we were heading towards general-purpose storage arrays; we now have more niche and siloed capabilities than ever before. Driven by developments in all-flash arrays, commodity hardware and new business requirements, the environment is getting more complex, not simpler.

Storage teams need a way of managing these heterogeneous environments in a common and converged manner.

And everyone is trying to do things better, cheaper and faster; operational budgets remain pretty flat, headcounts are frozen or shrinking. Anecdotally, talking to my peers, arrays are hanging around longer and refresh cycles have lengthened somewhat.

EMC’s ViPR is an attempt to solve some of these problems.

Can you lay a new access protocol on top of already existing and persistent data? Can you make it so that you don’t have to migrate many petabytes of data to enable a new protocol? And can you ensure that your existing applications and new applications can use the same data without a massive rewrite? Can you enable your legacy infrastructure to support new technologies?

The access protocol in this case is Object; for some people Object Storage is religion…all storage should be object, so why the hell would you want some kind of translation layer? But unfortunately, life is never that simple; if you have a lot of legacy applications running and generating useful data, you probably want to protect your investment and continue to run those applications, but you might also want to mine that data using newer applications.

This is heresy to many but reflects today’s reality; if you were starting with a green-field, all your data might live in an object-store but migrating a large existing estate to an object-store is just not realistic as a short term proposition.

ViPR enables your existing file-storage to be accessible as both file and object. Amitabh also mentioned block but I struggle with seeing how you would be able to treat a raw block device as an object in any meaningful manner. Perhaps that’s a future conversation.

But in the world of media and entertainment, I could see this capability being useful; in fact I can see it enabling some workflows to work more efficiently, so an asset can be acquired and edited in the traditional manner and then ‘move’ into play-out as an object with rich metadata, but without moving around the storage environment.

Amitabh also discussed the possibility of layering HDFS onto your existing storage, allowing analytics to be carried out on data in place without moving it. I can see this being appealing, but questions around performance, locking and the like remain challenging.

But ultimately moving to an era where data persists but is accessible in appropriate ways without copying, ingesting and simply buying more and more storage is very appealing. I don’t believe that there will ever be one true protocol; so multi-protocol access to your data is key. And even in a world where everything becomes objects, there will almost certainly be competing APIs and command-sets.

The more real part of ViPR (when I say real, I mean the piece I can see a huge need for today) is the abstraction of the control-plane, making it look and work the same for all the arrays that you manage. Yet after the abomination that was Control Center, can we trust EMC to make storage management easy, consistent and scalable? Amitabh has heard all the stories about Control Center, so let’s hope he has learnt from our pain!

The jury doesn’t even really have any hard evidence to go on yet but the vision makes sense.

EMC have committed to openness around ViPR as well; I asked the question…what if someone implements your APIs and makes a better ViPR than ViPR? Amitabh was remarkably relaxed about that: they aren’t going to mess about with the APIs for competitive advantage, and if someone does a better job than them, then that someone deserves to win. They obviously believe that they are the best; and if we move to a pluggable and modular storage architecture, where it is easy to drop in replacements without disruption, they had better be.

A whole ecosystem could be built around ViPR; EMC believe that if they get it right, it could be the on-ramp for many developers to build tools around it. They are actively looking for developers and start-ups to work with ViPR.

Instead of writing tools to manage a specific array, it should be possible to write tools that manage all of the storage in the data-centre. Obviously this relies on either EMC or the other storage vendors implementing the plug-ins that enable ViPR to manage a specific array.
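To make that concrete, here is a minimal, entirely hypothetical sketch of what such a plug-in model might look like: one driver interface that management tools code against, with a driver per array family. None of these class or method names come from the actual ViPR API; they are purely illustrative.

```python
from abc import ABC, abstractmethod


class ArrayDriver(ABC):
    """Hypothetical per-vendor plug-in: every array family implements the same contract."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume and return its identifier."""

    @abstractmethod
    def report_capacity(self) -> dict:
        """Return capacity figures so tools can meter usage consistently."""


class ExampleVMAXDriver(ArrayDriver):
    """Stand-in driver; a real one would call the vendor's own management interface."""

    def create_volume(self, name: str, size_gb: int) -> str:
        return f"vmax-{name}-{size_gb}gb"

    def report_capacity(self) -> dict:
        return {"free_gb": 100_000, "used_gb": 400_000}  # illustrative numbers


class StorageController:
    """The management layer only ever talks to registered drivers, never to arrays directly."""

    def __init__(self) -> None:
        self._drivers: dict[str, ArrayDriver] = {}

    def register(self, array_id: str, driver: ArrayDriver) -> None:
        self._drivers[array_id] = driver

    def create_volume(self, array_id: str, name: str, size_gb: int) -> str:
        return self._drivers[array_id].create_volume(name, size_gb)


controller = StorageController()
controller.register("vmax-01", ExampleVMAXDriver())
print(controller.create_volume("vmax-01", "media-archive", 2048))
```

A data-centre-wide tool then only needs to know the controller’s interface; adding support for a new array becomes a matter of registering another driver.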

Will the other storage vendors enable ViPR to manage their arrays and hence increase the value of ViPR, or will it be left to EMC to do it? Well, at launch, NetApp support is already there. I didn’t have time to drill into which versions of ONTAP, however, and this is where life could get tricky; the ViPR control layer will need to keep up with the releases from the various vendors. But as more and more storage vendors look at how their storage integrates with the various virtualisation stacks, consistent and early publication of their control functionality becomes key. EMC can use this as enablement for ViPR.

If I were a start-up, for example, ViPR could enable me to fast-track the management capability of my new device. I could concentrate on the storage functionality and capability of the device and not on the peripheral management functionality.

So it’s all pretty interesting stuff, but it’s certainly not a foregone conclusion that this will succeed and it relies on other vendors coming to play. It is something that we need; we need the tools that will enable us to manage at scale, keeping our operational costs down and not having to rip and replace.

How will the other vendors react? I have a horrible suspicion that we’ll just end up with a mess of competing attempts, and it will come down to the vendor who ships the widest range of support for third-party devices. But before you dismiss this as just another attempt by EMC to own your storage infrastructure, ask yourself: if a software vendor had shipped or announced something similar, would you dismiss it quite so quickly? ViPR’s biggest strength and weakness is…EMC!

EMC have to prove their commitment to openness, and that may mean that in the short term they do things that seriously assist their competitors at some cost to their business. I think they need to treat ViPR almost as they did VMware; at one point, it was almost more common to see a VMware and NetApp joint pitch than one involving EMC.

Oh, they also have to ship a GA product. And probably turn a tanker around. And win hearts and minds, show that they have changed…

Finally, let’s forget about Software Defined Anything; let’s forget about trying to redefine existing terms; it doesn’t have to be called anything…we are just looking for Better Storage Management and Capability. Hang your hat on that…

 

Viperidae – not that venomous?

There’s a lot of discussion about what ViPR is and what it isn’t; how much of this confusion is deliberate and how much is simply the normal fog of war which pervades the storage industry is debatable. Having had some more time to think about it, I have some more thoughts and questions.

Firstly, it is a messy announcement; there’s a hotch-potch of products here, utilising IP from acquisitions and from internal EMC initiatives. There’s also an attempt to build a new narrative which doesn’t seem to work; perhaps it worked better when put into the context of an EMC World event but not so much from the outside.

And quite simply, I don’t see anything breathtaking or awe-inspiring but perhaps I’m just hard to impress these days?

But I think there are some good ideas here.

ViPR as a tool to improve storage management and turn it into something which is automatable is a pretty good idea. But we’ve had the ability to script much of this for many years; the problem has always been that every vendor has a different way of doing it, and the syntax and tools differ and are often not even internally consistent.

Building pools of capability and service and calling it a virtual array…that’s a good idea but nothing special. If ViPR could have virtual arrays which federate and span multiple physical arrays, moving workloads around within the virtual array and maintaining consistency groups and the like across arrays from different vendors, now that would be something special. But that would almost certainly put you into the data-path, and you would end up building a more traditional storage virtualisation device.

ViPR instead takes an approach where the management of the array is abstracted and presented in a consistent manner; this is not storage virtualisation, it is perhaps storage management virtualisation.

EMC have made a big deal about the API being open and that anyone will be able to implement plug-ins for it; any vendor should be able to produce a plug-in which will allow ViPR to ‘manage’ their array.

I really like the idea that this also presents a consistent API to the ‘user’, allowing the user not to care which storage vendor is at the other end; they just ask for disk from a particular pool and off it goes. This should be possible from an application, a web front-end or anything else which interacts with an API.

So ViPR becomes basically a translation layer.
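As a purely illustrative sketch of what ‘translation layer’ means here (the payload shapes and vendor names below are invented, not the real ViPR or vendor APIs), one generic request gets translated into whatever each backend happens to expect:

```python
def provision_request(pool: str, size_gb: int, vendor: str) -> dict:
    """Translate one generic 'give me disk from this pool' request
    into a vendor-specific payload. All payload formats are invented."""
    if vendor == "vendor_a":
        # Hypothetical vendor A wants capacity in MB and a pool id.
        return {"pool_id": pool, "capacity_mb": size_gb * 1024}
    if vendor == "vendor_b":
        # Hypothetical vendor B wants a storage group and a human-readable size.
        return {"storage_group": pool, "size": f"{size_gb}GB"}
    raise ValueError(f"No plug-in registered for {vendor}")


# The caller never cares which vendor sits behind the 'gold' pool.
print(provision_request("gold", 500, "vendor_a"))
print(provision_request("gold", 500, "vendor_b"))
```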

Now, I wonder how EMC will react to someone producing their own clean-room implementation of the ViPR API? If someone does a Eucalyptus to them? Will they welcome it? Will they start messing around with the API? I am not talking about plug-ins here, I am talking about a ViPR-compatible service-broker.

On more practical matters, I am also interested in how ViPR will be licensed. A capacity-based model? A service-based model? Number of devices?

What I am not currently seeing is something which looks especially evil! People talk about lock-in; okay, if you write a lot of ViPR-based automation and provisioning, you are going to be somewhat locked in, but I don’t see anything that stops your arrays working if you take ViPR out. As far as I can see, you could still administer your arrays in the normal fashion.

But that in itself could be a problem; how does ViPR keep itself up to date with the current state of a storage estate? What if your storage guys try to manage via both ViPR and the more traditional array management tools?

Do we again end up with the horrible situation where the actual state of an environment is not reflected in the centralised tool?

I know EMC will not thank me for trying to categorise ViPR as just another storage management tool ‘headache’ and I am sure there is more to it. I’m sure that there will be someone along to brief me soon.

And I am pretty positive about what they are trying to do. I think the vitriol and FUD being thrown at it is out of all proportion but then again, so was the announcement.

Yes, I know I have ignored the Object on File or File on Object part of the announcement. I’ll get onto that in a later post.

 

 

Tiny Frozen Hand

So EMC have finally announced VMAX Cloud Edition; a VMAX iteration that has little to do with technology and everything to do with the way that EMC want us to consume storage. I could bitch about the stupid branding but too many people are expecting that!

Firstly, and in many ways most importantly, the announcement is about the cost model: EMC have moved to a linear cost model. In the past, purchasing a storage array carried a relatively large front-loaded cost, because you had to purchase the controllers and so on; this meant that your cost per terabyte was high to start with, then declined, then potentially rose again as you added more controllers, and then declined again.

This led to a storage-hugging attitude: that’s my storage array and you can’t use it. A linear cost model allows IT to provide the business with a fixed cost per terabyte whether you were the first to use the array or the last. This allows us to move to a consumption and charging model that is closer to that of Amazon and the Cloud providers.
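A quick worked example, with invented numbers purely to illustrate the shape of the two models: if the controllers are a fixed up-front cost and capacity is added in tranches, the early terabytes are far more expensive than the later ones, whereas a linear model simply spreads that fixed cost across every terabyte.

```python
# Illustrative figures only - not EMC pricing.
CONTROLLER_COST = 250_000       # front-loaded hardware cost
COST_PER_TB = 400               # incremental cost per usable terabyte
FULL_CAPACITY_TB = 1000         # the array's eventual capacity (1 PB)


def front_loaded_cost_per_tb(tb_deployed: int) -> float:
    """Average cost per TB when the controller cost is paid up front."""
    return (CONTROLLER_COST + COST_PER_TB * tb_deployed) / tb_deployed


def linear_cost_per_tb() -> float:
    """Linear model: the fixed cost is amortised over the array's full capacity."""
    return front_loaded_cost_per_tb(FULL_CAPACITY_TB)


for tb in (50, 250, 1000):
    print(f"{tb:>5} TB deployed: {front_loaded_cost_per_tb(tb):7.0f} per TB (front-loaded)")
print(f"Linear model: {linear_cost_per_tb():7.0f} per TB, first user or last")
```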

It is fair to point out that EMC and other vendors already had various ways of doing this, but they could be complex and tended to rely on financial tools to achieve it.

Secondly, EMC are utilising a RESTful API to allow storage to be allocated programmatically from a service catalogue. There are also methods for metering and charging back storage utilisation. Along with an easy-to-use portal, the consumption model continues to move towards on-demand. If you work in IT and are not comfortable with this, you are in for a rough ride for quite some time.
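For flavour, this is roughly what programmatic allocation from a service catalogue looks like from the consumer side. The endpoint, payload fields and token below are entirely hypothetical rather than the actual VMAX Cloud Edition API; the point is simply that storage becomes something an application requests over REST rather than something a human carves out by hand.

```python
import requests  # pip install requests

# Hypothetical service-catalogue endpoint and credentials - illustrative only.
CATALOG_URL = "https://storage-portal.example.com/api/v1/volumes"
API_TOKEN = "replace-with-a-real-token"


def request_volume(service_tier: str, size_gb: int, project: str) -> dict:
    """Ask the catalogue for a volume from a given service tier; the response would
    typically include an identifier used later for metering and chargeback."""
    payload = {"tier": service_tier, "size_gb": size_gb, "project": project}
    response = requests.post(
        CATALOG_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(request_volume("gold", 500, "media-archive"))
```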

Thirdly, the cost models that I have seen are very aggressive; EMC want to push this model and this technology. If you want to purchase 50TB and beyond and you want it on EMC, I can’t see why you would buy any other block storage from EMC. It is almost as if EMC are forcing VNX into an SMB niche. In fact, if EMC can hit some of the price-points that have been hinted to me, everyone is in a race to the bottom. It could be a Google vs Amazon price-battle.

Fourthly, and probably obviously, EMC are likely to be shipping more capacity than an end-user requires, allowing them to grow with minimal disruption. If I were EMC, I’d ship quite a lot of extra capacity and allow a customer to burst into it at no charge for a fair proportion of the year. Burst capacity often turns into bought capacity; our storage requirements are rarely temporary, and temporary quickly becomes permanent. Storage procurement is never zipless; it always has long-term consequences, but if EMC can make it look and feel zipless…

I’m expecting EMC to move to a similar model for Isilon storage as well; it is well suited to this sort of model. And yet again, this leaves VNX in an interesting position.

Out in the cold, with a tiny frozen hand….dying of consumption.

 

Doctors in the Clouds

At the recent London Cloud Camp, there was a lot of discussion about DevOps on the UnPanel; as the discussion went on, I was expecting the stage to be stormed by some of the older members of the audience. Certainly some of the tweets and back-channel conversations were expressing incredulity at some of the statements from the panel.

Then over beer and pizza there were a few conversations about the subject, and I had a great chat with Florian Otel who, for a man who tries to position HP as a Cloud company, is actually a reasonable and sane guy (although he does have the slightly morose Scandinavian thing down pat, but that might just be because he works for HP). The conversation batted around the subject a bit until I hit on an analogy for DevOps that I liked, and over the past twenty-four hours I have knocked it around a bit more in my head. Although it doesn’t quite work, I can use it as the basis for an illustration.

Firstly, I am not anti-DevOps at all; the whole DevOps movement reminds me of when I was a fresh-faced mainframe developer; we were expected to know an awful lot about our environment and infrastructure. We also tended to interact with and configure our infrastructure with code; EXITs of many forms were part of our life.

DevOps however is never going to kill the IT department (note: when did the IT department become exclusively linked with IT Operations?) and you are always going to have specialists who are required to make and fix things.

So here goes; it is a very simple process to instantiate a human being really. The inputs are well known and it’s a repeatable process. This rather simple process however instantiates a complicated thing which can go wrong in many ways.

When it goes wrong, often the first port of call is your GP; they will poke and prod, ask questions and the good GP will listen and treat the person as a person. They will fix many problems and you go away happy and cured. But most GPs actually have only a rather superficial knowledge of everything that can go wrong; this is fine, as many problems are rather trivial. It is important however that the GP knows the limits of their knowledge and knows when to send the patient to a specialist.

The specialist is a rather different beast; what they generally see is a component that needs fixing; they often have lousy bedside manners and will do drastic things to get things working again. They know their domain really well and you really wouldn’t want to be without them. However, to be honest, are they a really good investment? If a GP can treat 80% of the cases that they are faced with, why bother with the specialists? Because having people drop dead from something that could be treated is not especially acceptable.

As Cloud and dynamic infrastructures make it easier to throw up new systems with complicated interactions with other systems, unforeseeable consequences may become more frequent. Your General Practitioner might be able to fix 80% of the problems with a magic white pill or a tweak here or there…but when your system is about to collapse in a heap, you might be quite thankful that you still have your component specialists to make it work again. Yes, they might be grumpy and miserable and their bedside manner might suck, but you will be grateful that they are there.

Yes, they might work for your service provider but the IT Ops guys aren’t going away; in fact, you DevOps have taken away a lot of the drudgery of the Ops role. And when the phone rings, we know it is going to be something interesting and not just an ingrown toe-nail.

Of course the really good specialist also knows when the problem presented is not their speciality and pass it on. And then there is the IT Diagnostician; they are grumpy Vicodin addicts and really not very nice!

Just How Much Storage?

A good friend of mine recently got in contact to ask my professional opinion on something for a book he is writing; it always amazes me that anyone asks my professional opinion on anything…especially people who have known me for many years. But as he’s a great friend, I thought I’d try to help.

He asked me how much a petabyte of storage would cost today, and when I thought it would be affordable for an individual. Both parts of the question are interesting in their own way.

How much would a petabyte of storage cost? Why, it very much depends; it’s not as much as it cost last year but not as cheap as some people might think. Firstly, it depends on what you want to do with it; capacity, throughput and I/O performance are just part of the equation.

Of course then you’ve got the cost of actually running it; 400-500 spindles of spinning stuff takes a reasonable amount of power, cooling and facilities. Even if you can pack it densely, it is still likely to fall through the average floor.

There are some very good deals to be had mind you but you are still looking at several hundred thousand pounds, especially if you look at a four year cost.

And when will the average individual be able to afford a petabyte of storage? Well, without some significant changes in storage technology, we are some time away from this being feasible. Even with 10-terabyte disks, we are talking over a hundred disks.
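The arithmetic is straightforward; the protection and filesystem overheads below are assumptions for illustration, but they show why a usable petabyte needs comfortably more than the headline hundred drives.

```python
# Rough, illustrative sizing - the overheads are assumptions, not any vendor's figures.
USABLE_TB = 1000                 # one petabyte usable
DRIVE_SIZE_TB = 10
PROTECTION_OVERHEAD = 0.25       # e.g. parity RAID plus hot spares
FILESYSTEM_OVERHEAD = 0.10       # formatting, metadata, headroom

raw_tb_needed = USABLE_TB / ((1 - PROTECTION_OVERHEAD) * (1 - FILESYSTEM_OVERHEAD))
drives = raw_tb_needed / DRIVE_SIZE_TB
print(f"Raw capacity needed: {raw_tb_needed:.0f} TB, i.e. roughly {drives:.0f} x {DRIVE_SIZE_TB}TB drives")
```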

But will we ever need a petabyte of personal storage? That’s extremely hard to say; I wonder whether we will see the amount of personal storage peak in the next decade.

And as for on-premises personal storage?

That should start to go into decline; for me it is already beginning to do so. I carry less storage around than I used to…I’ve replaced my 120GB iPod with a 32GB phone, though if I’m out with my camera I’ve probably got 32GB+ of cards with me. Yet with connected cameras coming and 4G (once we get reasonable tariffs), even this will probably start to fall off.

I also expect to see the use of spinning rust go into decline as PVRs are replaced with streaming devices; it seems madness to me that a decent proportion of the world’s storage is holding redundant copies of the same content. How many copies of EastEnders does the world need stored on locally spinning drives?

So I am not sure that we will get to a petabyte of personal storage any time soon but we already have access to many petabytes of storage via the Interwebs.

Personally, I didn’t buy any spinning rust last year and although I expect to buy some this year; this will mostly be refreshing what I’ve got.

Professionally, it looks like over a petabyte per month is going to be pretty much run-rate.

That is a trend I expect to see continue; the difference between commercial and personal consumption is going to grow. There will be scary amounts of data around, about you and generated by you; you just won’t know about it or be able to access it.

More Musings on Consumers and Cloud

I must stop looking at Kickstarter; I will end up bankrupt. But it does give some insight into what is going to drive the Cloud and Big Data from a consumer point of view.

From Ubi to Ouya to Memoto, these are all new consumer devices that are going to further drive our reliance on the Cloud and Cloud services. The latter is a life-logging device that could drive the amount of storage we require through the roof; they estimate that an individual could generate 1.5TB of data per annum, so they really don’t need a huge number of customers to be generating multiple petabytes of data per year (at 1.5TB per user, a thousand active users already works out at 1.5PB annually). And if they are successful, more devices will appear, some offering higher resolution, some potentially starting to offer video…

Many people will look at such services and wonder why anyone would be interested but it really doesn’t matter…these services are going to make today’s services look frugal in their use of storage.

And then there is the growth of voice-recognition services, where the recognition has been pushed into the Cloud, causing a massive increase in compute requirements.

Throw in non-linear viewing of media and services like Catch-Up TV and we have almost a perfect storm…

Amazon Goes Glacial

Amazon have announced a pretty interesting low-cost archive solution called Amazon Glacier; certainly the pricing, which works out at $120 per terabyte per annum with eleven nines of durability, is competitive. For those of us working with large media archives, could this be a competitor to tape and all the management headaches that tape brings? I look after a fairly large media archive and there are times when I would gladly see the back of tape forever.

So is Amazon Glacier the solution we are looking for? Well, for the type of media archives that I manage, unfortunately the answer is not yet, or at least not for all use cases. The 3-4 hour latency that Glacier introduces on a recall does not fit many media organisations, especially those with a news component; at times even the minutes that a retrieval from tape takes seem unacceptable to news editors and the like. And even at $120 per terabyte, when you are growing at multiple petabytes a year the costs fairly quickly add up.
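A rough sketch of that arithmetic; the growth figures are illustrative assumptions, and the $120 per terabyte per year is simply the headline storage rate quoted above, ignoring retrieval and request charges.

```python
# Illustrative only: an archive growing by a fixed number of petabytes each year.
PRICE_PER_TB_YEAR = 120          # USD, headline storage rate
GROWTH_PB_PER_YEAR = 2           # assumed archive growth
YEARS = 4

stored_tb = 0
total_cost = 0
for year in range(1, YEARS + 1):
    stored_tb += GROWTH_PB_PER_YEAR * 1000
    annual_cost = stored_tb * PRICE_PER_TB_YEAR
    total_cost += annual_cost
    print(f"Year {year}: {stored_tb / 1000:.0f} PB stored, ${annual_cost:,.0f} for the year")
print(f"Cumulative storage cost over {YEARS} years: ${total_cost:,.0f}")
```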

Yet this is the first storage product which has made me sit up and think that we could replace tape. If the access times were reduced substantially and it looked more like a large tape library, this would be an extremely interesting service. I just need the Glacier to move a bit faster.

Enterprise and Cloud

Anybody working in storage cannot fail to have come across the term ‘Enterprise Storage’; a term often used to justify the cost of what is a commodity item stuck together with some clever software. Ask a vendor’s salesman what makes their storage ‘Enterprise’ and you will get a huge amount of fluff with little substance. ‘Enterprise Storage’ is a marketing term.

And now we are seeing the word ‘Enterprise’ being used by some Cloud Service Providers and Cloud vendors to try to distinguish their cloud service from their competitors’, especially when trying to differentiate themselves from Amazon. Yet is this just a marketing term again? I don’t think it is, but not for entirely positive reasons.

If your application has been properly architected and designed to run on Cloud-based infrastructure, you almost certainly don’t need to be running in an ‘Enterprise Cloud’ with the extra expense that brings; if you have tried to shoe-horn an existing application into the Cloud, you might well need to consider one. That is because many Enterprise Clouds are simply hosted environments re-branded as Cloud, often virtualisation sitting on top of highly resilient hardware; they remove many of the costs of transitioning to the Cloud by not actually transitioning to a Cloud model.

A properly designed Cloud application will meet all the availability and performance requirements of the most demanding enterprise and its users, whether it runs in a commodity cloud or an Enterprise Cloud. Redeveloping your existing application portfolio may well feel prohibitively expensive, and hence many will avoid doing it. Ultimately though, many of the existing applications which live in the Enterprise Cloud will transition to a SaaS environment; CRM, ERP and other common enterprise applications are the obvious candidates. This will leave those applications which make your business special and from which you derive some kind of competitive advantage; these are the applications and architectures that you should be thinking about re-architecting and re-developing, not just dumping into an ‘Enterprise Cloud’.

Try not to buy into the whole ‘Enterprise Cloud’ thing except as a transitional step; think about what you need to do to run your business on any ‘Commodity Cloud’: how you design applications which are scalable and resilient at the application layer rather than the infrastructure layer, and how you make those applications environmentally agnostic, able to take advantage of spot pricing and brokerage. Or if you really don’t believe in the Cloud, stop pretending to, and stop using ‘Enterprise’ as camouflage.