
The Last of the Dinosaurs?

Chris ‘The Storage Architect’ Evans and I were having a Twitter conversation during the EMC keynote where they announced the VMAX 40K; Chris was watching the live-stream and I was watching the Chelsea Flower Show. From Chris’ comments, I think I got the better deal.

But we got to talking about the relevance of the VMAX and the whole bigger-is-better thing. Every refresh, the VMAX just gets bigger and bigger, with more spindles and more capacity. Of course, EMC are not the only company guilty of the bigger-is-better hubris.

VMAX and the like are the ‘Big Iron’ of the storage world; they are the choice of the lazy architect. The infrastructure patterns they support are incredibly well understood and textbook, but do they really support cloud-like infrastructures going forward?

Now, there is no doubt in my mind that you could implement something which resembles a cloud, or let’s say a virtual data-centre, based on VMAX and its competitors. Certainly if you were a Service Provider with aspirations to move into the space, it’s an accelerated on-ramp to a new business model.

Yet just because you can, does that mean you should? EMC have done a huge amount of work to make it attractive; an API that enables you to programmatically deploy and manage storage allows portals to be built to encourage a self-service model. Perhaps you believe that this will allow light-touch administration and the end of the storage administrator.

And then Chris and I started to talk about some of the realities: change control on a box of this size is going to be horrendous. In your own data-centre, co-ordination is going to be horrible, but as a service provider? Well, that’s going to be some interesting terms and conditions.

Then there’s migration: in your own environment, migrating a petabyte array in a year means moving roughly 20 terabytes a week. Depending on your workload, year-ends, quarter-ends and known peaks, your window for migrations could be quite small. And depending on how you do it, it is not necessarily non-service-impacting; mirroring at the host level means significantly increasing your host workload.
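The arithmetic above is worth making explicit: given an array capacity, a migration deadline, and a weekly change window, what sustained throughput must the migration achieve? A quick back-of-envelope sketch (the capacity, deadline and window figures are illustrative assumptions, not from any particular array):

```python
# Back-of-envelope migration arithmetic. All figures are illustrative.

def weekly_migration_tb(capacity_tb: float, weeks: float) -> float:
    """Terabytes that must move per week to drain the array in time."""
    return capacity_tb / weeks

def required_throughput_mbps(tb_per_week: float, window_hours_per_week: float) -> float:
    """Sustained MB/s needed if migration only runs inside the change window."""
    bytes_per_week = tb_per_week * 1e12
    seconds_per_week = window_hours_per_week * 3600
    return bytes_per_week / seconds_per_week / 1e6

capacity_tb = 1000     # a petabyte array
weeks = 52             # one year to complete the migration
tb_week = weekly_migration_tb(capacity_tb, weeks)
print(f"{tb_week:.1f} TB per week")   # roughly the 20 TB/week in the text

# If change control only allows, say, 20 hours of migration a week,
# the sustained rate inside that window is considerably less forgiving:
print(f"{required_throughput_mbps(tb_week, 20):.0f} MB/s sustained")
```

The second figure is the point: shrink the window around quarter-ends and known peaks and the required sustained rate climbs quickly, which is exactly the host-workload impact the mirroring approach runs into.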

As a service provider, you have to know a lot about workloads that you don’t really influence and don’t necessarily understand. As a service-provider customer, you have to have a lot of faith in your service provider. When you are talking about massively shared pieces of infrastructure, this becomes yet more problematic. You are going to have to reserve capacity and capability to support migration; if you find yourself overcommitting on performance, i.e. you assume that peaks don’t all happen at once, you have to understand the workload impact of migration.
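The overcommit problem above can be sketched in a few lines: if you assume only some fraction of tenant peaks coincide, how much headroom is left for migration traffic when that assumption is tested? All the names and numbers below are invented for illustration:

```python
# Hypothetical sketch of the overcommit-plus-migration problem.
# Tenant peaks, coincidence factor and array limits are all invented.

def effective_peak_iops(tenant_peaks, coincidence=0.6):
    """Assumed worst-case simultaneous load: sum of tenant peaks,
    scaled by the fraction you believe can coincide."""
    return sum(tenant_peaks) * coincidence

def migration_headroom(array_max_iops, tenant_peaks, migration_iops,
                       coincidence=0.6):
    """IOPS left after tenant peaks and migration traffic.
    Negative means the migration eats into tenant service levels."""
    return (array_max_iops
            - effective_peak_iops(tenant_peaks, coincidence)
            - migration_iops)

tenant_peaks = [40_000, 25_000, 30_000, 15_000]   # per-tenant peak IOPS
print(migration_headroom(100_000, tenant_peaks, 20_000))
```

Raise the coincidence factor towards 1.0 (a quarter-end, say) and the headroom goes negative, which is precisely the case where a service provider needs to understand workloads it doesn’t control.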

I am just not convinced that these massively monolithic arrays are entirely sensible; you can certainly provide secure multi-tenancy, but can you prevent one tenant’s behaviour from impacting the availability and performance of another’s data? And can you do it in all circumstances, such as code-level changes and migrations?

And if you’ve ever seen the back-out plan for a failed Enginuity upgrade; well the last time I saw one, it was terrifying.

I guess the phrase ‘Eggs and Baskets’ comes to mind; yet we still believe that bigger is better when we talk about arrays.

I think we need to have some serious discussion about optimum array sizes to cope with exceptions and with what happens when things go wrong, and then some discussion about the migration conundrum. Currently I’m thinking that a petabyte is as large as I want to go; as for the number of hosts/virtual hosts attached, I’m not sure. Although it might be better to think about the number of services an array supports and what can co-exist, both performance-wise and availability-window-wise.

No, the role of the Storage Admin is far from dead; it’s just become about administering and managing services as opposed to LUNs. Yet, the long-term future of the Big Iron array is limited for most people.

If you as an architect continue to architect all your solutions around Big Iron storage…you could be limiting your own future and the future of your company.

And you know what? I think EMC know this…but they don’t want to scare the horses!


5 Comments




  4. First, allow me to note that there’s still a very decent market for IT dinosaurs (mainframes AND big storage). There are many enterprises whose enterprise-class data needs exceed the previous 2PB capacity of a maxed-out VMAX.

    To extend your analogy, when you have more eggs than fit in one basket, you’d rather get a larger basket than try to carry 2 or 3 🙂

    Related, VMAX SP introduces multi-tenancy administration, operation, charge-back, etc. – the feature-foundation of Cloud Service Providers. And features like Federated Live Migration and VPLEX are a start to load-balancing across arrays.

    In fact, pretty much all the features that EMC announced at EMC World provide incremental movement toward enabling the Hybrid Cloud – even while delivering immediate value to customers and markets who are just beginning their transformation.

  5. Barry

    Unfortunately I don’t have access to the latest Enginuity release notes, however I’m interested in the multi-tenancy features. Has multi-tenancy enabled multiple symconfigure commands to be run in parallel? Obviously in a multi-tenant environment there should be the ability to multi-thread configuration changes. What about SRDF & Timefinder failover, how many concurrent operations can be executed against one array?

    Chris
