I love it when Chuck invents new market segments; ‘Entry-Level Enterprise Storage Arrays’ appears to be his latest one, and he’s a genius when he comes up with these terms. And it is always a space where EMC have a new offering.
But is it a real segment or just m-architecture? Actually, the whole Enterprise Storage Array thing is getting a bit old; I am not sure it has any real meaning any more, and it is all rather disparaging to the customer. You need Enterprise, you don’t need Enterprise…you need 99.999% availability, you only need 99.99% availability.
As a customer, I need 100% availability; I need my applications to be available when I need them. Now, this may mean that I actually only need them to be available an hour a month but during that hour I need them to be 100% available.
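To put those marketing nines in context, here is a quick sketch of what the figures actually buy you (my own illustration, assuming the availability target is spread evenly across a full year):

```python
# Downtime implied by common availability targets, assuming the
# figure applies evenly over a full year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes of permitted downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, a in [("five nines (99.999%)", 0.99999),
                 ("four nines (99.99%)", 0.9999)]:
    print(f"{label}: {downtime_minutes(a):.1f} minutes/year")
```

Roughly five minutes a year versus roughly fifty; but as I say, if those fifty minutes land inside the one hour a month I actually need the application, the extra nine bought me nothing.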
So what I look for in vendors is the way that they mitigate failure and understand my problems, but I don’t think the term ‘Enterprise Storage’ brings much value to the game; especially when it is constantly being misused and appropriated by the m-architecture consultants.
But I do think it is time for some serious discussions about storage architectures; dual-head, scale-up architectures vs multiple-head, scale-out architectures vs RAIN architectures; understanding the failure modes and behaviours is probably much more important than the marketing terms which surround them.
EMC have offerings in all of those spaces, all at different cost points, but there is one thing I can guarantee: the ‘Enterprise’ ones are the most expensive.
There is also a case for looking at the architecture as a whole; too many times I have come across the thinking that what we need to do is make our storage really available, when the biggest cause of outage is application failure. Fix the most broken thing first: if your application is down because it’s poorly written or poorly architected, no amount of Enterprise anything is going to fix it. Another $2,000 per terabyte is money you need to invest elsewhere.