
Designed to Fail

Randy Bias has written an interesting piece here on the impact of complexity on reliability and availability; as you build more complex systems, it becomes harder and harder to engineer in multiple 9s of availability. I read the piece with a smile on my face, especially the references to storage, as I was sitting with an array flat on its arse and already thinking about the DAS vs SAN argument for availability.

How many people design highly-available systems with no single point of failure until they hit the storage array? Multiple servers with fail-over capability, multiple network paths and multiple SAN connections are pretty much standard, but multiple arrays to support availability? That rarely happens. And to be honest, arrays don't fall over that often, so people tend not even to consider it until it happens to them.

An array outage is a massive headache, though; when an array goes bad, it is normally something fairly catastrophic and you are looking at a prolonged outage, but often not one so prolonged that anyone invokes DR. There are reasons for not invoking DR, most of them around the fact that few people have true confidence in their ability to run in DR, and even fewer have confidence that they can get back out of DR; but that's a subject for another blog.

I have sat in a number of discussions over the years where the concept of building a redundant array of storage arrays has been discussed, i.e. striping at the array level as opposed to the disk level. Of course, rebuild times become interesting, but it does remove the array as a single point of failure.
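To make that concrete, here's a toy sketch (in Python; the constants and names are entirely my own invention) of a RAID-5-style layout where the stripe members are whole arrays rather than individual disks, with parity rotating across the arrays so that any single array can be lost:

```python
# Hypothetical sketch: a RAID-5-style layout whose members are whole
# arrays rather than individual disks. Constants and names are my own.

STRIPE_MEMBERS = 4      # arrays in the "redundant array of arrays"

def locate(chunk: int) -> tuple[int, int, int]:
    """Map a logical chunk number to (data_array, stripe, parity_array).

    Parity rotates across the arrays per stripe, RAID-5 style, so no
    single array holds all the parity and any one array can be lost.
    """
    data_members = STRIPE_MEMBERS - 1        # one member per stripe holds parity
    stripe = chunk // data_members           # which stripe the chunk sits in
    parity_array = stripe % STRIPE_MEMBERS   # rotate parity around the arrays
    slot = chunk % data_members              # position within the stripe
    data_array = slot if slot < parity_array else slot + 1  # skip parity member
    return data_array, stripe, parity_array

for chunk in range(8):
    print(chunk, locate(chunk))
```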

But then there are the XIVs, Isilons and other clustered storage products, which are arguably extremely similar to this concept; data is striped across multiple nodes. I won't get into the argument about implementations, but it does feel to me that this is really the way that storage arrays need to go. Scale-out ticks many boxes but does bring challenges with regard to metadata and the like.
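As a crude illustration of one way scale-out designs can dodge the metadata problem, here's a toy consistent-hashing ring; any node can compute where a chunk lives without consulting a central metadata service. This is purely my own sketch; XIV, Isilon and friends each do placement rather differently.

```python
# Toy consistent-hashing ring: every node can compute where a chunk
# lives without asking a metadata server. Purely illustrative.

import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # many virtual points per node smooth out the distribution
        self.ring = sorted((self._h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _h(s):
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def node_for(self, chunk_id):
        # the first point clockwise from the chunk's hash owns it
        i = bisect(self.points, self._h(chunk_id)) % len(self.points)
        return self.ring[i][1]

ring = HashRing(["node1", "node2", "node3", "node4"])
print(ring.node_for("volume7/chunk42"))
```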

Of course, you could just go down the route of running a clustered file-system on the servers with DAS, but this does mean that the servers are going to have to cope with striping, parity and the like. Still, with what I have seen in various roadmaps, I'm not betting against this as an approach either.
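For a feel of the work that would push up onto the servers, here's a minimal sketch of the RAID-5-style XOR arithmetic involved: computing parity across a stripe and rebuilding a lost chunk from the survivors. Illustrative only; a real clustered file-system does a great deal more.

```python
# Minimal sketch of the parity work a host-side clustered file-system
# takes on: XOR parity across a stripe, and rebuilding a lost chunk
# from the survivors plus the parity. Illustrative only.

from functools import reduce

def xor_parity(chunks):
    """RAID-5-style XOR parity over equal-sized chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def rebuild(survivors, parity):
    """The missing chunk is the XOR of the survivors and the parity."""
    return xor_parity(survivors + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(stripe)
assert rebuild([stripe[0], stripe[2]], parity) == stripe[1]  # lost the middle chunk
```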

The monolithic storage array will continue for some time but ultimately, a more loosely coupled and more failure tolerant storage infrastructure will probably be in all our futures.

And I suppose I'd better find out whether that engineer has resuscitated our array yet.

 


4 Comments

  1. […] on here […] by rogerluethy on […]

  2. James Kilby says:

    Completely agree with what you have said above. I tend to work in the SME end of the market, where we would normally only have one SAN, with dual controllers and dual everything else.

    If the SAN goes, they would all be in a very bad way; however, the cost of a second SAN makes this a risk they have to live with. As an aside, we were made aware of an issue in the firmware on one of our SANs that would have killed both controllers after a period of time. This couldn't be fixed with a reset; new controllers would need to be shipped from the manufacturer. This isn't something I had ever really considered, but it's technically a SPOF as they are both running the same code.

    Striping across arrays, I imagine, would get very interesting, and you would potentially need to stripe across arrays from different manufacturers.

  3. Martin,

    We already have network-distributed RAID with LeftHand (now HP), where multiple nodes can be physically dispersed and present the same LUN. Although we have replication at the array level, there's a clear disconnect between the application and the array in terms of replication. Think of something as simple as VMware, where the LUN is replicated but can contain many virtual machines, many of which may not need to fail over.

    Your discussion gets us back to federated storage – which seems to have dropped off the radar. There’s still a long way to go in that space.

    Chris

  4. Chuck says:

    Another aspect of this is that putting more failure intelligence closer to the application can simplify the rest of the infrastructure tremendously. If you have a simple application with a client that only knows how to talk to a single web service URL, and a web service that only knows how to talk to a single database DSN, and that app is so important to your business that it can ‘never’ go down, you end up with something like a load balancer, multiple web servers, a clustered database server, two SAN fabrics and a disk array, duplicate infrastructure at a recovery site, replication software, some global load-balancing or DNS update solution, and a vast array of compatibility matrices and network requirements.

    If instead the app knows how to find a backup server, whether via a config file or ideally something centrally manageable like an SRV record, and the web service knows how to find another database, and you've made some application-level decisions about how to replicate and ensure the integrity of the data, then your infrastructure can be just about anything that meets the performance and capacity requirements of your application.
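    Something like the following sketch is about all the client-side logic that SRV-based failover needs (Python, assuming the dnspython package; the service name here is made up):

    ```python
    # Rough sketch of SRV-based discovery and failover. Assumes the
    # dnspython package; "_myapp._tcp.example.com" is a made-up service.

    import socket
    import dns.resolver

    def connect_via_srv(service="_myapp._tcp.example.com"):
        """Try each SRV target in priority order until one answers."""
        records = sorted(dns.resolver.resolve(service, "SRV"),
                         key=lambda r: (r.priority, -r.weight))
        for r in records:
            try:
                return socket.create_connection(
                    (str(r.target).rstrip("."), r.port), timeout=3)
            except OSError:
                continue  # that target is down; try the next one
        raise ConnectionError(f"no reachable server for {service}")
    ```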

    As an example, Active Directory is a good case of a fairly critical client/server application where the infrastructure ‘just works’ most of the time, keeps working when components fail (usually with no impact that a user would notice), and can bring replacement components online with minimal pain. No one would use it if you needed multiple servers with hardware load-balancers and SAN storage capable of multisite/multimaster replication at every client site just to be sure users could log on in the morning.
