
Not So Potty

Virtual Openness

I don’t always agree with Trevor Pott, but this piece on ServerSAN, VSAN and storage acceleration is spot on. The claim that running VSAN in the kernel brings performance advantages, and indeed the comments I’ve heard about reliability, support and the like over competing products, is very much one that has left me scratching my head and feeling very irritated.

If running VSAN in the kernel is so much better, and it almost feels that it should be, it raises another question: perhaps I would be better off running all my workloads on bare-metal, or as close to it as I can get.

Or perhaps VMware need to allow a lot more access to the kernel, or provide a pluggable architecture that allows various infrastructure services to run at that level. There are a number of vendors that would welcome that move, and it might actually hasten the adoption of VMware yet further, or at least take out some of the more entrenched resistance around it.

I do hope more competition in the virtualisation space will bring more openness to the VMware hypervisor stack.

And it does seem that we are moving towards data-centres which host competing virtualisation technologies; so it would be good if, at a certain level, these became more infrastructure-agnostic. From a purely selfish point of view, it would be good to have the same technology to present storage space to VMware, Hyper-V, KVM and anything else.

I would like to easily share data between systems that run on different technologies and hypervisors; if I use VSAN, I can’t do this without putting some other technology on top.

Perhaps VMware don’t really want me to have more than one hypervisor in my data-centre, in the same way that EMC would prefer that all my storage was from them…but EMC have begun to learn to live with reality, and perhaps they need to encourage VMware to live in the real world as well. I certainly have use-cases that utilise bare-metal for some specific tasks, but that data does find its way into virtualised environments.

Speedy Storage

There are many products that promise to speed up your centralised storage, and they work very well, especially in simple use-cases. Trevor calls this Centralised Storage Acceleration (CSA); some are software products, some come with hardware devices and some are a mixture of both.

They can have a significant impact on the performance of your workloads; databases especially can benefit from them (although most databases benefit more from decent DBAs and developers); they are a quick fix for many performance issues and remove the bottleneck that is spinning rust.

But as soon as you start to add complexity, such as clustering, availability and moving beyond basic write-cache functionality…they stop being a quick fix and become yet another system to manage and to go wrong.
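To make the write-cache point concrete, here is a minimal sketch, in Python with entirely invented names, of the sort of write-back caching a CSA performs: writes are acknowledged from local flash and destaged to the slower array later. On a single node it really is simple; the trouble starts when acknowledged-but-undestaged writes have to survive a node failure or stay coherent across a cluster.

```python
# A minimal write-back cache in front of a slow backing store.
# All names are invented for illustration; real CSA products do far more.

class WriteBackCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # the slow, centralised array
        self.cache = {}                     # block -> data; stands in for local flash
        self.dirty = set()                  # blocks not yet written back

    def write(self, block, data):
        # Acknowledge immediately from flash; the array sees the write later.
        self.cache[block] = data
        self.dirty.add(block)

    def read(self, block):
        # Serve hot blocks from flash; fall through to the array on a miss.
        if block not in self.cache:
            self.cache[block] = self.backing_store.read(block)
        return self.cache[block]

    def destage(self):
        # Background flush of dirty blocks to the array. This is where the
        # complexity lives: lose this node before destaging and you lose
        # acknowledged writes; cluster it and you need distributed coherence,
        # i.e. yet another system to go wrong.
        for block in list(self.dirty):
            self.backing_store.write(block, self.cache[block])
            self.dirty.discard(block)
```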

Fairly soon, that CSA becomes something a lot closer to a ServerSAN, and you are sticking that in front of your expensive SAN infrastructure.

The one place where a CSA becomes interesting is as Cloud Storage Acceleration: a small amount of flash storage on the server, with the bulk of the data sitting in a cloud of some sort.

So what is going on?

It is unusual to have so many competing deployment models for infrastructure; in storage, the number just keeps increasing.

  • Centralised Storage – the traditional NAS and SAN devices
  • Direct Attached Storage – Local disk with the application layer doing all the replication and other data management services
  • Distributed Storage – Server-SAN; think VSAN and competitors

And we can layer an acceleration infrastructure on top of those; this acceleration infrastructure could be local to the server or perhaps an appliance sitting in the ‘network’.
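As a rough sketch, with all class names invented for illustration, the acceleration layer can be thought of as a wrapper that composes over any of the three models above; which is also why a CSA blurs so easily into something ServerSAN-shaped:

```python
# Sketch: acceleration as a layer over any storage deployment model.
# Every class here is a hypothetical, in-memory stand-in.

class BlockStore:
    """Common block interface; a dict stands in for real media."""
    def __init__(self):
        self.blocks = {}

    def read(self, block):
        return self.blocks.get(block)

    def write(self, block, data):
        self.blocks[block] = data

class CentralisedArray(BlockStore):   # traditional SAN/NAS
    pass

class LocalDisk(BlockStore):          # direct-attached storage
    pass

class DistributedStore(BlockStore):   # Server-SAN, VSAN-style
    pass

class Accelerator(BlockStore):
    """Local flash in front of any BlockStore backend."""
    def __init__(self, backend):
        super().__init__()            # self.blocks acts as the flash tier
        self.backend = backend

    def read(self, block):
        if block not in self.blocks:  # miss: fall through to the backend
            self.blocks[block] = self.backend.read(block)
        return self.blocks[block]

    def write(self, block, data):
        self.blocks[block] = data
        self.backend.write(block, data)  # write-through, for simplicity

# The same accelerator composes over any of the three models:
fast_san   = Accelerator(CentralisedArray())
fast_local = Accelerator(LocalDisk())
fast_vsan  = Accelerator(DistributedStore())
```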

All of these have use-cases, and the answer may well be that to run a ‘large’ infrastructure you need a mixture of them all.

Storage was supposed to get simple, and we were supposed to focus on the data and on providing data services. I think people forgot that just calling something a service doesn’t make it simple or make the problems go away.

2 Comments

  1. Etherealmind says:

You said “From a purely selfish point of view, it would be good to have the same technology to present storage space to VMware, Hyper-V, KVM and anything else.”

Why?

One of the larger problems that I experience dealing with storage teams is that all storage problems have to fit into an existing solution, regardless of whether that is a good or bad choice.

In networking, we have different products for every problem: firewalls, switches of all sizes, routers for different problems. All of those come together to become the “one network” and operate as a single system.

Storage has many different requirements – slow/large, fast/small, distributed/centralised. So why would you expect to purchase a single unit that does everything? That’s poor logic and leads to complex and expensive products called “storage arrays”.

    Storage needs more diversity, not less. Buy lots of different products, choose different technology and have many vendors to meet differing needs.

    1. storagebod says:

My workflows involve moving data between systems of different types; my ingest system might be running on big-iron with full-on grunt, but once the data is processed, it might be shared by a virtualised farm. Without some kind of common access and a shared storage name-space, that workflow is less than optimal.

So much of our time and money is spent moving and presenting the same data to multiple systems. I actually agree that we will have different storage for different use-cases, but those use-cases may be complex and cross technological boundaries.

I’m not asking for a single unit that does everything; the storage arrays you talk about do everything, but in such a way that it is not especially useful. Many of them are lousy at sharing data in any useful manner. Workflows are getting more complex; the Internet of Things will drive some interesting developments in data collection, data processing and information presentation.

      It’s about building a One Storage Space…in the same way that you have One Network.
