
VSANity?

So VSAN is finally here in a released form; on paper, it sure looks impressive but it’s not for me.

I spend an awful lot of time looking at Scale-Out Storage systems, looking at ways to do them faster, cheaper and better. And although I welcome VMware and VSAN to the party, I think their product falls some way short of the mark. But then I don't think I'm really the target market; it's not really ready or appropriate for Media and Entertainment, or for anyone interested in HyperScale.

But even so I’ve got thoughts that I’d like to share.

So VSAN is better because it runs in the VMware kernel? This seems logical, but it ties VSAN to VMware in a way that some of the competing products are not tied; if I want to run a Gluster cluster which encompasses not just VMware but also Xen, bare metal and anything else, I could. And there might be some excellent reasons why I would want to do so; I'd transcode on bare-metal machines, for example, but might present out on VM-ed application servers. Of course, it is not only Media and Entertainment who have such requirements; there are plenty of other places where heavy lifting would be better done on bare metal.

I think that VMware need to be much more open about allowing third-party access to the kernel interfaces; they should allow more pluggable options, so that I could run GPFS, ScaleIO, Gluster or StorNext within the VMware kernel.

VSAN limits itself by tying itself so closely to the VMware stack; its scalability is limited by the current cluster size. Now, there are plenty of good architectural reasons for doing so, but most of these are enforced by a VMware-only mindset.

But why limit it to only 35 disks per server? An HP ProLiant SL4540 takes 60 disks, and there are SuperMicro chassis that take 72. Increasing the spindle count increases not only the maximum capacity but also the raw IOPS of the solution. Of course, there might be some saturation issues with regard to the inter-server communication.
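As a minimal back-of-the-envelope sketch of that scaling in Python; the per-disk capacity and IOPS figures are illustrative assumptions rather than vendor numbers, and replication overhead is ignored:

    # Rough raw capacity and IOPS per node as spindle count grows.
    # Per-disk figures are illustrative assumptions, not VSAN or vendor specs.
    DISK_CAPACITY_TB = 4      # e.g. a 4TB NL-SAS drive
    DISK_RANDOM_IOPS = 75     # ballpark random IOPS for a 7.2K spindle

    def node_raw_totals(disks_per_node):
        """Return (raw TB, raw IOPS) for one node, before any replication."""
        return disks_per_node * DISK_CAPACITY_TB, disks_per_node * DISK_RANDOM_IOPS

    for disks in (35, 60, 72):   # VSAN's limit, HP SL4540, a 72-bay SuperMicro
        tb, iops = node_raw_totals(disks)
        print(f"{disks} disks/node: ~{tb} TB raw, ~{iops} raw IOPS")

Whatever figures you plug in, the raw totals scale linearly with the number of spindles, which is why the 35-disk ceiling feels arbitrary next to the denser chassis on the market.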

Yet I do think it is interesting how the converged IT stacks are progressing and how the approaches differ; VMware itself is pretty much a converged stack now, but it is a software-converged stack, whereas VCE and Nutanix converge onto hardware as well. And yes, VMware is currently at the core of all of this.

I actually prefer the VMware software-only approach in many ways, as I think I could scale compute and storage separately, within some boundaries. I'm not sure what the impact of having unbalanced clusters would be on VSAN, or whether it would make sense to have some Big Flipping Dense VSAN appliances rather than distributing the storage equally across the nodes.

But VSAN is certainly welcome in the market; it validates the approaches being taken by a number of other companies… I just wish it were more flexible and open.

 


2 Comments

  1. Roger Weeks says:

    Martin: It’s not just the inter-server communication that can introduce latencies or bottlenecks. It’s also the SAS interface on the host server. I’ve done a lot of research here, and I’m surprised they go as high as 35 disks. Go over 24 disks and you really start to run into issues with SAS bridges and connections.

    I’m all for the white-box storage solution (as you know) but one benefit you do get from the storage array vendors is better engineering of the backplane that the disks connect to. A NetApp FAS, for instance, has at a minimum two 4-channel 6G SAS connections to each shelf, so at least 24G guaranteed bandwidth to that shelf of disks. EMC and Hitachi arrays have similar specs.

    It’s really hard to put together a white-box disk storage array with that kind of bandwidth to the internal disks. Typically you get a SAS card with some kind of bridge once you get over 24 disks, and those bridges become a performance bottleneck (there’s a rough sketch of the arithmetic after the comments).

    Now, with 60 SATA drives, for example, you might have much less of a problem, because those disks are simply much slower. If you’re running 10K SAS disks and expect SAN-level performance, with 60 disks you simply will not get it.

    As an aside, even with the architecture mentioned above, when you look at shelf capacities from the storage array vendors, you don’t see more than 24 SAS disks in any given shelf.

    Mark Nelson on the Ceph blog has done a lot of testing of disk controllers (in the context of Ceph, of course, but this is an issue across any distributed storage model that relies on whitebox servers and controllers).

    It’s a pretty interesting read: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/

  2. Roger Weeks says:

    I also agree heartily with your point that VMware, if they want to actually embrace software-defined storage (whatever that means today), need to open their kernel to other storage software.

    Running Data ONTAP or Ceph or Gluster or StorNext in the VMware kernel sounds very interesting to me. There’s plenty of differentiation between just those four storage platforms for companies to make money, and plenty of different customer use cases as well.
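Picking up the shelf-bandwidth point from the first comment, here is a minimal back-of-the-envelope sketch in Python; the per-disk streaming rates are illustrative guesses, and it assumes a single 4-lane 6Gb/s SAS connection with 8b/10b encoding:

    # Why a single SAS link or bridge becomes the bottleneck as disk count grows.
    # 4 lanes x 6 Gb/s = 24 Gb/s line rate; 8b/10b encoding leaves roughly 2.4 GB/s usable.
    LANES = 4
    GBITS_PER_LANE = 6
    USABLE_GB_PER_SEC = LANES * GBITS_PER_LANE * 0.8 / 8   # ~2.4 GB/s

    def oversubscription(disks, mb_per_sec_per_disk):
        """Aggregate sequential disk throughput as a multiple of one SAS link."""
        aggregate_gb = disks * mb_per_sec_per_disk / 1000
        return aggregate_gb / USABLE_GB_PER_SEC

    # Illustrative streaming rates: ~150 MB/s for 10K SAS, ~100 MB/s for 7.2K SATA.
    print(f"24 x 10K SAS:   {oversubscription(24, 150):.1f}x one 4-lane 6G link")
    print(f"60 x 10K SAS:   {oversubscription(60, 150):.1f}x one 4-lane 6G link")
    print(f"60 x 7.2K SATA: {oversubscription(60, 100):.1f}x one 4-lane 6G link")

Even with these rough figures, sixty fast spindles behind a single link end up several times oversubscribed on sequential work, which is the bottleneck the first comment describes.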
