
NFS, VMware and Unintended Consequences!

When VMware allowed NFS to be used as a valid datastore for ESX, I wonder if they really knew what they were doing? Until that time, NFS was really a Unix-geek ghetto and not something that many people played with. Yes, you were starting to be able to run Oracle and other serious workloads over NFS, but I would argue quite strongly that it was VMware which brought NFS to the fore as a storage consolidation protocol.

NetApp were doing pretty well, but the introduction of NFS into ESX completely changed their outlook; before NFS and VMware, NetApp were being forced to support block storage to gain really serious traction in the data-centre. It was an uphill struggle for NetApp to get their block storage play accepted by the market; a mixture of competitor FUD and internal naivety made this a long process.

But ESX over NFS allowed NetApp to become a serious player in the data-centre, and pretty much at the expense of their great rival and VMware's owner, EMC. I'm not saying that NetApp would not be the runaway success they are today without this, but it moved VMware firmly and squarely into NetApp's sweet spot.

NFS is now being pushed as a way to simplify your storage infrastructure; something which will cause much amusement for many long-term sys-admins who have seen some real hashes made with NFS in traditional environments. NFS has always worked best in well-designed environments with simplicity kept in mind; unfortunately, its ease of use has allowed it to be abused time and time again.

However, VMware environments are architecturally simple and it is unlikely that we will see NFS abused in the same way, so here it makes a huge amount of sense.

Mounts are very easy to manage and it is unlikely that you are going to get into horrible cross-mounting situations. The server-focused security model works very well and you only have a very limited number of users to worry about; in more traditional NFS environments, it was not uncommon to find UID and GID mismatches, confusion often exacerbated by the varying implementations of NIS running alongside NFS.
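For those who never lived through it, here is a minimal sketch of what such a mismatch looks like; the hosts, usernames and UIDs are made up for illustration, but the underlying behaviour (classic NFSv3 with AUTH_SYS trusting the client's numeric UID) is the real culprit:

```python
# Minimal sketch of a classic NFSv3 UID mismatch.
# Hypothetical hosts and UIDs for illustration only: NFSv3 (AUTH_SYS)
# sends the client's numeric UID over the wire, and the server takes
# it at face value, so ownership is only meaningful if every host
# agrees on the UID-to-username mapping (the job NIS was meant to do).

# /etc/passwd extracts from two hosts that were never kept in sync
passwd_alpha = {1001: "alice", 1002: "bob"}      # host "alpha"
passwd_beta  = {1001: "bob",   1002: "carol"}    # host "beta"

def file_owner_as_seen_from(passwd: dict, uid: int) -> str:
    """Resolve the numeric UID that the NFS server stored with the file."""
    return passwd.get(uid, f"unknown-uid-{uid}")

# alice on alpha creates a file; the server records only her UID, 1001
file_uid = 1001

print("alpha sees the file as owned by:", file_owner_as_seen_from(passwd_alpha, file_uid))
print("beta  sees the file as owned by:", file_owner_as_seen_from(passwd_beta, file_uid))
# alpha sees alice; beta sees bob. bob on beta now has full owner
# rights to alice's file, which is exactly the mismatch described above.
```

In a VMware environment, the only 'users' the filer has to agree with are a handful of ESX hosts, which is why this whole class of problem largely disappears.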

It is almost as if NFS was designed with VMware in mind. Yet again, this is amusing because x86 virtualisation seriously damaged Sun's market and was one of the factors leading to the eventual demise of Sun as an independent company.

So VMware on NFS has arguably damaged both the creators of NFS (Sun) and the owners of VMware (EMC). Funny how things turn out.


8 Comments

  1. Your points don’t seem to support your conclusion. I don’t see you mention anywhere any “damage” to NFS due to its use with VMware, or vice versa.
    Perhaps you mean the opposite? VMware on NFS datastores is relatively simple and works well. Win-win.
    Also, I don’t see any unintended consequences that the title promised.

  2. Martin G says:

    The unintended consequences are of the decision to allow VMware on NFS; it allowed NetApp to become a lot stronger in the data centre. Do you really think EMC wanted that to happen? A NAS-centric data centre was almost certainly not in EMC’s mind, and probably in no-one else’s mind either. NetApp certainly had/has the strongest NAS product on the market and at one point, this was by a country mile.
    If VMware had been a block-storage-only product, I would argue very strongly that NetApp would not be making the ground they are at the moment. Certainly, the unified storage message is not as strong if you have a data centre dominated by block storage.
    And NFS has greatly simplified the deployment model for VMware; x86 virtualisation certainly impacted Sun. And NFS came out of Sun.
    Will VMware’s development impact NFS’s development? I think that it might; if there comes a time when NFS’s greatest usage is in VMware environments, it almost certainly will. I have asked NetApp whether they know what percentage of their NFS deployments are hosting VMware.

  3. Martin,
    Nice post. Funny, IDC stated that without VMware (please remember, lowercase w in VMware), iSCSI would have been a dead protocol. It’s a “virtual” block interface, thus perfect for FCoE. Every protocol and vendor messaging is getting re-cast through the lens of virtualization (and of course cloud these days too).

  4. Jeremy Barth says:

    In my view there are two ways in which NFS still isn’t first-class storage in VMware:
    1. In the absence of parallel NFS (pNFS), which has been promised for years but still hasn’t really materialized, if you don’t have 10 GigE you’re limited to a single 1 GigE connection per NFS datastore, similar to the way it was with iSCSI until vSphere 4 was released with its ability to do iSCSI multipathing. Some would argue that this is just a design tradeoff with NFS and you can simply create more datastores, but I’d still argue that one option fewer is one option fewer.
    2. Even in v4.1, VMware still does not officially support Microsoft Cluster Service (MSCS) on anything but FC. This isn’t a knock on NFS, since VMware doesn’t support iSCSI for MSCS either. Many people do it anyway 😉 but IMHO one can’t say NFS is first-class storage in VMware-land so long as there’s no blessing for MSCS.

  5. Peter says:

    Nice post, well described.
    Reading the comments proves that there is still a lot of naivety out there…

  6. About Jeremy’s comment (and apologies for any mistakes in my English).
    I cannot figure out why a network service like NFS should be limited to a Gig link in the year 2010.
    In fact, one of the advantages of using an Ethernet infrastructure is that you can actually use the facilities and features of a common and extensively known environment like networks.
    If you can team network links in ESX servers, trunk switch ports and use VIFs on the FAS, why should anyone be limited to 1 Gig speed?
    Maybe I got something wrong here.
    Cheers
    Juan

  7. comment says:

    You’re not going to use both links between an ESX host and the NetApp with teaming. You’re only going to use one link per source/destination address pair. NetApp doesn’t support exporting the same NFS share to multiple IPs using IP aliasing, so this is a real limitation for teaming. You may be able to do it, but it’s not supported. So you really are limited to 1 Gbps, even if you have aggregated multiple links.

  8. Sorry, comment, but I disagree.
    One link may consist of more than one port; that is, several layer-2 connections are treated as one layer-3 entity, an IP address.
    You can team NICs to do that, you can define trunks and LACP links on switches, and you can define VIFs on a FAS. A teamed set of NICs shares a single IP address, and so does a VIF.
    I still can’t see the problem.
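Comments 7 and 8 are arguably both right, just at different layers. A VIF or LACP trunk does present a single IP address, but link aggregation typically chooses a member link by hashing each conversation's addresses, so any one ESX-host-to-filer conversation still rides a single physical link. Below is a minimal sketch of that per-flow selection; the hash, the port names and the IP addresses are made up for illustration and are not NetApp's or VMware's actual algorithm:

```python
# Minimal sketch of per-flow link selection on an aggregated trunk.
# Illustrative only: real implementations (EtherChannel, LACP, NetApp
# VIFs, ESX "route based on IP hash") use their own hash functions,
# but all of them map one source/destination pair to one member link.

LINKS = ["1GbE-port-a", "1GbE-port-b"]  # a two-port trunk behind one IP

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Choose a member link by hashing the conversation's endpoints."""
    # XOR of the last octets stands in for the real hash here
    src_last = int(src_ip.rsplit(".", 1)[1])
    dst_last = int(dst_ip.rsplit(".", 1)[1])
    return LINKS[(src_last ^ dst_last) % len(LINKS)]

esx_host = "10.0.0.11"         # one VMkernel IP on the ESX host
filer    = "10.0.0.50"         # the single IP of the VIF/trunk

# Every packet of this conversation hashes to the same member link,
# so this datastore's traffic is capped at that one link's 1 Gbps...
print(pick_link(esx_host, filer))   # always the same port

# ...whereas different source addresses (more hosts, or more VMkernel
# ports mounting datastores via different addresses) can spread out:
for host in ("10.0.0.11", "10.0.0.12", "10.0.0.13"):
    print(host, "->", pick_link(host, filer))
```

So aggregation spreads load across many conversations, which is what comment 8 describes, while comment 7's cap still holds for any single datastore mounted over one pair of addresses; more datastores over more addresses is the usual workaround.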
