
This was my idea!! But I have no patent and I won’t sue!!

We've been talking at work about how to do SAN to NAS conversions; it's a pain and time-consuming. At the moment, we're just copying at the host level. But I've had an idea and you are all going to tell me that I'm nuts!!

As a great number of the NAS heads out there are either bastardised Linux or BSD, why not simply allow the NAS heads to natively mount the file-systems and then present them out as a share; in the background, copy them across to the NAS's native disk format. I'm sure you could get a NetApp head to run Veritas, for example, as some sort of guest file-system. I reckon if you were really, really clever, you could take a snap of the SAN disk, mount the snap on the head, do the copy leaving the primary disk running and then do a reconciliation of the files which have changed once the bulk of the copying has been done.

This way you keep the migration traffic mostly off the network and at the SAN level.
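Something like this little two-pass sketch, in fact. To be clear, this is only a toy illustration of the idea: 'snap', 'live' and 'nas' here are ordinary local directories standing in for the mounted SAN snapshot, the live primary filesystem and the NAS head's native filesystem; every path and file is made up for the demo.

```shell
#!/bin/sh
# Toy sketch of the snap-and-reconcile idea. Local directories stand in
# for the SAN snapshot, the live primary and the NAS head's filesystem;
# all paths are illustrative, not real product commands.
set -e
work=$(mktemp -d)
snap="$work/snap"; live="$work/live"; nas="$work/nas"
mkdir -p "$snap" "$live" "$nas"

# The snap is a point-in-time copy; users carry on writing to 'live'.
echo "v1" > "$snap/a.txt"; echo "v1" > "$snap/b.txt"
cp -a "$snap/." "$live/"

# Pass 1: bulk copy from the snap while the primary stays online.
cp -a "$snap/." "$nas/"
touch "$work/bulk_done"            # marker: when the bulk copy finished

sleep 1                            # let the mtime clock tick over
echo "v2" > "$live/b.txt"          # a user changes a file mid-migration

# Pass 2: reconcile -- re-copy only files changed since the bulk copy.
( cd "$live" && find . -type f -newer "$work/bulk_done" \
      -exec cp -a {} "$nas/" \; )

cat "$nas/b.txt"                   # the reconciled copy
```

The `find -newer` trick is the crude version of the reconciliation step; a real appliance would presumably walk the filesystem's own change metadata instead.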

Okay, it's a bit Heath Robinson and there needs to be some programming done but surely this could work. I really need a SAN/NAS migration appliance; someone please build one? Pretty please?

P.S. And if someone has already patented this or done it!? I'm really sorry!!

P.P.S. And if someone tries it and loses all their data! Well, I'm really, really sorry!!


10 Comments

  1. Nice!
    I know of one NAS head that already does this, however: The DROBOShare! It uses a USB-attached DROBO, and you can freely switch between USB-attached and NAS-attached.

  2. Martin G says:

    I was kind of thinking of something a little larger!! I can do it with my Linksys as well!!!

  3. Chuck Hollis says:

    Hi Martin
    There is a popular solution out there to just what you’re talking about.
    Thousands and thousands of implementations, doing just what you’re doing.
    (prepare yourself for product plug)
    Simply pointing a SAN device at a NAS head won’t work. Every NAS device has its own filesystem structure and metadata that’s fundamentally incompatible with every other filesystem implementation at a representational level.
    But you can do transparent migration at a filesystem level, without using the “copy” command 😉
    File virtualization solutions create a “virtual file system” above NAS devices (and servers with file systems!) that appear transparent to the user.
    They can nondisruptively migrate old to new, and users and apps are usually blissfully unaware that anything has changed, even while data is being moved.
    The best ones are completely agnostic to source and destination; EMC’s RainFinity is a good example.
    All sorts of neat options on selective processing, archiving, tiering, re-laying things out, etc.
    Don’t know if this is what you were thinking of, or if any product out there meets your needs, but I thought I’d mention it.
    — Chuck

  4. Martin G says:

    I think Stephen got it and I think Chuck completely missed it.
    Let’s take for example a NetApp head and let’s just assume that it runs some kind of Unix as its operating system. Now, apart from a SMOP, there should be no reason why it could not run Storage Foundation, allowing it to understand VxFS etc. Actually, you could probably use the various open-source implementations of NTFS etc. as well. Once the head can understand the ‘foreign filesystem’, it could copy the files from the foreign filesystem to its own Platypus-based filesystem. I suspect this might actually be easier than implementing any one of a number of file-virtualisation appliances.
    Now, things like the NSLU2 can already do this in the home-space; it’s just never been done as far as I know in the Enterprise space. Not the copying from a guest file-system into its own format, anyway. I’ve done it on an NSLU: plugged two USB devices in and then copied the FAT32 volume to an EXT3 volume, all while still accessing the files via CIFS. I then did a true-up of those files which were locked/changed. It minimised the amount of time I lost access to my MP3s.
    I might suggest someone like Exanet TPOC it; should be very trivial for them.
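    For what it’s worth, that NSLU2 trick is really just the classic two-pass copy pattern. A toy version, with local directories standing in for the FAT32 and EXT3 USB volumes (and assuming rsync is available):

    ```shell
    #!/bin/sh
    # Toy version of the NSLU2 migration: two local directories stand in
    # for the FAT32 and EXT3 USB volumes. rsync does the bulk copy, then
    # a second pass trues up whatever changed (or was locked) meanwhile.
    set -e
    top=$(mktemp -d)
    fat32="$top/fat32"; ext3="$top/ext3"
    mkdir -p "$fat32" "$ext3"
    echo "song one" > "$fat32/one.mp3"
    echo "song two" > "$fat32/two.mp3"

    # Pass 1: bulk copy while the share stays accessible over CIFS.
    rsync -a "$fat32/" "$ext3/"

    # A file is retagged during the long bulk copy...
    echo "song two, retagged" > "$fat32/two.mp3"

    # Pass 2: the true-up -- rsync only transfers what differs.
    rsync -a "$fat32/" "$ext3/"
    ```

    The second rsync pass is the “true-up”: because rsync compares size and mtime, it only moves the handful of files that changed while the bulk copy was running.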

  5. Ronald says:

    I may be an id10t but doesn’t Microsoft’s DFS do this??

  6. Martin
    I had to do the same thing on a Linksys NSLU2 too. I had 2x 250GB drives full of stuff and (I think due to the sheer number of files) the NSLU2 started to lose track of things and I thought I’d lost everything. I found a read-only EXT3 driver for Windows which allowed me to mount the drives to Windows and copy the data off. So, what you’re asking for is a SAN-based version of that process, taking LUNs from elsewhere. What format do you expect the LUNs to be in (i.e. what source systems)? For instance, I’m wondering whether Windows VxVM can read other formats directly.

  7. inch says:

    rhel + foundation suite…. 🙂
    It’s cheap too (well, at least for four volumes!!)
    go symantec for giving away foundation suite for a 2cpu linux machine!

  8. Martin G says:

    I expect the majority of my volumes to be in VxFS under Solaris.
    It’s just intrigued me that no-one has done it. For example, Acopia could offer a bundled additional ‘appliance’ to enable migration of block into a NAS environment and then into their global namespace.
    The appliance could be as simple as a Lintel box running Foundation Suite. Just a thought.

  9. Chuck Hollis says:

    Sorry, Martin, if you think I missed it.
    I thought you wanted to solve a practical migration problem of getting your Solaris file systems (presumably supported by VxFS) to a different NAS device, and not have to explicitly copy everything, disrupt users, etc.
    If that’s not the case, I misunderstood.
    You proposed teaching the NAS head about VxFS.
    While interesting, I know that likely won’t be practical, because there’s an additional layer of file system implementation semantics that has to be known by the NAS head.
    Additionally, since the likely use case for such a capability is occasional migration, I would think that customers would be looking for a more generic rather than specific capability.
    Best of luck with the alternative approach!
    — Chuck

  10. Martin G says:

    Chuck,
    I’m kind of confused about these additional file system implementation semantics. This whole conversation is predicated on a ‘filer’, in a lot of cases, basically being built on commodity hardware running Linux or some other Unix-like OS (a lot of them are). If a head can be given the extra functionality to understand guest file-systems/volume managers, it should be possible to take a ‘snap’ of your block storage, mount the snap and start copying the data into the head’s native file-system, which would allow me to use the head’s advanced functionality such as snaps, clones, dedupe etc. If the head is already participating in a security domain of some sort, it should also be possible for ownership/permissions of files to be maintained.
    I can keep the migration traffic off my network, and it will also probably run faster. And yes, it is a very specific use-case, but there’s a fair number of people who want to move block data into filers for various reasons, some even on a fairly regular basis. I can give you a real-world example.
    Currently, all our dev/test sits on block storage; there is a desire to move a lot of it to NAS.
    We refresh dev/test from production, which is always going to stay block. It would be interesting if I could ‘snap’ production, mount those production file-systems on a NAS head and then copy into the head’s native file-system.
    At the moment, I am going to have to use a generic server and then copy to a mounted NFS volume; when we are talking about large file-systems, this takes time and I really want to be able to offer a quicker refresh to the development teams. Doing the copy at SAN speeds could enable me to speed things up.
