
/dev/null – The only truly Petascale Archive

As data volumes increase in all industries and the challenges of data management continue to grow, we look for somewhere to store our ever-increasing data hoard, and inevitably the subject of archiving, and of tape, comes up.

Tape is the cheapest place to archive data by some way; my calculations currently put its four-year cost somewhere in the region of five to six times cheaper than the cheapest commercial disk alternative. However, tape's biggest advantage is almost its biggest problem: because it is considered cheap, for some reason no-one factors in the long-term costs.
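
For the curious, here is a minimal back-of-the-envelope sketch of that comparison in Python; every price in it is an illustrative assumption of mine, not a vendor quote.

    # Four-year archive cost per TB: tape vs. disk (illustrative assumptions)
    TAPE_MEDIA_PER_TB = 15.0    # assumed $/TB for LTO media
    TAPE_INFRA_PER_TB = 25.0    # assumed $/TB share of library, drives, slots
    DISK_PER_TB       = 120.0   # assumed $/TB for the cheapest commercial disk
    DISK_RUN_PER_TB   = 20.0    # assumed $/TB/year for power, cooling, support

    tape_4yr = TAPE_MEDIA_PER_TB + TAPE_INFRA_PER_TB  # idle tape burns no power
    disk_4yr = DISK_PER_TB + 4 * DISK_RUN_PER_TB      # disk spins 24x7, 4 years

    print(f"tape ${tape_4yr:.0f}/TB vs disk ${disk_4yr:.0f}/TB "
          f"-> disk is {disk_4yr / tape_4yr:.1f}x dearer")

With these made-up inputs the ratio lands at 5x; tweak them and it moves, but tape wins comfortably on acquisition cost every time.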

Archives by their nature live for a long time; more and more companies are talking about archives which will grow and exist forever. And since companies no longer seem able to categorise data into data to keep and data to discard, between exponential data growth and generally poor data management, multi-year, multi-petabyte archives will eventually become the norm for many.

This could spell the death of the tape archive as it stands, or it will necessitate some significant changes in both user and vendor behaviour. A ten-year archive will see at least four refreshes of the LTO standard on average; this means that your latest tape technology will not be able to read your oldest tapes. It is also likely that you are looking at some kind of extended maintenance and associated costs for your oldest tape drives; they will certainly be past End of Support Life. Media may be certified for 30 years; drives aren't.

Migration will become a way of life for these archives, and it is this that will be a major challenge for storage teams and anyone maintaining an archive at scale.

It currently takes 88 days to migrate a petabyte of data from LTO5 to LTO6; this assumes 24×7 operation, no drive issues, no media issues and a pair of drives dedicated to the migration. You will also be loading about 500 tapes and unloading about 500 tapes. You can cut this time by putting in more drives, but your costs will soon start to escalate as the SAN ports, servers and peripheral infrastructure mount up.
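
The arithmetic is easy to reproduce. A sketch, assuming LTO5's nominal native read rate of 140MB/s as the slow side of the copy and a guessed two minutes of robot time per tape; round numbers, not measurements.

    # How long to stream 1 PB off LTO5? (nominal figures, not a benchmark)
    PB         = 10**15          # bytes in a decimal petabyte
    READ_RATE  = 140 * 10**6     # bytes/s: LTO5 native rate, the slow side
    TAPES      = 500             # source tapes, per the figure above
    MOUNT_SECS = 120             # assumed load/position/unload time per tape

    streaming_days = PB / READ_RATE / 86400          # ~83 days of streaming
    mount_days     = TAPES * 2 * MOUNT_SECS / 86400  # 500 loads, 500 unloads
    print(f"{streaming_days + mount_days:.0f} days before recalls or retries")

Even this idealised version comes out at roughly 84 days; real-world positioning, verification and retries make up the rest.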

And then all you need is for someone to recall the data whilst you are trying to migrate it; 88 days is extremely optimistic.

Of course a petabyte seems an awful lot of data, but archives of a petabyte-plus are becoming less uncommon. The vendors are pushing the value of data, so no-one wants to delete what is a potentially valuable asset. In fact, working out the value of an individual datum is extremely hard, and hence we tend to place the same value on every byte archived.

So although tape might be the only economical place to store data today, as data volumes grow it becomes less viable as a long-term archive unless it is a write-once, read-never (and I mean never) archive… and if that is the case then perhaps, in Unix parlance, /dev/null is the only sensible place for your data.

But if you think your data has value, or more importantly your C-levels think that your data has value, there's a serious discussion to be had… before the situation gets out of hand. Just remember: any data migration which takes longer than a year will most likely fail.


11 Comments

  1. Jim says:

    /dev/null is great, provided it is deliberate : ). Putting an archive into a system which lacks a fast, effective migration capability can be a bit like using /dev/null accidentally.

    Long-term tape and library compatibility doesn't have to be a problem if you have the right hardware-independent software managing it. With TSM, for example, you can add new-generation tape drives, and as space within the archive is reclaimed (assuming some files expire) data will be consolidated and re-copied to new tapes, which can be of newer generations; or you can trigger this manually. This also helps to keep the media fresh and the data readable, since it isn't left sitting completely idle for years.
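
    A toy model of that reclamation behaviour, for illustration only; the names and the threshold are mine, not TSM's actual interface.

        # Toy model: once enough of an old tape has expired, the surviving
        # data is re-copied onto media of the current drive generation.
        RECLAIM_THRESHOLD = 0.6   # reclaim when 60% of a tape is expired space

        def reclaim(tapes, current_gen):
            out = []
            for gen, live in tapes:               # (generation, fraction live)
                if 1 - live >= RECLAIM_THRESHOLD:
                    out.append((current_gen, live))  # survivors to new media
                else:
                    out.append((gen, live))          # still too full; untouched
            return out

        pool = [(5, 0.3), (5, 0.9), (6, 0.2)]
        print(reclaim(pool, current_gen=7))  # mostly-expired tapes drift to LTO7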

  2. Jim says:

    I guess what my earlier comment really boils down to is a view that once you get to petabytes, migration needs to be a core capability of the system.

  3. Martin Glassborow says:

    Really, Jim? You are still going to be looking at migrations which are measured in years. Running one of the larger online TSM archives, it's not the work… it's the sheer amount of time. Yes, you can keep throwing drives at the problem, but you are going to get to the stage where it could well be that you are out of support before you are migrated. You will be dedicating drives purely to migration as well.

    And you are assuming file expiry… archive forever, or archive for 10+ years, which is heading towards forever.

    And as we head to ever-larger tape capacities, the problem gets worse. Tape could well be a victim of its own success.

  4. Jay Livens says:

    Martin,

    You make some great points. There is no doubt that the challenge of managing and migrating archives gets bigger and bigger regardless of the chosen storage platform. Tape certainly provides compelling cost savings versus disk for archive, but brings the migration issue to the forefront.

    One way that Iron Mountain helps our backup customers address this is by providing restoration services. We have copies of virtually every tape drive ever made and help customers access legacy data when needed. It seems to me that this same service model might help address the issue you mention. We could help migrate the data to newer technology or provide access to the underlying data on demand.

    You are right that migrations can be difficult, but why not rely on a partner to handle them for you? This would relieve you of the burden of purchasing, managing and maintaining the large quantities of tape drives needed for migration purposes.

  5. Martin Glassborow says:

    Jay, you make an interesting point… and actually one which reinforces mine. If I can't do it myself and I need to outsource the service to make it practical, it makes me wonder why I am doing it in the first place.

    Also, if I start outsourcing migrations and the like, it just keeps adding to the TCO.

  6. Jay Livens says:

    Martin, migrations are unavoidable regardless of strategy due to hardware EOL. As data volumes grow, the migration process gets more complex and risky. Thus, I think that migration services are likely unavoidable regardless of what storage method is used.

    I agree that migrations add cost and thus increase TCO, but I am not sure there really is a choice. As you rightly point out, the cheapest alternative is not to store the data to begin with or, alternatively, to store it securely in /dev/null. 🙂

  7. Chuck Hollis says:

    Great post, great points!

    To the extent we can come up with long-term economic models for multi-petabyte active archives (including periodic migration), everyone wins. And I agree, this will become more of the norm before long.

    While most people initially choke on the costs of disk vs. tape in this role, disk does have one potential advantage in this context: easier migration.

    For example, we’ve got a number of gigantic Isilon clusters where the customers seem to be almost always in a state of (non-disruptive) migration: new modules join the cluster, older modules are drained out, the software shuffles the data around in the background, and life goes on without drama.

    Yes, hardware migrations are unavoidable, but they certainly could be handled a lot better with available technology.

    — Chuck

  8. Tony Nelson says:

    Which brings a thought about all this data to which we somehow ascribe value. How about a mechanism that analyzes and indexes the data, keeping the most important and interesting value in a separate metadata-type archive, then sends the bulk to /dev/null?

    So as data is permanently purged, it is first analyzed and a very small percentage is kept, based on its value as determined by some form of analytics relevant to that specific business.

    For example, you might not keep every transaction of what a person bought, but you’ll have a total of how much they spent and the main category. The rest goes to /dev/null. The value would be in the analytics engine that would be created.
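
    In sketch form, with made-up transaction fields (the names and the boil-down rule are purely illustrative):

        # Boil raw transactions down to per-customer totals plus a main
        # category; the raw rows can then go to /dev/null.
        from collections import Counter, defaultdict

        transactions = [                   # stand-in for the bulk to be purged
            ("alice", "books", 12.50),
            ("alice", "books", 30.00),
            ("alice", "food",   8.00),
            ("bob",   "tools", 99.99),
        ]

        totals, categories = defaultdict(float), defaultdict(Counter)
        for customer, category, amount in transactions:
            totals[customer] += amount
            categories[customer][category] += 1

        for customer, total in totals.items():  # the small, valuable residue
            main = categories[customer].most_common(1)[0][0]
            print(customer, round(total, 2), main)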

    Just my Tuesday morning rant with coffee and a bagel….

  9. Hector Servadac says:

    Martin,

    interesting point of view, and 88 days IMHO is really optimistic. But remember, LTO can read two generations backwards: you can read LTO1 in LTO3 drives, and LTO3 in LTO5 drives.
    So… you can skip one or two migrations.
    In 10 years you go from LTO3 to LTO5 or LTO6, but you only migrate tapes once…
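
    That skipping logic in sketch form; the two-generations-back read rule is genuine LTO behaviour for the generations discussed here, while the refresh sequence is illustrative.

        # When does a drive refresh force a media migration, given that an
        # LTO drive reads media up to two generations older than itself?
        def must_migrate(media_gen, drive_gen):
            return media_gen < drive_gen - 2   # media no longer readable

        media = 3                              # archive written on LTO3
        for drive in range(4, 8):              # successive library refreshes
            if must_migrate(media, drive):
                print(f"LTO{drive} arrives: rewrite the LTO{media} media")
                media = drive                  # one rewrite covers the decade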

  10. Ben Watson says:

    I wouldn’t be happy relying on a third party to allow me to retrieve my archive data. Consider a situation where I have 5PB in deep (offline) archive with someone like Iron Mountain (which would be at least 10PB in reality – two copies) and no facility to retrieve the data myself. I’m then completely and utterly at the mercy of this third party, who could charge me whatever they like for retrievals. There would have to be some rather extensive contract negotiations to settle the fine details and mitigate this risk.

    Completely understand about migrating data between tape generations – even at full whack we can only run at ~120 MB/s (realistically), and that’s a single operation. For a large environment though, surely the number of drives in a multi-petascale library would have entered double figures, as buying drives for migration is an unfortunate way of life (we need to read, write and migrate concurrently). Surely it’s better to have multiple tape drives than oodles of disk that costs six or seven times as much, even if they might not be used continually? The key is to make sure that things like migration can run as a background task without disrupting access to the data, so migrations aren’t interrupted midway through. This is likely to be a function of the storage/data-management application though.
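
    A rough sizing sketch of that trade-off, assuming ~120 MB/s per source/target drive pair and an efficiency factor of my own choosing:

        import math

        # Drive pairs needed to migrate an archive inside a deadline;
        # 'efficiency' is an assumed fraction of wall-clock time spent
        # streaming (mounts, recalls and contention eat the rest).
        def drive_pairs(petabytes, days, mb_per_s=120, efficiency=0.7):
            usable = mb_per_s * 10**6 * 86400 * days * efficiency  # per pair
            return math.ceil(petabytes * 10**15 / usable)

        for pb in (1, 5, 10):
            print(f"{pb} PB inside a year: {drive_pairs(pb, 365)} pair(s)")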

    Definitely agree with Hector: we can have two LTO generations between migrations with the same tape drives in a library. And even then, if we’re able to abstract the actual read request, we could have several different generations of tape drive used with individual sets of media (e.g. LTO4, LTO5, LTO6 and LTO7). In this example, the LTO4 data could be phased out with one copy migrating to LTO7 whilst keeping the second copy on LTO4 tapes available for access.

    Tape’s still definitely an answer, though not by itself. A chunk of disk will be needed in any situation to buffer the stream and give us a staging/re-staging location.

    My Easter musings, fueled by tea & chocolate.

  11. […] harsh realities of enterprise IT and this visit was no different. Martin had recently blogged about “Petascale Archive” and the challenges he and other IT pros are facing managing the scale of massive data growth. As an […]
