
Flash is dead but still no tiers?

Flash is dead; it's an interim technology with no future, and yet it continues to be a hot topic and technology. I suppose I really ought to qualify the statement: Flash will be dead in the next 5-10 years, and I'm really thinking about the use of Flash in the data-centre.

Flash is important because it is the most significant improvement in storage performance since the introduction of the RAMAC in 1956; disks really have not improved that much, and although we have had various kickers which have allowed us to improve capacity, at the end of the day they are mechanical devices and are limited.

15k RPM disks are pretty much as fast as you are going to get, and although there have been attempts to build faster-spinning drives, reliability, power and heat have really curtailed those developments.

But we now have a storage device which is much faster and has very different characteristics to disk, and as such it introduces a different dynamic to the market. At first, the major vendors tried to treat Flash as just another type of disk; then various start-ups questioned that and suggested it would be better to design a new array from the ground up and treat Flash as something new.

What if they are both wrong?

Storage tiering has always been something that has had lip-service paid to it, but no-one has ever really done it with a great deal of success. And when all you had was spinning rust, the benefits were less realisable: it was hard work and vendors did not make it easy. They certainly wanted to encourage you to use their more expensive Tier 1 disk, and moving data around was hard.

But then Flash came along, with an eye-watering price-point; the vendors wanted to sell you Flash, but even they understood that it was a hard sell at the sort of prices they wanted to charge. So storage tiering became hot again: we have the traditional arrays with Flash in them and the ability to automatically move data around the array. This appears to work with varying degrees of success, but there are architectural issues which mean you never get the complete performance benefit of Flash.

And then we have the start-ups who are designing devices which are Flash-only: tuned for optimal performance and with none of the compromises which hamper the more traditional vendors. Unfortunately, this means building silos of fast storage, and everything ends up sitting on this still-expensive resource. When challenged about this, the general response you get from the start-ups is that tiering is too hard and you should just stick everything on their arrays. Well, obviously they would say that.

I come back to my original statements: Flash is an interim technology and will be replaced in the next 5-10 years with something faster and better. It seems likely that spinning rust will hang around for longer, and we are heading towards a world where we have storage devices with radically different performance characteristics; we have a data explosion, and putting everything on a single tier is becoming less feasible and less sensible.

We need a tiering technology that sits outside of the actual arrays, so that the arrays can be built optimally to support whatever storage technology comes along. Where would such a technology live? Hypervisor? Operating System? Appliance? File-System? Application?

I would prefer to see it live in the application and have applications handle the lifecycle of their data correctly, but that'll never happen. So it'll probably have to live in the infrastructure layer; ideally it would handle a heterogeneous, multi-vendor storage environment, and it may well break the traditional storage concept of the LUN and other sacred cows.
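
To make that concrete, here is a minimal sketch of the sort of heat-based policy such an infrastructure layer might apply; the tier names, thresholds and access metric are purely illustrative assumptions, not any vendor's implementation.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        extent_id: str
        accesses_last_24h: int   # hypothetical heat metric fed by the tiering layer
        current_tier: str

    def desired_tier(extent: Extent) -> str:
        """Pick a tier from recent access counts (toy thresholds, hottest first)."""
        if extent.accesses_last_24h > 1000:
            return "flash"
        if extent.accesses_last_24h > 50:
            return "15k_sas"
        return "7k_nl_sas"

    def plan_moves(extents):
        """Yield (extent_id, from_tier, to_tier) for extents sitting in the wrong tier."""
        for e in extents:
            target = desired_tier(e)
            if target != e.current_tier:
                yield (e.extent_id, e.current_tier, target)

    if __name__ == "__main__":
        sample = [
            Extent("db-log-001", 5400, "15k_sas"),
            Extent("archive-042", 2, "flash"),
        ]
        for move in plan_moves(sample):
            print("move %s from %s to %s" % move)

The interesting part is not the heuristic, which any vendor could better, but where it runs: outside the array, over whatever tiers happen to exist.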

But in order to support a storage environment that is going to look very different, or at least should look very different, we need someone to come along and start again. There are various stop-gap solutions in the storage virtualisation space, but these still enforce many of the traditional tropes of today's storage.

I can see many vendors reading this and muttering 'HSM, it's just too hard!' Yes, it is hard, but we can only ignore it for so long. Flash was an opportunity to do something; that opportunity has mostly been squandered, but you've got five years or so to fix it.

The way I look at it, that's two refresh cycles; it's going to become an RFP question soon.

7 Comments

  1. […] on here […]

  2. Martin,

    You’re asking many of the right questions and suggesting a few interesting angles, but the overall answers are still MIA. It’s hard, I know. Personally, I have been working several of these topics for 15 years now, the latter part within Caringo. The point is, to get any street cred in a cloudy (..) field fraught with marketing disinformation and doublespeak, you need to build entirely fresh infrastructure, from the bottom up. That takes time, not just to develop the technology, but to develop the mindshare.

    You’re very right about the application needing to drive this. We need the same kind of abstraction layer for applications to drive storage as the one that SQL brought to drive databases in the mid eighties of the last century (just can’t believe it’s that long ago!). To us at Caringo, the crux resides in no-compromise object storage with actionable metadata. Let the application express its storage desires in metadata key-value pairs – both standardized and custom – encapsulated with the object and intelligent storage will have a roadmap to navigate by. Just like relational databases were able to evolve dramatically (ever worked with Oracle V2?) underneath unmodified apps expressing their database needs in (largely) unmodified SQL.

    If you apply this simple principle, any storage technology can come along and be optimally leveraged for performance, economy and durability by apps written decades before.
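
    To make that concrete, here is a toy sketch of the idea; the store and the metadata names are invented for illustration, not Caringo's actual API:

        # Illustrative only: the application declares its intent as key-value
        # metadata, and the storage layer decides how to honour it (tier,
        # replication, retention) without the application changing.
        class ToyObjectStore:
            def __init__(self):
                self.objects = {}

            def put(self, name, data, metadata):
                self.objects[name] = (data, dict(metadata))

            def placement_for(self, name):
                """Toy placement decision driven purely by the stored metadata."""
                _, md = self.objects[name]
                if md.get("x-meta-latency-hint") == "relaxed":
                    return "capacity-tier"
                return "performance-tier"

        store = ToyObjectStore()
        store.put(
            "invoice-2012-04.pdf",
            b"%PDF...",
            {
                "x-meta-durability": "3-copies",
                "x-meta-access-profile": "write-once-read-rarely",
                "x-meta-retention": "7-years",
                "x-meta-latency-hint": "relaxed",
            },
        )
        print(store.placement_for("invoice-2012-04.pdf"))   # -> capacity-tier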

    — Paul

  3. Gavin Mc says:

    Good article M

    There are a lot of Laserdisc player salesmen out there, sadly.

    Right now, lots of people are buying the Laserdisc story, but not that many people are selling players which will easily transform into a DVD player, Blu-ray or whatever comes out next.

  4. Andy says:

    An interesting article. I think we need to think outside the box. You say flash will be dead. Maybe, but what is flash? The disk? The array with flash? You say we need to have the intelligence outside of the array. What is an array?
    We could simply define an array as a bunch of similar disks that are bundled together and take care of protection against failure, and that's it. So we have a SATA array, a SAS array, an SSD array, a RAM array … whatever.
    Now the question is: where should we select which array to use for what? If I think of the cloud, it cannot be the application and it cannot be the operating system. Both are virtualized and do not know anything about the storage system. The first component that knows about it is the hypervisor. But will we only have cloud systems? Won't there be any physical hosts left? I think that is where the various storage virtualization technologies have to jump in. Make those components intelligent, let them analyze what happens and take measures.
    There is another reason why it should be outside of the application, OS or hypervisor. Those components only know about themselves. They don't care about other applications, OSes or hypervisors. Just like VMware bundles several hosts together into a "big server", a storage virtualization layer should do the same.
    Now there is one feature I miss from all the vendors of said storage virtualization technologies: a method to migrate online from one storage virtualization layer to another, e.g. synchronize all the LUNs behind a set of frontend WWNs/IPs/whatever addressing method and then take over the frontend identifier to the new system. Otherwise we have to do a storage migration every time we have to upgrade the storage virtualization technology. If that were possible even between vendors, then we would be at a good point.
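
    In toy terms, the migration I have in mind looks something like this; the class and identifiers are hypothetical, not any real product's interface:

        # Step 1: synchronise every LUN behind a frontend identity.
        # Step 2: hand that identity over to the new layer so hosts see no change.
        class VirtualisationLayer:
            def __init__(self, name):
                self.name = name
                self.frontend_ids = set()   # WWNs / IPs presented to hosts
                self.luns = {}              # lun_name -> data

        def migrate(old, new, frontend_id):
            new.luns.update(old.luns)             # synchronise the LUNs
            old.frontend_ids.discard(frontend_id)
            new.frontend_ids.add(frontend_id)     # take over the frontend identifier

        old = VirtualisationLayer("legacy-virtualiser")
        new = VirtualisationLayer("replacement-virtualiser")
        old.frontend_ids.add("wwn-example-1")
        old.luns["vmfs-datastore-01"] = b"..."
        migrate(old, new, "wwn-example-1")
        print(new.frontend_ids, list(new.luns))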

  5. Lee Johns says:

    Very smart comments. This is why "all flash" vendors are barking up the wrong tree. At Starboard Storage we happen to use flash, but our core benefit is that we break down the traditional constructs of storage into a multi-tiered architecture. That is our differentiator. It does not matter what the tiers are; it just happens that today one of them is SSD. Here is a one-minute video on the MAST (Mixed application storage tiering) architecture. http://www.youtube.com/watch?v=AJnx17-sjkI&feature=plcp
    Tiers will always exist, and our goal is frictionless performance for applications across tiers.

  6. Adam Bodette says:

    I think you are missing out on a few existing offerings that already do a lot of what you are asking for pretty well. I use a Compellent (now Dell) SAN with fully automated storage tiering. I have a mix of SSD, 15K and 7K SAS drives, all treated as one large pool of space. The tiering occurs at the block level. My inactive data sits on the 7K drives, and my SSD and 15K drives are free to handle all new writes. And it's not just disk-type tiering, it's also RAID tiering: all my writes go to RAID 10, and overnight that data is switched to RAID 5 for read performance. The stuff moved down to 7K is on RAID 6. I can define at the LUN level what tiers (disk type and RAID) the LUN is allowed to use (keep non-production off my SSDs!). I do not have to think about what disk to allocate or what silo to stick a LUN in, as I would have had to in the past with prior SANs.
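
    Roughly, the per-LUN policies I set look something like this; it is my own shorthand for illustration, not Compellent's actual interface:

        # My shorthand for the decisions described above: which tiers a LUN may
        # use, which RAID level new writes land on, and which level cooled-off
        # data is converted to.
        lun_policies = {
            "prod-sql-data":  {"tiers": ["ssd", "15k"], "writes": "raid10", "cooled": "raid5"},
            "test-vms":       {"tiers": ["15k", "7k"],  "writes": "raid10", "cooled": "raid5"},
            "archive-shares": {"tiers": ["7k"],         "writes": "raid10", "cooled": "raid6"},
        }

        def allowed(lun, tier):
            """Keep non-production off the SSDs: may this LUN use this tier?"""
            return tier in lun_policies[lun]["tiers"]

        print(allowed("test-vms", "ssd"))   # -> False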

  7. Lee Johns says:

    I agree there are some good architectures out there, but Compellent tiering is old school. Most of the new architectures use caching because it is much more immediate and has no performance impact on the system. Also, writes in a good architecture go to faster Flash-based devices rather than using a slow RAID mirror sitting on disk. They lazy-write to the backend disk; no policy-based migration needed. You get a much more efficient use of cache that way, and no need to try and predict how much of each tier you need. In most cases it also makes 15K drives unnecessary. You get much more performance and better utilization at much lower cost.
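
    In toy terms, the write path I mean looks something like this; purely illustrative, not any vendor's actual implementation:

        # Writes are acknowledged from fast flash; a lazy flusher destages them
        # to the slow backend later, so no policy-driven migration pass is needed.
        class WriteBackCache:
            def __init__(self):
                self.flash = {}   # dirty blocks on the fast tier
                self.disk = {}    # the slow backend

            def write(self, block_id, data):
                self.flash[block_id] = data          # acknowledged once flash has it

            def read(self, block_id):
                return self.flash.get(block_id, self.disk.get(block_id))

            def lazy_flush(self):
                while self.flash:
                    block_id, data = self.flash.popitem()
                    self.disk[block_id] = data       # destage in the background

        cache = WriteBackCache()
        cache.write("blk-001", b"hot data")
        print(cache.read("blk-001"))    # served from flash
        cache.lazy_flush()
        print(cache.read("blk-001"))    # now served from the backend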
