Storagebod

The Crying Game

I wonder if EMC regret their FAST announcements, or at least FAST v1; FAST v1 is pretty ho-hum, and what most people are waiting for is FAST v2: sub-LUN, block-level optimizations. But I wonder if Georgens is going to regret playing with semantics and trying to claim that tiering is dead when what he is really advocating is the same long-term vision as EMC, IBM and just about everyone else.

Most people at present believe that the foreseeable future of disk arrays is two tiers, SSD/Flash and SATA, and I see nothing in Georgens's statements which contradicts this; the only thing I see is an attempt to differentiate and pull the wool over the world's eyes. Tiering is here to stay, and perhaps in ten years' time we will be talking about SSD being the bulk tier and some other technology being the fast cache tier, but it will still be tiering.

So Tom, you can pretend that your eyes are only watering… but we know that they are tears!
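The sub-LUN idea above can be sketched in a few lines: track heat per block rather than per LUN, and periodically promote the hottest blocks to the flash tier. This is a toy policy of my own for illustration (the class and method names are hypothetical and imply nothing about how FAST v2 is actually implemented):

```python
from collections import Counter

class TieringSketch:
    """Toy sub-LUN tiering: keep the hottest blocks on the flash tier."""

    def __init__(self, flash_capacity_blocks):
        self.flash_capacity = flash_capacity_blocks
        self.heat = Counter()   # access count per block id
        self.flash = set()      # blocks currently resident on flash

    def record_io(self, block_id):
        # Every I/O bumps the block's heat; a real array would decay this over time.
        self.heat[block_id] += 1

    def rebalance(self):
        # Promote the N hottest blocks to flash; everything else lives on SATA.
        hottest = {b for b, _ in self.heat.most_common(self.flash_capacity)}
        promoted = hottest - self.flash
        demoted = self.flash - hottest
        self.flash = hottest
        return promoted, demoted

t = TieringSketch(flash_capacity_blocks=2)
for b in [1, 1, 1, 2, 2, 3]:
    t.record_io(b)
promoted, demoted = t.rebalance()
# blocks 1 and 2 are the hottest, so they land on the flash tier
```

The point of the sketch is the granularity: the unit being moved is a block, not a LUN, which is exactly what separates FAST v2 from FAST v1.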


  1. Storagezilla says:

    No regrets here.
    This was always going to be a phased approach, deadlines were set and they’ve been met to date.
    We’re still on track for FAST V2 and will discuss specifics around that soon enough.
    I think it’s a good thing when EMC states what it’ll deliver over the coming quarters. Not only does it give end users time to investigate how applicable it might be to their environment but it gives a clear direction to where the company is heading.
    I’d encourage more of it as we have a very busy couple of years coming up.
    Ultimately FAST V2 is a function of Virtual Provisioning, so standing where we are now you can see the road ahead as well as the road already travelled.

  2. Martin G says:

    Zilla, I would encourage all vendors to be more open about their plans. I know it’s hard and risky but ultimately open discussion of roadmaps can be of value to all concerned.

  3. TimC says:

    I disagree with most of the vendors that the future will be all SATA and flash. That model works right up until it doesn’t; then it falls over and dies. Truly random workloads that don’t fit within your cache are going to crush the array.
    Regardless, I’m not sure how you can fault the guy for calling caching, caching. They’re already doing it today, and they’re calling it what it is: caching. Calling it “automated tiering” is a great way to invent a new term to charge people more money for.

  4. Martin G says:

    It depends on how large your flash tier is; we are talking about multi-terabyte flash tiers. And yes, I know all about truly killer random workloads which basically negate the impact of the cache. In a truly automated tiering model, that data will get moved to the fastest tier.
    And when a cache becomes so large… does it become a tier? The PAMII cards are pretty large and road-mapped to get larger; they’ll need to be if the NetApp vision is just SATA and Flash, because those random workloads would otherwise flatten the array. The random workloads will basically be pinned there, so the cache has become a persistent store… so basically, a tier. And at some point NetApp will have to work out how to enable the PAM cards to support writes as well as reads.
    And Automated Tiering will just become a basic underlying building block, not a chargeable thing. If you’ve done it right, it is part of the architecture and not an add-on. Now, I am hoping that EMC, when they were making the foundational changes needed for Virtual Provisioning, actually did some serious architectural re-design.
    Of course EMC will charge for the feature tho’; I hear that some of them actually have to eat! And it is of course inevitable that I will moan about them charging and want it for free.

  5. TimC says:

    A multi-terabyte flash tier, when it becomes remotely reasonably priced, will still not be enough for the datasets of that point in time. There are customers today who already have multi-terabyte Oracle databases that are random in nature and would eat that entire flash tier. I just disagree that, price/performance-wise, flash will be taking over anytime soon. Just like I disagreed when the first VTLs hit the market that they would somehow do away with tape. We’ll see if I’m 2 for 2 😉
    Now, if you’re going to call it a tier based strictly on size… are you going to start calling main memory a tier? In the OpenStorage boxes from Sun/Oracle, they have as much memory as flash, and we aren’t very far out from terabytes of memory in a large server.
    Regardless, I don’t buy into the “we don’t need 15k rpm disks, we can do it all with flash and SATA!” line. Now, I can see flash replacing the 15k tier entirely at some point in the next decade if prices come down and wear levelling improves… but that’s not a scenario where you’re tiering blocks; that’s a scenario where they just stop making 15k disks and start producing flash disks of the same size for the same price.
    Food for thought I suppose…

  6. I think TimC is on to something here; people consider caching to be a basic (free) array feature, while “tiering” sounds like a new (paid) feature. So maybe Georgens is really saying that tiering/caching is boring and all arrays should offer it for free. Of course, even if NetApp’s flash caching software is cheap, the PAMII price can only be described as extortionate.

  7. Hector Servadac says:

    I think it’s like people talking about the death of tape. I still believe in tape; it is a kind of “green” storage, and I think the same will happen to traditional disks, because of regulations, tradition and confidence.
    Maybe we will see “hybrid” products like V-MAX prove more successful than “Storage Reinvented”, or maybe customers are more open-minded than I think…
