
Sort of Right, Kind of Wrong!

Steve Duplessie is both right and wrong in his post on SSDs here!

He is right that simply sticking SSDs into an array and treating them as just Super Speedy Disk can cause yet more work and heartache! Concepts such as Tier 0 are just a nightmare to manage!

He is also right that the problem should be defined at a high level as the interaction between users and their data: getting them access to that data as quickly as possible.

He is also right that just fixing one part of the infrastructure and making one part faster does not fix the whole problem. It just moves the problem around!

Unfortunately, whilst every other component in the infrastructure has got faster and faster, storage has arguably been getting slower in relative terms. At a recent SNIA Academy event, it was suggested that if storage speeds had kept pace with the rest of the infrastructure, disks would now spin at 192,000 RPM. The ratio of capacity to IOPS gets less favourable every year; wide striping has helped mitigate the issue, but as disks get bigger we face a choice: either we waste more and more capacity, as the areal density of IOPS means that most of the capacity on a spindle can only sensibly hold data at rest, or we need a faster storage medium.
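To put some rough numbers on that, here is a back-of-the-envelope sketch; the drive figures below are typical illustrative values, not vendor specifications. A 15K RPM spindle delivers roughly the same random IOPS however big it gets, because seek and rotational latency do not improve with areal density, so IOPS per GB keeps falling as capacity grows.

```python
# Back-of-the-envelope: the capacity-to-IOPS ratio across drive generations.
# All figures are rough, typical values for illustration, not vendor specs.
drives = [
    ("15K RPM, 146 GB", 146, 180),
    ("15K RPM, 300 GB", 300, 180),
    ("15K RPM, 600 GB", 600, 180),
    ("SSD, 146 GB",     146, 5000),
]

for name, capacity_gb, iops in drives:
    # Random IOPS on a spindle are bounded by seek + rotational latency,
    # so they stay flat while capacity grows: IOPS per GB keeps falling.
    print(f"{name:18s}: {iops / capacity_gb:6.2f} IOPS per GB")
```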

But we probably don't need a huge amount of this faster medium; a small sprinkling will go a long way. That's why we need dynamic optimisation tools which move hot chunks of data about. SSDs will be good, but just treating them as old-fashioned LUNs might not be the best use of them.
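As a minimal sketch of what such a dynamic optimisation tool might look like — all names, sizes and the promotion policy here are illustrative assumptions, not any vendor's implementation — the idea is simply to count I/Os per fixed-size chunk and periodically promote only the hottest chunks into the small SSD tier:

```python
from collections import Counter

# Minimal sketch of sub-LUN tiering: tally accesses per fixed-size chunk
# and keep only the hottest chunks resident on the (small) SSD tier.
# CHUNK_SIZE, SSD_SLOTS and the function names are illustrative only.

CHUNK_SIZE = 768 * 1024 * 1024  # e.g. a 768 MB extent
SSD_SLOTS = 8                   # the "small sprinkling" of fast capacity

access_counts = Counter()
ssd_resident = set()

def record_io(byte_offset):
    """Tally an I/O against the chunk it falls in."""
    access_counts[byte_offset // CHUNK_SIZE] += 1

def rebalance():
    """Periodically promote the hottest chunks to SSD, demote the rest."""
    hottest = {chunk for chunk, _ in access_counts.most_common(SSD_SLOTS)}
    to_demote = ssd_resident - hottest
    to_promote = hottest - ssd_resident
    ssd_resident.difference_update(to_demote)
    ssd_resident.update(to_promote)
    return to_promote, to_demote

# Example: hammer chunk 3 with reads, then rebalance the tiers.
for _ in range(100):
    record_io(3 * CHUNK_SIZE + 42)
promoted, demoted = rebalance()
print(f"promoted chunks: {promoted}, demoted: {demoted}")
```

Real arrays do this with far more sophistication (decay of old counters, move-cost awareness, scheduling windows), but the core loop — observe, rank, migrate — is the same.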

Automation is the answer, but I think Steve knows that! Dynamic optimisation of infrastructure end-to-end is the Holy Grail; we are some way off that, I suspect! I'd just settle for reliable and efficient automation tools for Storage Management at this point.


4 Comments

  1. Martin,
    SSDs are over-hyped.
    An SSD does not improve write latency; it improves read latency.
    Every modern storage array acknowledges a write from memory not from disk.
    Flash is best used as an extension of the read buffer cache.
    cheers,
    kostadis

  2. Martin G says:

    You see, I don’t think anyone knows the best way to use SSDs at present! We know that there are some applications which benefit greatly from accelerated reads, but we also know that building huge DRAM-based read caches is not economic; so yes, SSDs have an obvious use as an extension of the read-cache.
    But once SSDs as a resource achieve parity cost with TSR (traditional spinning rust); we’ll see more use. And we’ll probably thank EMC for being the pioneers building understanding of how and how not to utilise SSDs.

  3. Kostadis –
    “Best” for a memory-constrained NetApp filer might well be using Flash as a cache.
    But that doesn’t mean this is the “best” approach for every storage architecture.
    And I don’t believe that SSDs have to match HDDs at cost/GB before we’ll see more use. At today’s 8x the $/GB of a 15Krpm drive, demand is already strong and growing; probably 4x is the tipping point and 2x will signal the end of the 15Krpm hard disk drive industry altogether.

  4. Barry,
    The NetApp filer is not memory constrained (what does memory constrained mean? Any system can benefit from more memory, but there is a cost/performance curve we have to hit to stay in business).
    As for HDDs, 15k RPM drives and flash, sure, I could agree with that point if I was building a Traditional Legacy Array. Except for the very serious counter argument that there is a thing called IOPS data density (http://blogs.netapp.com/extensible_netapp/2008/11/iops-data-densi.html) which argues that 15k RPM drives may in fact be a better deal (a rough worked example follows these comments).
    The core flaw in the SSD argument is the point Martin makes. If my IOPS data density is not 1.0, then I have to do some very clever space balancing to make effective use of the flash. Given that most IOPS are for read operations (http://blogs.netapp.com/extensible_netapp/2009/03/understanding-wafl-performance-how-raid-changes-the-performance-game.html), the best use of super-fast disk with an IOPS density of 1.0 is read offloading, not absorbing writes.
    Therefore, I’ll contend that in the end the future of flash in the DMX and the NetApp FAS is going to be augmentation of the read cache, not stable storage…
    Hmm… this must be the reason EMC announced FAST for the V-Max: not to space balance capacity, but as a way to better leverage Flash as a caching technology.
    cheers,
    kostadis
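To make Kostadis's IOPS data density argument concrete, here is the rough worked example promised above. Every figure is an illustrative assumption rather than anything taken from the linked NetApp posts: a workload's density is the IOPS it demands per GB it occupies, compared with what each medium can supply per GB.

```python
# Rough worked example of "IOPS data density": the IOPS a workload demands
# per GB it occupies, versus what each medium supplies per GB.
# Every number below is an illustrative assumption.

workload_iops = 12_000   # assumed peak random IOPS demanded
workload_gb = 10_000     # assumed active data set size
read_fraction = 0.8      # most IOPS are reads (per the comment above)

demand_density = workload_iops / workload_gb   # IOPS per GB demanded

# Supply side: what each medium offers per GB (illustrative figures).
hdd_15k = {"iops": 180, "gb": 300}
ssd = {"iops": 5_000, "gb": 146}

hdd_density = hdd_15k["iops"] / hdd_15k["gb"]
ssd_density = ssd["iops"] / ssd["gb"]

print(f"demanded:         {demand_density:6.2f} IOPS/GB")
print(f"15K HDD supplies: {hdd_density:6.2f} IOPS/GB")
print(f"SSD supplies:     {ssd_density:6.2f} IOPS/GB")

# If demand density is at or below HDD supply density, spindles alone
# suffice and the SSD's huge IOPS/GB surplus is wasted unless hot data is
# carefully space-balanced onto it. Offloading the read fraction into a
# flash cache instead shrinks the spindle workload with no balancing:
spindle_iops_after_cache = workload_iops * (1 - read_fraction)
print(f"spindle IOPS after read offload: {spindle_iops_after_cache:,.0f}")
```

On these assumed numbers the workload demands 1.2 IOPS/GB against the 0.6 a 15K spindle supplies, and a read cache absorbing 80% of the IOPS drops the spindle load to 2,400 — which is the shape of the argument for flash as read cache rather than stable storage.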
