Interesting Question?

Are AFAs ready for legacy Enterprise Workloads? The latest little spat between EMC and HP bloggers asked that question.

But it’s not really an interesting question; a more interesting one is why I would put traditional Enterprise workloads on an AFA at all. Why even bother?

More and more I’m coming across people who are asking precisely that question and struggling to come up with an answer. Yes, an AFA makes a workload run faster, but what does that gain me? It varies a great deal with application type and where the application bottlenecks are; if you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array will suit you better and you will gain pretty much all the benefits of flash at a fraction of the cost.
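
To make ‘a fraction of the cost’ concrete, here is a minimal back-of-the-envelope sketch; the prices and the 10% hot-data fraction are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope: hybrid vs all-flash for a skewed workload.
# All prices and the hot-data fraction are illustrative assumptions.

capacity_tb = 100            # total usable capacity required
hot_fraction = 0.10          # assume ~10% of data receives most of the IO
flash_cost_per_tb = 2000     # assumed $/TB for flash
disk_cost_per_tb = 300       # assumed $/TB for near-line disk

afa_cost = capacity_tb * flash_cost_per_tb
hybrid_cost = (capacity_tb * hot_fraction * flash_cost_per_tb
               + capacity_tb * (1 - hot_fraction) * disk_cost_per_tb)

print(f"All-flash: ${afa_cost:,.0f}")     # $200,000
print(f"Hybrid:    ${hybrid_cost:,.0f}")  # $47,000
print(f"Hybrid is {hybrid_cost / afa_cost:.1%} of the AFA price")  # 23.5%
```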

When people are asked what the impact would be of running batch jobs, often the foundation of many legacy workloads, in half the time, the response is frequently a ‘So what?’ As long as the workload runs within its window, that is all anyone cares about.

If almost all of your latency is the human in front of the screen, the differences in response time from your storage become pretty insignificant.
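
A minimal worked example of why, with assumed (illustrative) timings for each part of an interactive request:

```python
# Where the time actually goes in an interactive request.
# All timings are illustrative assumptions.

human_think_ms = 1000.0    # user reading and reacting at the screen
app_network_ms = 150.0     # application logic plus network round trips
hybrid_io_ms = 8.0         # request served from spinning disk
afa_io_ms = 0.5            # same request served from flash

total_hybrid = human_think_ms + app_network_ms + hybrid_io_ms
total_afa = human_think_ms + app_network_ms + afa_io_ms
saving = total_hybrid - total_afa

print(f"Saving: {saving:.1f} ms, "
      f"{saving / total_hybrid:.1%} of the end-to-end time")  # ~0.6%
```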

AFAs only really make sense as you move away from a legacy application infrastructure; where you are architecting applications differently, moving many of the traditional capabilities of an Enterprise infrastructure up the stack and into the application. Who cares if the AFA can handle replication, consistency groups and other such capabilities when that is taken care of by the application?

Yes, I can point to some traditional applications that would benefit from a massive amount of flash, but these tend to be snowflake applications, and they could almost certainly do with a rewrite.

I’d like to see more vendors be honest about the use cases for their arrays; more vendors working in a consultative manner and fewer trying to shift as much tin as possible. But that is much harder to achieve, and it requires a level of understanding beyond most tin-shifters.


11 Comments

  1. calvinz says:

    Because flash is now cheaper than 15K drives? And the beauty with 3PAR (not so much XtremIO, Pure, etc.) is that you can determine what you want to put in flash and get all-flash performance, or what can live on near-line drives.

    1. storagebod says:

      Hybrid arrays? You might have seen that I mention them? You don’t need an AFA, and that’s the point. And I’ve not really seen flash at parity with, or even cheaper than, 15K drives; not without cheating.

      1. calvinz says:

        WRT my statement that AFAs (with dedup) are now cheaper than 15K drives – a couple of things:

        > First, we have something called Adaptive Sparing. We work with SSD vendors to get about 20% more capacity out of an SSD; so the 1.6TB SSDs that other vendors use are 1.9TB with 3PAR. You can read more about it in this post from one of my 3PAR solution architects: http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Built-for-flash/ba-p/175268

        > Second – we’re using a conservative 4:1 compaction ratio to get to the point where SSD is cheaper than 15K drives. No trickeration, cheating or lying; we think 4:1 is conservative. And any customer that already has HP 3PAR can run a tool that will estimate their savings with deduplication.
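
        As a rough sketch of that arithmetic (the $/GB figures here are illustrative assumptions, not HP pricing):

        ```python
        # Adaptive Sparing: ~20% more usable capacity from the same SSD.
        raw_ssd_tb = 1.6
        usable_ssd_tb = raw_ssd_tb * 1.2
        print(f"{usable_ssd_tb:.1f} TB usable")   # 1.9 TB, as above

        # Effective $/GB after 4:1 compaction (prices are assumptions).
        ssd_raw_per_gb = 1.20     # assumed raw flash $/GB
        hdd_15k_per_gb = 0.45     # assumed 15K drive $/GB
        compaction = 4.0

        print(f"Flash: ${ssd_raw_per_gb / compaction:.2f}/GB effective "
              f"vs 15K: ${hdd_15k_per_gb:.2f}/GB")   # $0.30 vs $0.45
        ```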

        I did see you mention hybrid. The point I was trying to make, which wasn’t clearly stated, is that with our converged flash array (HP 3PAR 7440c) you can run either hybrid or all-flash and get the performance of all-flash. My assertion is that any customer using 15K drives should look at using SSDs with HP 3PAR.

        1. storagebod says:

          And not every data-type compresses or dedupes. Life is never as simple as vendors would have you believe.

          And I quote myself…

          “if you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array will suit you better and you will gain pretty much all the benefits of flash at a fraction of the cost.”

          This doesn’t discount 3PAR as your hybrid array… it doesn’t discount any vendor’s hybrid array. And if I want to fill that array up with SSD only, I generally can.

          Enterprise arrays are pretty commoditised these days… there’s so little difference between them that it’s hardly worth comparing anything apart from price and the performance of their support organisations. And in my experience the prices are incredibly close as well. It’s a bit tedious sitting through the roadmap presentations these days, as you can pretty much replace one vendor’s name with another in any roadmap.

          The innovation isn’t in this space, and that doesn’t really matter. You’ve all got good technology here.

          1. calvinz says:

            Agree that compression/dedup behaves differently for different types of data. Most people understand that a database won’t dedup very well (unless you are using snapshots). I don’t think we at HP try to make things appear simpler than they are – but we do need to give guidance when we talk about what HP 3PAR does. If a customer needs more detail about their environment, ask away. I think we’re more transparent than most.

            We need to have this conversation over beers in London! I’ll be there in December but everything will change by then.

  2. Andreas Weigl says:

    I totally agree that it doesn’t make sense to put a workload on an AFA if it is fast enough on a non-AFA, but I would leave out ‘traditional workload’; it simply implies all ‘old’ workloads. With something like XtremIO it might make sense to put home directories on there: the dedup and compression might make your flash cheaper than your SAS. The speed might be important too, especially if a lot of people log in and out of their systems in a specific time frame.

    As much as I hate the phrase: it depends. It depends on the workload, its importance, how redundant the data is, and even what you have on the floor right now. It doesn’t make sense to buy an array when you have huge amounts of space in your AFA.

  3. Marc Lavatan says:

    Yeah, and when I go to buy a car a Chevy Spark will get me there, but is that really what I should buy to support my lifestyle and reliability for the next 3-5 years?

  4. Michael Shea says:

    Good afternoon Bod. It is nice to see a blogger boffin put up something a whole lot more sober than the hyperbolic “this changes EVERYTHING” and “it’s a flash REVOLUTION” philippics.

    Flash has its place in a business if it helps that business achieve a well-defined and desirable outcome. Faster is not necessarily an outcome in itself, as you point out clearly.

    One conversation I had recently with an IT Director underscores the disconnect between IT and the business. He wanted an all-flash array so he could look innovative inside his company, but he could not tell me what business changes anyone there wanted to execute to make his company’s customers happier. So would the all-flash array matter? Not that he could say or measure.

    I blame us vendors for this sorry state. (I work for NetApp)

  5. alpharob says:

    “AFAs only really make sense as you move away from a legacy application
    infrastructure; where you are architecting applications differently”

    Not so. Yes, as others mentioned, home directories make no sense. But here’s a recent real-life example: a customer moved a big bad monster app onto an AFA. Was there some misuse of it? Absolutely. But all the report runs that management wants to see are much faster. Could this have been done with hybrid? Well… the problem was, and is, large amounts of data, so you would need a very large SSD tier because the blocks are not “that hot.” Plus, on the AFA that Should Not Be Named, all IOs are in the microseconds. Millisecond IO is “old school” – B^)

  6. Ed Rolison says:

    I think pretty fundamentally – no. At least not until the capacity and cost of flash approach the cost of spinning rust (and that’s cost in DC footprint too).

    But _most_ enterprises’ _typical_ workloads aren’t evenly distributed across the array, so you get a lot more value out of a flash tier than you would from going all-flash. You’ve got Pareto distributions abounding.

    I’ve got my ‘general purpose’ arrays at the moment, with 100G of RAM, 2TB of flash cache and 200TB of spinning disk, and I’m still seeing 90% of IO serviced from the cache tiers. By that yardstick we may even be ‘wasting’ the cost of 10K SAS drives, because SATA would probably have done the trick (although I think we’d notice when doing some of our maintenance tasks). But that’s a pretty consistent pattern across an estate of ‘random users and projects who don’t really know what they need’.
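
    As a quick sketch of what that 90% figure implies for average service time (the per-tier latencies and the RAM/flash split are assumptions):

    ```python
    # Average IO service time, weighted by where the IO is served from.
    # The 70/20 RAM/flash split and the latencies are assumptions.

    tiers = [
        ("RAM cache",     0.70, 0.05),  # (name, fraction of IO, latency ms)
        ("Flash cache",   0.20, 0.50),
        ("Spinning disk", 0.10, 8.00),
    ]

    avg_ms = sum(frac * lat for _, frac, lat in tiers)
    print(f"Average service time: {avg_ms:.2f} ms")  # ~0.94 ms
    ```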

    I’d much rather not waste the budget there, and instead focus on buying AFAs for the specific workloads that need them.
