
Quick NetApp Thoughts…

It's days like today that I really feel for vendor techies, proudly watching their newest products sail off into the big wide world to be marketed and sold to the unwashed masses. 

They watch their baby pawed and prodded by their rivals, who inevitably tell them that their baby is ugly. And no-one likes that. Chuck is, inevitably, straight up to the plate, and I bet none of us predicted that.

He accuses NetApp of being boring; well, I guess it is really. All of NetApp's hardware announcements are pretty boring; NetApp are not a hardware company. Its ONTAP announcements are where the interest really is, and I would have liked to see a bigger ONTAP announcement; you can't keep announcing the same product. However, the hardware announcements add things which have been asked for by the customer base for some time, and I can't fault them for that.

And some of Chuck's comments ring pretty hollow:

'getting ridiculously easy to win most any performance benchmark against NetApp'

Apart from the industry-standard one which EMC won't play ball with. Actually, I had an interesting conversation around the subject of benchmarks with regards to EMC; it might be interesting to take a standard benchmark of some sort, run it on a raw VMAX or DMX or CX, then run it against the same array, this time behind one of the virtualisation controllers from, say, NetApp, IBM or HDS. It might be kind of fun. 

But really, unless the whole industry agrees to release their products on the same refresh cycle, we are always going to be in the situation where we are comparing different generations of product. IBM, HDS and NetApp have all refreshed this year, so it appears that EMC are the contrarians. And perhaps for the next six months or so, NetApp might be in the position where they can say that it is,

'getting ridiculously easy to win most any performance benchmark against EMC, especially CX'

However, his comments about spindle counts ring true, but I think this is true for much of the industry, EMC et al included; the use-cases for 1000+ spindle arrays are pretty limited in general. Certainly, if you load the array with SATA spindles, low-IO-rate archival would seem to be the only use, but I suspect that there might well be better and more cost-effective options. 

For example, many of the value-add features are not really of much value in these environments; you are probably not going to be snapping or cloning data on a regular basis, and depending on the type of archive, dedupe might be of limited value or not even desirable. 

Also, I worry about migrating off of these huge arrays; migrating petabytes of data is time-consuming and it needs to be done in a non-disruptive manner. If NetApp came up with a way of changing out and upgrading the heads with no downtime, that would be cool. Of course, this is where someone from NetApp pops up and tells me that there is exactly that product and I look stupid, but hey, I'm willing to take the chance. BTW, I'm talking about data-in-place upgrades: just whip the head out and replace it with a bigger model.

So, yes… a boring announcement, but nonetheless welcome and needed. 


7 Comments

  1. Dave Graham says:

    Now that I can be a bit more unmuzzled, I completely agree with you, Martin. The same sort of “here we go again” attitudes in competitive storage blogging really tire me out. Hardware refreshes on the whole exist in the “more better” plane. Hardly anyone I know would actually present hardware that takes a step back in aggregate performance, RAS features, etc. It comes down to how these enhancements HELP the customer achieve biz objectives. For that, kudos to NTAP.
    cheers,
    dave

  2. Chuck Hollis says:

    Good counterpoint to my blog post, Martin — thanks for that!
    — Chuck

  3. CP says:

    First you need hardware (more cores, more memory, higher clock rates)… then comes the software (over time) that can fully utilize it…

  4. Pete Gerr says:

    The North American announcement is expected tomorrow, and the real focus should be on Data ONTAP, as you point out.
    Note there have been changes made, apparently, to the original site that pre-announced: http://www.storagenewsletter.com/news/systems/netapp-upgrades
    Going from Data ONTAP 8.0.0 to 8.0.1 is not a big deal; that would be a minor release, and it’s unlikely this is the case, since 8.0.1, I believe, has actually been available for some time.
    I believe there might be a typo in this original source, or it’s intentionally been mis-stated.
    What we should be looking for is Data ONTAP 8.1.0, which would be the next major version, and one that is expected to introduce block protocol support (FC, iSCSI, FCoE) in the ONTAP 8 family.
    In other words, NetApp’s version of “Scale-out SAN”.
    Recall the block protocols are not supported in the 8.0 family as yet (nor were they in the GX/Spinnaker code).
    That’s the real story here, IMHO, so we’ll have to wait and see.

  5. Joerg Hallbauer says:

    OK, remember, you asked for it. 🙂
    NetApp does have the ability to do a data-in-place upgrade. You just swap out the heads for more powerful heads, and off you go. The only major caveat is that it ONLY applies to the models that don’t have internal drives. I think it’s fairly obvious that if you are booting the array from drives that are internal, and you take those away, you have nothing to boot from. But even that can be worked around by adding a shelf, moving the boot volumes to the new shelf, and then doing the head-swap.

  6. Martin G says:

    With no downtime, completely non-disruptively. And before people yell; yes I know I’m asking for things which very few companies can do at the moment, doesn’t stop me setting the bar high tho’…

  7. Martin,
    What you’re asking for is completely legitimate; downtime is costly even in the SMB segment, and given the current state of the economic crisis (at least here in Italy), many IT workers are denied overtime pay, so they’re not willing to invest their weekends for free doing maintenance that requires an outage.
    That said, I know that Compellent can do this, even across very different families, such as going directly from their Series 10 to Series 30 hardware without taking an outage, and I’m pretty sure that when the Series 40 is announced a direct migration from S10 to S40 will be possible.
    But I think this advantage is down to their relative youth; they started to build their product with those ideas in mind, and it’s something that the other established vendors did not think of back in the day. I think that every major vendor (NetApp included) needs to start thinking bold, without keeping too many strings attached to the past; legacy support is what prevents their storage arrays from shining, and NetApp is learning that the hard way with ONTAP 8.
    Just my 0.02 € 🙂
