
Post Your Favourite FUD!!

Okay, let's get all the FUD out of our systems! What piece of FUD are you:

  • Most proud of! What scurrilous piece of FUD have you manufactured to smear another vendor?
  • Most amused by? What patently stupid thing has a vendor said to smear another vendor?
  • Most shocked by?
  • What piece of FUD actually turned out to be true?

This doesn't have to be storage specific but obviously I would prefer it to be so!


9 Comments

  1. Techmute says:

    This should really be entitled “Bring out your FUD.” I agree that nothing erodes my trust in anything a rep is saying like something that is blatantly untrue.
    Gems recently? The only real notable one was:
    NetApp will definitely be out of business within 12 months.
    I guess the biggest surprise recently was some self-FUDing by a couple of our reps, stating that their thin provisioning / wide striping solution shouldn’t be used for anything that remotely requires performance, since RAID 6 parity calculation couldn’t be done in cache any more and you lose a ton of replication options. I was a little shocked at that.

  2. Steven says:

    “The Evolution of the Storage Brain – Applications Run Faster With Deduplication” – NetApp (http://blogs.netapp.com/drdedupe/2009/09/the-evolution-of-the-storage-brain-speeding-up-applications-with-deduplication.html)
    Truth, NetApp’s deduplicated data, when read into a PAM I or PAM II, will be accessed just as fast as any data read into a PAM I or PAM II.

  3. John F. says:

    @Steven,
    I’m not sure which category you were going for. Deduplicated data read into PAM I or PAM II is read from cache just as fast as duplicated data. The point is that the cache is deduplicated as well.
    In the case of duplicated data, the cache fills up quicker and blocks don’t stay in cache as long. With deduplicated data, the cache takes longer to fill up and the blocks stay in cache longer.
    Truth – a cache hit is faster than a cache miss.
    Truth – dedupe aware cache means more cache hits.
    Truth – the higher the percentage of cache hits, the better the overall performance of the storage system.
    John
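
    To make the cache-hit argument concrete, here is a toy LRU simulation. The cache size, the 5:1 duplication ratio and the random workload are made up purely for illustration; this is a sketch of why keying a cache on unique block content yields more hits, not NetApp’s actual PAM implementation.

        # Toy LRU cache: a dedupe-aware cache is keyed on block content,
        # so the same number of slots covers far more logical blocks.
        from collections import OrderedDict
        import random

        def hit_rate(cache_slots, workload, dedupe):
            cache = OrderedDict()
            hits = 0
            for logical, physical in workload:
                key = physical if dedupe else logical   # dedupe-aware cache keys on content
                if key in cache:
                    hits += 1
                    cache.move_to_end(key)              # mark as most recently used
                else:
                    cache[key] = True
                    if len(cache) > cache_slots:
                        cache.popitem(last=False)       # evict least recently used
            return hits / len(workload)

        random.seed(1)
        # 10,000 logical blocks backed by only 2,000 distinct physical blocks (5:1 dupes)
        mapping = {i: random.randrange(2000) for i in range(10000)}
        workload = []
        for _ in range(50000):
            logical = random.randrange(10000)
            workload.append((logical, mapping[logical]))
        for dedupe in (False, True):
            print("dedupe-aware" if dedupe else "plain", hit_rate(3000, workload, dedupe))

    With 3,000 slots the plain cache hits roughly 30% of the time, while the dedupe-aware cache holds every unique block and hits well over 90% once warm, which is John’s “more cache hits” point in miniature.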

  4. Barry says:

    FACT: Pull two drives from two different drive trays on an XIV array within seconds of each other and you WILL lose data – even though you may not be able to detect it for days or even weeks.
    IBM accused me of spreading FUD with the above observation.
    Scarier reality – swap the two pulled drives when you reinsert them and the entire XIV array will either a) corrupt itself trying to regenerate (bad) checksums or b) crash immediately.
    In fact, my whole initial exposé of XIV was claimed as FUD, yet today every single one of my observations has been proven to be true.
    But as TonyP admonished, even the truth can be FUD, n’est-ce pas?

  5. Larry Freeman says:

    Regarding my post mentioned above (Ask Dr Dedupe), I have no intent to spread FUD here. As John F points out (thanks John) the point of my blog is that intelligent cache is better than dumb cache. Keep an eye on my blog for more observations of how the storage brain is evolving…

  6. Chuck Hollis says:

    @martin
    I don’t know if it’s my favorite FUD or not, but Kostadis’ “file virtualization doesn’t work” statements certainly qualify as a recent example.
    Several years ago, I posted a “Prove It Yourself” kit for NetApp filers that showed them running slower and slower as they filled up with writes. It got downloaded thousands of times. Don’t know if they ever fixed that particular problem. Truth be told, I thought it was a good example, since anyone could run the test for themselves.
    And there’s a lot of “independent tests” out there in the industry showing that the vendor who paid for the test came out w-a-a-y-y-y on top. Shocking!
    The best time for storage FUD is right after a strong competitor launches a major new product. It takes a while for everyone else to figure out what the vendor is actually doing, and the crazy statements made during the confusion can be absolutely hilarious. EMC’s Centera, Atmos and, more recently, the V-Max come to mind.
    Can we add “outrageous vendor claims” to the extended FUD category? Maybe “guaranteed” 50% storage savings? Or perhaps virtualization devices that are supposed to “pay for themselves” with storage savings?
    Takes just as much time and effort to sort people out on that kind of nonsense as well.
    — Chuck

  7. John F. says:

    @Chuck
    You should be particularly proud of that piece of FUD. I believe it predates my time at NetApp, and it seems to resurrect itself every 6 months or so. You are indeed the FudMeister. Try as I might, however, I just cannot find the dip in the 48-hour SPC results. Why don’t you take a look (it’s on page 23) and see if you can point it out? http://www.storageperformance.org/results/a00062_NetApp_FAS3040-48hr-sustain_full-disclosure.pdf
    Speaking of benchmarks, do you have anything similar for either the Clariion or the Symmetrix? Perhaps you could post a link showing what happens as you use up all the hypers on a Symmetrix? Before my current day job, I used to work with a Symmetrix or two. If you need some data points or a perfmon screenshot of the latency counters, I’d be happy to post one for you.
    John

  8. Josh A says:

    John F is spot on with the XIV. I came into an organization where they had recently installed and started using a fully populated unit. The technology is intriguing, and I personally feel that the risk of a dual drive failure losing data is no greater than with a traditional RAID 5. We’ve all lost data on a blown RAID at least once, so I would say that the claim is not FUD.
    The XIV carries just as high a risk of failure as just about any storage device out there; however, if you do have a dual drive failure (DDF) you have no way of knowing what data you lost until you try to access it. This, along with the lack of support documentation, is the XIV’s Achilles’ heel.
    IBM is supposedly addressing this by allowing you to replicate data across 2, 3 or 4 locations on the array in an upcoming code upgrade (10.1 or 10.2, I believe), but the FUDsters will then say, “If you lose 3 or 4 drives at one time you’re still screwed.” Well, what is the mathematical likelihood that you would lose 3-4 drives that all hold the only copies of the same piece of data? Probably less likely than having a disgruntled former employee pull drives or pour water into the unit. My advice, then: don’t pull out the drives and make sure your backups are up to date, regardless of your environment.
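    For anyone who wants to put rough numbers on that question, here is a back-of-envelope sketch. The figures (180 drives, roughly a million 1 MB chunks per drive, copies spread uniformly at random) are assumptions about a generic XIV-like layout rather than IBM’s published design, so treat the output as an illustration of the arithmetic, not a verdict on the array.

        # Back-of-envelope: given that a set of drives fails at the same moment,
        # how likely is it that some chunk had all of its copies on exactly those
        # drives?  Every figure below is an assumption, not an IBM number.
        from math import comb

        def chunk_loss_odds(n_drives=180, failed=2, copies=2, chunks_per_drive=1_000_000):
            # chance that one particular chunk has every copy on the failed drives
            p_single = comb(failed, copies) / comb(n_drives, copies)
            # total logical chunks on the array (each one stored `copies` times)
            total_chunks = n_drives * chunks_per_drive // copies
            # chance that at least one chunk somewhere loses every copy
            p_any = 1 - (1 - p_single) ** total_chunks
            return p_single, p_any

        for failed, copies in [(2, 2), (3, 3), (4, 4)]:
            p_single, p_any = chunk_loss_odds(failed=failed, copies=copies)
            print(f"{failed} simultaneous failures, {copies} copies per chunk: "
                  f"one given chunk 1-in-{round(1 / p_single):,}, any chunk {p_any:.2%}")

    The first number is the answer for one particular piece of data (tiny odds); the second shows what happens once you multiply across every chunk the array holds. Change the assumed drive count, chunk size or copy count and both figures move accordingly.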

  9. John F. says:

    @Josh
    I think you meant Barry?
