
I spy FUD

When I see things which are blatantly FUD, I get really annoyed! And every now and then, I am going to call it! Hu has posted on SVC; it's always a bad start when you dedicate an entire article to a competitor's product, and it can never end well.

And there is a big, whopping piece of FUD in the article, and I quote Hu here:

"I asked why they were converting from Brocade to CISCO, and their
answer was that they were planning ahead for FCoE. I pointed out that
the SVC may work well in a FC SAN, but may have to do a lot more work
to guarantee delivery of packets in a lossy network like Ethernet. The
SVC will have to be reworked in order to work in a non FC environment,
where packets may be dropped when the network gets congested.   Since
the USP V does its virtualization in the storage controller, we would
be able to convert the front end ports to FCoE ports and not do a major
revision of the storage virtualization functions."

Yes, you can, if you are insane, run FCoE over non-DCE; you can run it over 1 GbE if you insist (I have, for experimentation purposes, at home), but the whole point of running FCoE is that you run it over DCE, which is lossless; it implements flow control!
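
To see what that flow control actually buys you, here's a toy sketch in Python (queue depth, arrival and drain rates are all invented, and this is nothing like a real switch): under identical congestion, plain Ethernet tail-drops frames, while a DCE-style pause simply holds the sender off, so nothing ever needs recovering.

    # Toy model: one congested egress port, classic Ethernet tail-drop
    # versus DCE-style flow control (the sender pauses instead of the
    # switch dropping frames). All numbers are invented for illustration.
    from collections import deque

    QUEUE_DEPTH = 8         # frames the egress buffer can hold
    ARRIVALS_PER_TICK = 3   # offered load (the congestion)
    DRAINS_PER_TICK = 2     # egress line rate (the bottleneck)
    TICKS = 20

    def run(lossless):
        queue, dropped, paused = deque(), 0, 0
        for _ in range(TICKS):
            for _ in range(ARRIVALS_PER_TICK):
                if len(queue) < QUEUE_DEPTH:
                    queue.append("frame")
                elif lossless:
                    paused += 1   # buffer full: sender is paused, frame waits
                else:
                    dropped += 1  # buffer full: frame is discarded
            for _ in range(min(DRAINS_PER_TICK, len(queue))):
                queue.popleft()
        return dropped, paused

    for name, lossless in (("classic Ethernet", False), ("DCE", True)):
        dropped, paused = run(lossless)
        print(f"{name}: {dropped} dropped, {paused} held at sender")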

FCoE is Fibre Channel! IBM are not going to support FCoE over anything other than DCE! FCoE is Fibre Channel! Perhaps Hu has confused iSCSI with FCoE?
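
And to make that point concrete: an FCoE frame is nothing more than a complete, untouched FC frame carried inside an Ethernet frame with Ethertype 0x8906. Here's a deliberately simplified sketch (the real FC-BB-5 encapsulation header has version/reserved fields and SOF/EOF delimiters which I've collapsed into a placeholder, so treat this as illustration, not a wire-accurate encoder):

    import struct

    FCOE_ETHERTYPE = 0x8906  # the Ethertype assigned to FCoE

    def encapsulate(fc_frame, dst_mac, src_mac):
        """Wrap a complete FC frame in an Ethernet frame (simplified).

        The FC frame itself is untouched, so the Fibre Channel layers
        above (exchanges, sequences, zoning) neither know nor care that
        the hop underneath is DCE rather than native FC.
        """
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        # FC-BB-5 puts version/reserved bits and an SOF byte here, with
        # an EOF byte after the frame; collapsed to a placeholder.
        fcoe_header = bytes(14)
        return eth_header + fcoe_header + fc_frame

    # Dummy FC frame riding inside Ethernet; 0E:FC:00 is the default
    # FC-MAP prefix used in FCoE fabric-provided MAC addresses.
    wire = encapsulate(bytes(36), bytes.fromhex("0efc00000001"),
                       bytes.fromhex("001b2100aabb"))
    print(len(wire), "bytes on the wire")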

So is Hu going to go back to that customer and apologise? What he should have said is: why are you converting to Cisco to get FCoE? Brocade has a roadmap to support FCoE; you may have good reasons to go to Cisco, but FCoE at this point probably should not be the driver!


24 Comments

  1. Barry Whyte says:

    Thanks Martin, it means more when an indie points out FUD. I’d also question several points Hu makes, and maybe the customer should be in contact with IBM.
    SVC supports all the major switch vendors (past as well as present) and we have many mixed vendor SANs in our test labs and don’t see issues.
    However, how does this actually lead to the conclusion that SVC virtualizes the SAN? SVC manages and virtualizes DISKS… not VSANs, Zones, etc.
    As you say… FUD… you should see the “How to Sell against SVC” presentation they have – it's full of it – and so it makes it really easy to go in and sell against USP, because the customer soon realises that we are talking fact and they are trying to spread fiction.

  2. inch says:

    what the deuce?
    I mean really…
    Someone might like to read the spec for FCoE…

  3. Martin G says:

    Inch, it’s worrying; are we going to see a load of FCoE FUD from HDS?

  4. Etherealmind says:

    As in ALL FUD, there is an element of truth. Cisco will support pre-standard Data Centre Bridging (DCB). Let me explain.
    IBM uses the term Converged Enhanced Ethernet (CEE) for their pre-standard DCB Ethernet, and Cisco uses the term Data Centre Ethernet (DCE). Note that CEE and DCE are trademarked terms by their respective companies. DCB is the IEEE standards term for the unfinished standards.
    Cisco will have an FCoE solution using pre-standard DCB, with all the QoS and features needed in place, long before Brocade/Foundry will. Foundry is a little bit behind here and have publicly stated that they will not have DCB support until the standards are completed.
    The standards were supposed (according to Omar Sultan from Cisco) to be ready this month; however, they are probably another year away.
    So, if you want FCoE soon, you will have to choose Cisco and a pre-standard implementation. On the other hand, waiting a year or so… won't make any difference to most people.
    Which is all a storm in a teacup, since the rise of iSCSI will probably make FC storage unviable anyway. DCB enables iSCSI very nicely and is a lot easier to use than FCoE.

  5. Joshua Sargent says:

    Martin – right on. Here is the response I posted on Hu's blog, but it's currently… awaiting moderation. =)
    —-
    Wow, Hu…I’m not sure where to begin with this post! =)
    The SVC most certainly does not sit in the middle of “two SANs.” It also definitely does not require “another SAN” just for the ports on the SVC cluster. It requires two fabrics for redundancy, just like every other storage environment, including Hitachi’s.
    The SVC does not “virtualize the SAN.” It virtualizes volumes, pure and simple.
    SAN switch migration with the SVC is not a problem if done correctly, and don’t take that to mean that it’s some complex operation, either! I’ve done several McData -> Brocade and McData -> Cisco, and yes, Brocade -> Cisco migrations for SVC customers without much planning and always without issue.
    And who is planning to implement FCoE on a lossy Ethernet network?! This customer you reference surely knows what you have missed – that FCoE gets deployed on lossless CEE, not standard Ethernet. Your suggestion that the SVC will have to be “reworked” for a non-FC environment is pure FUD.
    Sorry Hu – this post is wrong in so many ways, you should honestly consider retracting the post altogether.
    Josh

  6. Barry Whyte says:

    Martin, here’s some more – since you allow comments on your blog – and Hu is likely to filter out both Joshua’s and mine…

    Here we go again…
    So a few points :
    a) If the customer is aware of us and reading, please contact me, as there should be no issue in converting between switch vendors – we support mixed SAN environments and I’m sure we can help. As Martin G points out in his “I spy FUD” post, Brocade have a great roadmap for FCoE and there is no reason to convert for this reason alone.
    b) I guess you don’t quite understand FCoE (maybe this was another ghost-written post – or was this one your own?). As Martin points out, FCoE runs over DCE, not GigE fabrics. This is fibre channel – lossless. Not iSCSI. You suggest that SVC can’t support FCoE easily in the future; I’d suggest that we can support it much more easily than the USP. Simply swap the FC PCIe card in the node for an FCoE HBA and et voilà. OK, so there are necessary software updates that will go along with that. However, if we wanted, we could offer this as a field upgrade to existing hardware… would you do the same with USP?
    c) I’m also struggling to understand your conclusion. How does this statement mean you can conclude (incorrectly again) that SVC virtualizes the SAN? Do we manage VSANs? Do we manage Zones? No. We virtualize DISK. It’s not all just about moving LUNs around; maybe you should come for an SVC customer pitch and you will see it’s as capable as USP, and in some cases more so.

  7. Martin G says:

    Barry, this is probably not the place, but as I haven’t had a briefing on SVC futures and so am not under NDA, I am going to speculate that you have actually got FCoE working on SVC in the labs?
    I would also speculate that with an iSCSI offload engine, you could also do iSCSI if you felt there was the market.
    I guess this is one of the beauties of building your storage devices out of modular and commodity hardware.

  8. For a so-called “Chief Technical Officer”, Hu is about as out of touch with reality as one could imagine.
    Earlier this week, he demonstrated his abject ignorance of the Symmetrix architecture (both V-Max AND earlier implementations); now he’s trashing FCoE and SVC with abject ignorance again. And he decided to censor my feedback (as per HDS policy); I expect he won’t post any feedback from SVC-supporters either.
    As BarryW says, the stoopid stuff that Hu and HDS arms their sales force with really makes for fun customer presentations. We don’t even have to try hard to discredit Hitachi any more – customers actually ask us to help them understand the facts on the way in the door…they know Hitachi is misleading them from the outset.
    So I gotta say – stop pointing out Hu’s ineptitude – he’s making it easier to compete successfully against the archaic monolithic wanna-bee.

  9. Steven Ruby says:

    “…So I gotta say – stop pointing out Hu’s ineptitude – he’s making it easier to compete successfully against the archaic monolithic wanna-bee…”
    While I agree that FUD and/or mis-representation on a Blog by a CTO is probably not the best way to spend an afternoon, your “wanna-bee” comment is blatantly ignorant. The archaic monolith has proven itself to be far from a “wanna-bee” as it pertains to enterprise storage and storage virtualization.

  10. Charlie Dellacona says:

    Isn’t referring to DCB as “lossless” a kind of left-handed FUD as well?
    All networking media have non-zero error rates. DCB is not different. There will be transmission errors, which will result in lost or damaged frames. To deal with this the upper networking layers will have to re-transmit. A lost frame from congestion is not really different than one lost from error. None of the media layers of FCoE, FC, and iSCSI are lossless.
    The upper layers of FCoE, FC, and iSCSI all deal with re-transmits, so none are lossy.
    All this about “lossless-ness” is FUD as well.

  11. Joshua Sargent says:

    Charlie – I disagree. The industry has seemingly adopted the term “lossless” to mean “lossless by design” which does not necessarily imply “100% lossless in fact.”
    Understanding the technology as well as you obviously do, it is disingenuous to ignore the key distinction between Ethernet and DCB or FC. Ethernet is lossy by design. DCB (with Priority Flow Control and PAUSE) and FC are lossless by design.
    Of course transmission errors will cause re-transmission in any case, but that’s not the point. You cannot simply ignore the fact that congested Ethernet networks incur losses that congested FC and DCB networks do not incur.
    Describing FC and DCB networks as “lossless” is definitely not FUD…in my opinion.

  12. Joshua Sargent says:

    My follow-up comment just got rejected over on Hu’s blog, so posting it here. I was told I was posting “too often – slow down.” Well, if 2 comments in 3+ days is too often, he must not expect much participation! Here it is:

    Hu – a few points of clarification might help…
    A. Zones are not fabrics. Zones are not SANs. The fact that the SVC requires additional zones certainly does not make it a fabric virtualization product.
    B. I do not understand why the SVC would need to handle lossless FCoCEE (or FCoDCE, or FCoDCB) traffic any differently than it currently handles lossless FC traffic. Do you?
    C. “SVC” stands for SAN Volume Controller, not “SAN Virtualization Controller.” Very appropriately named in my opinion, as it controls VOLUMES which reside on the SAN. It takes some physical VOLUMES, abstracts them, and creates virtual VOLUMES. It does not abstract physical fabrics to create virtual fabrics. (e.g. VSANs on Cisco MDS switches)
    As to the reasoning for creating vDisks in the first place, the benefits are numerous! Here are just a few obvious ones:
    1. Wide striping across multiple mDisks.
    2. vDisk Mirroring (one vDisk with its contents mirrored on multiple mDisks)
    3. Single point of management (all storage provisioning happens at the SVC, so users don’t need to know how to perform daily provisioning tasks for multiple vendors’ systems.)
    Of course, if you don’t want to abstract the volume, you always have the option of using an “Image Mode” vDisk. Customers can also migrate their volumes from Image Mode to Managed Mode (abstracted) or vice versa… all non-disruptively.
    So, Hu…please do attend an SVC briefing in the near future. How can you compete if you don’t understand your competition?!
    Chuck L – Check to make sure the features you enjoy on the SVC are available with the USPV…including the ability to easily and cost-effectively remove/replace the virtualization device when the time comes. This operation is trivial for the SVC. You’ll also want to verify the performance of virtualized storage behind the USPV, as HDS hasn’t published any SPC benchmarks using virtualized storage…at least that I know of. SVC’s benchmarks are there for everyone to see.
    Christophe Bertrand – why does it matter where Sebastian works? His post did not attempt to give readers the impression that he was a non-biased third party. If he had, then I would agree with you…but in this case he clearly did not mis-represent himself. For the record though, (since you’re checking) I do not work for IBM.

  13. Joshua Sargent says:

    OK… in fairness to Hu, I just tried posting a third time and it appears my comment is now awaiting moderation. I suppose it could have been a problem with his blog software…

  14. Charlie Dellacona says:

    @Joshua Sargent: The industry never says “lossless by design” or anything else like it. They say lossless period. The usage goes all the way back to the T11 people who use it as well. The fact stands, there is nothing lossless about it.
    Loss is used to scare people into thinking data is somehow lost. That is FUD! Pointing out that upper protocol layers in all cases have to (and do) handle loss with re-transmits exposes it as so.
    You should be more careful with words like disingenuous, I say exactly what I mean, unlike “the industry”.
    That DCB has an improved flow control mechanism that reduces packet drops under congestion means the upper layers will not have to re-transmit *as much*. No data will be lost, with or without it. Performance may improve from reduced retransmits, but oddly no one seems to make that claim.

  15. Martin G says:

    Charlie, the point that I was originally making was really that FCoE is no different in characteristics to FC; they work in the same way.
    For Hu to suggest that IBM would have problems working in a lossy environment like Ethernet was pure FUD. SVC already copes with FC, which, as you say, will drop packets in a congested network; hence it should have no issues with FCoE.

  16. Joeri VS says:

    Hi,
    I have implemented several SVC configurations and can say it is indeed FUD they are talking. Instead of attacking other vendors, they should use their own advantages to help a customer, not just try to sell a product by attacking the competition (I am not sure I can use the word competitor…).
    One little remark on Joshua’s post: the explanation of vDisk mirroring is not really the right one, in my opinion. What you describe can already be done by striping the virtual disk across different RAID groups (one big mDisk on every RAID group) for performance reasons (a lot of disk spindles gives a lot of performance). vDisk mirroring gives us the possibility to have a logical RAID 1 over different RAIDx (R1, R5, R10, …) groups with automatic failover (there is a delay, which can be perfectly handled by the applications). It is even possible to have a logical RAID 1 across different storage subsystems (just keep in mind that you will always work at the speed of the slowest device).
    Since there are now scenarios which support split-cluster configurations without the need for an RPQ, we could create automatic failover when a disaster strikes at one of the two locations. But I still like to call it high availability and not use it over long distances; this is what Metro & Global Mirror are for (which is disaster recovery). With some good scripts it is even possible to automate the disaster recovery procedures from the hosts (but I would still like to take the decision myself!)
    Correct me if I am wrong; I do not want to point my finger at Joshua, since his is a very nice review of some basic SVC features!
    kind regards

  17. Joeri VS says:

    @ Joshua
    my apologies for saying that the technical explanation of vDisk mirroring was wrong; I forgot to read the “mirrored” word in your sentence (2. vDisk Mirroring (one vDisk with its contents mirrored on multiple mDisks))
    kind regards

  18. Joshua Sargent says:

    Charlie – Sorry man… dance around the distinction all you want, there *IS* something lossless about it. It is definitely not FUD to distinguish between a network designed to discard transmissions under congestion and networks designed NOT to. In other words, DCB with PFC is designed not to discard transmissions under any circumstances, completely unlike standard Ethernet. The fact that transmission errors occur is COMPLETELY beside the point.
    Who says “lossless” refers to “data” anyway?! The term lossless in this context very clearly refers to “transmissions.” By design, the transmissions are never lost…they may be found to have an error and need to be re-transmitted, but they aren’t lost.
    I have never, not even ONCE, heard of anyone trying to “scare” anyone into “thinking data is somehow lost” with standard Ethernet. And we certainly weren’t doing so in this thread! Hu Yoshida made a very clear statement that he thought the SVC would “have to be reworked … where packets may get dropped when the network gets congested.” Please explain how it is FUD to say that this phenomenon doesn’t occur with DCB???? Answer: It’s not FUD. It’s fact. Period.
    There are few things more annoying than someone trying to be provocative just for the sake of being provocative.

  19. Joshua Sargent says:

    Martin – FC will not drop packets during network congestion. Neither will FCoCEE. Charlie is referring to retransmissions that occur due to errors…not congestion.

  20. Martin G says:

    Yes, I know; engaging brain this morning has been a slow process!!

  21. Charlie Dellacona says:

    Joshua,
    Okay then, point by point.
    Frame discard due to congestion drop is loss. Frame discard because of transmission error is also loss. In both cases the upper level has to re-transmit. “Elimination of loss due to congestion” is not the same as “lossless”. It is not beside the point to say so. The former merely results in a possible performance improvement, the latter is physically unachievable to my knowledge.
    As for your second point, your distinction is wrong. If a frame is garbled beyond correction then it is lost, same as a congestion drop. Most switches won’t forward it if the FCS is wrong. The intended receiving station sees nothing in either case. Arguing about transmission v. data is moot.
    As for your third point, I can’t speak for Hu, and I work for a competitor, but strictly from your quote above, DCB does not solve the packet drop problem. Again, there are still packet drops from transmission error, so the SVC (and, to be fair, everything else) has to be engineered to handle loss, and at the upper levels it looks the same – a missing packet. Of course it already is so engineered, but DCB does not alter the requirements, as eliminating congestion loss still leaves other kinds of loss.
    I do think Hu misspoke, as all systems with FC interfaces should be easily engineered to use FCoE; it should be just a media-layer swap.
    This thread is about FUD. I think calling something lossless when it is only partially so, and thus really not so, and implying that other choices are lossy, is left-handed FUD, as I originally said. I am not trying to be provocative; I am pointing out that there is a problem with calling DCB lossless. If you find that annoying, so be it.

  22. Joshua Sargent says:

    Charlie,
    I agree (and have the entire time) that frame discard due to transmission error is loss. However, it has become clear that you are only interested in arguing the semantics of the common usage of the term “lossless” used to describe the features of DCB with PFC. As such, this discussion is going nowhere.
    The fact that the folks on the IEEE 802.1Qbb committee, ANSI T11, the FCIA, SNIA, (etc., etc…) are all in agreement on my side of the argument is probably going to be enough to satisfy most reasonable people that the use of the term is definitely not FUD. If you wish to disagree with those guys, so be it.
    I stand by my correction of Hu, as he very specifically mentioned loss of packets due to network congestion…not transmission errors (a sign that *he* might not be the most unreasonable participant in the discussion). 802.1Qbb most certainly does address that problem.

  23. Jeff Darcy says:

    Calling DCE/CEE/DCB/WTF “lossless” might not be FUD, but it’s definitely sloppy and misleading for all the reasons Charlie mentions. Furthermore, citing other groups who’ve chosen to be sloppy and misleading is mere appeal to authority and would be worthless even if those groups had no vested interest in positioning these technologies vs. iSCSI/NAS/etc. If we’re going to split hairs over virtualizing a SAN vs. virtualizing volumes, it’s entirely appropriate to point out the distinction between a truly lossless medium (i.e. with internal retransmission) vs. one that merely loses less often than it did before (which was unforgivably often).
