« "Independent Analysis" -- Is There Hope? | Main | The Other Side Of Bricks »

April 14, 2008



I don't think the iSCSI crowd necessarily feels threatened by FCoE; if anything, I suspect it's the other way around.

I've read your posts and although I agree that FCoE will have some momentum, I've yet to see you explain why you think that FCoE will be inherently better than 10 Gb iSCSI, or better than FC8 for that matter.

Pigeon-holing iSCSI into small SANs at SMBs seems to run contrary to IDC/Gartner predictions of the size of this market over the next few years. The argument may hold true with 1Gb iSCSI, but certainly not with 10Gb iSCSI.

I'll look forward to this article :)

Chuck Hollis

Hi mgbrit

I can't speak to how the FC-only vendor crowd feels about iSCSI, since the vendor I work for has FC, iSCSI, NAS, CAS et al. I don't have a lot of empathy for anyone who believes a single protocol will solve all storage challenges.

Regarding market share growth for iSCSI, just about every analyst is pointing at SMB/small commercial growth for iSCSI. Not a single analyst that I'm aware of is predicting iSCSI displacing FC in data centers.

But I think you know this already, don't you?

As far as "inherently better", that's a useless statement without context, and I think you know me better than that.

To be specific:

FC8 will appeal to FC4 customers who want to stay with what they've got, and just ride the next wave of technology improvements. Just as exciting, for example, as the transition from FC2 to FC4, and -- of course -- FC1 to FC2. Ho hum.

10Gb iSCSI will appeal to people who've started out with iSCSI, and want the next performance bump, if and when the costs come down to be attractive. No surprise here, right?

I think what I've failed to explain is that FCoE will be attractive at scale. Save a couple hundred bucks per port, multiply that by several thousand ports, and larger implementations will pay attention.

Take any cost saving number, multiply it by a very big number, and you'll have a very big number.
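To make the scale argument concrete, here's a quick back-of-envelope sketch. The $200/port saving and the port counts are assumptions for illustration only, not quoted prices:

```python
# Illustrative only: a modest per-port saving multiplied across a large
# FC install base. The $200 figure and port counts are assumed, not quoted.

def total_saving(saving_per_port: int, ports: int) -> int:
    """Total saving from a per-port cost delta across an install base."""
    return saving_per_port * ports

for ports in (1_000, 5_000, 10_000):
    print(f"{ports:>6} ports x $200/port = ${total_saving(200, ports):,}")
```

At the 5,000-to-10,000-port counts mentioned below, even a small per-port delta lands in the millions, which is why large shops pay attention.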

As an example, I routinely meet customers who have FC port counts at 5,000, 10,000 and even more. They'll never do any IP protocol for storage; they want a lossless storage protocol, no retries, a behavioral and management model that they're familiar with, and so on.

Go ahead, just try to convince them that iSCSI can solve all their problems.

I dare you.

For these big data centers, they're watching FCoE, and they're waiting to see when it makes sense for them. And, from their perspective, it's entirely logical.

Now, if you believe that the future will be larger, consolidated data centers with demanding service levels, etc., where will iSCSI play?

Fiber Guy

Saying that FCoE will approach the cost of Ethernet is misleading. There is a large difference between the cost of 1GE and 10GE. If FCoE is mainly approaching the cost of 10GE, then FCoE is much more expensive than FC8. This has always been a problem for iSCSI and will be a problem for FCoE. Pundits always say: just wait until you get iSCSI at 10Gig. We keep waiting, and then when it comes, it is impractical from a cost perspective.

Chuck Hollis

Hi Skip (from Brocade) -- thanks for commenting.

Yes, we're talking 10Gb ethernet here, not 1Gb.

I don't think it's misleading at all. The goal is not to have dedicated, low-volume parts for FCoE (like we have for FC2, FC4 and FC8), but to have high-volume parts that are essentially standard 10Gb ethernet used in a specific role (e.g. FCoE).

Of course, if it doesn't hit the cost targets, or doesn't become pervasive (e.g. integrated with the motherboard, etc.) it'll never take off.

Since you work for Brocade, you've probably met the same data center people I've met -- ones that just won't consider iSCSI at any speed, period / end-of-story / don't-bring-it-up-again, please.

I would guess that some of them roll forward to FC8, just like they did to FC4 and FC2. But, this time, I think many of them will take a hard look at FCoE.

Ryan Malayter

The costs of 10GbE are already well below those of 8Gb FC.

Cisco Catalyst 3560E-12D (12 x 10GbE) = $1150/port. Intel Dual-port 10GbE NIC = $667.

Brocade 5100 (24 x 8Gb FC) = $935/port. QLogic dual-port 8Gb FC HBA = $2,126.99.

It's the need for a multi-thousand-dollar HBA in each server that kills the economics of Fibre Channel. 10GbE NICs are only getting cheaper, and will soon be "free" on the motherboard.
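Plugging Ryan's numbers into a quick per-port sketch (the dual-port adapter prices are halved to get per-port figures; these are 2008 list prices as quoted in the comment, not current numbers):

```python
# Per-port cost comparison from the prices quoted above. Dual-port adapter
# prices are divided by two to get a per-port figure.

switch_10gbe = 1150.00        # Cisco Catalyst 3560E-12D, per port
nic_10gbe    = 667.00 / 2     # Intel dual-port 10GbE NIC, per port

switch_fc8 = 935.00           # Brocade 5100, per port
hba_fc8    = 2126.99 / 2      # QLogic dual-port 8Gb FC HBA, per port

total_10gbe = switch_10gbe + nic_10gbe
total_fc8   = switch_fc8 + hba_fc8

print(f"10GbE: ${total_10gbe:,.2f}/port vs FC8: ${total_fc8:,.2f}/port")
```

By these numbers the FC8 side comes out roughly $500/port more, with nearly all of the gap on the HBA side, which is Ryan's point.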

matt sanders

Chuck - Please excuse the intrusion and question after reading your blog...

At one time I recall Cisco (and others) rejecting Pause as unnecessary in Cisco switches with wire-speed switching and non-blocking shared memory,

and saying that a correctly designed network would not require it.

What has changed with the 10Gig switch technology that now requires it?

Can you help me with this, please?

I was hoping for an easy answer, if there was one.


matt sanders

Chuck Hollis

No intrusion whatsoever -- always welcome a bit of a chat with others in our world.

Frankly speaking, I vaguely remember something about this as well early on. I don't remember where it ended up, though. All parties seem to be satisfied today, the initial products that support the standard are getting through internal quals pretty well, and most people close to it are pretty confident in this stuff.

If it's important to you, let me know, and I can have one of the technologists get back to you.

Sudhir Brahma

Most of the discussions about data centre convergence are probably based on the assumption that customers will easily migrate their cabling to support 10GigE. That is an expensive operation. A Cat 6 cable may be designed for 1GigE, but if the run is sufficiently short, the capacitive/inductive impedances may allow throughputs that are sub-10GigE but much larger than, say, 4Gig FC. Unfortunately, no Ethernet standard exists between 1GigE and 10GigE. Maybe there is scope for a technology to exploit this sub-10GigE throughput, basically better utilizing existing cabling infrastructure to push more data through. There are several possibilities on the implementation, and of course both ends of the Layer 2 network would have to have intelligent devices that handshake and settle at some reliable sub-10GigE rate.
I would like to know if such an effort has happened, and the outcome thereof.

Scott L

Being in AZ, I had the luxury of stopping in to see Cisco set up their FCoE lab at SNW. It was a mess, I won't lie. WWN-to-MAC mapping (and back), LUN masking -- all the easy tasks in a native FCP environment were a nightmare in the emulation world. I was also skeptical of the purpose until I saw the presentation. That, and the costs of the adapters, pretty much sum up the negatives from my PoV.

The good? You can set up the network (VLANs or dedicated switches) and forget about it. In large organizations, your storage guys have to put in tickets for the network team to provision and configure before they can do their jobs with iSCSI. I wouldn't say subnetting and routing protocols should be MANDATORY in a storage admin's job description. Sure, it'd be nice. With FCoE there's WWN provisioning if you're using a software initiator, but they already know how that all works. You can't ARP-poison or broadcast-storm an FCoE network if the network is set up and hardened properly, which could happen on an iSCSI version. There are a lot of advantages the "it's not routable" bigots like to throw out there. It's fast and 100% offloadable. Most of the arguments against FCoE are easily shrugged off, which isn't necessarily true the other way around against iSCSI. Imagine, from a security standpoint, how simple an audit is to pass in a pure FCP environment vs. pure iSCSI. FCoE can be set up basically the same way.

The other discussion attached to FCoE vs. iSCSI vs. FCP is cost per port, standardization, and requirements. If you need DR replication, sure, FCP won't work and FCoE won't either, so it seems like a no-brainer. Tier 1 apps need tier 1 availability; IMO, FCP all the way, and budget typically doesn't matter in these scenarios -- $2k for connectivity to business data is nothing.

The one big advantage I can see in FCoE vs. iSCSI is port aggregation/trunking, on the iSCSI side. I don't believe port trunking is a capability of FCoE (assuming 1GbE). If you're blessed with something like Symantec Storage Foundation, you can do MPIO and load balancing in it, meaning you can run however many FCoE ports with less protocol overhead.

One unknown which few discuss is the overhead on the SAN side going from iSCSI to FCP; I assume it's negligible. It used to be a killer on the client side until the offloaders came along, but some (most) still cause interrupts on the bus, so seek out complete offloaders.

Enough rambling EOM.


Chuck Hollis

  • Chuck Hollis
    SVP, Oracle Converged Infrastructure Systems

    Chuck now works for Oracle, and is now deeply embroiled in IT infrastructure.

    Previously, he was with VMware for 2 years, and EMC for 18 years before that, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Vero Beach, FL with his wife and four dogs when he's not traveling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not ever buy him a drink when there is a piano nearby.

    Note: these are my personal views, and aren't reviewed or approved by my employer.
