So, if you're a regular reader of this blog, you know I've had my eye on FCoE for a while.
For me, this has the potential of being a winner for everyone who uses FC extensively today -- and you know who you are.
Even though I didn't go to SNW (I'm not a big fan of industry shows), I did enjoy watching the ritual and how different people reacted.
What's This All About?
I've written about FCoE before (April 2007, October 2007 and February 2008).
Simply put, the majority of the storage networking marketplace uses fibre channel. It's not as cost-effective as ethernet. FCoE potentially offers fibre channel-style networking with the economics of ethernet.
If you're thinking TCP/IP (or UDP, etc.) think again -- FCoE is a "bare wire" protocol that doesn't have routable packets, but does have "lossless" characteristics that are oh-so-important for many storage use cases.
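To make the "bare wire" point concrete, here's a toy sketch (in Python, purely illustrative) of what FCoE encapsulation looks like: an unmodified FC frame rides as the payload of an ordinary Ethernet frame using the FCoE EtherType (0x8906), with no IP or TCP layer in between. The field layout and MAC addresses below are simplified placeholders, not a spec-complete encoder.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def ethernet_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Wrap a payload in a minimal Ethernet II header (no VLAN tag, no FCS)."""
    return dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + payload

def fc_frame(s_id: int, d_id: int, data: bytes) -> bytes:
    """Very rough stand-in for a 24-byte FC frame header plus payload.
    Real frames also carry R_CTL, F_CTL, exchange/sequence IDs, CRC, etc."""
    d_id_bytes = struct.pack("!I", d_id)[1:]   # FC addresses are 3 bytes
    s_id_bytes = struct.pack("!I", s_id)[1:]
    header = (d_id_bytes + b"\x00" + s_id_bytes).ljust(24, b"\x00")
    return header + data

# No IP, no TCP -- the FC frame sits directly on Ethernet. That's why FCoE
# isn't routable, and why "lossless" Ethernet underneath matters so much.
frame = ethernet_frame(b"\x0e\xfc\x00\x00\x00\x01", b"\x0e\xfc\x00\x00\x00\x02",
                       fc_frame(0x010203, 0x040506, b"SCSI command payload"))
print(len(frame), "bytes on the wire")
```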
The parade of vendors announcing support for FCoE was pretty impressive -- not only the HBA vendors, but new switch vendors like Nuova (now Cisco!), and -- of course -- storage vendors such as EMC and NetApp.
And, of course, there were a few vendors who felt a bit put out by all of this talk of a potential alternative to iSCSI -- not surprising, given their position.
What You Need To Know
FCoE is all about the economics of ethernet. If it doesn't deliver the economic bang-for-the-buck, it just won't be interesting, will it?
And all of that is built on a few default assumptions:
- standard 10Gb ethernet silicon can be used for FCoE (looks like it so far)
- costs drop dramatically as 10Gb volume and demand builds (in process)
- this same silicon shows up on motherboards and low-cost NICs (no reason why not)
- the FCoE standard allows vendors to spend less on qualification, interoperability and support in heterogeneous environments than we saw with FC.
I'd like to think we learned a few things the last time around about defining a standard that minimizes interoperability "expense", but we won't know until we see the different implementations and how they work in the real world.
If all these things happen (or look like they're going to happen), it's pretty clear to me: FCoE becomes the logical successor to the FC world we live in today.
The iSCSI Crowd Feels Threatened?
There are more than a few companies and individuals who've built their offerings around iSCSI. Now, for the umpteenth time, there is absolutely nothing wrong with iSCSI -- where it fits.
But there's a reason other storage protocols (e.g. FC, NAS, CAS) exist: they solve certain storage problems better than iSCSI can.
Some vendors can live comfortably in this "best tool for the job at hand" world. Others, well, they're having a really tough time -- the angry, outraged FUD has started.
Even some of the traditional FC storage crowd (e.g. HDS, IBM, HP, Engenio) are strangely silent on whether they're going to be supporting FCoE or not.
Why? It's a big, honkin' expense to support an entirely new storage protocol, especially one that's designed to take big costs out of the storage networking environment.
My Prediction Here?
We'll see enough viable FCoE offerings towards the end of 2008 for people to get started if they choose. The technology will be robust enough for many to evaluate, but probably something most people won't want to put into production in any serious way.
As we come through 2009, we'll see broader ecosystem support from more vendors. I think the costs associated with FCoE will be somewhere between FC and standard ethernet, simply because the volumes won't have increased enough yet -- but they'll be moving in the right direction.
And, finally, as we get into 2010, we'll see prices decline to something approaching ethernet costs, and another wave of ecosystem effects will start to kick in. And, if my crystal ball is working, we'll start to see widespread adoption across the board, especially for new enterprise SAN builds.
Will iSCSI go away? No, it shouldn't. It's still probably the simplest block-oriented protocol out there, and I bet it's still going to be attractive for many smaller SAN implementations. You just won't see much of it at the core of larger enterprises -- just as you don't today.
A Stepping Stone To Unified Fabrics
I think the real action is in having storage protocols (e.g. FCoE) co-exist nicely with other data center protocols (e.g. TCP/IP, RDMA, etc.) on the same wire, using the same silicon. Being able to offer different styles of network behavior -- on the same wire, at the same time, without them interfering with each other -- is where all of this is going.
And, if this sort of "network unification" takes place, the arguments for large data centers to create a single fabric are going to be pretty compelling ...
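To illustrate the "same wire, different behaviors" idea, here's a small Python sketch of how a converged 10Gb link might be carved into priority classes -- lossless treatment for the FCoE class, ordinary lossy behavior for everything else, and bandwidth guarantees under contention. The class names and numbers are made up for illustration; they're not taken from any particular switch.

```python
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    priority: int        # 802.1p priority value (0-7)
    lossless: bool       # per-priority pause applied to this class?
    bandwidth_pct: int   # guaranteed share of the link under contention

# One physical 10GbE link, several styles of network behavior at once.
converged_link = [
    TrafficClass("FCoE storage", priority=3, lossless=True,  bandwidth_pct=50),
    TrafficClass("LAN / TCP-IP", priority=0, lossless=False, bandwidth_pct=40),
    TrafficClass("Management",   priority=7, lossless=False, bandwidth_pct=10),
]

# Guarantees should account for the whole pipe.
assert sum(tc.bandwidth_pct for tc in converged_link) == 100
for tc in converged_link:
    print(f"{tc.name}: priority {tc.priority}, "
          f"{'lossless' if tc.lossless else 'lossy'}, "
          f"{tc.bandwidth_pct}% guaranteed")
```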
I don't think the iSCSI crowd necessarily feels threatened by FCoE -- if anything, I think it's the opposite.
I've read your posts and although I agree that FCoE will have some momentum, I've yet to see you explain why you think that FCoE will be inherently better than 10 Gb iSCSI, or better than FC8 for that matter.
Pigeon-holing iSCSI to small SANs in SMBs seems to run contrary to IDC/Gartner predictions of the size of this market over the next few years. The argument may hold true with 1Gb iSCSI, but certainly not with 10Gb iSCSI.
I'll look forward to this article :)
Posted by: mgbrit | April 14, 2008 at 04:27 PM
Hi mgbrit
I can't speak to how the FC-only vendor crowd feels about iSCSI, since the vendor I work for has FC, iSCSI, NAS, CAS et al. I don't have a lot of empathy for anyone who believes a single protocol will solve all storage challenges.
Regarding market share growth for iSCSI, just about every analyst is pointing at SMB/small commercial growth for iSCSI. Not a single analyst that I'm aware of is predicting iSCSI displacing FC in data centers.
But I think you know this already, don't you?
As far as "inherently better", that's a useless statement without context, and I think you know me better than that.
To be specific:
FC8 will appeal to FC4 customers who want to stay with what they've got, and just ride the next wave of technology improvements. Just as exciting, for example, as the transition from FC2 to FC4, and -- of course -- FC1 to FC2. Ho hum.
10Gb iSCSI will appeal to people who've started out with iSCSI, and want the next performance bump, if and when the costs come down to be attractive. No surprise here, right?
I think what I've failed to explain is that FCoE will be attractive at scale. Save a couple hundred bucks per port, multiply that by several thousand ports, and larger implementations will pay attention.
Take any cost saving number, multiply it by a very big number, and you'll have a very big number.
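If you want to see that arithmetic spelled out, here's a one-liner's worth of Python with hypothetical numbers (the per-port saving and port count are placeholders, not quotes from anyone):

```python
ports = 10_000           # a large FC shop's port count (hypothetical)
savings_per_port = 200   # dollars saved per port with FCoE (hypothetical)
print(f"${ports * savings_per_port:,} in total savings")  # $2,000,000
```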
As an example, I routinely meet customers who have FC port counts at 5,000, 10,000 and even more. They'll never do any IP protocol for storage; they want a lossless storage protocol, no retries, a behavioral and management model that they're familiar with, and so on.
Go ahead, just try to convince them that iSCSI can solve all their problems.
I dare you.
For these big data centers, they're watching FCoE, and they're waiting to see when it makes sense for them. And, from their perspective, it's entirely logical.
Now, if you believe that the future will be larger, consolidated data centers, with demanding service levels, etc., where will iSCSI play?
Posted by: Chuck Hollis | April 14, 2008 at 08:07 PM
Saying that FCoE will approach the cost of Ethernet is misleading. There is a large difference between the cost of 1GE and 10GE. If FCoE is mainly approaching the cost of 10GE, then FCoE is much more expensive than FC8. This has always been a problem for iSCSI and will be a problem for FCoE. Pundits always say -- just wait till you get iSCSI at 10 Gig. We keep waiting, and then when it comes it is impractical from a cost perspective.
Posted by: Fiber Guy | April 21, 2008 at 11:24 AM
Hi Skip (from Brocade) -- thanks for commenting.
Yes, we're talking 10Gb ethernet here, not 1Gb.
I don't think it's misleading at all. The goal is not to have dedicated, low-volume parts for FCoE (like we have for FC2, FC4 and FC8), but to have high-volume parts that are essentially standard 10Gb ethernet used in a specific role (e.g. FCoE).
Of course, if it doesn't hit the cost targets, or doesn't become pervasive (e.g. integrated with the motherboard, etc.) it'll never take off.
Since you work for Brocade, you've probably met the same data center people I've met -- ones that just won't consider iSCSI at any speed, period / end-of-story / don't-bring-it-up-again, please.
I would guess that some of them roll forward to FC8, just like they did to FC4 and FC2. But, this time, I think many of them will take a hard look at FCoE.
Posted by: Chuck Hollis | April 21, 2008 at 12:18 PM
The costs of 10GbE are already well below those of 8Gb FC.
Cisco Catalyst 3560E-12D (12 x 10GbE) = $1150/port. Intel Dual-port 10GbE NIC = $667.
Brocade 5100 (24 x 8Gb FC) = $935/port. QLogic dual-port 8Gb FC HBA = $2,126.99.
It's the need for a multi-thousand dollar HBA in each server that kills the economics of Fiber Channel. 10GbE NICs are only getting cheaper, and will soon be "free" on the motherboard.
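For what it's worth, here's the rough per-port math from those list prices, assuming the dual-port adapters are split evenly across their two ports (the figures are the ones quoted above, not current quotes):

```python
# Per-port cost = switch port + half of a dual-port adapter
ethernet_10g = 1150 + 667 / 2       # ~ $1,484 per 10GbE port
fibre_chan_8g = 935 + 2126.99 / 2   # ~ $1,998 per 8Gb FC port
print(f"10GbE: ~${ethernet_10g:,.0f}/port, 8Gb FC: ~${fibre_chan_8g:,.0f}/port")
```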
Posted by: Ryan Malayter | July 07, 2008 at 01:39 PM
Chuck - Please excuse the intrusion and question after reading your blog...
At one time I recall Cisco (and others) rejecting PAUSE as not necessary in Cisco switches with wire-speed switching and non-blocking, shared memory,
and saying that a correctly designed network would not require it.
What has changed with the 10Gb switch technology that now requires it?
Can you help me with this, please?
I was hoping for an easy answer, if there is one.
thanks
matt sanders
Posted by: matt sanders | August 01, 2008 at 01:37 PM
No intrusion whatsoever -- always welcome a bit of a chat with others in our world.
Frankly speaking, I vaguely remember something about this as well early on. I don't remember where it ended up, though. All parties seem to be satisfied today, the initial products that support the standard are getting through internal quals pretty well, and most people close to it are pretty confident in this stuff.
If it's important to you, let me know, and I can have one of the technologists get back to you.
Posted by: Chuck Hollis | August 01, 2008 at 03:24 PM
Hi,
Most of the discussions about data centre convergence are probably based on the assumption that customers will easily migrate their cabling to support 10GigE. That is an expensive operation. A Cat 6 cable may be designed for 1GigE, but if the run is short enough, the capacitive/inductive impedances may allow throughputs which are sub-10GigE but much larger than FC at 4Gb (for example). Unfortunately no ethernet standard exists between 1GigE and 10GigE. Maybe there is scope for a technology to exploit this sub-10GigE throughput -- basically better utilizing existing cabling infrastructure to push more data through. There are several possibilities for the implementation -- and of course, both ends of the Layer 2 network will have to have intelligent devices which will handshake and settle at some reliable sub-10GigE level.
I would like to know if such an effort has happened, and the outcome thereof.
Thank you
regards
Sudhir
Posted by: Sudhir Brahma | August 16, 2008 at 01:47 PM
Being in AZ I had the luxury of stopping in and seeing Cisco set up their lab on FCoE @ SNW. It was a mess, I won't lie. WWN-to-MAC mapping (and back), LUN masking -- all the easy tasks in a native FCP environment were a nightmare in the emulation world. I was also skeptical of the purpose until I saw the presentation. That and the costs of the adapters pretty much sum up the negative from my PoV.
The good? You can set up the network (VLANs or dedicated switches) and forget about it. In large organizations your storage guys have to put in tickets for the network team to provision and configure before they can do their jobs with iSCSI. I wouldn't say subnetting and routing protocols should be in a storage admin's job description as MANDATORY -- sure, it'd be nice. With FCoE there's WWN provisioning if you're using a software initiator, but they already know how that all works. You can't ARP poison or broadcast storm an FCoE network if the network is set up and hardened properly -- both of which could happen on an iSCSI network. There are a lot of advantages the "it's not routable" bigots like to throw out there. It's fast and 100% offloadable. Most of the arguments against FCoE are easily shrugged off, which isn't necessarily true the other way around for iSCSI. Imagine from a security standpoint how simple an audit is to pass in a pure FCP environment vs. pure iSCSI. FCoE can be set up basically the same way.
The other discussion attached to FCoE vs. iSCSI vs. FCP is cost per port, standardization, and requirements. If you need DR replication, sure, FCP won't work, FCoE won't -- it seems like a no-brainer. Tier 1 apps need tier 1 availability; IMO FCP all the way, and budget typically doesn't matter in these scenarios -- $2k for connectivity to business data is nothing.
The one big advantage I can see for iSCSI vs. FCoE is port aggregation/trunking for iSCSI -- I don't believe port trunking is a capability of FCoE (assuming 1GbE). If you're blessed with something like Symantec Storage Foundation you can MPIO and load balance in it, meaning you can run however many FCoE ports and have less protocol overhead.
One unknown which few discuss is the overhead of the SAN going from iSCSI to FCP; I assume it's negligible. It used to be a killer on the client side until the offloaders came along, but some (most) still cause interrupts on the bus, even with seek-complete offloaders.
Enough rambling EOM.
Posted by: Scott L | October 20, 2009 at 08:03 PM