When I first started discussing this emerging storage networking standard, it brought out vigorous debate, not only from others in the industry, but also within EMC.
I've been quietly tracking FCoE, and -- not surprisingly -- it appears to be on its way to achieving critical mass.
And it's not just me who's noticed.
So, What Is FCoE Again?
We love acronyms in this industry, don't we? This one stands for "Fibre Channel over Ethernet".
The storage networking world has largely adopted the FC protocols, but has always been intrigued by the cost points associated with Ethernet technologies.
The popular iSCSI protocol attempted this "best of both worlds" approach, but did it over IP protocols rather than "bare metal" Ethernet frames. For this and other reasons, iSCSI has done well in smaller environments, but is rarely (if ever) seen in large-scale corporate SANs.
I'm probably going to get another round of vitriolic comments from the iSCSI fanboy club on this last statement, but facts are facts, folks.
FCoE is different -- it delivers the exact same behavior as FC, but does it over 10Gb Ethernet (there's a rough sketch of the framing below). Now, if you're responsible for a large, corporate FC SAN, this is intriguing to you on several levels.
You're interested in the lower server-attach costs if server (and HBA) vendors deliver FCoE-capable hardware at lower cost than current FC alternatives. You're intrigued by the idea of converged data center switches (like Cisco's new Nexus platform) that offer perhaps a better approach than dedicated FC switches and directors.
If you're a large SAN person, you're keeping an eye on this.
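For the protocol-minded, here's a rough Python sketch of what "FC over Ethernet" means on the wire, as I read the FC-BB-5 draft work: the entire FC frame rides inside a single Ethernet frame, with no TCP or IP in between -- contrast that with iSCSI's full TCP/IP stack. The EtherType is the one registered for FCoE; the SOF/EOF code points below are illustrative placeholders, not spec-checked values.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE (FC-BB-5 drafts)

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap one complete FC frame (header, payload, CRC) in one Ethernet frame.

    Layout per the FC-BB-5 drafts: Ethernet header, a 14-byte FCoE header
    (version/reserved bits plus a 1-byte SOF code), the untouched FC frame,
    then an EOF byte and reserved padding. Note what is *not* here: no TCP
    state, no IP routing -- just framing.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([0x2E])  # reserved bytes + SOF placeholder
    trailer = bytes([0x41]) + bytes(3)       # EOF placeholder + padding
    return eth_header + fcoe_header + fc_frame + trailer

# An iSCSI write to the same disk would instead travel as
# Ethernet -> IP -> TCP -> iSCSI PDU -> SCSI command, with connection
# state and retransmission handled by TCP along the way.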
Others Are Keeping An Eye As Well
Perhaps the best summary writeup I've seen was from Mary Jander over at Byte&Switch, who provides a nice recap of recent industry activities in this space.
From my perspective, perhaps the most surprising thing about the article was her surprised tone. I think she and others had initially dismissed FCoE as JASSS -- Just Another Silly Storage Standard.
Well, I can understand the skepticism, but I felt early on that this one was very different -- it solved a legitimate problem, and did it in such a way that everyone could benefit: customers as well as vendors.
So, When Is FCoE Going To Be Available?
I get asked that question a lot. Unfortunately, there's no hard-and-fast date. I could be smug, and say -- technically -- you can go out and buy FCoE products today if you really, really want to. But, of course, no one is really doing that yet.
What happens with these standards is that -- over time -- vendors start adding FCoE support to their products, building an ecosystem of things that work together, and -- most importantly -- maturing their solutions.
Customers look at this, and make their own decisions as to whether they'll get started with the new stuff, whether it's mature enough for their tastes, not to mention pragmatic things like funding, refresh cycles and all of that.
My personal guess? We'll see lots of product announcements during 2008, and we'll see the first wave of significant deployments during the first part of 2009. I'm guessing it'll be one of the "cool storage projects" in larger IT departments in about a year.
I was part of EMC when FC SANs first hit the market -- EMC was out in front of this one -- and there was a fair degree of evangelism, early trials, etc. before FC SANs became mainstream. It took a few years of that evangelism to convince the majority of people to move off of direct-connect SCSI and think in terms of FC networks.
I don't think it'll take anywhere near as long this time around ....
Hi Chuck,
iSCSI fanboys? That's funny, I want the hat! OK, I agree with you: current big SAN customers with lots of FC already will want to maintain their FC infrastructure. But I still have to take a skeptic's side. There's a lot of new stuff to buy to make FCoE work. I might be surprised, but given the relatively small TAM compared to vanilla 10G Ethernet, FCoE equipment is going to seem pretty expensive. iSCSI products probably aren't going to scale up adequately for large-scale enterprise applications; however, they work very well for things like email and server virtualization/consolidation. There is no question that FCoE will be a very good answer for some, but for others it's overkill -- a technology that overshoots the market's requirements.
Posted by: MarcFarley | February 22, 2008 at 03:53 PM
Hi Marc -- I think you're missing a key point -- there should be no cost penalty for using FCoE 10Gb vs. any other 10Gb Ethernet implementation. It looks like it'll be common silicon, HBAs, etc.
So, go ahead and be skeptical. That's what makes the world go 'round.
Posted by: Chuck Hollis | February 22, 2008 at 04:13 PM
Chuck,
My 2 cents...
FCoE interest is growing as a parallel alternative to the predicted rise of iSCSI topologies over the next few years. It has some fundamental hurdles to jump, and it is definitely some years away. It doesn't make sense until widespread 10GbE adoption brings pricing down from the stratosphere. Will it save money? You'll need FCoE (TOE?) HBAs and expensive switching to get the best bang, which is a wash with today's FC HBAs. Management costs may be a little lower, but Brocade and Cisco do a good job of that today on their FC plumbing. Moreover, the vast majority of the customer base (those between the NSA and the low-end SMB space!) aren't yet fully utilising 4Gb FC. Sun makes a good point with InfiniBand, which is popular for HPC and Oracle RAC interconnects, but has been stillborn for the majority of storage topologies. It may yet have some legs -- if Cisco sees the market opportunity (!)
Posted by: mgbrit | February 23, 2008 at 10:54 AM
Hi mgbrit -- rather than argue point-by-point with you (which I could), let's see what develops by this time next year?
A few of my industry friends have some friendly wagers on "significant customer adoption" of FCoE during 2009, mostly nice bottles of wine.
About half are lining up for, and half against. I guess that is what makes it interesting!
I don't know what you do for a living, mgbrit, but if you're responsible for a large FC site (as opposed to working for a vendor, reseller, etc.), I bet you're following it closely as well.
Cheers!
Posted by: Chuck Hollis | February 23, 2008 at 02:39 PM
"but I felt early on that this one was very different -- it solved a legitimate problem, and did it in such a way that everyone could benefit: customers as well as vendors."
Agree. It will make the data center simpler. It is the trend, but I don't think it will immediately replace existing SANs. There will be a trial period. It also means new opportunities for vendors on both ends.
Posted by: Shibin Zhang | February 24, 2008 at 03:35 AM
A comment on FCoE vs. iSCSI.
On page 7 of the following article
http://www.netapp.com/library/tr/3496.pdf
it was found that software iSCSI had better throughput than hardware iSCSI. I guess the root cause was that the hardware TOE became the performance bottleneck. TCP logic is too complicated. It's common sense that simple logic can lead to low-cost, high-IOPS hardware, while complicated logic leads to the opposite. FCoE won't have the same performance bottleneck.
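To make the gap concrete, here is a toy Python sketch (invented for illustration, not a real stack): a TCP offload engine must carry state for every connection, while FCoE encapsulation is a stateless per-frame transform.

# Toy sketch only: a sliver of the per-connection state a TOE must track,
# versus the stateless wrap-and-forget of FCoE encapsulation.

class ToeConnection:
    def __init__(self):
        self.next_seq = 0                 # next TCP sequence number to use
        self.unacked = {}                 # seq -> payload buffered for retransmit
        self.retransmit_deadline = None   # timer re-armed on every send

    def send(self, payload, now):
        segment = self.next_seq.to_bytes(4, "big") + payload
        self.unacked[self.next_seq] = payload   # keep until acknowledged
        self.next_seq += len(payload)
        self.retransmit_deadline = now + 1.0    # arm a retransmission timer
        return segment

def fcoe_encapsulate(fc_frame):
    # Stateless by contrast: wrap the frame and forget it. A lossless
    # Ethernet fabric (pause-based flow control) stands in for TCP
    # retransmission.
    return b"\x89\x06" + fc_frame   # simplified: EtherType marker + FC frame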
Posted by: Shibin Zhang | February 25, 2008 at 02:42 AM
You are right, it's one worth watching. It is always fascinating to look at how (and how quickly) markets adopt new technologies -- or don't. I would like to see some more discussion around FCoE topologies and the ecosystem, rather than just the merits of the technology. The more education that occurs about how you might implement FCoE, the easier it will be for clients to envisage usage. One area that is still a little hazy is the work being done on the Ethernet standard to support this, the impact it will have on the design of the network, etc. For some this might represent an opportunity for FUD, unless it is made clear.
Of course, anytime a Tier 1 vendor brings a new technology to market (or announces support for it), one HAS to at least cast an eye over it.
Cheers,
Greg.
Posted by: FCoE Gathers Steam | February 25, 2008 at 11:17 AM
The biggest problem I see with FCoE is: why would existing shops use it? It introduces another protocol (Ethernet) into their environment, which means it takes two groups (storage and IP) to set up and manage it. So the cost savings have to be considerable to make it worth the extra effort and cost (two groups) to install and monitor it.
If you are starting a new shop it might be worth looking at, but for current SAN customers I do not see it (just like iSCSI).
Posted by: fcman702 | April 03, 2008 at 12:33 PM
Hi -- I think we need to separate protocol (e.g. FC, IP, iSCSI) from wire.
Do you use your networking group to set up your FC environments today? Probably not. There's no reason to believe that you'd need to do so in the FCoE world.
The cost savings come through consolidation and scale.
FCoE is part of a broader range of standards that point to a world where a single "spigot" could simultaneously serve storage needs, server-to-server clustering, and standard networking duties, all based on Ethernet.
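As a back-of-the-envelope sketch of that "spigot" idea (the traffic classes and percentages below are invented for illustration, and the enhanced-Ethernet scheduling work behind it is still being standardized):

# One converged 10GbE link carved into traffic classes, in the spirit of
# the enhanced-Ethernet work on priority groups with guaranteed bandwidth
# shares. All numbers are made up.

LINK_GBPS = 10.0

traffic_classes = {
    "FCoE storage": 0.40,   # a lossless class carries the FC traffic
    "cluster/IPC":  0.20,
    "general LAN":  0.40,
}

assert abs(sum(traffic_classes.values()) - 1.0) < 1e-9   # shares cover the link

for name, share in traffic_classes.items():
    print(f"{name:13s} guaranteed {share * LINK_GBPS:.1f} Gb/s of the spigot")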
Scale comes from the economies of manufacturing associated with components. If the FCoE silicon ends up being just standard 10Gb parts, then we'll all see a dramatic drop in component costs, or should.
We'll see, won't we?
Posted by: Chuck Hollis | April 04, 2008 at 10:57 AM