So, if you're a regular reader of this blog, you know I've had my eye on FCoE for a while.
For me, this has the potential to be a winner for everyone who uses FC extensively today -- and you know who you are.
Even though I didn't go to SNW (I'm not a big fan of industry shows), I did enjoy watching the ritual and how different people reacted.
What's This All About?
Simply put, the majority of the storage networking marketplace uses fibre channel. It's not as cost-effective as ethernet. FCoE potentially offers fibre channel-style networking with the economics of ethernet.
If you're thinking TCP/IP (or UDP, etc.) think again -- FCoE is a "bare wire" protocol that doesn't have routable packets, but does have "lossless" characteristics that are oh-so-important for many storage use cases.
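To make the "bare wire" point concrete, here's a minimal sketch of the encapsulation idea: the FC frame rides directly inside an Ethernet frame under its own EtherType (0x8906), with no IP or TCP/UDP layer in between -- which is exactly why the packets aren't routable. The real spec also adds a version field, SOF/EOF delimiters and padding, which I've left out to keep the layering obvious.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # the EtherType assigned to FCoE


def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Sketch of FCoE encapsulation: Ethernet header, then the FC frame.

    No IP header means no routing -- the frame only makes sense inside
    a single layer-2 domain, which is the 'bare wire' part.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # Real FCoE inserts a version field plus SOF/EOF delimiters around
    # the FC frame (omitted here for clarity).
    return eth_header + fc_frame


frame = build_fcoe_frame(b"\x01" * 6, b"\x02" * 6, b"\x00" * 28)
# The EtherType sits at bytes 12-13, right where an IP frame would say 0x0800
assert frame[12:14] == b"\x89\x06"
```

Contrast that with iSCSI, which wraps SCSI in TCP/IP and inherits both its routability and its loss-recovery behavior.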
The parade of vendors announcing support for FCoE was pretty impressive -- not only the HBA vendors, but new switch vendors like Nuova (now Cisco!), and -- of course -- storage vendors such as EMC and NetApp.
And, of course, there were a few vendors who felt a bit put out by all of this talk of a potential alternative to iSCSI -- not surprising, given their position.
What You Need To Know
FCoE is all about the economics of ethernet. If it doesn't deliver the economic bang-for-the-buck, it just won't be interesting, will it?
And all of that is built on a few default assumptions:
- standard 10Gb ethernet silicon can be used for FCoE (looks like it so far)
- costs drop dramatically as 10Gb volume and demand builds (in process)
- this same silicon shows up on motherboards and low-cost NICs (no reason why not)
- the FCoE standard allows vendors to spend less on qualification, interoperability and support in heterogeneous environments than we saw with FC
I'd like to think we learned a few things the last time around in regards to defining a standard that minimizes interoperability "expense", but we won't know until we see the different implementations and how they work in the real world.
If all these things happen (or look like they're going to happen), it's pretty clear to me: FCoE becomes the logical successor to the FC world we live in today.
The iSCSI Crowd Feels Threatened?
There are more than a few companies and individuals who've built their offerings around iSCSI. Now, for the umpteenth time, there is absolutely nothing wrong with iSCSI -- where it fits.
But there's a reason why other storage protocols exist (FC, NAS, CAS and so on): they solve certain storage problems better than iSCSI can.
Some vendors can live comfortably in this "best tool for the job at hand" world. Others, well, they're having a really tough time -- the angry, outraged FUD has started.
Even some of the traditional FC storage crowd (e.g. HDS, IBM, HP, Engenio, etc.) are strangely silent on whether they're going to be supporting FCoE or not.
Why? It's a big, honkin' expense to support an entirely new storage protocol, especially one that's designed to take big costs out of the storage networking environment.
My Prediction Here?
We'll see enough viable FCoE offerings towards the end of 2008 for people to get started if they choose. The technology will be robust enough for many to evaluate, but probably something most people won't want to put into production in any serious way.
As we come through 2009, we'll see broader ecosystem support from more vendors. I think the costs associated with FCoE will be somewhere between FC and standard ethernet, simply because volumes won't have increased enough yet -- but they'll be moving in the right direction.
And, finally, as we get into 2010, we'll see prices decline to something approaching ethernet costs, and another wave of ecosystem effects will start to kick in. And, if my crystal ball is working, we'll start to see widespread adoption across the board, especially for new enterprise SAN builds.
Will iSCSI go away? No, it shouldn't. It's still probably the simplest block-oriented protocol out there, and I bet it's still going to be attractive for many smaller SAN implementations. You just won't see it at the core of larger enterprises -- same as today.
A Stepping Stone To Unified Fabrics
I think the real action is in having storage protocols (e.g. FCoE) co-exist nicely with other data center protocols (e.g. TCP/IP, RDMA, etc.) on the same wire, using the same silicon. Being able to offer different styles of network behavior -- on the same wire, at the same time, without them interfering with each other -- is where all of this is going.
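The mechanism that makes this coexistence plausible is priority tagging: the 802.1Q VLAN tag carries a 3-bit priority field (PCP), and the per-priority pause work being done for "lossless ethernet" lets a switch pause just one priority class without touching the others. Here's a rough sketch of the idea -- the specific priority assignments below are illustrative assumptions, not anything mandated by a standard.

```python
# Illustrative mapping of traffic types to 802.1p priority classes.
# (Assumed values for the sketch; FCoE on priority 3 is a common
# convention, but nothing here is mandated.)
PRIORITY = {"lan_tcp": 0, "fcoe": 3, "rdma": 4}
LOSSLESS = {3, 4}  # classes the switch pauses instead of dropping


def vlan_tag(vlan_id: int, traffic: str) -> bytes:
    """Build the 16-bit 802.1Q Tag Control Information field:
    PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)."""
    pcp = PRIORITY[traffic]
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    return tci.to_bytes(2, "big")


tag = vlan_tag(100, "fcoe")
assert tag[0] >> 5 == 3                 # FCoE frames marked priority 3...
assert (tag[0] >> 5) in LOSSLESS        # ...which the fabric treats as lossless
```

Same wire, same silicon -- the LAN traffic keeps its drop-and-retransmit behavior on one priority while storage gets its lossless behavior on another.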
And, if this sort of "network unification" takes place, the arguments for large data centers to create a single fabric are going to be pretty compelling ...