As I reflect on my whirlwind visit to SNW (and read all the materials), one thing stands out: this event marks the point where FCoE started being taken very seriously by customers and vendors alike.
Not that I feel a bit vindicated, but there's a story here ...
Let's Go Back A Bit
I think I first wrote about FCoE way back in April of 2007.
The reaction was interesting, to say the least.
There was one crowd that felt very defensive about iSCSI, and saw it all as an evil plot by the FC crowd to regain control of the marketplace.
Sure, "only the paranoid survive", but the condition is responsive to modern medications :-)
Another crowd pointed to the evolution of FC from 4Gb to 8Gb, as well as declining FC prices, and argued "why do we need this?". Answer: if you're a student of technology history, ethernet always wins. Just ask anyone who spent big on Token Ring :-)
There were others who saw this as JAS -- Just Another Standard -- and relegated it to the confusing tangle of standards that this industry always seems to be working on.
And now, there's enough FCoE generally available that we enter the next phase -- do customers really want this stuff?
And there are two distinct things going on here that you'll need to watch.
The Tactical Angle
One very popular lens is "how does it compare to FC?".
And that's all going to be driven by things like economics, broad-based vendor support, and the like. At the end of the day, a pipe is a pipe is a pipe, and several of the discussions I heard were along the lines of a like-by-like comparison.
And, like any other storage networking technology, we all know what the questions will be, and what the answers will have to be in order to be successful.
But, even then, there are some differences.
As an example, we never saw servers with motherboard-level FC ports on them, did we? But, if you think about it, I'd be surprised if we didn't see server motherboards with 10Gb ethernet on them.
And the only thing cheaper than an ethernet HBA is having it right on the motherboard.
That'll shake up the cost equation if it comes to pass, won't it?
Some are predicting the demise of iSCSI altogether.
I don't agree with this perspective. iSCSI makes obvious sense in the 1Gb world, which is good enough for many, but -- clearly -- some of its putative advantages get moderated a bit if FCoE adoption catches on.
Coexistence
The other thing I found interesting in my discussions was the assumption that, somehow, people would immediately stop using FC, and start using FCoE. As in rip-and-replace.
The truth will likely be completely the opposite. FC and FCoE will need to comfortably coexist for quite some time in most corporate SAN environments.
I'm guessing that FCoE will be more attractive to customers doing newer VMware builds and hosting more demanding I/O loads -- something that started in earnest this year.
On a very simplistic level, it makes a certain sense. Let's see, here are 6 or so servers all attached via dual 2Gb FC that we're consolidating. Putting them on VMware doesn't mean they're doing any less work from an I/O standpoint. Maybe a pair of 10Gb FCoE channels isn't such an outlandish idea after all ...
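If you want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python. The server count and link speeds are just the illustrative numbers from above, not a sizing recommendation:

```python
# Back-of-the-envelope consolidation math, using the illustrative
# numbers above (not a sizing recommendation).

servers = 6                 # servers being consolidated onto VMware
fc_links_per_server = 2     # dual-attached
fc_link_gbps = 2            # 2Gb FC per link

fcoe_links = 2              # a pair of FCoE channels on the new host
fcoe_link_gbps = 10         # 10Gb ethernet per link

total_fc_gbps = servers * fc_links_per_server * fc_link_gbps
total_fcoe_gbps = fcoe_links * fcoe_link_gbps

print(f"Raw FC connectivity before consolidation: {total_fc_gbps} Gb/s")    # 24 Gb/s
print(f"Raw FCoE connectivity after consolidation: {total_fcoe_gbps} Gb/s")  # 20 Gb/s

# Dual FC links are usually there for availability, not sustained
# line-rate throughput, so two 10Gb FCoE channels standing in for
# 24 Gb/s of nominal FC capacity is a plausible trade.
```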
The Strategic Angle
It's not all out in the open in the marketplace yet, but if you listen carefully, FCoE is but a single step toward the next nirvana: a converged ethernet fabric for the data center. Call it DCE, DCF or whatever -- the idea is still the same.
Wire once and walk away.
Hard channels for different kinds of network traffic that are blissfully unaware of each other.
Network reconfigurations done via software, rather than crawling around and pulling cables.
That's a mighty attractive world if it comes to pass.
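To make the "hard channels" idea a bit more concrete, here's a tiny conceptual sketch. The traffic class names and percentage shares are invented for illustration -- real converged fabrics enforce this sort of partitioning in switch and adapter hardware, not in a script:

```python
# A purely conceptual sketch of "hard channels" on a converged link:
# one 10Gb pipe carved into traffic classes, each with a guaranteed
# share. Class names and percentages are invented for illustration.

LINK_GBPS = 10

traffic_classes = {
    "fcoe_storage": 0.50,   # lossless storage traffic
    "lan": 0.30,            # ordinary TCP/IP traffic
    "ipc": 0.20,            # e.g. cluster or migration traffic
}

# The shares must account for the whole link.
assert abs(sum(traffic_classes.values()) - 1.0) < 1e-9

for name, share in traffic_classes.items():
    print(f"{name}: guaranteed {share * LINK_GBPS:.1f} of {LINK_GBPS} Gb/s")

# Changing the mix is an edit to this table -- a software
# reconfiguration, not crawling around and pulling cables.
```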
Now, I don't think that any vendor can guarantee that the FCoE kit you buy today will be upgradeable to this new world. But -- as it comes to pass -- this should provide an additional incentive for IT architects everywhere to take a closer look at FCoE -- not only from a tactical perspective, but a strategic one.
I seem to remember making a bet with Chad Sakac about FCoE adoption back in April 2007. I don't remember the specifics, but I do remember there was some nice wine involved :-)
Well, things are happening right on schedule.
Now it's up to all of you in IT land to make your decisions -- important or not?
Courteous comments welcome as always ...
I'm not sure how many folks outside our little world are taking FCoE seriously. But in the words of my favorite TV space cowboy, "my days of not taking you seriously are definitely coming to a middle!"
Now how about a native FCoE array?
Posted by: Stephen Foskett | October 16, 2008 at 01:53 PM
Great question, Steve!
The answer is "the world doesn't really need one -- yet".
NetApp's "announcement" that they may have one by the end of the year is a nice testosterone-based marketing move, but I'd like to think we're a bit more pragmatic here at EMC.
And if you think about it a bit, you'll understand why.
- Native arrays aren't faster / cheaper / better than their FC counterparts -- you're only changing a few ports, and the current switches support both.
- Likely early adopters will want intermixed FC and FCoE environments for a while.
- No one will be buying a new array just to get FCoE anytime soon.
- We think the investment for now should be on interop, drivers, qual, management tools, support, config guides, etc.
Which is why we spent our effort qualifying the Nexus in mixed environments, including older arrays. And why we spent big bucks qualifying the new adapters -- on as many operating systems as we have drivers for.
Not to throw stones, but I don't think NetApp did too much of that sort of unglamorous heavy lifting.
Simply put, we see this FCoE thing as far more important than getting people to buy a new array :-)
Thanks for the question!
Posted by: Chuck Hollis | October 16, 2008 at 03:22 PM
Chuck,
Judging from the comments on The Register (http://www.theregister.co.uk/2008/10/15/fcoe_io_kill_iscsi/comments/), the major vendors in this space have an awful lot of work to do to convince the industry that:
a. this is a solution that solves real problems.
b. it's a better solution than leading edge versions of established protocols like 10Gb iSCSI or FC8.
c. it's not just a solution being proposed by major storage and networking vendors trying to maintain market share or protect margins in a space that is commoditizing.
That it is different from the predominant established protocols will concern storage managers who have become familiar with Fibre Channel. We're all familiar with the issues of Fibre Channel, such as the complexities of maintaining fabrics, zones, and firmware revisions across switches, SANs, and HBAs, and the costs that all this plumbing comes with.
There are multiple advantages of simplicity and cost reduction that play into iSCSI's hands too, which I won't go into here. In short, the world will continue to demand iSCSI because of its simplicity.
Sophistication designed around simplicity is better than sophistication designed around complexity.
Geoff @ Dell.
Posted by: Geoff Mitchell | October 17, 2008 at 03:38 PM
Hi Geoff -- thanks for commenting.
Right now, we have a small number of customers who are extremely enthusiastic about trialing FCoE. By "small", I mean probably more than you'd think, but not enormous.
Whether these people like what they see remains to be seen. We like what we see, but that's just us vendors. And, of course, we need those small numbers to be larger numbers.
This is happening mostly in larger enterprises that are figuring out their next SAN architecture: 8Gb FC or 10Gb FCoE?
Trust me, these people are not even thinking about 10Gb iSCSI. I think 1Gb iSCSI will still be popular for quite a while, though.
Thanks for writing --
Posted by: Chuck Hollis | October 17, 2008 at 04:32 PM
A simple question: why was FC chosen to be embedded in the ethernet frame, instead of more logical contenders like SAS? I know Coraid is driving AoE (ATA over Ethernet), but are any of the "big boys" (Cisco, EMC) embracing it, and is it going anywhere? SAS has the capabilities of FC, and if that is in place, SATA can be tunneled. Otherwise, native AoE could also be efficient and inexpensive.
There is some talk of security, which I don't think is insurmountable, especially in a dedicated network in a datacenter (OOB kind of traffic) where all such traffic can easily be segregated. Are there any other reasons? Or is it an exercise in "now that we have made significant investments in FCoE, let us cook up reasons why that is the best and the rest are second grade"?
Finally, as a buyer, I will want something that gives me more storage for every dollar, not the "bells and whistles" of the infrastructure (FCoE, iSCSI and every transport like that falls into this category). Let us talk technology here and keep the bean-counters out for some time -- anyone?
Posted by: Sudhir Brahma | October 19, 2008 at 12:18 PM
I don't know if there's a technical argument or not, but one indisputable advantage of FCoE is that about a bazillion data centers understand the FC protocol, how it behaves, how to configure and manage it, etc.
I can't speak to SAS in larger fabrics, but it's hard to avoid the conclusion that it'd be "something new to learn" for lots of folks in the data center.
And preserving existing familiarity is important. After all, FC uses a SCSI command set, right?
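To make that layering point concrete, here's a rough sketch of how the two stacks nest. The header sizes are approximate (and ignore VLAN tags, options, CRCs and framing overhead); the takeaway is that both carry the same SCSI payload at the top of the stack:

```python
# A rough sketch of how the two stacks nest. Header sizes are
# approximate; the point is the shape of the stack, not exact bytes.

stacks = {
    "FCoE": [
        ("Ethernet header", 14),
        ("FCoE encapsulation header", 14),
        ("FC frame header", 24),
        # ...then the FC payload (SCSI command/data), up to 2112 bytes
    ],
    "iSCSI": [
        ("Ethernet header", 14),
        ("IP header", 20),
        ("TCP header", 20),
        ("iSCSI basic header segment", 48),
        # ...then the SCSI command/data
    ],
}

for proto, layers in stacks.items():
    total = sum(size for _, size in layers)
    path = " -> ".join(name for name, _ in layers)
    print(f"{proto}: {path}")
    print(f"  approx. header bytes ahead of the SCSI payload: {total}")
```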
Thanks --
Posted by: Chuck Hollis | October 19, 2008 at 02:36 PM
Thanks for the quick response Chuck. I appreciate it.
If indeed a "bazillion data centers understand FC", then it is a self-fulfilling technology -- so why create inefficiencies by embedding it in ethernet? Let the storage world be "SANned" by just FC... obviously that is not so. By embedding it in an ethernet frame, the storage industry acknowledges the better prevalence of ethernet.
If that is a given, then the choice of what to embed inside is really restricted to what a disk natively understands, and most disks today understand SATA, SAS, SCSI and FC (in that order of preference, highest being SATA and SAS). In all this, FC seems to be the odd one out -- something that seems driven more by financial compulsions (read: profiteering) than rational technical reasoning. Just my take on the matter.
regards
Sudhir
Posted by: Sudhir Brahma | October 20, 2008 at 10:24 AM
Actually, Chris Mellor has found a patent for SAS over Ethernet here: http://www.freepatentsonline.com/y2008/0228897.html
Now, a reader (me) did some googling and discovered that the author of said patent, Mike Ko, appears to work at IBM Almaden. So welcome to our new Storage Uber-Protocol, SASoE -- which will just have to be known as Sassy!
Posted by: Martin G | October 20, 2008 at 12:40 PM
Wow, that was fast, Chris... thanks for the information. Kind of reinforces the old adage: "if it is a good idea, someone would have done it or found it already". The guy who invented the wheel was probably more fortunate :-)!!
I bet this idea or patent may not go anywhere till the "big boys" bless "Sassy"... and if they do, we can probably start seeing whitepapers which claim this was the "more obvious way of doing things" and show us how all the other transports (FCoE or iSCSI) were inferior, while their marketing folks get armed to sell new things to the world.
Chuck: Is EMC going to go behind this?
Chris: Thanks Again!!
regards
Sudhir
Posted by: Sudhir Brahma | October 21, 2008 at 06:45 AM