
January 02, 2007

Comments

Chris M Evans

Chuck, great post. The bits that stick for me are: political -- I see the issue of network vs. storage guys as a big thing; I've done some FCIP recently, and getting a straight answer out of the network team isn't easy. On the hardware front, I agree power will be an issue, but I think more with the switch itself. I've posted previously on the Gb/s per watt the major vendors provide (Brocade was best). Virtualisation will go some way to reduce the argument over power consumption in servers. Most of all, I think it is the change thing. The environments I work in have a massive aversion to change -- the implication being that things will stop working, work incorrectly and basically put the business at risk. That can be something as simple as moving to another NAS supplier....

Dave Hitz

Chuck, I disagree that "the year of iSCSI" hasn't come. I think it happened in 2004.

Details: http://blogs.netapp.com/dave/ThinkingOutLoud/2007/01/07/The-Year-of-iSCSI.html

Dave

Chuck Hollis

I saw your post, Dave, and I see your point. The benchmark you chose was $100m of revenue, which is not entirely unreasonable.

And I can't argue that iSCSI is not a healthy market for EMC, NetApp and lots of other folks.

The benchmark I chose was different than yours -- I chose to focus on customer adoption rather than arbitrary revenue milestones. $100m against a backdrop of tens of billions doesn't represent a broad-based shift in customer thinking, IMHO.

I think it's fair to say that -- at one time -- many of us thought that people using FC would start to consider iSCSI as a serious alternative, and we'd see broader adoption.

That hasn't happened, and I thought it was an interesting lesson for many of us, now somewhat obvious in retrospect.

My other goal was to dispel the occasional notion that, somehow, major vendors like EMC were "holding back" on iSCSI. Actually, the opposite is true -- if anything, we've overinvested rather than underinvested.

Enjoy reading your blog, Dave!

Wolfgang Singer

Dave,
I agree with many of the things you state in your blog -- specifically, that for several years running it was incorrectly predicted that the next year would be "the year of iSCSI". However, take a look at the following figures, mentioned at Storage Networking World, for the number of iSCSI implementations:
Fall 2005: 4,500
Spring 2006: 12,000
Fall 2006: 20,000 (with predicted 30,000 by year end).
These figures show clearly that the iSCSI adoption rate is increasing fast. From a market-size point of view, however, iSCSI does not even come close to the FC market (yet).
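A quick back-of-the-envelope calculation on those SNW figures (the half-year spacing of the data points is approximate; the arithmetic is trivial):

    # Implied growth in iSCSI implementations from the SNW figures above.
    counts = {"Fall 2005": 4_500, "Spring 2006": 12_000, "Fall 2006": 20_000}
    periods = list(counts.items())
    for (p0, n0), (p1, n1) in zip(periods, periods[1:]):
        print(f"{p0} -> {p1}: {n1 / n0:.2f}x in ~6 months")
    # Fall 2005 -> Spring 2006: 2.67x in ~6 months
    # Spring 2006 -> Fall 2006: 1.67x in ~6 months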

Chuck Hollis

I can't disagree that iSCSI is growing fast, or that there are many implementations.

The growth figures are off a very small base, and -- again -- today, around 5% of the SAN market is iSCSI. That's enough to sober even the most ardent advocate.

More importantly, these implementations aren't at the heart of corporate IT. Heck, my perception is that the topic isn't even up for discussion.

And that's the point I wanted to make.

BTW, this post kick-started a very vibrant debate. All good for the industry.

Thanks for the post ...

Christophe Baranger

Dave,
Having watched this market since the beginning, I think most of the time we miss one important point in the yes/no iSCSI debate. If you look at iSCSI as a potential replacement for FC, I completely agree with you. But I think iSCSI puts more on the table than just a shift in connectivity.

If you look at the startup companies, NetApp, or EMC with the Celerra, the value proposition of these iSCSI solutions is built more around a virtualisation engine. You no longer have a direct link between the exported virtual LUN and the physical drives; the LUN takes the blocks it needs from a pool of storage. The immediate benefit is simplicity -- not in the GUI, but in the management of the array, which takes care of the underlying disk optimisation itself.

As an analogy, MS SQL did not take market share from Oracle because the product was better (it was far below it), but because the same IT guy was managing network, OS and applications -- no longer just a DBA. I think the same thing is happening to storage, and iSCSI is the way to do it. It is not the end of FC; it is just that the next wave will not be FC.
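To make the pooled-LUN idea concrete, here is a minimal Python sketch of thin provisioning -- a virtual LUN that consumes physical blocks from a shared pool only when written to. All names are illustrative, not any vendor's actual API:

    # Conceptual sketch of a pooled, thinly provisioned LUN.
    # Purely illustrative -- not any vendor's implementation.

    class StoragePool:
        def __init__(self, total_blocks):
            self.free = list(range(total_blocks))  # unallocated physical blocks

        def allocate(self):
            if not self.free:
                raise RuntimeError("pool exhausted")
            return self.free.pop()

    class ThinLun:
        """A virtual LUN with no fixed mapping to physical drives."""
        def __init__(self, pool, size_blocks):
            self.pool = pool
            self.size = size_blocks   # advertised (virtual) size
            self.map = {}             # virtual block -> physical block

        def write(self, vblock, data):
            # Physical space is consumed only on first write to a block.
            if vblock not in self.map:
                self.map[vblock] = self.pool.allocate()
            # ... write data to self.map[vblock] on the backing store ...

    pool = StoragePool(total_blocks=1000)
    lun = ThinLun(pool, size_blocks=5000)  # advertises more than it holds
    lun.write(42, b"hello")
    print(len(lun.map), "physical blocks consumed")  # 1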

meh130

I'm not sure comparing iSCSI and FC revenues is the right comparison. There is no doubt FC Symmetrixes drive more revenue than iSCSI Celerras. The same likely applies even in a Clariion environment: FC-attached systems are likely richer configs than iSCSI.

FC is the preferred mission critical data center storage transport. iSCSI is popular with distributed clients, such as diskless PCs, distributed web-servers, etc. In some cases iSCSI has enabled a new capability (diskless PC boot), and in some cases it is used as an alternative to NAS.

I have also heard iSCSI is popular in Microsoft environments because network shares are not allowed for some applications (Exchange mail stores), but iSCSI allows Ethernet attached storage.

FCoE is not a panacea. It cannot be routed (unless an Ethernet FC router is introduced into the Ethernet network, which seems kludgy), so it cannot replace iSCSI, which is typically used in a routed IP environment.

FCoE may require new NICs to support L2 reliability.

So iSCSI and FCoE will have to coexist. So much for converged networks.

If Ethernet becomes the grand unifying transport, it opens up three options for the datacenter: FCoE, iSCSI (including iSER), and networked file systems (including NFSoRDMA, pNFS, pNFSoRDMA, etc.).

FCoE is not innovation. It is the same thing repackaged. As was iSCSI. As is iSER.

NFSoRDMA is interesting, as it eliminates the big problem with NFS (CPU load).

pNFS and pNFSoRDMA seem truly game changing.

My guess is that five years from now, all of us will look back and say "Ethernet took over, but things didn't turn out the way I predicted in 2007."

But for EMC, spinning rust will still be spinning rust five years from now, so you will win regardless of who wins the coming protocol wars.

Florian Heigl

Hi, I just found this - quite randomly - while searching for benchmarks on FC-iSCSI routers.

I'll put down a quite personal opinion here and will try to reason it out, too.

"So why is that? And therein lies the story ..."

plain and simple:
It is because iSCSI sucks.

Disclaimer: I've been toying around with iSCSI for quite a few years now. I've sought out dark channels to get hold of the Cisco-branded software initiators, I've done plain iSCSI from W2K to a NetApp filer the moment M$ released the beta initiator, I've had databases on it just for testing, and I've even gone through the madness of using it on HP-UX and AIX.

- Platform support issue: not true -- Cisco had everything covered in 2003 already. Real fact: they stopped covering everything because no one was interested. Current software initiators work for just about everything (OK, FreeBSD might lack encryption, but as long as no one even notices, it seems to be no issue).
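For reference, a typical software-initiator session on Linux with open-iscsi boils down to two commands. The sketch below wraps them in Python just to keep one example language throughout; the portal address and target IQN are made-up examples:

    # Sketch of a software-initiator session on Linux with open-iscsi.
    # The portal address and target IQN are hypothetical examples.
    import subprocess

    PORTAL = "192.168.1.10"                          # hypothetical iSCSI portal
    TARGET = "iqn.2007-01.com.example:storage.lun0"  # hypothetical target IQN

    # Step 1: SendTargets discovery -- ask the portal which targets it offers.
    subprocess.run(["iscsiadm", "-m", "discovery",
                    "-t", "sendtargets", "-p", PORTAL], check=True)

    # Step 2: log in to the target; the LUN then shows up as an ordinary
    # local SCSI disk (e.g. /dev/sdX) to the operating system.
    subprocess.run(["iscsiadm", "-m", "node",
                    "-T", TARGET, "-p", PORTAL, "--login"], check=True)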

- Performance gap being irrelevant:
Not true. This IS what IT managers care about. In real-world datacentres people are happy to see 150MB/s coming out of dual 4Gb links, and they absolutely expect an I/O overhead for this of no more than 2% of CPU power.
What you propose to them is roughly the following scenario: "Oh, given dual GigE links you'll easily see 60MB/s on certain transactions, at a negligible CPU usage of roughly 25% unencrypted, and just slightly reduced network bandwidth."

Numbers:
- One of your CPUs plus its service contract easily runs $10,000+ -- and now you suddenly waste 10-25% of that on overhead?
- You in fact need two additional GigE LANs to match the reliability of your current SAN.
- You need extremely high-end-specced LAN switches to REALLY match that reliability (think ISL == dual-trunked cross-switch VLANs). Those switches cost a lot more than a few cheap Brocades / Ciscos.
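Taking those figures at face value, the waste is easy to price out (a back-of-the-envelope sketch; the $10,000 CPU cost and the 10-25% overhead range are the figures above):

    # Back-of-the-envelope cost of the iSCSI CPU overhead, per the figures above.
    cpu_cost = 10_000  # one CPU plus service contract, in USD
    for overhead in (0.10, 0.25):
        print(f"{overhead:.0%} overhead burns ${cpu_cost * overhead:,.0f} per CPU")
    # 10% overhead burns $1,000 per CPU
    # 25% overhead burns $2,500 per CPU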

- "As we matched up iSCSI performance against our rather extensive knowledge of real-world workloads, it became pretty clear that a significant majority of applications could run comfortably on iSCSI with no negative performance impact whatsoever."

Shouldn't the lower layers of an infrastructure match up to potential workloads? Or should the customer rather add an FC adapter and remap the LUN to FC once peak loads occur -- like a database backup that would take just 50% longer? Or do we suggest the DMX disk mappings be made in such a way that concurrent I/O will keep the bandwidth under our 'threshold'?


Then comes security. Encryption? Nice thing -- just a slight CPU overhead for those 65MB/s without an offloading HBA, and if you buy the HBA the cost benefit starts wearing off. The same goes if you build separate LANs; not to mention that zoning is slightly less complicated than using 1000-2000 VLANs for those few hosts you have.

- Lack of integrated enterprise-level tools: large environments won't be straightforward to manage, meaning HUGE administrative costs -- not just training, but constant overhead. No issue for toying around, making showcases and customer presentations. But will you be liable for managing a few thousand iSCSI LUNs using self-made scripts or the filer GUI? (Sorry Dave, but I didn't cancel DataFabric Manager :p)

- Hell, without a dedicated HBA you can't even use it to boot a system reliably, and even if you have a dedicated HBA there's no multipathing, etc., which equals a lot of tiny SPOFs in consolidated environments.

- The 10GbE argument:
People have run iSCSI over 10Gbit Ethernet, and they cheered as they reached around 600MB/s sustained throughput.
The funny thing is, a friend of mine used to do movie post-production, and those 600MB/s are just about what a 1997 SGI Octane could push through the FC HBAs they had in it.

1. iSCSI might earn a better reputation once its performance catches up to post-2000 numbers.
2. Working encryption, boot and multipathing at enterprise scale might help. Think SCSI reservations, automatic handling of thousands of encryption sessions, certificates, etc. That's when it works -- because that is what the enterprises already *got*.
3. Any actual new features might help, too.


Florian

P.S.: Thanks a lot that it at least includes iSNS.

Chuck Hollis

Hi Florian, and thanks for the commentary.

Have you had a chance to look at FCoE and offer a perspective?

Thanks

Ethernet Over Copper

I was wondering that too. Thanks for the insight.

