What'll it be with your VMware environment -- FC, iSCSI or NAS?
Now, given that hot debates rage over storage protocols across the industry -- and, of course, within EMC -- layering something like VMware on top raises a whole new round of debate.
And, in the spirit of lively conversation, I thought I'd offer my view of how I'm seeing this play out in the industry and with EMC's customers.
Like all storage protocol discussions, I'm sure I'm going to get my fair share of commentary ...
VMware ESX Can Be A Forcing Function
If you're a smaller shop, and you're looking at going from individual, physical servers to a consolidated ESX server (and many of you are doing just this), you're probably looking at your first storage network.
Although -- technically speaking -- ESX can be direct-attached to an external array without the need for a storage network, a surprising number of people are using their move to ESX as justification to take the plunge on their first shared storage device.
When you're looking at your first shared array, cost and simplicity dominate. We're seeing a clear preference for entry-level iSCSI arrays in this segment.
It makes sense -- iSCSI can be simple and cost-effective, and it uses the same technology base as the networks you're probably already running.
Statistically, although we see some NAS and FC in this space, I'd offer that iSCSI-based arrays are winning the race for entry-level VMware ESX implementations.
Interesting note -- certain mid-tier storage arrays support a reasonable number of direct-attach ports. I wonder how many people go for a SAN (either FC or iSCSI) without looking at the direct connection option?
Or, More Of What You Have
Larger enterprises already have made an investment in FC SANs, and -- not surprisingly -- they want their new VMware ESX servers to plug into the infrastructure they already have.
Let me make this clear: there is no iSCSI vs. FC performance argument any more in this space -- the data is in. IP protocols (iSCSI and NAS) seem to deliver equivalent performance to FC in the vast majority of VMware applications, except maybe for something like video streaming where raw bandwidth *might* matter.
Although there's rarely a technical justification for FC over IP protocols, neither is there a financial justification to eschew the existing FC SAN investment and splurge on a new type of storage infrastructure.
In the established shops that already have SANs, the overwhelming trend is "more of what you have".
It's not religion. It's just being practical.
And Then There's NAS ...
NAS provides some interesting potential options in VMware environments. And most larger shops are already comfortable running important applications against NAS servers, so it's a viable option.
If you're already running a big SAN *and* a big NAS environment, which makes better sense for VMware?
And that's where it gets interesting, at least for me.
First, let's get the performance argument out of the way. As I mentioned above, if you construct a video streaming benchmark, you could make an argument for FC over NAS. But that's the exception, not the rule. I'd go on record saying the vast majority of ESX workloads would be hard-pressed to tell the performance difference between NAS and SAN, at least in an EMC environment.
But, in VMware, every server is a file. And managing files is what NAS is designed to do.
VMware's support of advanced features (like DRS) in non-VMFS environments is getting better. That means individual *.vmdk files can now live in NAS environments and benefit from advanced NAS features at per-VM granularity.
As an example, consider the intersection between ILM and VMware.
We're meeting customers who are generating lots and lots of server images. Any time they need a new server, they take an old one, modify it, and keep both around. Not only are there the current server images that are in production, there's the next one they're working on, the last few they've had, and so on.
And these are not small objects either -- each might run from 1 to 8 GB of storage. And they add up.
We've met more than a few customers from heavily compliant environments that not only want to archive important information, but they also want to archive the virtual machine and application that created it.
Bottom line -- there's *.vmdk proliferation happening. And being able to use NAS's inherent ability to move files around to different service levels and archive devices looks to be genuinely useful.
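To make that concrete, here's a minimal sketch of what file-level tiering can look like when the datastore is just an NFS export. To be clear, this is illustrative only -- not an EMC or VMware feature. The mount points, the 90-day threshold, and the assumption that the images are powered off and self-contained are all hypothetical.

```python
# Minimal tiering sketch: because each VM image is just a file on the NAS,
# ordinary file operations can move images between service levels.
# Assumes the affected images are powered off and self-contained;
# all paths and thresholds below are hypothetical.
import os
import shutil
import time

PROD = "/mnt/nfs/prod_datastore"  # hypothetical production NFS export
ARCHIVE = "/mnt/nfs/archive"      # hypothetical lower-tier NFS export
MAX_IDLE_DAYS = 90                # images untouched this long get archived


def archive_stale_images(prod_root, archive_root, max_idle_days):
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _dirnames, filenames in os.walk(prod_root):
        for name in filenames:
            if not name.endswith(".vmdk"):
                continue
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) >= cutoff:
                continue  # recently touched; leave it on the production tier
            # Preserve the per-VM directory layout on the archive tier
            rel = os.path.relpath(src, prod_root)
            dst = os.path.join(archive_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
            print("archived", rel)


if __name__ == "__main__":
    archive_stale_images(PROD, ARCHIVE, MAX_IDLE_DAYS)
```

The point isn't the script itself -- it's that on a NAS, a server image is just a file, so any file-aware tool (even a twenty-line one) can participate in its lifecycle.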
Another example is QoS. Not all running VMs need the same level of performance. NAS can detect who's getting what service level, and transparently move the right virtual machine to the right tier.
Another useful service level in this discussion is data deduplication. Dedupe is great, but it's hard to guarantee a high service level for production data in a dedupe environment. An interesting extension to the QoS discussion is the ability to move VMware images to a dedupe file system (or backup device) when service levels aren't so demanding.
Information security is another interesting area. Today, EMC and others sell tools that can scan a file looking for sensitive information, and when they find it, they can take action -- raise a flag, wrap it in DRM, etc. I don't think it'll be too long before there's a need to scan VM images looking for stuff that isn't appropriately protected.
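Just to illustrate the idea, here's a toy sketch of what scanning for sensitive-looking patterns might look like once a VM's file system has been mounted somewhere readable. Again, purely illustrative -- the mount path and the two patterns are hypothetical, and real tools are far more sophisticated than a pair of regular expressions.

```python
# Toy content scanner: walk a mounted guest file system and flag files that
# contain sensitive-looking patterns. The mount path and patterns are
# hypothetical; a real tool would do much more (and act on what it finds).
import os
import re

MOUNTED_IMAGE = "/mnt/vm_image_root"  # hypothetical mount of a guest file system

PATTERNS = {
    "ssn-like": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like": re.compile(rb"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}


def scan_for_sensitive_data(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    sample = f.read(1 << 20)  # sample the first 1 MB per file
            except OSError:
                continue  # unreadable; skip
            for label, pattern in PATTERNS.items():
                if pattern.search(sample):
                    # A real tool would raise a flag, wrap in DRM, etc.
                    print(label, "match in", path)
                    break


if __name__ == "__main__":
    scan_for_sensitive_data(MOUNTED_IMAGE)
```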
And, as long as we're talking security, let's not forget that NAS introduces an additional layer of well-understood security and authentication not typically found in SAN environments. Not a big deal today, but it might be in the future.
Of course, no discussion would be complete without replication. NAS has a mature set of capabilities for local and remote copying that complements ESX's capabilities -- more options.
Now, I'm sure over time, much of this may find its way into ESX itself. But alternative options are always nice things to have.
The Engineer In Me Thinks ...
VMware ESX's basic abstraction is a file. In VMware, every server image is a file.
It makes a certain amount of sense that any storage device that directly supports this abstraction will have certain advantages over time.
Part of me thinks that -- at the end of the day -- we'll find people starting to see the potential advantages of NAS in a VMware environment.
Widespread proliferation of VMware means more information -- not only the data you already have, but new data in the form of containerized server images. Servers are now just another form of information.
And I think the discussion -- ultimately -- will be driven more by information management concerns and less by performance or economics.
We'll see.
You're on the right thought path with NAS and VMware ... We are not just starting to see the benefits of NAS, we are over a year into it ...
Here's a list of NFS benefits that we currently enjoy:
http://viroptics.blogspot.com/2007/11/why-vmware-over-netapp-nfs.html
I hope we will soon see new protocols like NFS-RDMA take over as the preferred protocol for VMware ...
And I hope to see clustered NAS systems (like Isilon) catch on ...
Posted by: Dan Pancamo | December 04, 2007 at 09:36 PM
Just a question ... do you know the percentage of time that EMC storage is attached in a VMware environment? I hear something like 65% of the time storage is introduced in a VMware environment it's an EMC product? Can it really be that high?
Posted by: eric zappia | October 22, 2008 at 12:25 PM
Hi
I've seen 2 or 3 different surveys, none of them perfect, with attach rates ranging from the low 30s to over 50%.
I think the variances have to do with which segment of the market is being surveyed.
Bottom line -- we have healthy attach rates in VMware environments, something we work to earn each and every day.
Posted by: Chuck Hollis | October 22, 2008 at 06:24 PM
Which is better, Xen or VMware? Does Xen have an open source backend for user-based access and maintenance?
Also, is there any backend for managing Xen with user-based access rights?
Posted by: Linux VPS | February 27, 2009 at 03:20 PM