
October 30, 2014


Chad Sakac

Disclosure: EMCer here.

Chuck, I tend to agree with you - but this reinforces my view that VMware needs to prioritize the vehicle for the ecosystem (not just VMware) to innovate around the storage stack in the VMkernel.

Extensibility in the CURRENT storage stack is very limited, and very gated (relative to other ecosystems). Beyond the PSA in 5.5 there is no real (supportable) route - and that is very, VERY limited.

In "Future vSphere releases" VVOLs and IO filters provide a potential route - but IO filters we should be excited about, but not singing and dancing - that is a vehicle that exists in other forms in other places. VVOLs on the other hand are a whole new beast, and represent new opportunity for innovation and awesome.

I think the whole ecosystem needs to rally around VVols and IO filters (I'm doing everything I can over here in EMC land, and am NEVER satisfied) - so I fully support them. But let's be clear: there's a fundamental limiting element in vSphere for 3rd parties that doesn't exist in the KVM/OpenStack ecosystem - heck, even the Microsoft ecosystem. The stated reason (kernel security) is an excuse (maybe one with merit), but that is an engineering problem, and shouldn't be treated as a business problem.

I did a demo at VMworld of a prototype ScaleIO SDC (client) embedded in the VMkernel (and was VERY, VERY clear - prototype only, no committed GA or date). This gives customers CHOICE: VSAN tends to win in vSphere-only environments; ScaleIO tends to win when vSphere-hosted VMs represent some, but not all, of the workloads.

I think the right approach would be to open this up to anyone and everyone (in the right way, of course).

Otherwise, VSAs are really the only choice - with the limitations you describe, which then artificially limit their use cases.

Thanks for adding to the dialog. I am a big believer in VSAN, EVO:RAIL - and the whole ecosystem (for the betterment of the big picture rather than any one vendor, including EMC - we must compete freely and deliver innovation and value).

Mark Burgess

Hi Chuck,

I believe most VMware shops will continue with external arrays and start to run VSAN alongside them for specific workloads (e.g. VDI and Tier 2/Tier 3). As VSAN matures I am sure we will see a higher percentage of workloads sitting on VSAN.

Will the external array disappear? I doubt it, but it will depend on the innovation and value proposition the vendors provide. I do believe that the external array vendors will have to move to more of a software-defined model, whereby you can carry forward your investment in the software to new hardware. Also, all technologies have trade-offs - the advantages of today's external arrays are better RAID options and the fact that there is no evacuation process required when you take down a node.

It would be really useful if somehow VMware could publish performance comparison numbers that compare VSAN to the leading VSAs.

I am sure most of us believe it is much more efficient, but that may not actually translate to any significant real world difference.

It reminds me of what a VMware NSX guy said to me recently:

1. The kernel-based Distributed Firewall can do 20 Gb/s
2. The 3rd-party VM-based firewall can do 2 Gb/s

I think Kernel Modules win the efficiency battle by a fair margin!!!

Like you, I also think Nigel has got it wrong with his lock-in argument (more thoughts at http://blog.snsltd.co.uk/lock-in-choice-competition-innovation-commoditisation-and-the-software-defined-data-centre/).

If he changed his blog title to "EVO:RAIL Is Much Worse Than a HW Array" then I would agree.

I just do not get EVO:RAIL - to the point that I must be missing something. I really need to go away and recheck the pricing (more thoughts are at http://blog.snsltd.co.uk/vmware-evorail-or-vsan-which-makes-the-most-sense/).

The highlights for me are:

1. It is extremely expensive
2. It is extremely constrained
3. It is the complete opposite of Software-Defined
4. It sums up what is bad about Hardware-Defined that VMware is fighting against

For me the biggest advantage of a software-defined solution is that the software and hardware are independent and you can always move the software onto the latest and greatest hardware - with EVO:RAIL the software is tied to the hardware.

I really like VMware's SDDC strategy and VSAN and NSX, but like Nigel I do not like lock-in and the associated poor value - for me EVO:RAIL looks like lock-in.

Your comments would be appreciated.

Best regards

Chuck Hollis

Hi Chad -- nice to hear from you! I hope all is well ...

I do agree with you on the importance of ecosystem support. Now that I'm at VMware, I have a brand-new appreciation of just how hard it can be -- hard for VMware engineers, hard for our partners, and hard for customers who try to line up the pieces to come up with a workable solution!

VVols and IOfilters are excellent examples -- way harder for everyone than they looked when first proposed. But, happy to say, making progress for a "vSphere Future Release" as you put it.

That being said, you and I both know that there are matters of degree when it comes to being integrated into a hypervisor. "Approach A" gives a mechanism for a nice loadable kernel module to improve IO efficiency. "Approach B" gives the storage code the keys to the kingdom -- resource allocation, prioritization, etc. -- all the undocumented stuff.

I agree that Approach A should be provided to the ecosystem. We, as VMware, are late on that one, but it's coming. However, I don't think we'll ever see Approach B.

BTW, why is it that there aren't any third-party modules for storage array code? Always wondered about that :)

-- Chuck

Chuck Hollis

Hi Mark -- thanks for the comments and questions. I'll do my best.

I agree with your side-by-side adoption model. That being said, we are seeing "fresh builds" aplenty with no external storage array.

We have run several apples-to-apples performance comparisons of VSAN vs several VSAs: same hardware, same workload. We have convinced ourselves of our claims; however, we don't want to go poking one of our partners in the eye with overly competitive benchmark figures.

The difference is measurable: smaller configs or higher VM density, real-world application performance gains, etc.

EVO:RAIL is all about ease-of-consumption. You buy a thingie, turn it on, and start provisioning VMs. Really, it's that simple.

Some people really get that and are interested, others look at this and prefer to handpick their own hardware, configure it, etc. With VSAN (and vSphere) both options are available. I consider EVO:RAIL complementary to other consumption models. Some will like it, some won't. Choice is good.

Hope that helps!

-- Chuck


Josh

Hi Chuck,

Just wondering if you can provide some of the data behind the statement below - I'd be interested to see what you're talking about.

"Write-heavy transactional workloads and most VSAs don’t seem to get along well."


Chuck Hollis

Josh, it's considered good form to disclose your affiliation when commenting.

Your Twitter profile states that you are a solutions and performance engineer for Nutanix. I think people would find that a relevant piece of information to put your questions in context.

If you were a customer, partner or someone else I'd spend the time to share a detailed answer. Given your role, there is nothing I could say or show that would convince you otherwise, so I'm not going to even try.


-- Chuck


Hi Chuck,

I work for the Government (can't disclose more - sorry).

Curious as to how VSAN might play a role in businesses/organizations with data center (size) limitations that would like to scale further. Also, what would be the value for a customer incorporating VSAN compared to traditional storage infrastructure (VNX, VMAX, etc.)?

Chuck Hollis


Understand your situation. Quick notes here; for more, drop me a note at chollis (at) vmware (dot) com.

One school of thought is that -- for folks who are already using rack form-factor servers -- there's the potential to reclaim internal server real estate for storage purposes: power, cooling, slots for drives, etc. Even being conservative, a 16-node cluster with four 1 TB drives each gets you 64 TB raw. Not an enormous amount, but serviceable without additional server or storage footprint.
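That back-of-envelope math can be sketched as follows. The raw figure matches the 16-node example above; the mirroring overhead is my own illustrative assumption (VSAN's default FTT=1 policy keeps two copies of each object), not a number Chuck cites:

```python
# Back-of-envelope VSAN cluster capacity.

def raw_capacity_tb(nodes: int, drives_per_node: int, drive_tb: float) -> float:
    """Raw cluster capacity in TB, before any replication overhead."""
    return nodes * drives_per_node * drive_tb

def usable_capacity_tb(raw_tb: float, copies: int = 2) -> float:
    """Approximate usable capacity after mirroring.
    copies=2 assumes the default FTT=1 policy (illustrative assumption)."""
    return raw_tb / copies

raw = raw_capacity_tb(16, 4, 1.0)   # 64.0 TB raw, as in the example above
usable = usable_capacity_tb(raw)    # ~32.0 TB usable with FTT=1 mirroring
print(raw, usable)
```

Dedupe, compression, and per-object policy differences would move the usable number around in practice; this is only the first-order estimate.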

VSAN, of course, can support much higher capacities: bigger disks, more disks per server, etc. -- just giving you a flavor here.

The other side of the coin is sheer storage density at scale. Using mainstream approaches, it seems that dedicated storage arrays can offer more density per floor tile than, say, racking and stacking servers for that purpose. But you still have to put the servers somewhere.

If we were chatting, I'd ask specifics about capacity, workload profiles, etc. Drop me a line if you'd like to discuss more.

-- Chuck


Chuck Hollis

  • Chuck Hollis
    SVP, Oracle Converged Infrastructure Systems

    Chuck now works for Oracle, and is now deeply embroiled in IT infrastructure.

    Previously, he was with VMware for 2 years, and EMC for 18 years before that, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Vero Beach, FL with his wife and four dogs when he's not traveling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not ever buy him a drink when there is a piano nearby.

    Note: these are my personal views, and aren't reviewed or approved by my employer.
