
December 12, 2007


Dave Vellante

Good piece. Well written as usual. Some points could use clarification.

It seems that the big 3 all face similar challenges today with respect to the scale of these devices. Once a customer hits the PRACTICAL limits of the virtualization engine, they have to install another one. How does the capacity behind each engine participate in a single virtual pool? Am I mistaken in implying Invista has this challenge along with SVC and USPV?

As well, it seems all three approaches face similar challenges when it comes time to upgrade/replace the virtualization engine. How do customers do that non-disruptively?

In my Wikibon interactions I've communicated with and personally interviewed dozens of SVC, USPV and even some Invista customers -- all seem to be effectively solving the problems they were asked to address. But for now, to be frank, the "Invista scales better" rap is not resonating.

I'll be the first to admit mine is a limited sample but EMC's own Invista 2.0 press release quotes a university customer...not typically the reference model for large scale, complex storage (with all due respect to Purdue University). Thanks. -dave from Wikibon.

Chuck Hollis

Hi Dave:

Thanks for your great questions.

First, when it comes to "scale", I usually have to remind myself to think in two dimensions -- capacity and performance.

With regards to capacity, I think all of the "big 3" will be working to increase the number of volumes that their products can address as part of a single domain.

I personally believe that EMC's approach has an architectural advantage here as the numbers get larger over time, but I don't think that's the important issue, nor do I think it is especially relevant.

I think for many customers a more important aspect of "scale" is throughput and - yes - additional latency added by the virtualization device.

Comparing normal I/Os routed through a SAN device vs. an appliance or a storage array isn't even close to being a fair contest.

Keep in mind, with Invista's approach, the vast majority of I/Os are kept on the SAN, and not touched by the virtualization device, unlike array and appliance approaches.

Think microseconds vs. milliseconds.
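To make the split-path idea concrete, here's a toy latency model. The numbers and function names are illustrative assumptions for this sketch, not measured Invista figures: in an in-band design every I/O is staged through the appliance or array, while in a split-path design the switch ASIC rewrites frame headers at wire speed and only exception traffic reaches the control processor.

```python
# Toy model of latency added by the virtualization layer.
# All figures are illustrative assumptions, not vendor measurements.

SWITCH_REDIRECT_US = 5       # split-path: header rewrite in the switch ASIC
APPLIANCE_HOP_US = 1000      # in-band: I/O staged through an appliance (~1 ms)

def added_latency_us(n_ios: int, split_path: bool) -> int:
    """Total extra latency the virtualization layer adds across n_ios."""
    per_io = SWITCH_REDIRECT_US if split_path else APPLIANCE_HOP_US
    return n_ios * per_io
```

Under these assumed numbers, the split-path penalty stays in the microsecond range per I/O while the in-band penalty is roughly three orders of magnitude larger -- which is the "microseconds vs. milliseconds" point above.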

Sure, this won't be an issue for everyone, but as you know there's a certain segment of customer that looks at this sort of thing very carefully.

You probably are aware that Invista's architecture separates data path from control path. If a customer uses dual pathing from the host (most do), the upgrade of the data path (e.g. the SAN switch) is almost identical to any other SAN upgrade. Takes a bit of planning, but this is done non-disruptively on a routine basis thousands of times a year.

I think this is important as technology evolves from 4Gb to 8Gb to whatever lies ahead.

The other component that's subject to upgrading is the control processor. I am not close to the details regarding Invista 2.0, but I believe that it supports fail-over upgrade scenarios, allowing you to upgrade one element of the control-processor pair at a time.
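A fail-over upgrade of a redundant pair can be sketched like this -- a generic rolling-upgrade pattern, offered as an assumption about how such a procedure works rather than as Invista's actual mechanism:

```python
class ControlProcessor:
    """Minimal stand-in for one node of an active/standby pair."""
    def __init__(self, name: str, version: str):
        self.name, self.version, self.active = name, version, True

def rolling_upgrade(pair: list, new_version: str) -> None:
    """Upgrade one node at a time; its peer carries the load meanwhile."""
    for node in pair:
        node.active = False              # drain this node...
        peer = next(p for p in pair if p is not node)
        peer.active = True               # ...peer takes over the control path
        node.version = new_version       # upgrade the idle node
        node.active = True               # rejoin; service never fully stops
```

The key property is that at every step at least one node of the pair remains active, so hosts with dual paths never lose the control path entirely.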

Since you've been in the industry a while, you're probably aware that the vast majority of larger customers are unwilling to be part of a vendor's press release -- especially the really interesting ones.

This has nothing to do with technology or vendor relations, and has more to do with legal and PR concerns by most large enterprises.

To use vendor-generated press releases as an indicator of a product's market or technical success wouldn't be among my suggested best-practices for analysts. I'd bet it'd be easy to be led to some erroneous conclusions that way.

Thanks for writing, Dave -- and best of luck with Wikibon.


Touché Chuck. Time to do some more homework for sure. Good luck with 2.0-- hearing lots of good things from customers and expectations are high.

Barry Whyte

Chuck, I must agree with Dave, a well written piece. I guess you are admitting that Invista is aimed much more at the enterprise level, where you are happy to help protect the investment (I think I called it a 'cash cow' in the post on my blog) of copy services in the controller.

While I will admit that a very-large-scale solution that simply cracks the packet at wire speed means you can use very little processing power to provide 'redirection' of packets, it most definitely lacks the power and ability to 'snoop' the data -- i.e. cache it, manipulate it, copy it, thin-provision it, replicate it, mirror it, de-dupe it, whatever you want to do with it.

I may have been a little harsh in my write-up of the recent re-announce (wasn't this announced back in August?), but I really believe that unless you have the processing power that is now abundant in an appliance (quad cores, with octal cores not far off) and as yet unavailable in a line card -- not to mention the ability to very quickly sync cluster-wide updates -- the flow-through design is fundamentally flawed in its potential.

Correct me if I'm wrong, but I believe the point-in-time copy implementation in Invista is actually just a clone process, and the destination is not available until the source has been fully copied? What about incremental point-in-time copies, cascades, and space-efficient copies?

For those that want to really roll out heterogeneous storage virtualization with vendor-neutral, feature-rich copy services, isn't an appliance- or controller-based solution the only currently viable option?
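For readers following along, the distinction being drawn here -- a full clone, whose destination is unusable until every block has been copied, versus a space-efficient copy-on-write snapshot -- can be sketched roughly as follows. The classes are illustrative simplifications, not any vendor's implementation:

```python
class FullClone:
    """Full copy: usable only after every block has been duplicated."""
    def __init__(self, source: list):
        self.blocks = list(source)      # copies everything up front

class CowSnapshot:
    """Space-efficient point-in-time copy: shares the source's blocks
    and stores only the blocks written after the snapshot was taken."""
    def __init__(self, source: list):
        self.source = source            # shared, unmodified blocks
        self.delta = {}                 # block index -> new data

    def read(self, i: int):
        # Serve changed blocks from the delta, everything else from source.
        return self.delta.get(i, self.source[i])

    def write(self, i: int, data):
        self.delta[i] = data            # source stays untouched
```

The space-efficient version consumes capacity only for changed blocks, which is what makes incremental copies and cascades practical at scale.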


Chuck Hollis

Hi BarryW -- good to hear from you.

I have to apologize regarding my tardiness in posting your comment and replying -- I've been completely unplugged from the grid, and -- let me tell you -- it's a refreshing experience.

That being said, I'm going to take the easy way out here and punt. There are others at EMC who could probably do a better job of replying and debating the points you raise.

Best wishes for a successful 2008!

Dave Vellante

Okay Chuck...in the spirit of doing some homework we went out and contacted Purdue directly (without any knowledge or assistance from EMC) to find out what was really going on there with Invista 2.0. While it's early in the production phase, I have to say we were very impressed that this example was actually representative of real commercial operations.

Still waiting for that banking reference but here's the case study writeup along with my apologies for dissing your reference account!!

