
March 11, 2009

Comments

Jerry Thornton

Maybe instead of I/O dedupe it should be called spindle dedupe, because you're not removing or reducing I/O, you're removing the unnecessary spindles needed to support a given I/O load. Maybe I'm being pedantic, but just my two cents.

Chuck Hollis

Good thought!

Andrew MacDonald

I/O dedupe is what we do in PowerPath, and it's great in VMware environments. We do spindle and I/O dedupe; talk about comprehensive solutions.

Bob Primmer

Good post, Chuck.

StorageTax

The DMX4 config comparison looks interesting, but I can't help but think an array configured as described has a very narrow-band advantage over a "traditional" configuration.

Less power and cooling is always a good thing. The real question I have is: what is the real performance benefit of the hybrid config? I find it difficult to believe that any array is going to deliver 60% better overall performance by converting less than 1% of your usable storage to flash, especially when you convert nearly 50% to slower SATA drives.

I would love to see EMC publish some test data to back up these numbers with real workloads. Not single-application numbers, but real, honest-to-goodness multi-application mixed workloads, with 20+ hosts connected to the array.

I'm guessing that there will actually be a small performance degradation, simply because in the real world 90% of the work isn't done on 1% of the data.

Well, that's the way it is in my world, anyway.

Chuck Hollis

Hi StorageTax

Good news! Just ask any EMC "SPEED" guru (they're part of a large team of field performance experts); they have access to literally dozens of real-world application profiles, both from our labs and from customer environments, covering large single hosts (e.g. mainframes) as well as multi-host environments.

I think you'll be pleasantly surprised.

The magic is that most "hot" applications have pronounced disk hot spots that are easily revealed with just a bit of analysis.

On a DMX, the large non-volatile write cache does a good job of soaking up most writes, but sooner or later has to dump to disk -- EFDs can help there. And random reads tend to defeat cache algorithms, but they make EFDs shine.
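That "bit of analysis" is essentially ranking extents by observed I/O and checking how concentrated the workload really is. Here's a minimal sketch of the idea (the function name and the sample counts are hypothetical, not any actual EMC tooling):

    # Hypothetical sketch: given per-extent I/O counts from a sampling window,
    # measure what share of the total I/O the busiest slice of capacity serves.
    def hot_spot_share(io_counts, capacity_fraction=0.05):
        ranked = sorted(io_counts, reverse=True)              # busiest extents first
        top_n = max(1, int(len(ranked) * capacity_fraction))
        total_io = sum(ranked)
        return sum(ranked[:top_n]) / total_io if total_io else 0.0

    # A skewed workload: three hot extents out of 100 equally-sized extents
    counts = [5000, 4200, 3900] + [50] * 97
    share = hot_spot_share(counts)
    print(f"Top 5% of extents serve {share:.0%} of the I/O")

When the workload is skewed like that, the top few percent of capacity ends up serving most of the I/O, which is exactly the profile where a small EFD tier pays off.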

There are a number of before-and-after comparisons floating around that have somehow escaped our corporate perimeter, if you go looking. All are pretty amazing.

It'd be great to be able to have a detailed discussion with you, or whoever, and understand your environment better.

Thanks for the comment!

-- Chuck

the storage anarchist

StorageTax -

Indeed, not all environments fit the 80/20 (or 90/10) rule, where a small subset of spindles supports the overwhelming majority of the I/O workload.

But most do.

This is why wide-striping works, for example: every spindle supports a small subset of the total workload. The idea is similar with flash drives: put the most-used data on the fast storage to deliver the IOPS, put the least-used data on slow SATA, and demote everything else based on utilization (heck, wide-stripe everything else, if you'd like).
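As a rough sketch of that placement idea (the thresholds, tier labels, and extent names below are invented purely for illustration, not anything from a real array), it boils down to sorting extents by how hard they're hit:

    # Illustrative only: access thresholds and extent names are made up for the example.
    def assign_tier(accesses_per_day):
        if accesses_per_day > 10000:
            return "EFD"    # hottest data earns the flash tier
        elif accesses_per_day > 500:
            return "FC"     # warm data stays on Fibre Channel spindles
        else:
            return "SATA"   # cold data gets demoted to cheap capacity

    extents = {"ext-001": 42000, "ext-002": 1200, "ext-003": 15}
    placement = {name: assign_tier(hits) for name, hits in extents.items()}
    print(placement)        # {'ext-001': 'EFD', 'ext-002': 'FC', 'ext-003': 'SATA'}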

The EFD advantage is not just IOPS, though. EFDs can deliver response times unattainable with wide-striping. So rather than a degradation, most will see a rather significant improvement in performance.
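To see why the response-time effect can be so large, here's a back-of-the-envelope calculation; the 1 ms and 8 ms service times are assumptions for illustration, not measured figures:

    # Assumed service times, purely illustrative: ~1 ms for an EFD read, ~8 ms for a 15K FC spindle.
    efd_ms, fc_ms = 1.0, 8.0

    def avg_response(flash_fraction):
        # weighted average read response time when some fraction of reads land on flash
        return flash_fraction * efd_ms + (1 - flash_fraction) * fc_ms

    for frac in (0.0, 0.5, 0.8):
        print(f"{frac:.0%} of reads on EFD -> {avg_response(frac):.1f} ms average")

Under those assumed numbers, moving 80% of the reads onto flash cuts the average read response time from 8.0 ms to 2.4 ms, even though flash holds only a small slice of the capacity.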

Admittedly, we're at the beginning of leveraging flash, but saving money for the most performance-hungry applications is an excellent place to start!

Martin G

Should you not wide-stripe everything anyway, Anarchist? Just have separate wide-striped pools: one for flash, one for FC, and one for SATA?

StorageTax - we've had some modelling done on upgrading our DMX4s; a relatively small amount of flash would make a significant difference, and that's for a mixed workload.

Daryll Chen

"Now, he didn't say that "tape was dead" -- nobody really believes that -- but customers were certainly investing in a whole lot less of it lately, and tape was being pushed farther and farther down in the hierarchy."

Why continue to buy and maintain tape when refurbished EMC equipment can be used for most 2nd tier storage environments?

http://www.reliant-technology.com/products/storage/emc/clariion/


  • Chuck Hollis
    SVP, Oracle Converged Infrastructure Systems
    @chuckhollis

    Chuck now works for Oracle, and is now deeply embroiled in IT infrastructure.

    Previously, he was with VMware for 2 years, and EMC for 18 years before that, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Vero Beach, FL with his wife and four dogs when he's not traveling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not ever buy him a drink when there is a piano nearby.

    Note: these are my personal views, and aren't reviewed or approved by my employer.
