
July 16, 2013


Howard Marks

Thanks for the shout-out, Chuck. I count myself lucky that while I sometimes write about recent events in the storage biz, the folks at NetworkComputing don't call my column news, so I don't have the deadline pressure and can write a few days later, after some reflection.

The problem I see with a ScaleIO-style solution is that to provide reliable storage it has to synchronously mirror data between nodes. That's easy if it only has to be EBS-like and deliver millisecond latency, but to deliver 100K IOPS at 800µs you'll need a lot of network and a lot of CPU.
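To put rough numbers on that, here's a back-of-envelope sketch. The I/O size, replica count, and network round-trip figure are my own illustrative assumptions, not measured ScaleIO behavior:

```python
# Back-of-envelope: the cost of synchronous mirroring at 100K IOPS.
# All figures below are illustrative assumptions, not vendor numbers.

iops = 100_000          # target write IOPS
io_size = 4 * 1024      # assume 4 KiB writes
copies = 2              # primary + one synchronous mirror

# Network bandwidth consumed just shipping the second copy:
mirror_bytes_per_sec = iops * io_size * (copies - 1)
mirror_gbps = mirror_bytes_per_sec * 8 / 1e9
print(f"Mirror traffic: {mirror_gbps:.1f} Gb/s")

# Latency budget: an 800 µs I/O must absorb a network round trip
# to the mirror node before it can be acknowledged to the app.
budget_us = 800
network_rtt_us = 100    # assume a fast 10GbE round trip
remaining_us = budget_us - network_rtt_us
print(f"Left for media + software: {remaining_us} µs")
```

Even under these generous assumptions, mirroring alone eats a third of a 10Gb port, and an eighth of the latency budget goes to the wire before the storage software does any work at all.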

When you combine distributed storage and compute, the number of potentially noisy neighbors, and the instruments the neighbor boy's band plays, all increase. Will the demand for IOPS to store the second copy starve that server's app of CPU?

It is a brave new world we enter. It could be great to eliminate the storage array as a thing you can point at (though I know you're working hard to make it a thing you can buy from EMC), but it could also be just a new form of complexity.

Chuck Hollis

Hi Howard -- thanks for dropping by and sharing your thoughts.

Both are good points.

On your first point (reliable storage requiring some sort of synchronous copying), you're right -- it's hard to get around that requirement. The good news is that network and CPU technology are improving at a blistering rate. Who would have thought multiple 10Gb ports would be a common default for server builds? Or 32GB of RAM? Things move along at a great clip these days.

As far as apps competing for resources, it's an age-old issue -- and if one of the apps happens to be a storage software layer, it's a familiar refrain. The answers are -- as always -- try to do smart scheduling/prioritization in either the operating system or hypervisor, or -- if it's really important -- invest in dedicated hardware.
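One classic way to do the prioritization Chuck mentions is to rate-limit the storage layer's replica traffic so it can't starve a co-resident app. This is a toy token-bucket sketch of the idea (my own illustration, not how any particular hypervisor or OS scheduler actually works):

```python
# Toy illustration of throttling a storage service's mirror traffic
# with a token bucket, so co-resident apps keep their share of CPU
# and bandwidth. A sketch of the concept, not production code.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if the replica write may go out now."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False    # defer the replica write; the app keeps its turn

bucket = TokenBucket(rate_bytes_per_sec=500e6, burst_bytes=1e6)
print(bucket.try_send(4096))   # within the burst allowance
```

In practice you'd reach for the mechanisms the platform already gives you (hypervisor resource pools, OS I/O schedulers, cgroup-style controllers) rather than rolling your own, but the underlying trade is the same: cap the storage layer's appetite or give it dedicated hardware.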

On the last point, you're right. Every simplification solution seems to introduce a new form of complexity, and I suppose this will be no exception.

-- Chuck

John F. Kim

The synchronous data mirroring need not be too demanding if traffic to the shared storage is local within each server most of the time. And if you need some data to be coherent across two locations at once, I heard of some amazing EMC technology that's good at doing that--it's called VPLEX!
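John's locality point is easy to quantify with another rough sketch. The read/write mix and locality figures here are hypothetical assumptions chosen just to show the shape of the effect:

```python
# How data locality shrinks the network cost of a converged design.
# Illustrative assumptions: 4 KiB I/Os, a 70/30 read/write mix, and
# a placement policy that keeps 90% of reads on the local node.
iops = 100_000
io_size = 4 * 1024
read_fraction = 0.7
local_read_fraction = 0.9

remote_reads = iops * read_fraction * (1 - local_read_fraction)
mirror_writes = iops * (1 - read_fraction)   # each write ships one copy
network_iops = remote_reads + mirror_writes
network_gbps = network_iops * io_size * 8 / 1e9
print(f"{network_iops:.0f} network IOPS, {network_gbps:.2f} Gb/s")
```

Reads served locally never touch the wire, so only the remote-read remainder and the synchronous second copy of each write consume network, a fraction of what a fully remote design would need.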

Beyond Chuck's mention of 10GbE, there are also 40GbE and 56Gb InfiniBand options that increase throughput while reducing latency and CPU utilization. For example, EMC's Isilon already supports InfiniBand as a cluster interconnect.



  • Chuck Hollis
    SVP, Oracle Converged Infrastructure Systems

    Chuck now works for Oracle, and is now deeply embroiled in IT infrastructure.

    Previously, he was with VMware for 2 years, and EMC for 18 years before that, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Vero Beach, FL with his wife and four dogs when he's not traveling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not ever buy him a drink when there is a piano nearby.

    Note: these are my personal views, and aren't reviewed or approved by my employer.
