
March 04, 2010

Comments

marc farley

Good post Chuck.

Roy Mikes

Wow, great article. Enjoyed reading it. Keep up the good work!

the storage anarchist

Excellent article indeed!

At the risk of turning this post into Storage Caching 201, there are a few nits you might want to clean up:

In the "Where to Cache" section, the 4th place data can be cached is in the drive itself. This is very important for both disk and flash-based drives, as the drive's cache not only assists writes but is sometimes used to pre-fetch data on reads (often reading full tracks or even cylinders to satisfy a block read request).

In the "Value of Read Cache", the cache can be of value the FIRST time the data is read, if the I/O subsystem was able to pre-fetch the data (a large part of Symmetrix' secret sauce has to do with prefetch algorithms). I know you discuss algorithms later, but "first time read hit" is perhaps one of the biggest benefits of an intelligently cached storage array.

On the "Value of Write Cache," you assert that write cache is "non-volatile." It's a nit, but the cache itself isn't always implemented using non-volatile memory (NVRAM or even NAND Flash) - and when it is, the write cache is usually extremely small (4-8GB in the DS8K for example). Vendors take different strategies to protect write data that has not yet been destaged from loss (due to power failure, for example). Most will mirror the write data to two different cache boards/components, but the power failure scenario is addressed in a variety of ways: For example, battery hold-up (as in Hitachi's USP-V) is used to keep the SDRAM powered for several days (36-72 hours tops, I understand). Symmetrix V-Max and CLARiiON use a vaulting strategy - internal standby power provides hold-up long enough for cached data to be destaged to vault drives; previous generations of Symmetrix used a "destage to destination" strategy that pushed the writes out to the target drives under standby power.

This handling of power loss also forces concessions in the write-caching strategy. In the NVRAM systems, the maximum amount of unwritten data is limited by cache size; in destage-to-destination, the "write pending" limit must be restricted to what can be destaged within the SPS (standby power supply) window AND is further limited by how much data is destined for any single drive. The vaulting strategy affords the most flexibility, since the destage time is fixed - a V-Max system can thus use 100% of its global cache memory to hold writes.
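A back-of-the-envelope sketch of how each power-loss strategy caps write-pending data - every figure below is invented purely for illustration, not a vendor spec:

```python
# Back-of-the-envelope sketch of how the power-loss strategy caps the amount
# of write-pending data. Every number here is invented for illustration only.

def nvram_limit_gb(nvram_gb):
    # NVRAM-backed designs: unwritten data can never exceed the (small)
    # non-volatile cache itself.
    return nvram_gb

def destage_to_destination_limit_gb(sps_window_s, per_drive_mb_s, drives):
    # Destage-to-destination: everything must reach its target drives before
    # standby power runs out, and no single drive can be asked to absorb more
    # than it can write within that window.
    per_drive_gb = sps_window_s * per_drive_mb_s / 1024
    return per_drive_gb, per_drive_gb * drives

def vaulting_limit_gb(global_cache_gb):
    # Vaulting: the destage target and time are fixed at design time, so in
    # principle the whole global cache can hold write-pending data.
    return global_cache_gb

print(nvram_limit_gb(8))                             # e.g. 8 GB total
print(destage_to_destination_limit_gb(120, 40, 64))  # per-drive cap, system-wide cap
print(vaulting_limit_gb(128))                        # the full 128 GB of global cache
```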

Oh, and unless additional (external) backup power is provided, the system with 72-hour hold-up will simply lose the data once the internal batteries are drained.

As to the importance of algorithms, Symmetrix has a rather uncanny ability to self-optimize its algorithms. For example, you'd expect that an Oracle server running with 128GB of (local) SGA cache using storage on a V-Max with 128GB of usable global memory would get little benefit from the V-Max cache. But in fact, the V-Max will deliver as much as an 80% cache hit rate - a true testament to cache algorithms that can predict what Oracle is not able to cache locally.
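A hedged toy model of why the second tier isn't automatically useless: if the array simply ran the same-size LRU as the host it would add almost nothing, but a cache that behaves differently (here, naive sequential prefetch standing in for much smarter real algorithms) can still catch what the host keeps evicting. The sizes, depths and cyclic-scan workload are all invented:

```python
# Toy two-tier model: a same-size LRU behind another LRU adds little, while a
# differently-behaved array cache still can. Not any vendor's real algorithm.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()
        self.hits = self.misses = 0
    def read(self, block):
        if block in self.data:
            self.hits += 1
            self.data.move_to_end(block)
            return True
        self.misses += 1
        self.data[block] = True
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)        # evict least recently used
        return False

class PrefetchCache(LRUCache):
    def read(self, block):
        hit = block in self.data
        if hit:
            self.hits += 1
        else:
            self.misses += 1
            for b in range(block, block + 8):    # stage the next "track"
                self.data[b] = True
            while len(self.data) > self.capacity:
                self.data.popitem(last=False)
        return hit

host = LRUCache(128)                             # stand-in for the local SGA
array_lru, array_pf = LRUCache(128), PrefetchCache(128)
for _ in range(4):                               # repeated sequential scans
    for blk in range(512):                       # working set > either cache
        if not host.read(blk):                   # only host *misses* reach the array
            array_lru.read(blk)
            array_pf.read(blk)

# On this toy workload: host ~0.0, array LRU ~0.0, array prefetch ~0.88
for name, c in [("host LRU", host), ("array LRU", array_lru), ("array prefetch", array_pf)]:
    print(name, round(c.hits / (c.hits + c.misses), 2))
```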

Finally, there's a whole 'nother angle to pursue, and that's how write caches in servers, the network and arrays interact with regard to consistency, backups and replication. Maybe you/we'll tackle that one another day :)

Jay Livens

A great post! I was pondering read cache just last night and your article answered many of my questions.

nate

I was thinking just the opposite as far as good caching algorithms go. My last storage vendor had an interesting setup: the disk storage they OEM'd for their NAS system had such bad algorithms that their best practices included disabling the write cache on the arrays, since having it enabled actually hurt performance. Instead they just cached more in the NAS layer.

What was even stranger to me is that, despite having write cache disabled on the disk arrays, they still kept batteries in the systems and still wanted to replace them when they expired.

Because of that experience, I suppose, in their proposals to us for a system refresh they wanted us to assume a near-0% cache hit rate, since the workload was so random. They didn't want to believe me when I said that even if the workload really is random, cache can really help by organizing/ordering writes to the spindles. The people I was dealing with just didn't have experience with good storage, I guess. Of course we ended up not going with their proposal.. they knew NAS hands down, but not much beyond that.
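A minimal sketch of that point - even with purely random write addresses, a write cache lets the array sort and batch the destage (elevator-style), which cuts head travel dramatically. The LBA range and batch size below are arbitrary:

```python
# Minimal sketch: with a write cache, the array can sort buffered writes by
# LBA (elevator-style) before destaging, so even a fully random workload
# benefits. Real arrays also coalesce overwrites and adjacent blocks.
import random

random.seed(0)
pending = [random.randrange(1_000_000) for _ in range(256)]   # random write LBAs

def head_travel(order):
    # Total seek distance if the writes are destaged in the given order.
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

print("FIFO destage:  ", head_travel(pending))
print("Sorted destage:", head_travel(sorted(pending)))        # far less head movement
```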

And as for battery backups, I think it is pretty creative that some systems include an internal HD that the system can dump the cache to, so you just need a few minutes of battery; after that the system can sit powered off indefinitely without needing to worry about getting power back.
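A hedged sketch of that vault-to-disk idea - on power loss, dump the write cache to an internal drive within the battery window, then stay dark; on restart, replay the vault. All names, sizes and timings are invented:

```python
# Hedged sketch of the vault idea; not any vendor's actual firmware logic.
import time

def on_power_loss(write_cache, vault, battery_seconds=300):
    # Flush un-destaged writes to the internal vault drive before the battery
    # runs out; after that the box can power off and stay off.
    deadline = time.monotonic() + battery_seconds
    for block, data in list(write_cache.items()):
        if time.monotonic() > deadline:
            raise RuntimeError("battery exhausted before the vault completed")
        vault[block] = data
        del write_cache[block]

def on_power_restore(write_cache, vault):
    # Replay the vaulted writes back into cache so they can be destaged normally.
    write_cache.update(vault)
    vault.clear()

cache, vault = {1: b"dirty-a", 2: b"dirty-b"}, {}
on_power_loss(cache, vault)
on_power_restore(cache, vault)
print(cache)    # both writes survive the outage: {1: b'dirty-a', 2: b'dirty-b'}
```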

There was a data center fire at a local facility here last year, and power was out for a good 48 hours. They only got it back online after that by bringing in generator trucks; it took weeks to restore utility power to the building as a whole. Fortunately my organization wasn't impacted, but a couple of my friends were.

Funny enough, those friends worked with me at my previous company, which I had moved out of that facility to another one specifically because the original facility was prone to power outages. Fortunately their storage array was one of the ones that wrote cache to disk before shutting down.

I can only imagine how the people running storage systems that rely on battery-backed cache (whether a storage array or a server with a BBU) felt as the clock ticked by and they didn't know when/if power would get restored.

Vaughn Stewart

Mr. Hollis

I really like the foundation this post provides around storage caching techniques. Unfortunately it fails to cover advanced caching technologies specific to virtual infrastructures.

http://blogs.netapp.com/virtualstorageguy/2010/03/transparent-storage-cache-sharing-part-1-an-introduction.html

Maybe you should add Transparent Storage Cache Sharing to this post. Doing so will demystify the 'magic' available in non-EMC arrays.

Chuck Hollis

Vaughn --

I'm posting your "comment" here out of professional courtesy, but I take a dim view of vendors who try to use this blog as a way to shill their latest talking point -- as you are*.

Clever name for an old feature -- "TSCS" -- but, hey, storage cache sharing has been around for over 15 years, maybe it's time for a new name. And let's not forget, NetApp's "cache" only handles reads, and does nothing for writes.

Since the purpose of this post was "Storage Caching 101", I don't think I'm going to be adding vendor-specific views here.

I also think a good understanding of storage caching will be required to appreciate some of the new enabling technology EMC will be delivering before too long, like distributed cache coherence.

The "storage cache" discussion is about to move into an entirely new chapter, IMHO.

(* unless the vendor shilling their latest talking point is me, of course ...)

-- Chuck

The comments to this entry are closed.

  • Chuck Hollis
    SVP, Oracle Converged Infrastructure Systems
    @chuckhollis

    Chuck now works for Oracle, and is deeply embroiled in IT infrastructure.

    Previously, he was with VMware for 2 years, and EMC for 18 years before that, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Vero Beach, FL with his wife and four dogs when he's not traveling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not ever buy him a drink when there is a piano nearby.

    Note: these are my personal views, and aren't reviewed or approved by my employer.

General Housekeeping

  • Frequency of Updates
    I try to write something new 1-2 times per week; less if I'm travelling, more if I'm in the office. Hopefully you'll find the frequency about right!
  • Comments and Feedback
    All courteous comments welcome. TypePad occasionally puts comments into the spam folder, but I'll fish them out. Thanks!