Big news today for IT infrastructure professionals.
EMC has taken the wraps off the much-talked-about Project Lightning, and there's much more there than most people expected :)
In my previous post, I did my best to set context as to what was going on, why this was important, and so on. Unless you're a storage technologist, you really should start with that post.
In this one, we're going to get into the details: what it is, what it does and how it's different from what's come before.
And in my next post, I'm going to preview its sibling -- Project Thunder -- which, as you'll see, is a natural extension of what Project Lightning accomplishes.
In A Nutshell
VFCache is server-side flash storage that integrates with the rest of the extended storage environment.
It's fast. Of course, it's much faster than array storage, but it's also considerably faster than the existing server-side flash storage cards you might be familiar with.
It's smart. It uses EMC software and expertise to integrate with the world around it: storage arrays, operating systems, virtualization, management, etc.
It's protected. VFCache integrates nicely with the rest of the EMC information protection domain: high availability, backups, replication, DR etc.
Perhaps the best way to understand it is via this useful chart.
At the bottom, we see the familiar array-side flash technologies. Faster than disk and more economical for delivering high IOPs, but inherently far away from the server and processor.
At the top, we see the familiar server-side flash technologies. Wicked fast, but operationally challenged -- not to mention difficult to protect any valuable data that might be there.
VFCache neatly bridges the two camps with a single, integrated approach.
The best part is that customers don't have to decide ahead of time which is the "better" approach: server-side or array-side. A single environment lets them use whatever mix of performance, cost, availability, etc. works best for them.
Choice is good.
So, Let's Talk About Performance, Shall We?
A quick word before we start -- we here at EMC are very sensitive to, err, potentially misleading vendor performance claims. They don't do anyone any good -- especially our customers!
So, when we're talking about performance here, we're assuming a "typical" enterprise workload with decent block sizes, real-world read/write mixes, etc.
You know, the familiar stuff like SAP, Exchange, Oracle, SQL Server, production filesystems, etc.
We don't think 64-byte, read-only hero numbers are particularly useful :)
I do apologize, though, for the lack of detail on the axes in this chart, but the general effect has been repeated time and time again with VFCache.
Whatever storage performance curve you're getting today with array-based storage, the curve moves down and to the right significantly. That's a very good thing.
Since average I/O latency is dramatically reduced (and we do mean dramatically), there's less queuing, more bandwidth available, etc. Basically, big heaps of storage performance goodness.
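To put some rough numbers on that queuing effect: by Little's Law, the throughput you can sustain with a given number of outstanding I/Os is inversely proportional to average latency. Here's a tiny, purely illustrative calculation -- the latency figures below are hypothetical, not VFCache measurements:

```python
# Illustrative only: how lower average I/O latency reduces the queue depth
# needed to sustain a given throughput (Little's Law: L = X * W).
# The latency figures are hypothetical, not measured VFCache numbers.

def queue_depth_needed(target_iops: float, avg_latency_s: float) -> float:
    """Average number of outstanding I/Os required to sustain target_iops."""
    return target_iops * avg_latency_s

target_iops = 50_000    # hypothetical application target

for label, latency_ms in [("array over the SAN", 5.0), ("server-side flash cache", 0.5)]:
    depth = queue_depth_needed(target_iops, latency_ms / 1000.0)
    print(f"{label:>25}: {latency_ms:4.1f} ms latency -> ~{depth:.0f} outstanding I/Os")
```

In other words, cut the latency by 10x and the same application concurrency can drive roughly 10x the I/O rate, or hit the same rate with a far shallower queue.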
How this effect translates into specific performance boosts will vary depending on a whole host of factors, but one before-and-after example comes from this recent whitepaper showing an 80% improvement in transactions per minute (TPM) running a TPC-C-like environment using Oracle 11g, a Cisco UCS server and a VMAXe.
For those of you who want to go a level deeper, we've prepared this handy chart that compares VFCache against a "popular" Familiar I-O alternative.
At a raw hardware/driver level, you can see the significantly better IOPs, the much lower latency, and -- maybe most importantly -- the dramatically reduced CPU overhead associated with VFCache, since (after all) it's extreme application performance that we're after here.
After digging into some of the deeper architectural discussions and software/hardware integration, I believe the VFCache approach will retain a meaningful performance advantage over competitive alternatives for the foreseeable future.
What the EMC team has done will probably end up being wicked hard to copy or improve on.
Time will tell if I'm right or wrong.
VFCache Product Architecture
If you're interested in a high-level view of how the stuff works, this part is for you.
At an outer level, VFCache implements a lightweight I/O filter driver that sits just beneath the block I/O subsystem.
The VFCache filter driver plays nicely with (but does not yet leverage) other block I/O stacks, including vendor-provided MPIO, EMC's PowerPath, etc.
The lightweight filter driver inspects the I/O traffic, implements write-through caching as needed, applies the caching algorithms, and so on.
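To make the filter-driver idea a bit more concrete, here's a minimal sketch of the general write-through caching pattern such a driver follows. It's purely illustrative -- the BlockDevice stand-ins and the simple LRU policy are my own simplifications, not the actual VFCache implementation:

```python
# A minimal sketch of a write-through, block-level read cache -- the general
# pattern an I/O filter driver like the one described above might follow.
# Illustrative only; BlockDevice is a hypothetical stand-in, not VFCache code.

from collections import OrderedDict

class BlockDevice:
    """Dict-backed stand-in for a flash card or an array LUN (hypothetical)."""
    def __init__(self):
        self.blocks = {}
    def read(self, block_no):
        return self.blocks.get(block_no)
    def write(self, block_no, data):
        self.blocks[block_no] = data
    def discard(self, block_no):
        self.blocks.pop(block_no, None)

class WriteThroughCache:
    """Write-through read cache: the array always holds the authoritative copy."""
    def __init__(self, flash, array, max_blocks):
        self.flash, self.array = flash, array
        self.lru = OrderedDict()          # block_no -> True, in LRU order
        self.max_blocks = max_blocks

    def read(self, block_no):
        if block_no in self.lru:                    # hit: serve from local flash
            self.lru.move_to_end(block_no)
            return self.flash.read(block_no)
        data = self.array.read(block_no)            # miss: go to the array
        self._populate(block_no, data)
        return data

    def write(self, block_no, data):
        self.array.write(block_no, data)            # write-through to the array first
        self._populate(block_no, data)              # keep the cached copy consistent

    def _populate(self, block_no, data):
        self.flash.write(block_no, data)
        self.lru[block_no] = True
        self.lru.move_to_end(block_no)
        if len(self.lru) > self.max_blocks:         # evict the least-recently-used block
            victim, _ = self.lru.popitem(last=False)
            self.flash.discard(victim)

cache = WriteThroughCache(BlockDevice(), BlockDevice(), max_blocks=1000)
cache.write(42, b"hello")
print(cache.read(42))    # served from flash on the second touch -> b'hello'
```

The important property is that every write goes through to the array, so the array always holds the authoritative copy; the local flash only ever holds a disposable, read-accelerating copy.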
Interestingly, if you'd like to stick an ordinary server flash card in your server concurrently -- and not take advantage of VFCache -- that's supported as well. This makes sense when you'd like to greatly speed up non-persistent data: e.g. temp files, locks, paging devices, etc.
(note, you can tell these are engineering slides, since the code name "Lightning" is still evident ...)
Under vSphere, the architecture is a bit different, as you'd imagine.
Here, the EMC-supplied I/O filter driver sits just above the block I/O drivers in the guest operating system, meaning that you can enable VFCache on a per-guest and per-logical-storage basis if needed.
More importantly, there's a slick VSI (Virtual Storage Integrator) VFCache plug-in for vCenter that lets you monitor both the server side and the storage side of the caching.
I'm sure my colleague Chad Sakac will be saying *much* more about this, so I'll best leave it to him ...
Where Does VFCache Fit In The Broader Spectrum of I/O Workloads?
Although most people are justifiably interested in typical enterprise-class workloads, that's not the entire known universe.
This chart does a good job of positioning where VFCache does best within this broader storage spectrum. No visual simplification captures every nuance of the discussion, but I've found this one helpful.
Along the bottom is a continuum from write-mostly to read-mostly. Most enterprise apps have a significant (and not-to-be-ignored) write component, but they generally re-read data that's already been written, shifting back and forth depending on the usage model. Email is a good example.
Along the vertical axis is the "locality of reference" scale: from essentially no locality of reference at the bottom to a reasonable degree of LOR (also sometimes called skew) moving upwards. There's a quick sketch just below this list showing why that matters so much for a cache.
The big red dot is where our friends at EMC/Isilon play -- scale out NAS, big data sets, huge read bandwidth, HDFS, etc. The big orange dot is where big write bursts are the norm: high-speed backup, data replication, real-time image capture and the like. No single EMC product fits there; we use different approaches depending on what's being done.
The big blue dot is our friend "temp" data: short, bursty writes that are (usually) non-persistent. VFCache (using the split-cache option) makes good sense there.
And, finally, the big green blob -- where all of our familiar enterprise applications reside in all their glory :)
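As promised, here's a rough way to see why that vertical "locality of reference" axis matters so much: a toy simulation in which a cache holds only the hottest 5% of blocks. With uniform access it serves very few reads; with heavily skewed access it serves most of them. Every parameter here is made up for illustration -- none of these are VFCache measurements:

```python
# Toy simulation: why locality of reference ("skew") matters for a cache.
# All parameters are invented for illustration -- not VFCache measurements.

import random

def hit_rate(skew: float, cache_fraction: float = 0.05,
             blocks: int = 100_000, reads: int = 200_000) -> float:
    """Fraction of reads served by a cache holding the hottest blocks.

    skew ~ 0.0 -> accesses spread uniformly (little locality)
    skew ~ 1.2 -> Zipf-like concentration on a small hot set (high locality)
    """
    random.seed(0)
    weights = [1.0 / (i + 1) ** skew for i in range(blocks)]
    cached = set(range(int(blocks * cache_fraction)))   # cache the hottest blocks
    accesses = random.choices(range(blocks), weights=weights, k=reads)
    return sum(1 for b in accesses if b in cached) / reads

for skew in (0.0, 0.6, 1.2):
    print(f"skew={skew}: ~{hit_rate(skew):.0%} of reads hit a 5% cache")
```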
What's Coming Later?
While there's plenty to appreciate in the first release of VFCache, there are a few things we'll hopefully be putting in customers' hands before the year is out.
We've learned a lot about practical data compression and deduplication: especially with "hot" data. That sort of technology is an obvious target, especially for a relatively expensive resource like flash memory storage.
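For what it's worth, the core idea behind block-level deduplication is simple enough to sketch: hash each block's contents and store identical blocks only once. This is just the textbook pattern, assuming SHA-256 content hashes and a naive in-memory index; it says nothing about how (or whether) EMC will implement it for VFCache:

```python
# Illustrative only: the general idea behind block-level deduplication --
# store identical blocks once and keep per-LBA pointers to the shared copy.
# A real implementation (and whatever lands in VFCache) would look very different.

import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}    # sha256 digest -> block contents (stored once)
        self.index = {}     # logical block address -> digest

    def write(self, lba: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)    # identical data stored only once
        self.index[lba] = digest

    def read(self, lba: int) -> bytes:
        return self.chunks[self.index[lba]]

    def physical_blocks(self) -> int:
        return len(self.chunks)

store = DedupStore()
for lba in range(8):
    store.write(lba, b"A" * 4096 if lba % 2 == 0 else b"B" * 4096)
print(store.physical_blocks(), "physical blocks for 8 logical blocks")   # -> 2
```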
There's a boatload of opportunity in integrating the array's view of data priorities with the server flash view. From our previous experience with array-based disk/flash/cache, we've learned a lot about predictive analytics that we'd like to extend to this broader domain.
Server-side cache (storage?) coherency is important to us as well, as many of the envisioned use cases are transactional databases that frequently span multiple, loosely coupled servers. We have some good experience there, as well :)
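To illustrate the kind of problem that coherency work addresses, here's a deliberately simplified sketch of one classic approach: write-through plus peer invalidation, so a node drops its cached copy when another node writes the same block. The node and messaging model here are entirely my own assumptions, not EMC's design:

```python
# A very simplified sketch of one coherency approach: each node invalidates its
# locally cached copy of a block when it learns another node has written it.
# Purely illustrative -- not how EMC implements server-side cache coherency.

class CoherentNodeCache:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers          # other nodes sharing the same array LUN
        self.cache = {}             # block_no -> locally cached data

    def write(self, block_no, data, array):
        array[block_no] = data                  # write-through to the shared array
        self.cache[block_no] = data
        for peer in self.peers:                 # tell peers their copy is now stale
            peer.invalidate(block_no)

    def invalidate(self, block_no):
        self.cache.pop(block_no, None)

    def read(self, block_no, array):
        if block_no not in self.cache:          # miss (or invalidated): refetch
            self.cache[block_no] = array[block_no]
        return self.cache[block_no]

array = {}
a = CoherentNodeCache("node-a", peers=[])
b = CoherentNodeCache("node-b", peers=[a])
a.peers.append(b)
a.write(7, b"v1", array); b.read(7, array)
b.write(7, b"v2", array)
print(a.read(7, array))    # node-a refetches the updated block -> b'v2'
```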
And, finally, we need to continue to invest in management integration such that using something like VFCache is as natural and transparent as possible to all the different IT roles: the storage admin, the VMware admin, the application admin, the converged infrastructure admin, the IT generalist and so on.
I'm expecting you won't have to wait too long for these bits, though. They're nothing more than re-application of things EMC has already done in other domains.
One of the advantages of having a broad portfolio, and knowing how to use it.
Finally, flash technology itself is now a rapidly moving target. Prices are coming down, capacities are going up, performance evolution is now closer to Moore's Law than before, and there's a tiering hierarchy starting to become more obvious. Perhaps most intriguing is that the technology is exceptionally malleable from a form-factor perspective, leading to all sorts of creative packaging opportunities.
Customer Reaction?
The team tells me they've had in-depth briefings on Project Lightning with more than 200 IT groups. That's a lot of feedback.
Customer reaction has been extremely positive -- mostly because we've worked to build a consumable and practical solution, rather than simply offering up a point technology. The handful of early-access customers have been extremely encouraging as well.
I'm sure you'll hear a lot more from us before long with plenty of classic "success stories" ...
As always, though, customers have choices.
They may feel they don't need the performance of this approach, or are satisfied with what we can do in the array itself. Fine, we've got you covered.
A few have a peculiar bent for grabbing the latest technology off the shelf and doing their own integration, support, etc. We're not going to stop them, of course, but we will continue to wonder "why?"
I think that the majority of enterprises will appreciate what we've done here, and more than a few will be intrigued to find out what VFCache can do for *their* performance-sensitive applications.
We're more than ready to help :)
But Wait, There's More ...
In addition to formally announcing Project Lightning, we're also offering up a preview today of Project Thunder -- something we haven't been talking about very much publicly.
Independently, it has its own dose of stand-alone awesomesauce, but it's actually a piece of a more compelling picture.
Once you learn more about it, you'll likely appreciate the natural way it fits in.
But to do that, you'll have to read the next blog post.
In the slide labeled "VFCache hardware advantage", would it be possible for you to tell us what the comparison product ("popular card") is? I'm guessing it's an original Fusion-io ioDrive card from those numbers... I think the newer ioDrive2 is substantially faster, and more of a comparable product to the (admittedly very good) Micron P320h you're using!
Posted by: SFoskett | February 06, 2012 at 04:08 PM
If that is such a revolutionary product, which I think it is, why does it have such a bland name? There's zero excitement when one hears EMC VFCache. Something along the lines of EMC Lightning, I'd say, would be much more memorable.
The author himself pointed out, in a recent article, that a good name is very important.
Posted by: Paul Batsii | February 15, 2012 at 07:52 AM
Great information. Thanks :)
Posted by: aliramenon | October 06, 2012 at 07:07 AM