My favorite example is this crew, who've found a way to conveniently insert a military jet engine into what looks like an ordinary school bus. The result really isn't a school bus anymore -- it's something quite different to behold.
Similar thoughts went through my head as I contemplated this morning's news from EMC: a passel of performance, efficiency and management enhancements for the CLARiiON and Celerra midrange storage platforms.
Having worked with both for so long, I'm finding myself having to re-calibrate my expectations.
To Begin With
Both the CLARiiON and Celerra storage platforms are icons of the industry, largely responsible for EMC's continuing #1 market share position in such standardized IDC categories as Open SAN and NAS.
But in this business, what made you successful a few years ago won't guarantee your success in the future. It's perhaps the most aggressive segment of the storage industry, with many players and all sorts of newer technologies. No resting on your laurels here, folks.
Many of the enhancements announced as available today were previewed at EMC World back in May. There are also a few new bits worth discussing.
The Context Is Changing
Long gone are the days when you'd buy small, individual arrays for this project or that application.
I think this graphic illustrates it best -- the typical use case is quickly becoming a consolidated rack of virtualized servers, driving an incredibly mixed and variable workload against the storage array.
And these new use cases are changing the way that these mid-tier arrays are built and deployed.
The Impact Of Enterprise Flash Drives
I think by now just about everyone associated with storage understands that these devices represent the next generation of storage media.
Potentially far faster, far more efficient and far more reliable than traditional rotating rust -- all that needs to happen is for prices to come down.
Indeed, there's every indication that these devices are riding the same volume/price curve as other semiconductor devices.
A recent example is the new 100GB and 200GB enterprise flash drives, which provide the same performance and reliability but are up to 30% less expensive per GB than the ones before.
That being said, it's pretty clear that we'll be looking at a substantial cost differential between disk drives and flash devices for many years. The impact for storage vendors is pretty clear: customers will want a variety of mechanisms to combine the two, while keeping things as simple as possible.
As prices for flash continue to drop, it's reasonable to expect not only more people using the technology, but also using more of it. Any successful mechanism has to support as wide a range of workloads as possible, and allow for the use of more and more flash as prices drop.
Introducing the FAST Suite for CLARiiON and Celerra
You've likely heard about the individual technologies before. FAST (fully automated storage tiering) transparently optimizes 1 GB data chunks on a 24-hour schedule across enterprise flash, 15K high-speed disk drives, and high-capacity bulk drives. It's basically a set-and-forget proposition.
But scheduled tiering can't react to short-term spikes in workload. That's where FAST Cache comes in -- it uses the exact same enterprise flash devices as a read/write non-volatile cache instead of a storage target. FAST Cache handles the short-term variations in workload (both reads *and* writes); FAST then does back-end optimization as access patterns persist.
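To make the division of labor concrete, here's a toy sketch of how the two mechanisms might interact. All names, sizes, and policies here are my own simplifications, not the product's actual internals: a small LRU cache absorbs short-term spikes, while a scheduled pass moves the persistently hottest chunks onto flash.

```python
from collections import Counter, OrderedDict

class TieringSketch:
    """Toy model: an LRU cache for short-term hits, plus scheduled tiering
    driven by long-term access counts. Illustrative only."""

    def __init__(self, cache_slots=2, flash_slots=2):
        self.cache = OrderedDict()      # stand-in for the flash read/write cache
        self.cache_slots = cache_slots
        self.flash_slots = flash_slots
        self.access_counts = Counter()  # long-term stats, per 1 GB chunk

    def read(self, chunk):
        self.access_counts[chunk] += 1
        if chunk in self.cache:                 # short-term spike absorbed by cache
            self.cache.move_to_end(chunk)
            return "cache hit"
        self.cache[chunk] = True                # promote into cache on a miss
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)      # evict least-recently-used
        return "disk read"

    def rebalance(self):
        """Scheduled pass: place the persistently hottest chunks on flash."""
        hottest = {c for c, _ in self.access_counts.most_common(self.flash_slots)}
        return {c: ("flash" if c in hottest else "disk")
                for c in self.access_counts}

t = TieringSketch()
for chunk in ["a", "a", "a", "b", "c", "a", "b"]:
    t.read(chunk)
placement = t.rebalance()   # "a" and "b" land on flash; "c" stays on bulk disk
```

The point of the split is visible even in the toy: the cache reacts instantly to repeated reads, while the rebalance pass only moves data whose heat has persisted.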
Together, they're turning out to redefine our expectations around performance and efficiency. Eye-popping performance comes from even modest amounts of enterprise flash; efficiency results by being able to substitute large-capacity drives for more traditional FC drives.
And the impact isn't in just one or two special cases like VDI boot, it's visible for just about every workload we've encountered so far -- even with modest amounts of flash. The headlines can be backed up with real-world data -- from both our testing labs and customer environments.
Doesn't really matter what we throw at it, the new FAST Suite delivers the goods.
If your data happens to live in file systems, don't forget about EMC FMA (file management appliance) which dynamically tiers data not only within an array, but across multiple file-serving arrays, including non-EMC ones.
The most recent version (FMA VE, or virtual edition) does this by running a modest VM anywhere in the environment.
Block Compression
Block compression is turning out to be more useful than simple data deduplication alone for most customers in their non-backup environments.
Most storage dedupe schemes only show well when you've got a lot of very similar files -- say, when you're doing backup, or have boot images that look the same.
Block compression, by comparison, not only handles those cases well, but can also wring excess out of just about anything you store, including the frequent cases where files *aren't* all that similar.
And, since it operates at such a low level of abstraction, it works well for just about anything you put on it: databases, file systems, repositories, etc. As you can see from the graphic, results vary, but they're consistently good.
And the results aren't highly dependent on having a lot of very similar files :-)
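A quick way to see why, using nothing but Python's standard library (illustrative only -- these are not the array's actual algorithms): deduplication stores one copy of each identical block, so it saves nothing when every block is distinct; compression shrinks each block individually, so it still helps when no two blocks match.

```python
import hashlib
import zlib

def dedupe_size(blocks):
    """Bytes stored if we keep only one copy of each identical block."""
    unique = {hashlib.sha256(b).digest(): b for b in blocks}
    return sum(len(b) for b in unique.values())

def compressed_size(blocks):
    """Bytes stored if each block is independently compressed."""
    return sum(len(zlib.compress(b)) for b in blocks)

# Eight distinct 4 KB blocks: dedup's worst case, since nothing repeats
# across blocks -- yet each block is internally redundant.
blocks = [bytes([i]) * 4096 for i in range(8)]
raw = sum(len(b) for b in blocks)                # 32768 bytes

# Eight identical 4 KB blocks: dedup's best case.
dup_blocks = [b"\xab" * 4096] * 8
```

Running this, `dedupe_size(blocks)` equals `raw` (no savings), while `compressed_size(blocks)` is a tiny fraction of it -- which is exactly the "files aren't all that similar" case described above.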
Introducing Unisphere
You've likely heard about it before, and it's now ready for our customers and partners. It took me a while to fully grok what Unisphere was all about, but I think I've got it now.
Originally, I saw it as a better and more simplified way to manage block and file storage.
That perspective hasn't changed, but now I realize its real strength is far more powerful and subtle -- it elevates the administrator's role from managing storage devices to managing storage policy. You interact with Unisphere to express what you'd like to have done (and to monitor the results!) rather than grinding through dozens and dozens of steps.
Some may call it automation, some may call it simplification -- perhaps a more descriptive term is "elevation".
Here's a simple example for setting FAST policies. Carve out some storage from a mixed pool (flash, FC, SATA, etc.) and simply express how you'd like this particular storage optimized.
The choices are pretty clear -- optimize for performance, optimize for capacity, let the array figure it out, or do nothing at all. That's about it.
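The four choices could be modeled as a simple enumeration -- the names below are my own illustration, not Unisphere's actual API -- which is really the whole point: the administrator picks an outcome, and the array works out the chunk-by-chunk moves.

```python
from enum import Enum

class TieringPolicy(Enum):
    """The four policy choices described above (names are illustrative)."""
    HIGHEST_AVAILABLE = "optimize for performance"  # favor flash and fast tiers
    LOWEST_AVAILABLE = "optimize for capacity"      # favor high-capacity bulk drives
    AUTO_TIER = "let the array figure it out"       # placement driven by access stats
    NO_MOVEMENT = "do nothing at all"               # pin data where it lands

def apply_policy(lun_name: str, policy: TieringPolicy) -> str:
    """Stand-in for the GUI step: record the desired outcome for a LUN.
    Everything below this level of abstraction is the array's job."""
    return f"{lun_name}: {policy.value}"
```

Four options, one decision per pool of storage -- that's the "elevation" in a nutshell.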
The team also did some before-and-after comparisons looking at common reporting tasks, especially if you're looking after multiple arrays. What once was a science project is now relatively straightforward.
Regardless, I'm now starting to see Unisphere as a new benchmark for storage administration and management. And most definitely worth checking out :-)
Deep VMware Integration
There's just too much to cover here, and -- besides -- Chad does a far better job than I do :-)
I'd consider the current round of VAAI an "architectural preview" of what can happen when hypervisor and storage arrays work together in a cooperative fashion. The current round is definitely cool; just use your imagination to consider what might be coming down the pike.
The management models are converging nicely as well: virtualization administrators can now do most storage-related tasks in-context with environments like vCenter; storage administrators now have far more insight into what the virtualization layer is doing.
And -- once again -- this is just the beginning here.
Oh Yes -- There's Some Faster Hardware As Well
I didn't want to forget: part of this announcement includes some seriously faster hardware as well -- the new VG2 and VG8 Celerra unified gateways. These devices sit in front of either CLARiiON or Symmetrix block storage, and are quite popular.
These are probably the first such storage devices in the market to support the latest Intel "Westmere" processors, with up to 8 cores per CPU.
As a result, these gateways are ~2x more powerful than their predecessors, and -- depending on what we see from the SPEC tests -- are likely to be some of the most powerful NAS devices in the marketplace -- period.
When you consider the massive power and scale of these new gateways (~1.8 PB for the new VG8, or ~256 TB per blade), I just have to wonder how many customers will want more than that in a single cluster.
For me, the whole "scale-out parallel NAS" discussion seems to be becoming more academic than pragmatic.
The 20% Guarantee -- The Offer Is Still Good!
Many of you know we've been running a "20% -- Guaranteed" offer as compared to other unified storage arrays. This isn't a result of compression or deduplication -- it's simply being more efficient in turning physical capacity into usable capacity, regardless of how you decide to use it.
Most of these savings come from out-of-the-box inefficiencies from the other guys in their standard configs: losses from right-sizing drives, file system metadata and reserves, performance and snap reserves, and the like.
Rather than debate the arcane details, the guarantee is pretty simple: we'll deliver 20% more usable capacity than the other guys -- guaranteed!
It's turned out to be a great conversation starter (and eye opener!) for many people who weren't aware just how much of their raw capacity wasn't turning into usable capacity with competitive approaches. And everyone wants to be more efficient, yes?
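The arithmetic behind the gap is just compounding overheads: each reserve takes its cut of what's left. Here's a hedged sketch -- every percentage below is made up for illustration and doesn't come from any actual vendor configuration:

```python
def usable_tb(raw_tb, right_sizing=0.0, metadata_reserve=0.0, snap_reserve=0.0):
    """Usable capacity after a chain of overheads, each taken as a
    fraction of what remains. All inputs are illustrative."""
    remaining = raw_tb * (1 - right_sizing)      # losses from right-sizing drives
    remaining *= (1 - metadata_reserve)          # file system metadata and reserves
    remaining *= (1 - snap_reserve)              # performance and snap reserves
    return remaining

# Same 100 TB raw, two hypothetical overhead profiles:
lean = usable_tb(100, right_sizing=0.03, metadata_reserve=0.02)
heavy = usable_tb(100, right_sizing=0.05, metadata_reserve=0.10, snap_reserve=0.20)
advantage = lean / heavy - 1    # relative usable-capacity advantage
```

With these made-up numbers, `lean` is about 95 TB usable and `heavy` about 68 TB -- the compounding is what makes seemingly small reserves add up to a large usable-capacity gap.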
More Efficient, More Powerful, More Elegant
Competition brings out the best in vendors, and this segment is no exception.
Taken together, I think these announcements redefine the "state of the art" for these types of products -- multiple ways to use flash, automatic tiering, new high-level management tools, VMware integration, significantly faster hardware, blazing performance -- the works!
Why wouldn't you want a jet engine for your school bus?