From early mainframe roots to today’s hybrid cloud, the drive to progressively automate every aspect of operations never wanes.
The motivations have been compelling: do more with fewer people, respond faster, improve efficiency, make outcomes more predictable, and make services more resilient.
But the obstacles have also been considerable: both technological and operational.
With the arrival of vSphere 6.0, a nice chunk of new technology has been introduced to help automate perhaps the most difficult part of the data center – storage.
It's worth digging into these new storage automation features: why they are needed, how they work, and why they should be seriously considered.
Heck, it's been around at least as long as I have, and that's a long time :)
Despite decades of effort by both vendors and enterprise IT users, effective storage automation remains an elusive goal for many IT teams.
When I'm asked "why is this so darn hard?", here's what I point to:
- Storage devices have had very limited knowledge of applications: their requirements and their data boundaries. Arrays had to be explicitly told what to do, when to do it, and where to do it.
- No cross-vendor standards emerged to facilitate basic communication between an application’s requirements and a storage array’s capabilities.
- Storage arrays (and their vendors) present a storage-centric view of their operations, making it difficult for non-storage groups to request new services and to ascertain whether end-to-end application requirements are being met.
Here's the message: the new storage capabilities available in vSphere 6.0 show strong progress towards addressing each of these long-standing challenges.
Towards Application Centricity
To the extent that each aspect of the infrastructure can be made programmatically aware of individual application requirements, far better automation can be achieved.
However, when it comes to storage, there have been significant architectural challenges in achieving this.
The first challenge is that applications themselves typically don’t provide specific instructions on their individual infrastructure requirements. And asking application developers to take on this responsibility can lead to all sorts of unwanted outcomes.
At a high level, what is needed is a convenient place to specify application policies that can be bound to individual applications, instruct the infrastructure as to what is required, and be conveniently changed when needed.
The argument is simple: the hypervisor is in a uniquely privileged position to play this role. It not only hosts all application logic, but abstracts that application from all of infrastructure: compute, network and storage.
While these policy concepts have been in vSphere for a while, vSphere 6.0 introduces a new layer of Storage Policy Based Management (SPBM). This enables administrators to describe specific storage policies, associate them with groups of applications, and change them when needed.
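To make the idea concrete, here is a minimal sketch of the policy model in plain Python. The names (`StoragePolicy`, `VmGroup`, the rule keys) are purely illustrative assumptions, not the actual SPBM API; the point is that requirements live in a named policy, and re-binding a group of VMs to a different policy changes the requirements for all of them at once.

```python
from dataclasses import dataclass

# Illustrative sketch only -- these classes are hypothetical,
# not the real vSphere SPBM object model.

@dataclass
class StoragePolicy:
    name: str
    rules: dict  # capability name -> required value


@dataclass
class VmGroup:
    name: str
    policy: StoragePolicy = None

    def assign(self, policy: StoragePolicy):
        # Re-binding the group to a new policy changes the stated
        # requirements for every VM in the group in one step.
        self.policy = policy


gold = StoragePolicy("gold", {"iops_min": 5000, "protection": "snapshot-hourly"})
oltp = VmGroup("oltp-vms")
oltp.assign(gold)
print(oltp.policy.name)  # the group now carries the "gold" requirements
```

The key design point is that the application group references the policy rather than copying its settings, so a policy edit propagates without touching each VM.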
But more is needed here.
Historically, storage containers have not aligned with application boundaries. External storage arrays present LUNs or file systems – large chunks of storage shared by many applications.
Storage services (capacity, performance, protection, etc.) were specified at the large container level, with no awareness of individual application boundaries.
This mismatch has resulted in both increased operational effort and reduced efficiency.
Application and infrastructure teams need to go continually back and forth with the storage team regarding application requirements. And storage teams are forced to compromise by creating storage service buckets specified in excess of what is actually required by applications. Better to err on the side of safety, right?
No longer. vSphere 6.0 introduces a new storage container – Virtual Volumes, or VVOLs – that precisely aligns application boundaries and the storage containers they use. Storage services can now be specified on a per-application, per-container basis.
We now have two key pieces of the puzzle: the ability to conveniently specify per-application storage policy (as part of overall application requirements), and the ability to create individualized storage containers that can precisely deliver the requested services without affecting other applications.
So far, so good.
Solving The Standards Problem
Periodically, the storage industry attempts to define meaningful, cross-vendor standards that facilitate external control of storage arrays. However, practical success has been difficult to come by.
Every storage product speaks a language of one: not only in the exact set of APIs it supports, but in how it assigns meaning to specific requests and communicates results. Standard definitions of what exactly a snapshot means, for example, are hard to come by.
The net result is that significant automation of multi-vendor storage environments has been out of reach for most IT organizations.
To be clear, the need for heterogeneous storage appears to be increasing, not decreasing: enterprise data centers continue to be responsible for supporting an ever-widening range of application requirements, from transaction processing to big data to third-platform applications. No single storage product can be expected to meet every application requirement (despite vendors' best intentions), so multiple types are frequently needed.
De-facto standards can be driven by products that are themselves de-facto standards in the data center, and here vSphere stands alone with regards to hypervisor adoption. When VMware defines a new standard for interacting with the infrastructure (and customers adopt it), vendors typically respond well.
vSphere 6.0 introduces a new set of storage APIs (VASA 2.0) that facilitate a standard method of application-centric communication with external storage arrays. VMware’s storage partners have embraced this standard enthusiastically, with several implementations available today and more coming.
Considering VASA 2.0 together with SPBM and VVOLs, one can see that many of the technology enabling pieces are now in place for an entirely new storage automation approach. Administrators can now specify application-centric storage policies via SPBM, communicate them to arrays via VASA 2.0, and receive a perfectly aligned storage container – a VVOL. Nice and neat.
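The end-to-end flow described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `ArrayProvider` loosely models a VASA 2.0 provider endpoint, `create_vvol` models the array minting a per-application container, and the capability names are invented for illustration; none of this is the real VASA API surface.

```python
# Hypothetical sketch of the control flow: a policy is handed to an
# array endpoint (standing in for a VASA 2.0 provider), which either
# returns a per-application container (standing in for a VVOL) or
# refuses the request outright.

class ArrayProvider:
    """Stands in for a storage array's VASA provider."""

    def __init__(self, capabilities):
        self.capabilities = capabilities  # what this array can deliver

    def create_vvol(self, vm_name, policy):
        # Reject policies the array cannot satisfy, rather than
        # silently over- or under-provisioning.
        unmet = {k: v for k, v in policy.items()
                 if self.capabilities.get(k) != v}
        if unmet:
            raise ValueError(f"array cannot satisfy: {unmet}")
        # The returned container carries exactly the requested
        # services, scoped to this one application.
        return {"vm": vm_name, "services": dict(policy)}


array = ArrayProvider({"protection": "snapshot-hourly", "tier": "flash"})
vvol = array.create_vvol("oltp-db-01",
                         {"protection": "snapshot-hourly", "tier": "flash"})
print(vvol["vm"])
```

The design choice worth noting is the explicit capability check: because the policy travels with the request, the array can say "no" up front instead of the storage team discovering a mismatch after deployment.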
Who Should (Ideally) Manage Storage?
It’s one thing to conveniently specify application requirements; it’s another to ensure that the requested service levels are being met and – more importantly – to fix things quickly when they aren't.
Historically, the storage management model in many IT organizations has evolved into a largely self-contained organizational “black box”. Requests and trouble tickets are submitted with little visibility for the other teams that depend on the storage team’s services.
Although this silo model routinely causes needless friction and inefficiency (not to mention frustration all around), it is particularly painful when resolving urgent performance problems: is the problem in the application logic, the server, the network – or storage?
The storage management model created by vSphere 6.0 is distinctly different from traditional models: storage teams are still important, but more information (and responsibility) is given to the application and infrastructure teams in controlling their destiny.
Virtual administrators now see “their” abstracted storage resources: what’s available, what it can do, how it’s being used, etc. There should be no need to directly interact with the storage team for most day-to-day provisioning requirements. Policies are defined, VVOLs are consumed, storage services are delivered.
Through vCenter and the vRealize suite, virtual administrators now have enough storage-related information to ascertain the health and efficiency of their entire environments, and have very focused conversations with their storage teams if there’s an observed issue.
Storage teams still have an important role, although somewhat different than in the past. They now must ensure sufficient storage services are available (capacity, performance, protection, etc.), and resolve problems if the services aren’t working as advertised.
However, operational and organizational models can be highly resistant to change. That's the way the world works -- unless there is a forcing function that makes the case compelling to all parties.
And VSAN shows every sign of being a potential change accelerator.
How Virtual SAN Accelerates Change
As part of vSphere 5.5U1, VMware introduced Virtual SAN, or VSAN. Storage services can now be delivered entirely using local server resources -- compute, flash and disk – using native hypervisor capabilities. There is no need for an external storage array when using VSAN – nor a need for a dedicated storage team, for that matter.
VSAN is designed to be installed and managed entirely by virtual administrators, without involving the storage team. These virtualization teams can now quickly configure storage resources, create policies, tie them to applications, monitor the results, and speedily resolve potential problems – all without leaving the vSphere world.
As an initial release, VSAN 5.5 had limited data services, and thus limited use cases. VSAN 6.0 is an entirely different proposition: more performance (both using a mix of flash and disk, or using all-flash), new enterprise-class features, and new data services that can significantly encroach on the turf held by traditional storage arrays.
Empowered virtualization teams now have an interesting choice with regards to storage: continue to use external arrays (and the storage team), use self-contained VSAN, or most likely an integrated combination depending on requirements.
Many are starting to introduce VSAN alongside traditional arrays, and have thus seen the power of a converged, application-centric operational model. And it’s very hard to go back to the old way of doing things when the new way is so much better -- and readily at hand.
The rapid initial growth of VSAN shows every sign of putting pressure on traditional storage organizations to work towards a new operational model, with an improved division of responsibilities between application teams, infrastructure teams, and storage teams. And they'll need the powerful combination of SPBM, VASA 2.0, and VVOLs to make that happen.
Change Is Good -- Unless It's Happening To You
Enterprise IT storage teams have very specific ways of doing things, arguably built on the scar tissue of past experiences and very bad days. You would too, if you were them.
That being said, there is no denying the power of newer, converged operational models and the powerful automation that makes them so compelling. The way work gets done can -- and will -- change.
Enterprise storage teams can view these new automation models as either a threat, or an opportunity.
I know which side of that debate I'd be on.