In one corner of our industry, we have a familiar discussion regarding hypervisors, or — more precisely — software-defined compute.
In another corner, we have a vigorous debate around software-defined networking.
And, closer to home, a completely separate debate around software-defined storage.
Shouldn’t they all be aspects of the same discussion?
What do we gain by preferring a singular, consistent and integrated view of software-defined disciplines — and what do we lose by considering them individually, using a traditional lens?
Wrapping your head around software-defined anything can take some serious effort, if my personal experience is any guide. I do what I can to help explain the key ideas, and why they are important.
A good starting point for wading into the deep end is our familiar server virtualization — something we all have experience with. Seen through a “software defined” lens, we could better describe it as using application policy to dynamically compose compute services.
Here’s an application. Here are the server resources and services I want it to have: memory, vCPUs, HA, priority, etc. I express my desires to the hypervisor using a policy, which then dynamically allocates and manages the resources and services on my behalf.
If my requirements change, I simply change the policy — the hypervisor takes care of the messy details of reallocation, rebalancing, etc.
This is not exotic stuff — it’s how hypervisors are used around the world today.
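To make the idea concrete, here is a minimal sketch of "application policy drives compute services." The class and method names are hypothetical, not any particular hypervisor's API; the point is simply that the application states outcomes and the platform does the reallocation.

```python
# A minimal sketch of policy-driven compute. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class ComputePolicy:
    memory_gb: int
    vcpus: int
    high_availability: bool
    priority: str          # e.g. "normal" or "high"

class PolicyDrivenHypervisor:
    def apply(self, app_name: str, policy: ComputePolicy) -> None:
        # A real platform would handle placement, reallocation and
        # rebalancing; here we just record the declared intent.
        print(f"{app_name}: {policy}")

hypervisor = PolicyDrivenHypervisor()
policy = ComputePolicy(memory_gb=16, vcpus=4, high_availability=True, priority="high")
hypervisor.apply("payroll-app", policy)

# Requirements changed? Change the policy, not the plumbing.
policy.vcpus = 8
hypervisor.apply("payroll-app", policy)
```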
And Now On To Networks …
The “killer app” that drove server virtualization was consolidation: cramming more work onto physical hardware. A very tactical need for efficiency resulted in a very strategic outcome.
The first “killer app” for network virtualization (or software-defined networking) appears to be micro-segmentation — the ability to easily define isolated networks behind the enterprise perimeter, and manage the rules by which they interact — using policies.
This is a big deal, and is worth a short explanation if you're not familiar.
In the last few years, large flat internal networks have proven to be enormous security risks. Attackers can use a footprint gained on one device to probe the entire enterprise network at leisure.
Since no one can keep all the bad guys out all of the time, the logical answer is to segment the network to contain the damage. A desktop in accounts payable shouldn’t be trying to access source code in engineering, as an example.
But as one tries to implement that idea in a larger setting, reality can bite when using traditional technology. Now you have dozens (or hundreds!) of network segments, each with its own need for a firewall and rule set. Enterprises are dynamic entities, so trust boundaries and desired interactions are constantly changing.
However, using a software-defined network makes this particular challenge very approachable. Application policies are used to dynamically compose network services. Change the policy, change the services. Indeed, more than a few NSX implementations have been driven by this need.
And, once again, a very important tactical requirement (improving security) results in a very strategic outcome.
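As a rough illustration, here is a minimal sketch of micro-segmentation expressed as policy rather than as box-by-box firewall rules. The segment names, flow table and helper function are hypothetical and not tied to NSX or any specific product; they just show "deny between segments unless the policy says otherwise."

```python
# A minimal sketch of segmentation policy. All names are hypothetical.
SEGMENTS = {
    "accounts-payable-desktops":  {"10.1.0.0/24"},
    "engineering-source-control": {"10.2.0.0/24"},
    "shared-services":            {"10.3.0.0/24"},
}

# Default is "deny between segments"; only listed flows are allowed.
ALLOWED_FLOWS = {
    ("accounts-payable-desktops", "shared-services"):  {"dns", "ldap"},
    ("engineering-source-control", "shared-services"): {"dns", "ldap"},
}

def is_allowed(src_segment: str, dst_segment: str, service: str) -> bool:
    """Evaluate a flow against the segmentation policy."""
    if src_segment == dst_segment:
        return True
    return service in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# The accounts-payable desktop probing engineering gets nowhere:
print(is_allowed("accounts-payable-desktops", "engineering-source-control", "ssh"))  # False
print(is_allowed("accounts-payable-desktops", "shared-services", "dns"))             # True
```

Adding a new segment or changing a trust boundary becomes a policy edit, not a tour of dozens of firewalls.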
And Then To Storage
There’s no clear consensus on what the “killer app” will be that drives the first round of software-defined storage consumption, but I have a strong suspect: private cloud deployments.
Private clouds only thrive when highly automated — it’s all about efficient ease-of-consumption. By definition, there’s no good way to predict what storage services will be desired. Change is constant: moving service levels up and down, consuming more or fewer resources, etc.
And inserting human beings in the middle of each and every storage workflow sort of defeats the whole purpose.
What you’d like to be able to do is have the application express desired storage policy, which is then used to dynamically compose the requested services. Change the policy, change the services.
Once again, a very tactical need (efficient ease-of-consumption) may result in a very strategic outcome.
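Here is a minimal sketch of the same idea for storage: the application declares outcomes, and the platform composes the services. The policy fields and the compose function are hypothetical, not a specific vendor's API.

```python
# A minimal sketch of policy-driven storage. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    capacity_gb: int
    service_level: str      # e.g. "gold", "silver"
    snapshots: bool
    remote_replication: bool

def compose_storage(app_name: str, policy: StoragePolicy) -> list[str]:
    """Turn a declarative policy into the set of services to provision."""
    services = [f"{policy.capacity_gb} GB at {policy.service_level} tier"]
    if policy.snapshots:
        services.append("scheduled snapshots")
    if policy.remote_replication:
        services.append("async replication to remote site")
    return services

policy = StoragePolicy(capacity_gb=500, service_level="silver",
                       snapshots=True, remote_replication=False)
print(compose_storage("private-cloud-tenant-42", policy))

# Change the policy, change the services; no human in the workflow.
policy.service_level = "gold"
policy.remote_replication = True
print(compose_storage("private-cloud-tenant-42", policy))
```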
Making The Case For A Singular Software-Defined View
Note that in each case (compute, network, storage) services are dynamically composed in response to an application’s specific policy.
So — at least at one level — the raison d’être for software-defined anything is giving applications what they need.
But that still doesn’t make an ironclad case for an integrated software-defined model vs. considering them as traditional standalone functions.
I think the argument for the former has three components: dependencies, optimizations and consistency.
Yes, this is an argument for VMware’s vision of SDDC. It's not that I'm opposed to alternative viewpoints, but — as of this writing — there aren't a lot of serious alternatives to consider for the enterprise IT crowd.
Examples Of Cross-Domain Service Dependencies
Many application services are dependent on others. Clearly, provisioning compute necessitates provisioning network and storage resources, but there’s more to the picture than that.
One example from the network world might be logging. If I request a logging service, that’s going to take storage — maybe a lot. If I want those logs automatically analyzed, that’s going to take compute as well. The network service catalog will need to be able to autonomously provision storage and compute services, if it's all going to work smoothly.
While some network services might be considered in isolation (e.g. firewalls), the more useful ones will be composed of storage and compute services underneath.
Similar examples emerge from the storage world. Most application-storage communication involves a network of some sort, which must be provisioned and managed. Various forms of data protection (remote replication and backups) are dependent on network bandwidth. Deduplication — if requested — is CPU and memory intensive. Ideally, storage services could request compute and network services as needed.
Again, while some storage services might be self-contained, the more interesting ones will cascade to network and compute services.
Here’s the point: in our emergent picture of software-defined, we have to allow for services in one category (e.g. networking) to request supporting services (e.g. storage, compute).
That sort of integration is going to be hard to achieve unless each of the supporting software-defined disciplines has been designed to consume other services presumed to be present.
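To sketch the logging example above in code: a request for one service quietly pulls in requests from other domains. The catalog structure, service names and dependency table here are all hypothetical, and a real implementation would be far richer, but the shape of the problem is the same.

```python
# A minimal sketch of cross-domain service dependencies. Hypothetical names.
def request_service(domain: str, name: str, seen=None) -> list[str]:
    """Resolve a service request, including the services it depends on."""
    # Each entry: (domain, service) -> list of (domain, service) it needs.
    dependencies = {
        ("network", "logging"): [("storage", "log-volume"),
                                 ("compute", "log-analytics")],
        ("storage", "remote-replication"): [("network", "wan-bandwidth")],
        ("storage", "deduplication"): [("compute", "cpu-and-memory")],
    }
    seen = seen if seen is not None else []
    seen.append(f"{domain}/{name}")
    for dep_domain, dep_name in dependencies.get((domain, name), []):
        request_service(dep_domain, dep_name, seen)
    return seen

# Asking the network catalog for logging pulls in storage and compute:
print(request_service("network", "logging"))
# ['network/logging', 'storage/log-volume', 'compute/log-analytics']
```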
Examples Of Cross-Domain Service Optimizations
When you go looking, all sorts of interesting new optimizations become possible with an integrated software-defined view vs. separate disciplines.
Let’s say you’re replicating over a remote link, and things are getting slow. Yes, you could allocate more bandwidth — if available.
But there are other options to consider.
For example, you could crank up compression (requiring compute and memory), or perhaps simply throttle the offending party if that’s an application policy option.
Another example: storage is getting pummeled with read requests, and existing storage read cache is getting overwhelmed — such as you might see with a boot storm. Rather than starve other storage consumers, one optimization scenario might be to dynamically allocate a chunk of RAM as read cache (e.g. CBRC in VDI deployments) until the storm passes, and then reallocate.
Here’s the point: when policy objectives aren’t being met, there’s usually more than one way to respond. Many of those responses involve substituting one kind of resource for another, and that will require a high degree of integration and awareness across functional domains.
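Here is a minimal sketch of that trade-off logic for the replication example, assuming a hypothetical remediation function and made-up thresholds. Nothing here is a real product's behavior; it just shows that each response draws on a different domain's resources, which is why the domains need to be aware of one another.

```python
# A minimal sketch of cross-domain remediation. Thresholds and option
# names are hypothetical.
def remediate_replication_lag(lag_seconds: float,
                              spare_wan_mbps: int,
                              spare_cpu_cores: int,
                              throttling_allowed: bool) -> str:
    """Pick a response when replication falls behind its policy target."""
    if lag_seconds <= 60:
        return "within policy, do nothing"
    if spare_wan_mbps > 0:
        return "allocate more WAN bandwidth"          # network resource
    if spare_cpu_cores > 0:
        return "raise compression level"              # compute and memory
    if throttling_allowed:
        return "throttle the offending application"   # policy option
    return "raise an alert for a human"

print(remediate_replication_lag(180, spare_wan_mbps=0,
                                spare_cpu_cores=4, throttling_allowed=True))
# raise compression level
```

The boot-storm case is the same pattern in reverse: spare memory stands in for scarce storage IOPS until the surge passes.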
Examples Of Cross-Domain Service Consistency
Getting everyone “on the same page” can be difficult in an organization where people have to work closely together. In highly automated and optimized IT infrastructure, it’s the same situation.
Exactly what does “High Availability” mean to each respective discipline? “High Performance”? “Audited”? “Compliant”? “Mission Critical”?
Yes, we can laboriously specify, externally to storage, network and compute, precisely what each of these policy concepts means, but a more attractive alternative would be to leverage a reasonably consistent set of definitions across all three.
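One way to picture that shared vocabulary: a single tier definition that each discipline reads its own slice of. The tier names and attributes below are hypothetical, but the point is that "mission critical" means one thing, defined once.

```python
# A minimal sketch of a shared service-tier vocabulary. Hypothetical names.
SERVICE_TIERS = {
    "mission-critical": {
        "compute": {"ha": True, "priority": "high"},
        "network": {"redundant_paths": 2, "qos": "gold"},
        "storage": {"raid": "mirrored", "remote_replication": True},
    },
    "standard": {
        "compute": {"ha": False, "priority": "normal"},
        "network": {"redundant_paths": 1, "qos": "silver"},
        "storage": {"raid": "parity", "remote_replication": False},
    },
}

def requirements_for(tier: str, domain: str) -> dict:
    """Each discipline reads its slice of the same tier definition."""
    return SERVICE_TIERS[tier][domain]

print(requirements_for("mission-critical", "storage"))
# {'raid': 'mirrored', 'remote_replication': True}
```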
This desire for underlying consistency becomes magnified when one considers building hybrid clouds using external service providers. Consistent concepts and behaviors make the job of orchestrating predictable behavior across clouds that much easier.
A related perspective emerges when one considers monitoring, alerting, reporting, etc. across the integrated whole. While integrating disparate components is possible, it’s a lot of work that seems to never end.
The Observed Adoption Anti-Pattern
While I’m sure there will be some forward-thinking enterprise IT architects who see software-defined as an integrated and consistent construct across disciplines, the likely adoption patterns are clearly pulling in the opposite direction.
The compute/server team makes a hypervisor choice, and figures out how to best automate the things they are responsible for. The network team starts considering network virtualization, makes a technology choice, and starts automating their bits. And, finally, the same story repeats for the storage team.
Three teams, three potentially different choices.
Each team is embracing software-defined concepts. Each has identified the “win” for their part of the puzzle. Each is acting in a logical and pragmatic manner — albeit from their functional perspective.
Pity the poor infrastructure architect who has to glue those choices together in a single, cohesive whole.
In the hardware-defined world, we attempt to solve this multi-technology challenge with long, detailed and constantly-changing interoperability matrices (from each vendor!), a substantial and ongoing investment in interoperability and regression testing, plus after-the-fact integration for key workflows for things like provisioning, monitoring and reporting.
Indeed, much of the appeal of converged hardware infrastructure (e.g. Vblocks, Flexpods, et al.) lies in creating an integrated experience out of products that were designed and implemented independently. And addressing that need has quickly become a fast-growing multi-billion dollar market segment — the demand is obviously there.
I fear that what we saw in the hardware-defined world, we will inevitably see in the software-defined one.
Because — while the technology has certainly changed — we humans really haven’t.