The basic idea is simple: collapse external storage (and, eventually, networking) into a single, software-powered environment that runs on commodity servers.
The potential benefit is two-fold: reduced capex through use of commodity server platforms, and reduced opex through less reliance on storage (and network) specialists.
Market pundits forecast that this category will continue to grow. VMware plays in multiple ways, as does a bevy of startups.
The realist in me knows that everything has its pros and cons. Enterprise IT is a diverse, complex beast — where does hyperconverged fit, and — most importantly — where does it not?
Aggregation vs. Disaggregation
One somewhat valid criticism is that there's less ability to independently scale compute, memory and storage. While there is decent ability to vary configurations, the combined server form factor can be more limiting than the disaggregated alternative.
Does this matter? Yes and no.
Certain applications can demand a lot more of one or the other. Imagine an archival content app — lots of efficient storage, little compute. Or a real-time decisioning application — lots of compute, little storage.
Enterprise IT app portfolios can be wonderfully diverse zoos.
The counter-argument is standardization. The fewer diverse architectures that have to be procured and operationally managed, the better. So the architect is left with the question: is functional specialization worth it?
The other consideration is significant scale. For lack of a better term, large-scale (or web-scale?) environments running largely 3rd platform applications can theoretically benefit from discrete yet globally shared pools of compute, memory and storage — extracting maximum resource efficiency — although that isolated goal can introduce all sorts of offsetting operational complexities.
One implication for me is to pick core technologies (e.g. hypervisors) and associated control planes that can play identically in either hyperconverged (aggregated) or disaggregated environments as needed.
The Dreaded Lock-In
What if you like the management GUI, but hate the storage subsystem?
The idealized hyperconverged environment would be constructed of decomposable pieces that would present the option to swap in, or out, if the need presents itself down the road.
Storage, networking, management, hypervisor — it should all be potentially on the table.
To be clear, I am *not* arguing that most customers should attempt to assemble their own hyperconverged environment out of piece parts — a lot of the “ease of” value proposition would inevitably disappear in the process.
But, should the situation present itself, the option should be considered.
Part of the hyperconverged value proposition is a valid attempt to minimize the need for such detailed control of the infrastructure.
But there’s a difference between not needing detailed controls — and not having them in any form.
Non-hyperconverged infrastructure usually has a rich set of controls: the hypervisor, the network and storage. Yes, it’s hard to make it all disappear, but it’s clearly there when needed.
But hyperconverged solutions vary greatly in the amount of control and optimization they grant the sophisticated administrator. Putting on my vendor hat, the vSphere/VSAN combination has a very rich set of controls. I think our major challenge is convincing customers to stick with the UI and policy settings — and not be tempted to twist the wrong underlying knob :)
Traditional infrastructure is typically run by three distinct teams: server, network and storage. Thus, delivering any infrastructure service requires complex — often negotiated — interactions between all three.
The benefit of hyperconverged is simple: most roles collapse into a single admin role — and that’s appealing.
But politics can be a real and tangible force in IT organizations. Better approaches can't be adopted if doing so results in a riot. And it has to be acknowledged — hyperconverged seriously redraws organizational lines — and that might be a non-starter in some situations.
Stranded Assets and Technical Debt
Walk into many enterprise data centers and you'll see lots of different stuff on the floor, all locked into very long depreciation schedules or long-term leases — and a harried IT team trying to keep the lights on with their accidental architecture.
Ouch — I feel your pain.
Occasionally, one of these groups gets lucky: new leadership, a bit of new money — and they’re looking to catch up on their technical debt in a big hurry. So they go out and splurge on a bunch of hyperconverged gear.
While it’s great to get the new stuff in, the problems usually run far deeper and start at the top: an under-appreciation of the value that IT brings to the organization — which results in an under-investment pattern in people, process and technology.
Put differently, hyperconverged isn’t going to make an ugly IT organizational situation any prettier. No, I won’t make analogies around lipstick and swine. It may buy you some time, though.
There is no magic IT pill, unfortunately.
The Bigger Picture?
Yes, there are many places where traditional, specialized IT still makes sense — but there are also plenty that are strong candidates for a standardized, simplified — and hyperconverged — approach. The advantages can be significant.
But stepping back from the bright, shiny thing, the realities of enterprise IT intrude into the warm glow.
Hyperconverged has to work with the rest of the landscape. Hyperconverged has to work with the IT organization. The same concerns we’ve always had with enterprise IT solutions are still in evidence.
Smart folks will realize — like everything else the IT industry comes up with — it’s just another tool in the tool belt. And a good one at that.