In an industry powered by new, shiny things, we’ve got a new one that’s gaining traction: hyperconverged.
The basic idea is simple: collapse external storage (and, eventually, networking) into a single, software-powered environment that runs on commodity servers.
The potential benefit is two-fold: reduced capex through use of commodity server platforms, and reduced opex through less reliance on storage (and network) specialists.
Market pundits forecast that this category will continue to grow. VMware plays in multiple ways, but so does a bevy of startups.
The realist in me knows that everything has its pros and cons. Enterprise IT is a diverse, complex beast — where does hyperconverged fit, and — most importantly — where does it not?
Aggregation vs. Disaggregation
The power of hyperconverged is aggregating previously disparate functions into a single software platform and associated server-based consumption model.
One somewhat valid criticism is that there’s less ability to independently scale compute, memory and storage. While there is decent ability to vary configurations, the combined server form factor can be more limiting than the disaggregated alternative.
Does this matter? Yes and no.
Certain applications can demand far more of one resource than another. Imagine an archival content app — lots of efficient storage, little compute. Or a real-time decisioning application — lots of compute, little storage.
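To make that concrete, here’s a rough back-of-the-envelope sketch of what fixed-ratio nodes can do with a lopsided workload. The node specs and workload numbers below are entirely made up for illustration; only the ratios matter.

```python
# Back-of-the-envelope illustration of resource stranding on fixed-ratio
# hyperconverged nodes. All numbers are hypothetical.
import math

NODE_CORES = 32          # cores bundled into each (hypothetical) node
NODE_STORAGE_TB = 20     # usable TB bundled into each node

# The two example workloads from above, with made-up requirements.
workloads = {
    "archival content":      {"cores": 16,  "storage_tb": 800},
    "real-time decisioning": {"cores": 512, "storage_tb": 10},
}

for name, need in workloads.items():
    # Node count is driven by whichever resource runs out first.
    nodes = max(math.ceil(need["cores"] / NODE_CORES),
                math.ceil(need["storage_tb"] / NODE_STORAGE_TB))
    idle_cores = nodes * NODE_CORES - need["cores"]
    idle_tb = nodes * NODE_STORAGE_TB - need["storage_tb"]
    print(f"{name}: {nodes} nodes, "
          f"{idle_cores} cores and {idle_tb} TB effectively stranded")
```

Whether that stranded capacity actually matters depends on the shape of your portfolio, which is exactly the standardization question that follows.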
Enterprise IT app portfolios can be wonderfully diverse zoos.
The counter-argument is standardization. The fewer diverse architectures that have to be procured and operationally managed, the better. So the architect is left with the question: is functional specialization worth it?
The other consideration is significant scale. For lack of a better term, large-scale (or web-scale?) environments running largely 3rd platform applications can theoretically benefit from discrete yet globally shared pools of compute, memory and storage — extracting maximum resource efficiency, although pursuing that goal in isolation can introduce all sorts of negating operational complexities.
One implication for me is to pick core technologies (e.g. hypervisors) and associated control planes that can play identically in either hyperconverged (aggregated) or disaggregated environments as needed.
The Dreaded Lock-In
Prudent long-term IT planning mandates an exit strategy if your first choice doesn’t work out. With hyperconverged, you’re inevitably putting more eggs in the same basket.
What if you like the management GUI, but hate the storage subsystem?
The idealized hyperconverged environment would be constructed of decomposable pieces, offering the option to swap components in or out if the need arises down the road.
Storage, networking, management, hypervisor — it should all be potentially on the table.
To be clear, I am *not* arguing that most customers should attempt to assemble their own hyperconverged environment out of piece parts — a lot of the “ease of” value proposition would inevitably disappear in the process.
But, should the situation present itself, the option should be considered.
Extreme Optimization
We often joke about “nerd knobs” — the hundreds of controls, parameters and settings that are exposed in compute, networking and storage.
Part of the hyperconverged value proposition is a valid attempt to minimize the need for such detailed control of the infrastructure.
But there’s a difference between not needing detailed controls — and not having them in any form.
Non-hyperconverged infrastructure usually has a rich set of controls across the hypervisor, the network and the storage. Yes, it’s hard to make it all disappear, but it’s clearly there when needed.
But hyperconverged solutions vary greatly in the amount of control and optimization they grant the sophisticated administrator. Putting on my vendor hat, the vSphere/VSAN combination has a very rich set of controls. I think our major challenge is convincing customers to stick with the UI and policy settings — and not be tempted to twist the wrong underlying knob :)
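To illustrate where the control lives, here’s a purely conceptual sketch; it is not the actual vSphere/VSAN interface. The policy fields echo real vSAN-style concepts such as failures-to-tolerate and stripe width, but the structures themselves are hypothetical.

```python
# Conceptual sketch only: not the vSphere/VSAN API. The point is the
# difference between declaring intent per workload and tuning every device.

# Policy-driven: state what the workload needs; the platform decides
# where data lands and how it is protected.
vm_storage_policy = {
    "failures_to_tolerate": 1,    # survive one host or disk failure
    "stripe_width": 2,            # spread each object across two devices
    "thin_provisioned": True,
}

# Knob-driven (the traditional way): hand-tune each device and hope the
# settings still make sense after the next hardware refresh.
per_device_tuning = {
    "array01/lun07": {"raid_level": "RAID-5", "queue_depth": 64,
                      "read_cache": "on", "block_size_kb": 8},
    # ...repeated, slightly differently, for every LUN in the estate
}
```

The detailed controls still exist underneath; the policy layer just keeps most hands off them until they’re genuinely needed.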
Organizational Concerns
Traditionally, the virtualization/server team sits in one group, the network team sits in another group, and the storage group in yet another.
Thus, delivering any infrastructure service requires complex, often negotiated, interactions between all three.
The benefit of hyperconverged is simple: most roles collapse into a single admin role — and that’s appealing.
But politics can be a real and tangible force in IT organizations. A better approach shouldn’t be adopted if it results in a riot. And it has to be acknowledged — hyperconverged seriously redraws organizational lines — and that might be a non-starter in some situations.
Stranded Assets and Technical Debt
It’s a bit sad when I run into an IT group that seems to be managed by purchasing or finance.
Lots of different stuff on the floor, all locked into very long depreciation schedules or long-term leases. And a harried IT team trying to keep the lights on with their accidental architecture.
Ouch — I feel your pain.
Occasionally, one of these groups gets lucky: new leadership, a bit of new money — and they’re looking to catch up on their technical debt in a big hurry. So they go out and splurge on a bunch of hyperconverged gear.
While it’s great to get the new stuff in, the problems usually run far deeper and start at the top: an under-appreciation of the value that IT brings to the organization — which results in an under-investment pattern in people, process and technology.
Put differently, hyperconverged isn’t going to make an ugly IT organizational situation any prettier. No, I won’t make analogies around lipstick and swine. It may buy you some time, though.
There is no magic IT pill, unfortunately.
The Bigger Picture?
Yes, I believe the advantages of hyperconverged are real and tangible for many.
Yes, there are many places where traditional, specialized IT still makes sense — but there are also plenty that are strong candidates for a standardized, simplified — and hyperconverged — approach. The advantages can be significant.
But stepping back from the bright, shiny thing, the realities of enterprise IT intrude into the warm glow.
Hyperconverged has to work with the rest of the landscape. Hyperconverged has to work with the IT organization. The same concerns we’ve always had with enterprise IT solutions are still in evidence.
Smart folks will realize — like everything else the IT industry comes up with — it’s just another tool in the tool belt. And a good one at that.
---------------
Really good points. I especially like the point about how the project team's efficiency may improve.
Posted by: Valamis | June 11, 2015 at 04:57 PM
I like your comments about lock-in. It's no surprise that scale-out architectures are difficult to migrate off of. Isilon, Avamar and Centera are all scale-out technologies that become harder to migrate from as they grow as well.
In regard to System, Network, and Storage administrators being affected by advancements in software: software is the key to exploiting commodity hardware, and the jobs will shift from managing hardware to managing software. It's only a matter of time before those silos either innovate or disappear.
In my opinion, hyperconverged is gen2 of converged infrastructure. I'll be curious where we go from here.
Posted by: Jason | June 16, 2015 at 12:12 PM
Really liked the way the author stopped short of saying "and EMC has everything, come buy from us". But yes, a really meaningful article amid the maze of hyperbole from converged infrastructure providers. Of course, everything in life has its pros and cons, and so do these new things. But then IT is not concerned with "future management" so much as with solving the problems of today. The businesses, who view IT as an avoidable cost center and a partner in lethargy, want to chuck it out as soon as possible. To satisfy them, IT must keep running on the treadmill. Even while perhaps knowing that what they are doing is doomed or not correct, they may not have real options.
Posted by: TechYogJosh | June 18, 2015 at 06:51 AM