Watching the current raft of hyperconverged players go at it in the blogosphere has turned into a movie where I've lost all interest in the plot and characters. Here's just one recent example of yet another intense piece from my colleague Chad Sakac.
The problem is that I'm just not interested anymore. I know how the movie predictably ends.
That wasn't always the case. Long-term readers will remember me going on and on about hyperconverged, etc. etc. Things change. I move on. Maybe you should too?
Here's the pitch: for medium-to-larger IT shops, hyperconverged isn't strategic, it's just a tactical cost-reduction tool. And if something isn't strategic to the people who buy large amounts of IT stuff, it's not strategic to me either. Everything else gets quickly commoditized.
Since I've historically done a decent job predicting shifts in the IT world, you might want to invest a few moments and understand my thinking.
Agree or disagree -- it's up to you.
What's This All About?
Look beyond the buzzword, and you'll find some very simple ideas. A hyperconverged architecture generally involves implementing storage functionality in software and using server-resident storage devices vs. a traditional external storage array.
The resulting environment can be simpler, less expensive and easier to manage. And then the debates start: who's got the most technically elegant implementation, which environment is the easiest to manage, which has the best feature set, which is the least expensive, and so on.
At one time, I thought the idea was pretty revolutionary, and devoted much time to it. And then I stepped back and realized it was more of a band-aid than a cure.
Looking At The Enterprise Application Landscape
Simply put, IT's core job is to deliver the applications people want to use. But not all applications are created equal, are they?
Spend any time working with moderate-to-large enterprise IT groups, and you'll find that they think in infrastructure buckets.
One bucket is clearly about taking the cost out of application infrastructure that can't justify something differentiated. Think virtualization, hyperconverged, and all that.
And another bucket consists of the enterprise applications the business uses to run the business: ERP, SCM, financials, and so on. Here you'll rarely find familiar x86 virtualization, or the hyperconverged approaches that use it.
There may be separate buckets for things like VDI, or big data, and so on. Every decent-sized environment is a little bit different.
Here's the point: enterprise IT thinks about each bucket very differently. Yes, you will meet zealots who think that every class of workload can run on a single class of infrastructure, but that noble idea doesn't scale well beyond very modest environments.
For the cheap-and-cheerful bucket, it's really all about achieving the lowest cost. For the bet-your-job workloads, it's all about delivering predictable results at all times, followed by cost.
And the differences in approach matter to the folks who pay the IT bills.
Generic Infrastructure For Generic Workloads
All the hyperconverged players start with the same industry-standard parts list, so no real hardware differentiation is possible, or even desired. It's mostly about the software and how well it does its job.
One of the underlying design principles of virtualization is to attempt to treat all workloads relatively equally. Pool your resources, everyone gets their fair share. That's its strength when it comes to generic workload consolidation.
Although there are certainly knobs that try to single out especially important VMs (e.g. CPU affinity and the like), these are mostly suggestions, introduce unwanted complexity, and aren't generally effective for most differentiated workloads.
What's a differentiated workload?
Big, honking databases and the demanding enterprise apps that use them. Large scale open analytics based on Hadoop. Real-time in-memory analytics. High-end transactional systems. You know, the really demanding stuff.
Typically, these workloads deliver oversized economic value back to the business as compared to generic workloads. So the business -- and IT -- thinks of them quite differently.
"Fairness" isn't top-of-mind, getting results is. Not all workloads are created equally.
While it is theoretically possible to run these demanding workloads in a hyperconverged or typical virtualized environment, you'll almost never find this in the wild. And for a good reason: that wasn't the design point.
To be fair, virtualization (and now potentially hyperconverged) has done a great job of helping IT take cost out of the "long tail" of generic workloads (aka craplications) that can't justify isolated or differentiated infrastructure.
But, as always, there's an end to the road, and it isn't pretty. As I've said, we've all seen this movie and we know how it ends.
Racing To The Bottom
If you're a student of this industry, you know how the movie plays out.
A category gets established, initial players are somewhat differentiated, over time everyone ends up doing pretty much the same thing, the category gets largely commoditized, and it ends up being a race to the bottom.
The only question that then matters: who can do it the cheapest?
Servers and desktops have been there for a while, big pieces of the storage and network market are clearly heading in that direction, and hypervisors aren't far behind.
Now consider hyperconverged: who will be able to justify a premium in the long term, and why? Its mission in life is consolidating generic workloads on generic hardware infrastructure as cheaply as possible.
Sorry, I can't see how the entire category can avoid the black hole of commoditization, which is just one reason why I'm losing interest quickly.
And Then There's This "Cloud" Thing
Let me share even more cynicism: what's the cheapest place to run generic workloads? Yep, a public cloud. Why invest in equipment, software and people when you can just swipe a credit card for what you're going to use this month? And -- before long -- easily move to someplace better/cheaper/faster should the need arise?
To be fair, there are many reasons why public clouds aren't more popular for this use case today, but that shows every sign of changing before long, especially as better workload portability and control planes hit the market. We've already seen a race to the bottom for generic IaaS pricing, and there's no reason to believe it won't continue.
The few surviving hyperconverged vendors will find themselves increasingly compared to public cloud services, as their only raison d'être is saving money.
Unfortunately, none of the hyperconverged players have a public cloud, nor the resources to build a viable one.
A quick side note: I am continually amused by the hyperconverged vendor cloudwashing that ignores a fairly obvious truth. Being "AWS-like" is not the same as being AWS, nor is being "Google-like" the same as being Google. That's like saying I'm as fast as Usain Bolt because I wear the same shoes.
But What About Cloud-Native Applications?
There is an interesting school of thought that deserves a mention: cloud-native applications that are designed to run well in generic environments, whether they be on-premises or using generic IaaS. And, no doubt, most new applications being created are moving towards this model.
But look at the proposition from a purely business perspective: invest many millions of dollars and several years to re-architect and re-create the essential applications that are running the core of the business today, just so you can have the privilege of eventually running on more generic infrastructure? Errr, when do the savings start?
Yes, you see it being done in a very few situations; but for most it's just not a reasonable option. Which helps explain why mainframes and UNIX systems and bare metal and similar are still with us.
Differentiation Matters
Briefly putting on my Oracle hat, it is fair to say that Oracle has an offering that falls into the hyperconverged category, although it is certainly not marketed that way. It's the Oracle Database Appliance, or ODA.
Its one mission in life is to run the Oracle Database faster/better/cheaper than anything else, except perhaps for a larger Oracle product like Exadata or SuperCluster.
It is extremely differentiated, as it was designed by the same team that is responsible for the Oracle Database. Yes, that is an unfair advantage.
It is not like -- nor does it attempt to be like -- generic hyperconverged approaches. If you've got a handful of oh-so-important applications that use the Oracle Database (and there are a LOT of those), it immediately becomes an interesting offering.
Otherwise, not so interesting.
To sweeten the deal, the same set of capabilities is available as DBaaS in the public Oracle Cloud. Workloads are easy to migrate back and forth, as it's basically the same software running in two different places. As a result, cloud simply becomes an extension of what you're already doing today.
The Life Cycle Of An Industry Category
In our infrastructure industry, categories are a result of both supply and demand: technology innovation and how IT organizations prefer to consume those technologies. They are born, garner initial interest, mature, and eventually sink into the sea of marketing mush.
If you are an IT vendor, you have clear choices.
You can ignore the category, and thus miss out on all the excitement and customer interest. You can join the category and battle it out with all the other players, thus accelerating the commoditization of the category. You can take a deep breath, and establish your own category, which -- if successful -- will quickly attract other players and eventually commoditize.
Or, you can choose to transcend the typical category game by precisely targeting an enterprise IT need that existed decades ago, and will likely exist decades from now: optimizing for the critical and demanding workloads that power the core of any modern enterprise.
When it comes to the hyperconverged category, I've lost all interest. We all know how this movie is going to end.
--------------------------------------------
Interesting article, though. Like Sun did in the 90s and early 00s, building hardware where solutions run better is the key. Delivering solutions is key. If you are just delivering a VM, I'm not sure there will be a seat at the table. It's, as you say, a race to the bottom.
That said, I dispute the statement that public cloud is cheapest. If you are running temporary jobs, maybe. But it certainly isn't when you need something all the time. The biggest fear with public cloud isn't security or costs, it's how I get my data out of there. It's like IBM of the '90s and '00s: you can check out, but you can never leave.
Posted by: matt reese | August 24, 2016 at 07:11 PM
I find that very interesting considering the sheer number of HCI companies Cisco has courted and collaborated with over the past 3-4 years, including failed bids for HCI sweetheart Nutanix.
Posted by: Owen Bir | August 25, 2016 at 08:57 AM
I have been an avid follower of your blog since the days you worked for EMC. In general I pretty much agree with your vision, but here I have to strongly disagree with one of your statements, the one regarding mainframes. Mainframes (and by mainframes I mean IBM z's) are not in the same bag as UNIX, bare metal or x86 servers. In fact, mainframes are in a bag of their own. Most of today's most mission-critical workloads run on a mainframe, and it is hard to imagine that, let's say, Citibank will have its core banking transactional environments running on Amazon, not in the short term at least. Just my two cents.
Posted by: Sergio Pardos | August 25, 2016 at 09:22 AM
I think Cisco's actions are quite rational.
As they have zero presence in storage, any gains through a hyperconverged offering are incremental and might even help sell more of their UCS servers.
Whether they (or anyone else in the category) can make a sustainable business out of it is really the question.
Thanks
-- Chuck
Posted by: Chuck Hollis | August 25, 2016 at 09:24 AM
Hi Sergio. I agree. Not fair to lump them in with all those other architectures.
That being said, I was hoping to make a distinction between "generic infrastructure for generic workloads" and "differentiated infrastructure for differentiated workloads".
And mainframes are very, very differentiated.
-- Chuck
Posted by: Chuck Hollis | August 25, 2016 at 09:26 AM
Chuck, as a fellow Oracle employee I (naturally!) agree with a lot of what you say, but I have to add that in practice I find a lot of IT managers are not as discerning as you seem to hope. The reality is that they spend fortunes on their virtualisation estates, more on trying to rectify the hotch-potch of hardware they had lying around to host it (which is where HCI comes in) and then feel they have to justify all that spending by putting EVERY workload in their Data Center into that virtual morass - whether that's the right place for it or not. Just because you can, it doesn't make it right.
People are more careful about their tea and coffee than their IT estates. They know that tea needs boiling water, but that boiling water ruins good coffee. So getting a 'hyper-converged beverage infrastructure' that purports to make any hot drink you like without actually differentiating between them is only going to produce mediocre results across the board. And if it's only meant to be a budget/convenience device (given the quality of output), then they'll only pay a budget/convenience price.
But like I say, I don't see that discretion so much in IT.
Posted by: Bernard Wheeler | August 26, 2016 at 03:26 AM
I'd like to add the idea that rationalization breeds innovation into the mix. I agree with some of the points, and with the sense of the article entirely; hence this comment. I'm a little biased, being both a consultant and an Innovation Officer, but I believe we are just starting to see the effects of the hyperconvergence conversation. As the climax of the story comes into focus, we start having hope again based on what the ending could be, hoping for a surprise. I have seen VDI innovation take a leap because of hyperconvergence. My team and I have recently released a set of desktop appliances that create a decentralized compute model for VDI while maintaining centralized management. This kind of innovation lets VDI take advantage of today's consumer technologies, like broadband speeds, WiMAX and virtualization, versus traditional VDI (centralized compute) over leased lines into a data center. Workload optimization is also key when applied to desktop virtualization, because that's what makes "PC" personal computing personal: getting the resources you need when you need them. Our platform was designed with that in mind, because we focused on technologies beyond just storage that can now be placed into a single form-factor appliance, similar to your statement about ODA's unfair advantage, or giving your customers what they want. That innovation on these new hyperconvergence platforms is the next step: as hyperconvergence and IoT integrate, and "purpose-built" appliances start interacting with "enchanted IoT objects" like Raspberry Pi devices, computing is democratized to the point of becoming a utility. So I think hyperconvergence is like The Lord of the Rings: after three hours you can't believe it has a part two and three!
Posted by: Jaymes Davis | August 27, 2016 at 05:41 PM