When I speak in front of audiences, I usually try and push the boundaries a bit by being intentionally controversial.
Today was no exception. I spoke at a partner event (GreenPages) in front of about 100 people about the journey to the private cloud. Some of my comments even made it to CRN in real-time -- scary thought.
At one point, I asked if anyone in the room had done "hard time on mainframes". And, as usual, a few hands went up.
For these people, the current move to cloud architectures is going to give many of them a strange sense of deja-vu ...
To Begin With
In Chuck's Oversimplified Technology Dictionary, a "cloud" is any IT environment that's (a) built differently (dynamic pools of shared resources), (b) operated differently (end-to-end service delivery vs. individual silos), and (c) consumed differently.
Going a bit farther, a "private cloud" is a cloud that IT controls -- regardless of whether the resources run internally or externally.
However, this sort of definition doesn't prescribe things like processor technologies (even though there's a strong case to be made for x86) or hypervisor technology (ditto strong case for VMware) and so on.
Simply put, any set of technologies that can get you there ought to qualify as "cloud", right?
Indeed, I wrote a post a while back ("Back To The Future") where I compared many aspects of our idealized cloud world with the legacy established by mainframes. I drew some predictable fire for that one, but I still stand by my core assertions.
Much of the cloud discussion rests on the choice of processor architecture -- as no one has been overly successful in combining multiple incompatible processor architectures into a single, homogeneous pool of virtualized resources.
Processors are one of the key areas that people are going to have to standardize on when they consider their next-gen environments. And right now, all signs point to two clear architectural winners going forward: x86 and IBM's mainframe z/Architecture.
The Narrowing Of The Processor World
During the course of my career, I've been exposed to a dozen or so processor technologies -- Motorola, MIPS, Alpha -- the list goes on and on.
No one talks seriously about Itanium any more, SPARC's role in Oracle's grand plans is decidedly unclear at present, and, yes, IBM keeps swinging away at Power -- but you've got to wonder how long they'll keep that up, especially given the wholesale migration of the IT ecosystem to x86 binaries.
Some people just assume that all clouds will be built on x86 architectures, and that's that. But I would beg to differ ...
So, What About Mainframes?
Anyone who blithely proclaims "mainframes are dead" probably hasn't spent much time in larger IT shops. The people who make those claims are more likely to be gone before the mainframes are ... :-)
As far as I can see, people continue to use mainframes for three primary reasons: (1) they're locked in by legacy applications, (2) they handle really scary workloads very well, and (3) run properly, they can be an incredibly efficient way of delivering IT services at scale.
#1 is unfortunate, but we can't do much about that. However, #2 and #3 deserve a bit more discussion.
So much of cloud is about scale-out -- relatively uniform workloads that are easily divisible into more manageable tasks. Mainframes also provide scale-up -- the ability to run enormous single-threaded tasks with blinding performance.
Yes, many of those workloads could be re-architected to be more amenable to scale-out architectures, but that's an expensive proposition in itself. And, of course, x86 and associated technologies keep pushing the boundaries with every tick and tock.
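If it helps to make that distinction concrete, here's a minimal Python sketch of my own (nothing from an actual mainframe or cloud toolkit): scale-out chops a uniform workload into small chunks spread across many workers, while scale-up simply hands the whole job to one big engine.

```python
# Minimal illustration only: scale-out vs. scale-up for a uniform workload.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    """Stand-in for one small, uniform unit of work."""
    return sum(x * x for x in chunk)

def scale_out(data, workers=8, chunk_size=1000):
    """Divide the workload into chunks and fan them out across many workers."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

def scale_up(data):
    """Run the entire workload as one large, single-threaded task."""
    return process_chunk(data)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Same answer either way -- the difference is in economics and operations.
    assert scale_out(data) == scale_up(data)
```

Same answer either way; the interesting part is the economics, and the operational model you have to wrap around each approach.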
Where I think the mainframe approach clearly shines is in the maturity of the integration and associated operational processes. And there are a lot of great lessons to be learned by newer cloud practitioners from their mainframe elders.
Thinking About Those Processes ...
Imagine a very modern mainframe, perhaps running thousands of Linux VMs.
Self-service provisioning for end users? Integrated automation? Monitoring end-to-end service delivery across multiple elements? Been around for years, thank you -- nothing new here.
Sweating hardware and software assets to the n-th degree? Extremely efficient cost-to-serve? Old hat, thank you.
Chargeback? Oversubscription? Aggregate capacity planning? Integrated security and GRC? Advanced backup and replication? Plenty of real-world examples to choose from -- mainframe shops have been doing this for years.
I clearly remember getting my first chargeback bill from a mainframe service provider way back in 1980. One big number, followed by reams of overly precise usage detail on CPU minutes, I/Os, storage, memory, etc.
Of course, it came to me on green-bar paper :-)
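For the curious, the mechanics behind that bill haven't changed much: meter a handful of resource types, apply per-unit rates, and roll everything up into one big number. Here's a minimal Python sketch of the idea; the metrics mirror that old bill (CPU minutes, I/Os, storage, memory), but the rates and usage figures are purely invented for illustration.

```python
# Hypothetical per-unit rates -- invented numbers, for illustration only.
RATES = {
    "cpu_minutes": 0.75,        # $ per CPU minute
    "io_operations": 0.000002,  # $ per I/O
    "storage_gb_days": 0.05,    # $ per GB-day
    "memory_gb_hours": 0.02,    # $ per GB-hour
}

def chargeback(usage):
    """Return (total, per-metric detail) for one tenant's metered usage."""
    detail = {metric: usage.get(metric, 0) * rate for metric, rate in RATES.items()}
    return sum(detail.values()), detail

if __name__ == "__main__":
    total, detail = chargeback({
        "cpu_minutes": 12_500,
        "io_operations": 40_000_000,
        "storage_gb_days": 9_000,
        "memory_gb_hours": 22_000,
    })
    print(f"One big number: ${total:,.2f}")
    for metric, cost in detail.items():
        print(f"  {metric:>16}: ${cost:,.2f}")
```

Swap in modern metrics and an API instead of green-bar paper, and it's essentially the chargeback/showback capability today's cloud management tools advertise.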
Implementing a private cloud with a combination of internal and external resources? Moving workloads around? Mainframe shops have been doing this for many years, albeit with comparatively expensive technology.
The New Economics
In many ways, the current private cloud discussion might be construed as a re-framing of mainframe concepts, only this time with a healthy helping of the new tech economics: commodity hardware, plentiful bandwidth and open source application software.
Maybe the supporting technology is rather new, but the underlying cloud operational processes that deliver all the magic really aren't all that new -- especially if you've been around in some of the larger mainframe shops.
So, Where Am I Going With All Of This?
Glad you asked ...
A rather innocuous press release from EMC announced several important capabilities we've added to our historical mainframe support.
Things like full storage API access from Linux VMs under z/VM. A spiffy new non-disruptive migration capability. Dedupe for mainframe backup. FICON enhancements. And a bunch more.
Much of the press release discusses the relationship between mainframes and private clouds. You might be tempted to dismiss the prose as so much marketing fluff, but it's not -- some of the most advanced private cloud implementations today are mainframes -- and this is likely to continue into the future.
Indeed, EMC's mainframe DNA impacts how we interpret terms like "non-disruptive" and "mission critical" and "high availability" and a lot else as well. I think that one of the reasons so many shops depend on EMC for their most important apps is that we understand serious IT -- regardless of whether it's z/OS, UNIX or an uber-large VMware farm on the floor.
There's another important message here as well: the cool things we continue to do for x86/hypervisor stacks, we also intend to do in the z/OS world.
Because mainframes aren't going away any time soon ...