I have been actively involved in discussing clouds here on my blog, as well as in various customer and industry forums, for a little over a year.
I've put forward some fairly definitive concepts (e.g. private cloud) as well as had plenty of time to discuss and occasionally defend my position. It's added up to quite a few posts.
I went back to one of the foundational posts I did way back in January, and was surprised at how well the thinking has held up over time.
Today, I'd like to pick up the discussion where my esteemed Cisco colleagues Chris Hoff and James Urquhart have taken it, as they give me a convenient jumping-off point for some deeper topics I've been itching to get into.
"Good" And "How To Get There"
What I like about the current discussions is that there's a reasonable balance between concepts around "good" (i.e. the idealized state we all want to be in) and "how to get there" (i.e. the practical path of evolution in the real world).
A case in point comes from the charismatic Chris Hoff (aka @beaker), who has elevated Twitter to an art form.
In a recent post, "Virtual Machines Are The Problem, Not The Solution," he quite correctly points out that there's a ton of unneeded and unwanted cruft above the hypervisor -- operating system, application, middleware, etc. etc.
He's right, of course. But those trillions of lines of cruft-code and the tens of millions of IT professionals who expect them to be there aren't going to disappear overnight, are they?
In his next post ("The Emotion of VMotion"), he takes on the promise of virtualized workload portability, positing that the discussion will break into three, potentially competing camps:
1. The hypervisor-centric approach where entire virtual machines are mobilized.
2. The application-centric approach where the application knows how to mobilize itself, and
3. The network-centric approach where workloads are dynamically routed to multiple locations, rather than explicitly moved.
He argues that these cruft-laden application images are just too big to sling around hither and yon cost-effectively with current technology, raising doubts about the first model. He's got some good points, but there's more to it than that, which we'll cover in a bit.
James Urquhart (a *real* blogger, BTW) picks up these threads in "Cloud Computing And The Big Rethink". Both posts highlight recent VMware and EMC acquisitions as part of the backdrop.
A great starting point for where I want to go, so thanks, guys!
The Importance Of Mobilizing Workloads
Chris Hoff is right -- things get very interesting when workloads (and their information) can go from here to there in a cost-effective manner, as I explored in my post "Overcoming Distance". Whether it's playing global arbitrage for IT resources, improving application resiliency, or simply getting a better user experience -- more mobility is a good thing.
Chris focuses on the above-the-hypervisor bloat, but I think that's actually becoming a manageable problem over time.
Newer approaches are starting to come into the marketplace to skinny-down legacy application images (think EMC's recent acquisition of FastScale), and newer application development environments produce application images that are inherently more cloud and mobility friendly (think VMware's recent acquisition of SpringSource).
Not perfect, but you can see a glimmer of light at the end of the tunnel. Hopefully it's not an oncoming train.
The bigger problem that EMC is concerned about is the information that the application needs. We at EMC live in a world of gigabyte, terabyte and petabyte information stores that most applications depend on. Being able to mobilize those suckers as well is an inherently attractive proposition to us.
As a matter of fact, it's an R&D theme you're going to see us talking a lot more about in the near future. You're probably already familiar with the Atmos storage discussion (relatively static content mobilized via policy), but the more interesting discussion is hot production workloads.
You know, all that Oracle, SAP, SQL Server and DW/BI application stuff that actually changes data. Solve how to mobilize that sort of information store, and it's a very interesting proposition indeed.
Chad was able to squeeze out a darkly-shrouded preview of what he referred to as "active-active distance VMotion" at VMworld.
Sure, there are other concerns around orchestration, security, etc. -- but I see strong solutions for those starting to form in the marketplace. Important, but less of a strategic concern in my world view.
Getting To Good
It's one thing to say "here's what we ought to be doing", and it's another thing to get people there.
I am of the opinion that the best way to move the marketplace forward is to sell a technology with a strong and immediate tactical appeal that ends up providing a "surprise" strategic advantage in a subsequent context.
Trying to sell cool technology solely based on a long-term strategic advantage is a very tough sell.
Example #1: VMware offers an immediate and visceral tactical advantage in server consolidation. It also has the strategic side effect of putting workloads in convenient cloud containers, hence part of the appeal of private cloud concepts.
Example #2: FastScale has an immediate and visceral tactical advantage of making it far easier to manage and provision virtualized application images. It also has the side effect of reducing memory footprint by anywhere from 50% to 90% (think about that one for a second, please), as well as making application images dynamically constructed and managed composite objects (think about that one as well). A tactical win becomes a strategic foundation.
Example #3: EMC's recent Data Center Insight has an immediate and visceral tactical advantage in creating a dynamic end-to-end picture of how various logical, virtual and physical components interrelate. See the demo if you don't believe me. It also has the side effect of creating "one version of truth" that is the underlying foundation for all sorts of next-gen management and security capabilities in both enterprises and service providers.
There are many more examples I could pull from our various portfolios, but I think you might see the point -- we as vendors have to both define the ideal future state, and get people there in a way that solves an immediate problem.
Back To VMware and Hypervisors
There is one discussion thread in cloud-land that keeps popping up. The thinking is blue-sky: if we were to define the ideal abstraction for physical compute resources (CPU, memory, storage, network), could we come up with something better than what we see in the current hypervisor technologies?
The answer is, theoretically, "of course" -- but it would be utterly useless for the vast majority of use cases.
VMware's compelling tactical advantage is that it's able to cloud-ify any x86 or x64 instruction set on the face of the planet, as well as provide a rich set of services for newer applications. That's a hard advantage to dispute, simply because it's easy to see how people will get there in a logical progression rather than a disruptive bang.
Back To Chris' List
Scenario #2 on Chris' list appears to move all the heavy lifting to the application developer.
In this world, application developers are now responsible for discovering available resources, defining mobilization policies, implementing application mobility, guaranteeing end-to-end service levels, reporting on compliance, and more.
Oh yes, and they need to have a nice user experience as well :-)
Categorically, I am starting to reject all strategies that appear to dump the world's infrastructure problems on the application developers. It didn't work well for the SOA discussion, and it probably won't work well in the generic enterprise cloud space, either.
Although I'm sure we'll see exceptions :-)
However, providing advanced application development environments where application developers can express their infrastructure concerns (without actually having to address them), well -- that's an attractive scenario.
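To make that concrete, here's what "expressing concerns without having to address them" might look like -- a minimal sketch only, with annotation names I've invented for illustration (this is not an actual SpringSource or VMware API). The developer declares mobility and service-level intent; the platform does the heavy lifting.

```java
// A minimal sketch, not a real SpringSource or VMware API: the annotation
// names below are invented for illustration. The developer declares intent;
// the platform, not the developer, is responsible for honoring it.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Mobility {
    String[] permittedRegions() default {};
}

@Retention(RetentionPolicy.RUNTIME)
@interface ServiceLevel {
    int maxLatencyMillis();

    double minAvailability();
}

// The application code carries only declarations -- no discovery, placement
// or movement logic. That stays in the platform.
@Mobility(permittedRegions = {"us-east", "eu-west"})
@ServiceLevel(maxLatencyMillis = 50, minAvailability = 0.999)
class OrderService {
    // business logic only
}
```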
Scenario #3 on Chris' list moves things to the network, and presumes that there are multiple, pre-instantiated services to balance against. Not only is this approach used in many webby applications today, it also has the nice property of horizontal scalability.
Unfortunately, this comes at the price of a certain amount of redundant resources, and -- more importantly -- the potential for an interesting data synchronization problem in many real-world use cases. But this will certainly be a dominant theme going forward, as it is today.
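For illustration, here's a minimal sketch of that network-centric pattern: requests get routed round-robin across pre-instantiated replicas. All the names are hypothetical, and note that it deliberately leaves the data synchronization problem unsolved.

```java
// A minimal sketch of the network-centric pattern: route each request to one
// of several pre-instantiated service replicas instead of moving a workload.
// Names are illustrative only.
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

class WorkloadRouter {
    private final List<String> replicas;
    private final AtomicLong next = new AtomicLong();

    WorkloadRouter(List<String> replicas) {
        this.replicas = replicas;
    }

    // Round-robin selection: horizontal scalability comes almost for free,
    // but nothing here keeps the replicas' data in sync -- that's exactly
    // the synchronization problem noted above.
    String route() {
        int i = (int) (next.getAndIncrement() % replicas.size());
        return replicas.get(i);
    }

    public static void main(String[] args) {
        WorkloadRouter router = new WorkloadRouter(
                List.of("dc-boston", "dc-london", "dc-singapore"));
        for (int i = 0; i < 4; i++) {
            System.out.println(router.route()); // cycles through the replicas
        }
    }
}
```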
Frankly, I think there's a major opportunity for innovation for his first scenario: the virtual machine (and its information) becomes the dominant cloud application abstraction.
Yes, we have to de-cruft the legacy, and we have to provide application tools that support legacy-free environments, and -- harder still -- figure out how to do global information logistics that are both reliable and cost-efficient.
But, in the big scheme of things, it has the compelling property of an evolutionary, vs. revolutionary, path forward.
Thanks, everyone, for the great discussion!
Thanks, Chuck.
Good post.
Steve Todd reminded me that FastScale has a play here with regard to the bloat problem, and I'm glad you brought it up.
Great comments.
/Hoff
Posted by: Christofer Hoff | September 29, 2009 at 02:50 PM
Hi Chuck,
I've been looking on keenly at developments with the "cloud" and the possibilities it brings forth. A lot of technologies have taken shape over the last couple of years which make the whole concept of infrastructure as a service, in its current and future state, a very exciting prospect.
With all these mechanisms now emerging through the rise of virtualisation (be it storage, server or infrastructure) and ever-increasing bandwidth speeds, data/resource mobility could be such that, ultimately, if an end user decided to utilise an IaaS offering in the future, his data could effectively reside in a data centre in any given country. In terms of compliance, ownership and so on, this brings forth the question: if something should happen to that customer's data, which country's rule of law should apply to his rights and the IaaS provider's responsibility to that customer? There may be something in place already, or something in the works, to regulate companies offering such a service, but it's not something I've seen. Are you aware of anything?
Posted by: Evan | November 01, 2009 at 09:09 PM
The "rule of law" is a work in progress when it comes to these topics. The laws are poorly written, enforcement is virtually non-existent, and some are impossible and/or unrealistic to consider.
That being said, enforcing data location policy is becoming a rather straightforward proposition if the system is designed for it.
For example, Cisco networks can enforce policy routing *if the data is appropriately tagged* -- i.e. feel free to route this packet, as long as it doesn't leave the country.
EMC's Atmos can do the same thing at an object metadata level, i.e. this object is not allowed in this country, or can't leave that country, or must be encrypted a certain way.
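To illustrate the idea (a minimal sketch only -- the tag names and logic are hypothetical, not the actual Atmos policy language), a placement decision driven purely by object metadata might look something like this:

```java
// A minimal sketch of metadata-driven placement policy, loosely modeled on
// the Atmos idea above. Tag names and logic are hypothetical illustrations,
// not the actual Atmos policy language or API.
import java.util.Map;
import java.util.Set;

class PlacementPolicy {
    // Decide whether an object may be stored in a given country, based
    // solely on tags carried in the object's metadata.
    static boolean mayPlace(Map<String, String> metadata, String country) {
        String mustStayIn = metadata.get("policy.must-stay-in");
        if (mustStayIn != null && !mustStayIn.equals(country)) {
            return false; // "can't leave that country"
        }
        String notAllowedIn = metadata.get("policy.not-allowed-in");
        if (notAllowedIn != null
                && Set.of(notAllowedIn.split(",")).contains(country)) {
            return false; // "not allowed in this country"
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> metadata = Map.of(
                "policy.must-stay-in", "DE",
                "policy.encryption", "AES-256");
        System.out.println(mayPlace(metadata, "DE")); // true
        System.out.println(mayPlace(metadata, "US")); // false
    }
}
```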
The problem, of course, is legacy. File systems and ordinary networks don't understand information logistics concepts. Legacy environments can fall back on DLP approaches driven by content awareness, but most auditors frown on even the remote possibility that something is missed and exposure is created.
Great discussion -- thanks for the comment!
Posted by: Chuck Hollis | November 02, 2009 at 02:53 PM