Software costs seem to get less attention, even though they're arguably more amenable to IT negotiations.
More interestingly, improvement of operational processes seems to get almost no attention at all.
As hardware costs continue to plummet, and software costs slowly start to move more in line, I'd argue that the next big 800-pound gorilla in the IT organization is re-engineering IT processes to be orders-of-magnitude more effective than they generally are today.
If you're a regular reader of my blog, you'll know my rant by now: virtualization is changing the way IT infrastructure can be done.
Build it differently (virtualized pools of dynamic resources), operate it differently (zero-touch or low-touch processes) and consume it differently (convenient and variable consumption), and you've got the basics of a "cloud", regardless of whether you run it yourself or someone does it on your behalf.
I believe that learning to think like a cloud operator or service provider is the next big thing for enterprise IT organizations. Without that mindset, your internal users will increasingly be attracted to external organizations that offer this kind of service.
In my previous post, I discussed a composite metric ("Thinking In Terms Of Cost-To-Serve") in an effort to move the focus away from the ingredients (server, storage, network, etc.) and more towards the visible result, e.g. a usable virtual machine, ready for work.
In doing so, we not only combine the physical inputs but the labor inputs as well. Focusing on "cost to serve" as a metric makes newer infrastructure approaches (e.g. Vblock) more attractive. More importantly, it creates a strong incentive to examine the labor (operational) aspects as well.
From Cost To Speed
Most of our organizations today run on knowledge workers -- people who use their intellectual capital to create value. Often, their tool of choice in doing so involves the use of technology, and frequently the use of IT infrastructure.
Want to accelerate the value of your knowledge workers? Make IT infrastructure quick and easy to get to.
Whether it's a new web application, or engineers designing something, or financial analytics, or whatever -- new uses of IT infrastructure often end up being the tool that creates new value for the business.
And, if you're in business, time is money. Speed matters more than precision. Good business ideas are not like fine wine -- they don't age well.
So, in addition to my "cost-to-serve" metric, I'm proposing a "time-to-serve" metric -- the complete cycle time between a valid business request for IT infrastructure, and when it's turned over for unrestricted use by the requestor.
Now, the obvious disclaimers -- not all IT requests are amenable to this type of approach -- but I'd argue that the bread-and-butter requests (i.e. "I need a new development server") certainly fit in this category.
Indeed, I can't tell you how many large IT organizations have come to the conclusion that services such as Amazon's are incredibly expensive at scale. I think perhaps they've missed the point -- Amazon's strength is its convenience for smaller requests.
Amazon competes on speed, not price.
How It Usually Works
For a while, I've been asking enterprise IT organizations to outline all the steps along the journey of the average business user who might want some small-scale IT infrastructure to get something done. The details and the sequence may vary, but the essentials are surprisingly similar in many organizations.
First, the business user has to discover the internal process for requesting IT infrastructure. Very often, this is not well documented, and involves a bit of digging and calling around to find out who to ask and how it works.
Second, a formal request usually needs to be drawn up, detailing what is needed, why it is needed, justification, etc. This gets submitted in some fashion to the IT organization.
Third, the IT organization periodically meets to prioritize requests. The business user typically is not invited to this meeting. A ruling is made as to whether or not the request can be supported.
Fourth, the business user and the IT organization go back and forth on how the new request will be paid for. These negotiations can become quite extended.
Fifth, a decision is made to go ahead and provision the request. Server and/or storage resource might be ordered from a vendor, or existing resources allocated to the new request.
Sixth, the environment is configured, tested and handed over to the user.
Elapsed time for the business user? Easily weeks, usually months. Not to mention many hours of everyone's time: business user as well as IT organization. And precisely what value is being created by all of this?
When I talk about this with IT organizations, they usually are looking for efficiencies in step #5. I usually end up arguing that the correct business-oriented view is optimizing steps #1 through #6. This is not what they wanted to hear from me.
If you think about it, "cost-to-serve" ought to be "all in" regarding each and every cost. That's the way external service providers do it. Similarly, "time-to-serve" should be "all in" regarding each and every activity between identified business need and getting on with it.
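The "all in" nature of time-to-serve can be made concrete with a small sketch. The step names and durations below are purely hypothetical, illustrative numbers, not measurements from any real organization -- the point is simply that every step counts toward the total, not just the provisioning step.

```python
from datetime import timedelta

# Hypothetical elapsed times for the six steps described above --
# illustrative numbers only, not data from any actual IT shop.
step_durations = {
    "discover the request process": timedelta(days=3),
    "draw up and submit a formal request": timedelta(days=2),
    "prioritization meeting and ruling": timedelta(days=7),
    "funding negotiation": timedelta(days=10),
    "order or allocate resources": timedelta(days=14),
    "configure, test, and hand over": timedelta(days=5),
}

# "Time-to-serve" is the all-in elapsed time across every step,
# not just the provisioning step (#5) that usually gets optimized.
time_to_serve = sum(step_durations.values(), timedelta())
print(f"Time-to-serve: {time_to_serve.days} days")
```

Notice that in this sketch, optimizing only step #5 leaves the majority of the total cycle time untouched -- which is exactly the business-oriented point.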
Will People Pay For Speed?
One of the IT debating rat holes in this discussion is the interplay between cost and speed.
IT people will sometimes passionately argue that they can do a far better job at controlling costs if you give them more time and longer planning horizons. When I meet with business people, the mindset is just the opposite -- they don't have time, and they certainly don't usually have the luxury of long planning horizons.
You could debate who is right, but the reality is that IT exists to serve the business, and not the other way around!
Business people are used to paying for speed. They get the fastest flight between two points, and usually book it on very short notice. If they're really senior business people, they get a corporate jet. Or they invest in the ultimate accelerator: telepresence. They use overnight air express service instead of cheaper ground transportation.
Why? Time is money. And, if you're in a very competitive or high-growth industry, doing things faster and faster with an increasing sense of urgency is a key part of your business model.
Shouldn't your IT processes reflect that as well?
Putting It All Together
Over the last year, I've sketched out what I believe to be the dominant model for next-generation enterprise IT: the private cloud. What makes it a "cloud" is that it's built differently, operated differently and consumed differently. What makes it "private" is that it's under the control of the enterprise IT organization.
But new models require new metrics to judge their effectiveness, and drive continual improvement.
I've now outlined two of these metrics that I believe will drive the next wave of IT thinking.
One is obvious -- cost-to-serve. It's the "all in" price to the end user. Expose those prices, and two important things happen: users make intelligent choices, and IT organizations get benchmarked against external alternatives. Both are good things.
The second is perhaps a bit less obvious -- time-to-serve. It's the "all in" elapsed time between someone needing IT infrastructure, and getting on with it. Expose those end-to-end times, and two important things happen: business users can understand how long things take, and IT organizations get focused on reducing cycle times for key parts of their service portfolio.
There are more next-gen metrics I'd like to discuss -- but those will have to wait for a future post!