When I was CTO of Red Hat, one of the most profound questions I heard a CIO ask his staff was this: "What few things must be the same so that everything else can be different?"
That single question both defines and challenges the value of standards. It concisely articulates a minimax strategy for IT procurement. Most importantly, it creates a principle for prioritizing IT standards and strategies.
The commoditization of IT, whether of hardware, networking, storage, middleware, or now platforms and even applications, might lead many to believe that the safest place to be is average in every way. Buy what others are buying. Deploy what others are deploying. Manage what others are managing. And be happy that whatever little differentiation this provides is worth the massive amount of risk avoided by relying on the law of averages.
But the law of averages is not always a safe place to hide, as the U.S. Air Force discovered in the 1950s. And it is not actually working out very well for CIOs, either.
Averages don't necessarily exist
The summary of the U.S. Air Force story as told by Todd Rose in The End of Average is that while averages can be calculated, they don't necessarily exist. Even when a bell curve shows that most of the data points cluster near the mean, appearances can be deceiving. For example, a plot of survey data might tell us that most men and most women in the survey are 30 to 35 years old.
But that average may change radically with different qualifiers, such as "What is the average age of an intern?" or "What is the average age of a female executive?" It turns out that the concept of an average works very well for a single-dimensional measurement, but multidimensional averages quickly degrade into noise. This had fatal consequences for U.S. Air Force pilots in the 1950s.
During a time of great technical innovation in aviation, when fighter planes became fighter jets, and later supersonic fighter jets, the design of the cockpit became more and more decisive in determining whether a pilot could stay ahead of the plane and successfully execute the mission, or whether the plane would get ahead of the pilot and fly out of control.
As Todd Rose tells the story in The End of Average:
Using the size data he had gathered from 4,063 pilots, Daniels [Lt. Gilbert S. Daniels, who majored in physical anthropology at Harvard before joining the Air Force] calculated the average of the 10 physical dimensions believed to be most relevant for [optimal cockpit] design, including height, chest circumference and sleeve length. These formed the dimensions of the “average pilot,” which Daniels generously defined as someone whose measurements were within the middle 30 per cent of the range of values for each dimension. So, for example, even though the precise average height from the data was five foot nine, he defined the height of the “average pilot” as ranging from five-seven to five-11. Next, Daniels compared each individual pilot, one by one, to the average pilot.
Before he crunched his numbers, the consensus among his fellow air force researchers was that the vast majority of pilots would be within the average range on most dimensions. After all, these pilots had already been pre-selected because they appeared to be average sized. (If you were, say, six foot seven, you would never have been recruited in the first place.) The scientists also expected that a sizable number of pilots would be within the average range on all 10 dimensions. But even Daniels was stunned when he tabulated the actual number.
Zero.
Not only was there no such thing as an average pilot, but when the comparison was relaxed to just three dimensions instead of ten, the "average" thus calculated fit fewer than 3.5 percent of the pilot population. The myth of the average resulted in a generation of planes that almost no pilots could reliably fly, and that at one point saw as many as 17 pilots crash in a single day.
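A quick back-of-the-envelope calculation shows why the numbers collapse so fast. This is my own sketch, not Daniels' data, and it assumes the dimensions are independent (real body measurements are correlated, which raises the fraction somewhat, but not nearly enough to matter):

    # If "average" means the middle 30% of a dimension, and the dimensions were
    # independent, the share of people who are average on all k dimensions
    # is simply 0.3 raised to the k-th power.
    for k in (1, 3, 10):
        print(f"{k:>2} dimension(s): {0.3 ** k:.4%} fall inside the average band on all of them")

One dimension captures 30 percent of people, three dimensions capture under 3 percent (consistent with the fewer-than-3.5-percent figure above), and ten dimensions capture roughly six people in a million.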
Don't let averages guide your strategy
In much the same way that increasingly demanding aircraft exposed a fatal flaw in how cockpits were designed, ever-increasing business complexity, competitive pressure, and customer expectations, to name only three of perhaps ten dimensions, place ever-greater demands on IT. Moreover, underlying each of these dimensions are additional dimensions that are as unique and divergent as can be. It is simply not possible for a single average to cover any one, let alone a few, of the many dimensions of today's enterprise businesses. Suddenly, the shortcomings of simply playing the averages among cloud computing's major purveyors become obvious. The conclusion we can draw from this is simple: averages, should they be necessary to report, can be computed from observational data, but they cannot serve as a guiding procurement strategy.
Specifically, measurements of uptime, mean time between failures, response time, and so on may well produce averages that meet service-level agreement promises, and that's good. But composing a solution from well-known, average suppliers is no guarantee of mission feasibility, let alone mission success.
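A simplified illustration, with numbers I have invented rather than anyone's published SLAs: chain a few services together, each meeting a respectable average availability on its own, and the composite can still fall short of what the mission requires.

    # Hypothetical availability figures: each supplier meets its own SLA on
    # average, but a mission that depends on all of them in series gets the product.
    slas = {"compute": 0.999, "storage": 0.999, "network": 0.9995, "identity": 0.999}

    end_to_end = 1.0
    for service, availability in slas.items():
        end_to_end *= availability

    print(f"Composite availability: {end_to_end:.4%}")                     # about 99.65%
    print(f"Expected downtime: {(1 - end_to_end) * 8760:.1f} hours/year")  # about 30 hours

Every component is "average" or better, yet the system as a whole misses a three-nines requirement by a wide margin. It is the cockpit problem again: meeting each dimension on average says little about meeting all of them at once.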
One of the things that most excited me about open source software when I first learned of it (and which keeps me excited about it today!) is precisely the idea that however good it may be when you get it, it can be adapted for specific tactical or strategic advantage whenever the need arises. As these adaptations are rolled back into the mainstream code, their costs amortize to zero. This does not mean that the baseline release has zero cost; rather, it means that incremental features, once they become part of the standard, carry zero incremental cost.
Aim for optimal with open source
Because I was very early to the open source party, I have entertained many, many arguments against open source. For a time, one of the more vexing of these was the free-rider argument: if Company A develops a feature at its own expense and it is adopted into the mainstream code, Companies B through Z, which paid nothing for the development, get it for free. Isn't that unfair?
Carliss Baldwin, the William L. White Professor of Business Administration at Harvard University, answered this question with game theory. She showed that as systems become increasingly modular (as opposed to monolithic), there are more positive network externalities to participating as a contributor than to being an innovation hoarder. To a zero-sum thinker, sharing is a counterintuitive proposition. But once there is a viable modular platform, one that offers not only breadth and depth of choice but also the agency to create new choices, not sharing becomes a losing proposition.
But we can take her argument one step further. From a competitive perspective, even though everybody now has access to some great new enhancement, the flaw of averages teaches that the originators can remain the principal beneficiaries for years to come, because the enhancement fits their needs best.
To execute this strategy, there must be a good mechanism to return changes to the core, or upstream, as we call it. If that cannot be done, the ever-accumulating changes become too expensive to maintain, and the benefits of standards work against, not for, the organization. Thus, an optimal innovation strategy requires clearly understanding which elements of differentiation are most valuable, and adopting the standards that, paradoxically, enable the greatest exploitation of those differentiators.
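A toy model, with numbers invented purely for illustration, shows why that upstream conduit matters so much. Suppose every private patch you carry must be re-ported onto each new release, while a patch accepted upstream costs a one-time contribution effort and nothing thereafter:

    # Invented numbers for illustration only.
    releases = 10            # release cycles considered
    patches_per_release = 5  # new local adaptations made each cycle
    rebase_cost = 1.0        # effort to re-port one private patch onto one new release
    upstream_cost = 3.0      # one-time effort to get one patch accepted upstream

    fork_total, upstream_total, carried = 0.0, 0.0, 0
    for _ in range(releases):
        carried += patches_per_release
        fork_total += carried * rebase_cost                    # re-port every private patch
        upstream_total += patches_per_release * upstream_cost  # pay once; it is now "standard"

    print(f"Carry patches privately: {fork_total:.0f} units of effort")
    print(f"Upstream as you go:      {upstream_total:.0f} units of effort")

Even with upstreaming costed at three times a single rebase, the private fork loses over ten releases (275 units of effort versus 150 in this sketch), because its burden grows quadratically with time while the upstream path grows only linearly.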
I invite you to look at your IT assets, your IT processes, and especially your IT challenges using the rubric "What few things must be the same so that everything else can be different?" Do this to identify the truly valuable elements of standardization, the ones that actually enable the platform and functionality customizations necessary to fit the needs of the business. Then consider: what is the best way to manage both standards and customizations in a long-term, cost-effective way? In doing so, you might discover (or better understand) why so many IT organizations turn to open source software solutions, and why it is so valuable to maintain an effective conduit to the upstream communities that can help amortize the cost of customization even as you exploit its value. Don't aim for average: it may not exist. Aim for optimal, and use the power of open source to achieve what uniquely benefits your organization.