Agile success: Don't settle for metrics that tell half the story

Measuring agile development results? Prioritize the impact on business outcomes, says DHS CTO Michael Hermus

(Editor's note: This is the second article in a two-part series. In the first article, U.S. Department of Homeland Security CTO Michael Hermus describes the process changes DHS had to make to pave the way for agile development.) 

As Peter Drucker is often quoted as saying: “You can’t manage what you can’t measure.” While many detractors like to equate agile with a lack of discipline, I believe that successful agile organizations are quite the opposite.

At the Department of Homeland Security, we have been working to adopt agile methods and practices for several years, but in 2016 we launched a major initiative to transform the entire technology acquisition and oversight process – an effort that still continues today. (At DHS, almost everything that we did to obtain new capability was considered an “acquisition,” including software we built or purchased, and was subject to cumbersome oversight processes originally designed for “heavy metal” acquisitions like battleships.)

[ Read part one of this story: See “When process thwarts agile development, turn the ship.” ]

To reinforce the value of discipline, we’re developing metrics to measure our level of agile success and the impact on business outcomes.

We’re pursuing two different types of metrics: those that seek to measure the performance of the individual development programs, and those that seek to measure the impact of the agile transformation initiative itself.  The latter is quite hard to do, particularly since it will take time to collect significant data across multiple completed programs.

However, we have already begun to collect metrics around acquisition program cycle time (i.e., how long it takes a program to move from one step of the process to the next), document creation, review, and approval timelines, and documentation volume. In the five pilot programs, we have seen significant improvements across all of those metrics. Over time, this data will give some insight into the aggregate impact of our transformation efforts.

As for individual program metrics, they currently focus on the cost, schedule, and performance of the delivery of software capabilities. Historically, this means evaluating the program to ensure it performs well relative to a predetermined baseline. For example: If the plan was to spend $100 million over three years and achieve several key performance parameters, how did the program do relative to plan?

Unfortunately, that mechanism doesn’t work well in a truly agile environment, where you don’t plan out every detail for the life of the software program up-front.  If you then try to enforce cost, schedule, scope, and performance constraints simultaneously, something usually breaks. In fact, that approach often doesn’t work out terribly well even for traditional waterfall programs, which is one of the reasons for the move to agile in the first place! So, as a starting point, we focused on identifying best-in-class agile software development metrics.

Creating a core set of agile metrics

"We didn’t want to focus on any one type of metric too heavily and introduce possibly counterproductive incentives."

We stood up a team of subject-matter experts to review the current universe of metrics typically used across industries to measure and manage agile software development projects. We organized them into categories and selected the two or three most effective from each category, making sure that they worked well together to tell a cohesive story. We added a few twists of our own, and the result became the DHS “Agile Core Metrics,” the common starting point for all programs.

This core set includes typical velocity metrics, such as story points completed per iteration and planned vs. accepted points. We also ask for the number of deployments in the reporting period and the average deployment lead time – the time it takes to go from the beginning of feature development to deployment. Lead time is widely recognized as one of the most effective metrics for understanding the maturity of a software development organization.
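
To make these definitions concrete, here is a minimal sketch of how such velocity and lead-time figures could be computed. The field names and sample data are hypothetical illustrations, not a DHS system or reporting format.

    from datetime import datetime
    from statistics import mean

    # Hypothetical iteration records: story points planned vs. accepted per sprint.
    iterations = [
        {"sprint": 1, "planned": 40, "accepted": 34},
        {"sprint": 2, "planned": 38, "accepted": 36},
        {"sprint": 3, "planned": 42, "accepted": 41},
    ]

    # Hypothetical deployments: when feature work began and when it shipped.
    deployments = [
        {"started": datetime(2017, 3, 1), "deployed": datetime(2017, 3, 20)},
        {"started": datetime(2017, 3, 10), "deployed": datetime(2017, 4, 2)},
    ]

    velocity = mean(i["accepted"] for i in iterations)
    plan_vs_accepted = sum(i["accepted"] for i in iterations) / sum(i["planned"] for i in iterations)
    lead_times = [(d["deployed"] - d["started"]).days for d in deployments]

    print(f"Average velocity: {velocity:.1f} points per iteration")
    print(f"Planned vs. accepted: {plan_vs_accepted:.0%}")
    print(f"Deployments in period: {len(deployments)}")
    print(f"Average deployment lead time: {mean(lead_times):.1f} days")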

As you would expect, we also included testing metrics and code quality metrics. Percentage of unit test coverage, outages per deployment, and number of defects in the backlog are important indicators of delivery health. In addition, we tried to include metrics that give us insight into technical debt, and the cost of each iteration.
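
As a rough illustration of how those delivery-health indicators might be rolled up each reporting period, here is a small sketch. The counts and cost figures are invented for the example; in practice they would come from a program's own coverage, incident, and backlog tooling.

    # Hypothetical delivery-health inputs for one reporting period.
    lines_covered, lines_total = 8_400, 11_200   # from a unit test coverage report
    outages, deployments = 2, 14                 # production incidents vs. releases
    defects_in_backlog = 37                      # open defects at period end
    period_spend, iterations_in_period = 1_500_000, 6

    unit_test_coverage = lines_covered / lines_total
    outages_per_deployment = outages / deployments
    cost_per_iteration = period_spend / iterations_in_period

    print(f"Unit test coverage: {unit_test_coverage:.0%}")
    print(f"Outages per deployment: {outages_per_deployment:.2f}")
    print(f"Defects in backlog: {defects_in_backlog}")
    print(f"Cost per iteration: ${cost_per_iteration:,.0f}")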

One important note: We didn’t want to focus on any one type of metric too heavily and introduce potentially counterproductive incentives. Since no single metric is without flaw, and nearly any metric can be “gamed” to make a program appear healthier than it actually is, we attempted to pick a set of metrics that provides a complete, well-rounded view of progress and health.

Measuring the value to the business

While we are just beginning to collect these Agile Core Metrics from our pilot programs, they are already providing valuable insight. It is worth noting that we don’t currently compare performance between projects; rather, we look at indicators of health over time within each one. That said, programs that are able to deliver these metrics, regardless of the values submitted, are likely delivering software in a modern fashion and exhibiting some level of agile maturity. Programs that struggle to even capture these metrics probably aren’t.

The core metrics represent an excellent start, but on their own they are not sufficient to measure program performance. Perhaps you have a team that’s effectively delivering software: they are hitting velocity numbers, stories are being accepted as planned, and code quality and technical debt metrics show no warning signs. But how do we know whether the functions and capabilities being delivered are doing anything important for the business?

What if the end-user is getting a lot of features, but those features don’t actually impact mission or business performance? It is entirely possible to have a team churning out software efficiently, but not adding real value to the enterprise. For this reason, we are also focused on developing effective business value metrics.

To put it simply, programs should define metrics that measure business and/or mission outcomes, as directly as possible. For example, we often see system uptime defined as a key performance parameter for programs. While this may be an important system attribute, it doesn’t measure mission/business value. On the other hand, metrics of threat detection accuracy, or throughput of an adjudication process, directly relate to mission/business value.
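
As a simple sketch of that distinction, a program might report an outcome measure such as adjudication throughput alongside a system attribute like uptime. The figures below are hypothetical and stand in for whatever mission data a real program would track.

    # Hypothetical mission-outcome metric: throughput of an adjudication process.
    cases_received = 1_850
    cases_adjudicated = 1_620
    period_days = 30

    uptime = 0.9991  # a system attribute; useful, but not a mission outcome by itself

    throughput_per_day = cases_adjudicated / period_days
    completion_rate = cases_adjudicated / cases_received

    print(f"System uptime: {uptime:.2%}")
    print(f"Cases adjudicated per day: {throughput_per_day:.1f}")
    print(f"Share of received cases adjudicated: {completion_rate:.0%}")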

Ultimately, the goal of our entire transformation effort is to improve IT program outcomes, but if programs themselves aren’t focused on producing and measuring the right business and mission outcomes, we will not have succeeded. A persistent focus on the value delivered by these programs is required to make DHS a world-class technology organization.

No part of our journey to agile has been easy. I like to joke that the process of transforming our oversight to be more agile and streamlined … has not been nearly as agile and streamlined as I would like! That aside, the journey has been truly exciting and rewarding. We have already begun to implement real changes that will make a tangible difference across DHS, and we’ve got leadership and organizational support to make many more. It has been one of the great honors of my career to help make the Homeland Security enterprise more effective. The reason is simple: Everything we do is on behalf of our frontline operators – the “pointy tip of the spear” – performing missions that safeguard this nation, and those within it, each and every day.

Michael Hermus is the Chief Technology Officer (CTO) at the Department of Homeland Security (DHS), working within the Office of the Chief Information Officer (OCIO). He assumed this position on June 15, 2015.