How to use feedback loops to improve your team's performance

Consider these lessons from DevOps research and real-world airlines on how feedback loops prove fundamental to your team's success.

You’ve added people, you’ve bought tools, you’ve completed long and expensive migrations, but the biggest problems – like years-old customer or associate complaints, or slow adaptation to changing markets – still remain. You’ve adopted Agile or DevOps ways of working, but you’re still releasing once every few months – or worse – and projects are still delayed. You’ve added more checks and approvals to production changes, but outages are still happening. You’re desperately trying to change outcomes in a complex, adaptive, sociotechnical system.

What you need now are new sources of feedback to teach you more.

Leaders can start by considering the dynamics common to all systems. In her seminal book, “Thinking in Systems,” Donella Meadows uses the example of a Slinky toy. Holding the Slinky with one hand above and one below, she removes the bottom hand so that the Slinky drops and bounces up and down, suspended from her upper hand.

She then asks, “What made the Slinky bounce up and down like that?”

“Her hand,” right?


She repeats the demonstration, but instead of the Slinky, she suspends the box it came in. Of course, when her hand moves, the box doesn’t bounce at all. Now we see her hand wasn’t what made the Slinky bounce; it was the Slinky itself. If you’ve ever blamed a problem on “human error,” you’ve probably made the same mistake: We often erroneously attribute cause to individual actors within a system, without considering the larger system around them. In IT, this lesson is the origin of “blameless postmortems.”

But if not human error, then what exactly is the problem?


The importance of dissent

In “Outliers,” Malcolm Gladwell notes that during the 1980s and 1990s, airline pilots around the world flew similar planes with similar training, yet some countries’ accident rates were an order of magnitude higher than others’. The apparent correlation between region and safety was uncomfortable, yet undeniable. What was going on?


While flying a plane is obviously technical, it’s also social. A flight, like IT, is an example of a sociotechnical system. The plane is only a technical subsystem, part of a larger social structure that includes the flight crew, air traffic control, and more. As such, the most dramatic recent advances in flight safety have not come from analyzing the recorded instrument readings, but from analyzing the recorded cockpit conversations. Accidents were the result of poor communication among the crew.

And so, safety records varied by region primarily because cultural norms about communication – in particular, norms about challenging authority – varied by region.

This challenge is innately human. Chris Clearfield – a consultant and, incidentally, a pilot – and his co-author András Tilcsik explain in their book, “Meltdown,” that when an authority figure is in control, there is often a lack of dissent, and a lack of dissent predicts failure. For example, in the U.S., the vast majority of accidents occurred when the captain – the more experienced pilot! – was the one flying. People are generally uncomfortable challenging their superiors, but in complex systems, you need as many opinions as you can get, especially those that are uncomfortable.

Faced with dire consequences, airlines began training crews how to dissent, and as they did, crashes declined dramatically. The practice, Crew Resource Management, has since become a global standard with influence far beyond aviation.


Feedback and information flow

Feedback is a fundamental force behind the workings of any system. When we fly a plane, we get feedback from our instruments and our co-pilot. When we develop software, we get feedback from our compiler, our tests, our peers, our monitoring, and our users.
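To make that concrete, here is a minimal, illustrative sketch of one such feedback loop: an automated test that reports a failure within seconds of a change. The discount function and its checks are hypothetical, not from any project mentioned here; the point is only how short the loop is between making a change and learning whether it worked.

```python
# A toy example of a fast feedback loop: run these checks after every change
# and you learn within seconds whether the change broke anything.
# The discount() function and its expected values are purely illustrative.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)


def test_discount_applies_percentage() -> None:
    # Fails immediately if a change breaks the calculation,
    # giving feedback long before a release or an outage would.
    assert discount(100.0, 20) == 80.0


def test_zero_discount_is_identity() -> None:
    assert discount(59.99, 0) == 59.99


if __name__ == "__main__":
    # A test runner such as pytest would normally collect these automatically;
    # running the module directly keeps the sketch self-contained.
    test_discount_applies_percentage()
    test_zero_discount_is_identity()
    print("All checks passed: feedback delivered in seconds.")
```

The faster and clearer each of these signals is, the more the system can teach us.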

Dissent works because it’s a form of feedback, and clear, rapid feedback is essential for a well-functioning system. As the authors of “Accelerate” report from their four-year study of thousands of technology organizations, fostering a culture that openly shares information is a sure way to improve software delivery performance. It even predicts an organization’s ability to meet non-technical goals.

These cultures, known as “generative” in Ron Westrum’s model of organizational culture, are performance- and learning-oriented. They understand that information, especially if it’s difficult to receive, only helps them achieve their mission, and so, without fear of retaliation, associates speak up more frequently than in rule-oriented (“bureaucratic”) or power-oriented (“pathological”) cultures. Messengers are praised, not shot.

"The antidote to complexity is not necessarily simplicity, it’s transparency."

Feedback loops are often the source of surprising, nonlinear dynamics in complex systems. But not all surprises are bad. By leveraging feedback loops intentionally, we can wield unpredictable, adaptive behavior to our own ends.

In one of Meadows’s favorite examples, simply moving homes’ electrical meters to a highly visible location reduced electricity consumption by 30 percent. It’s for the same reason that the authors of “Accelerate” found continuous delivery – rapid feedback cycles from development to production – to be predictive of a generative culture and industry-leading performance.
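As a rough analogy in code – with a made-up metric history and names, not the tooling any of these authors describe – a team can move its own “meter” into view by computing a delivery metric and printing it at the end of every pipeline run:

```python
# Illustrative sketch only: surface a delivery metric on every pipeline run,
# the software equivalent of moving the electrical meter somewhere visible.
# The Deployment records and the numbers below are made up for this example.

from dataclasses import dataclass


@dataclass
class Deployment:
    version: str
    failed: bool  # did this change need a rollback or hotfix?


def change_failure_rate(deployments: list[Deployment]) -> float:
    """Fraction of recent deployments that failed in production."""
    if not deployments:
        return 0.0
    return sum(d.failed for d in deployments) / len(deployments)


if __name__ == "__main__":
    recent = [
        Deployment("1.4.0", failed=False),
        Deployment("1.4.1", failed=True),
        Deployment("1.5.0", failed=False),
        Deployment("1.5.1", failed=False),
    ]
    # Printed at the end of every run, the number stays in everyone's view,
    # turning a buried statistic into a feedback signal the team reacts to.
    print(f"Change failure rate over the last {len(recent)} deploys: "
          f"{change_failure_rate(recent):.0%}")
```

Like the meter on the wall, the number itself changes nothing; its constant visibility is what closes the loop.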

As Clearfield is fond of saying, “The antidote to complexity is not necessarily simplicity, it’s transparency.”

The learning organization: How to cultivate it

Getting feedback is one thing, but learning from it is another. Organizations have sources of information everywhere, hidden within their people and their failures, but few are effective at acting on that information. It’s those few that separate themselves from the pack.

If you're a leader, don't speak first

Clearfield and Tilcsik suggest that the next time you have an idea in mind, don’t speak first. Instead, start by soliciting diverse, contrary views, and ensure others feel safe enough to offer them. Try suggestions from your peers or your reports, even if you are skeptical. Experiment – like Toyota.

If you hear information that’s uncomfortable, thank the messenger. Their behavior is exactly what you need. Listen for and amplify the faint hints of problems, like quiet complaints, near misses, or – and especially – emotional cues. These are signals trying to teach you something. If you’re not amplifying them, you’re giving implicit feedback that those signals aren’t wanted.

Bring unlike minds together

The “Accelerate” and “Meltdown” authors all cite research showing that teams made up of people of different races and genders are more likely to critically discuss each other’s opinions, make fewer mistakes when recalling facts, and score higher on measures of collective intelligence.

At Red Hat, we had a small IT team working on redhat.com's single sign-on solution. When we joined with “outsiders” from our engineering and marketing teams to improve it for the next generation of hybrid cloud, the change that ensued was transformative. In six months, our release cadence doubled, while we simultaneously reduced the number of failed requests per month by 98 percent. It all started with an unlikely conversation.

From disappointment to curiosity

Were these results predictable? Not exactly. What was predictable was that through new sources of feedback, we would learn something. Of course we have a long way to go. But if we accept that our systems are too complex and dynamic to be predictable, we give ourselves room to move from disappointment and fear to curiosity and adaptation.

If we activate the information and creativity within our teams, we may not be able to predict exactly how we will achieve results, but we can predict that we will succeed.


Alec Henninger
Alec Henninger is an Integration Domain Architect at Red Hat who specializes in data-intensive distributed systems, software architecture, and domain-driven design. With a background in music, he takes a creative and entrepreneurial approach to solving hard software and organizational problems.