DevOps process: 4 ways to improve

Consider these DevOps lessons learned, covering key factors from testing to continuous improvement, as you improve your team's processes

When it comes to the DevOps process, many variables can derail even the most well-thought-out projects. However, several best practices can help teams meet deadlines while also lowering risk. In the end, no one wants to be the person held responsible for ill-performing applications that over-promise and under-deliver. Therefore, everyone should learn how to manage expectations and use proven tools to reduce common burdens.

[ What tools can help? Read also: Top 7 open source project management tools for agile teams. ]

Here are the lessons I’ve learned about improving the DevOps team process, and how you can apply them in your own organization.

Lesson 1: Do more regression testing


Regression testing enables issues to be identified and solved early in the DevOps process. To truly adhere to this regimen, developers must maintain a top-level view of the entire project, since many elements have dependency chains that prevent forward progress until certain items are completed. The lesson here: Empower teams to constantly strive for the highest level of quality possible. That means plenty of regression testing is in order.

Many times, other forces (sales, finance, customers, and so on) exert their influence and push teams to develop for expediency rather than quality. In that case, engineers must clearly document the associated risks of speed vs. quality.

A rule of thumb to follow: The level of quality should align with the criticality of the application. For instance, is the app financial, revenue-generating, healthcare-related, or part of a human resources process? In each case, limiting regression testing introduces a different level of risk. As the developer, you must be prepared to say, “We haven’t finished testing, so we haven’t truly delivered these features. We need to push this release back so we can deliver the functions as promised.”
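
To make this concrete, here is a minimal sketch of what a regression test can look like, written in Python with pytest. The calculate_invoice_total function is a hypothetical stand-in for business logic that has already shipped; the tests simply pin down its current behavior so that any future change that breaks them surfaces as a regression long before release.

```python
# test_invoice_regression.py
# A minimal regression-test sketch using pytest. The function under test is a
# hypothetical stand-in for business logic that has already shipped.
import pytest


def calculate_invoice_total(subtotal, discount_rate=0.0, tax_rate=0.07):
    """Hypothetical shipped behavior: apply discount, then tax, then round."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    discounted = subtotal * (1 - discount_rate)
    return round(discounted * (1 + tax_rate), 2)


@pytest.mark.parametrize(
    "subtotal, discount, expected",
    [
        (100.00, 0.0, 107.00),   # no discount, 7% tax
        (100.00, 0.10, 96.30),   # 10% discount applied before tax
        (0.00, 0.0, 0.00),       # empty invoice edge case
    ],
)
def test_invoice_total_regression(subtotal, discount, expected):
    # These expectations lock in today's behavior; a failing run is a regression.
    assert calculate_invoice_total(subtotal, discount) == expected


def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        calculate_invoice_total(100.00, discount_rate=1.5)
```

Tests like these are cheap to run on every commit, which is what makes the constant striving for quality practical rather than aspirational.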

Lesson 2: Automate regression testing — where it makes sense

All developers know that the quality of an application is measured by how many defects come back once it is released to QA. If too many bugs are discovered, software engineers are forced to trade precious development time reserved for new software for fixing bugs on past projects. In other words, if teams are spending only a fraction of their time on new work and most of it on defects in previously written code, then they’re not being efficient. The lesson learned here: Automate for more efficiency.

A rule of thumb to follow: It’s shocking how many clients want to do automated testing but in the end decide not to. If a particular process is being done more than three or four times, automate it! Some developers would rather spend extra time supporting production defects that an automated regression test could easily have caught. Developers must understand the value of automating portions of regression testing and the dividends it pays in time savings, reputation, and revenue.
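
As one illustration of the "three or four times" rule, the manual step of running the regression suite can itself be automated. The sketch below, in Python, wraps the suite in a small script that returns a nonzero exit code on failure so any CI system can block the build; the tests/regression path is an assumed layout, not a prescription.

```python
#!/usr/bin/env python3
# run_regression.py - a sketch of automating a regression suite so it runs on
# every build instead of by hand. Paths and options are assumptions.
import subprocess
import sys


def main() -> int:
    # Run the regression suite; '-q' keeps CI logs readable.
    # "tests/regression" is a hypothetical location for the suite.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", "tests/regression"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        # A nonzero exit fails the CI job, so defects are caught before QA.
        print(result.stderr, file=sys.stderr)
        print("Regression suite failed -- blocking the build.", file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
</code>
```

Wire a script like this into whatever build tool the team already uses, and the "decide not to automate" conversation largely disappears.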

Lesson 3: Always strive for continuous improvement

One of the best ways to improve software development is to apply user telemetry. Telemetry data gives developers solid insight into factors such as how culture and language should be factored into the software. Lesson learned here: Use application performance management to improve development.

A rule of thumb to follow: Application performance management uses synthetic user scripts, or “fake users,” to exercise the software’s functionality the same way a real user would. Using this method helps developers uncover problems before the software goes live. If an issue is found, the system alerts the operations team and indicates whether a resource is being throttled or a single point of failure is to blame.
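
Here is a minimal sketch of what such a synthetic "fake user" probe might look like in Python, using only the standard library. The health-check URL, latency budget, and alert hook are all assumptions you would replace with your own endpoints and your APM or paging tool.

```python
# synthetic_check.py - a sketch of a "fake user" probe in the spirit of
# application performance management. The URL, threshold, and alert hook are
# assumptions; a real APM product would manage these for you.
import time
import urllib.error
import urllib.request

ENDPOINT = "https://example.com/health"   # hypothetical health-check URL
LATENCY_BUDGET_SECONDS = 2.0              # assumed acceptable response time


def alert_operations(message: str) -> None:
    # Placeholder: in practice this would page on-call or post to a chat channel.
    print(f"ALERT: {message}")


def probe() -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            elapsed = time.monotonic() - start
            if response.status != 200:
                alert_operations(f"unexpected status {response.status} from {ENDPOINT}")
            elif elapsed > LATENCY_BUDGET_SECONDS:
                alert_operations(f"{ENDPOINT} answered in {elapsed:.2f}s (possible throttling)")
    except urllib.error.URLError as exc:
        alert_operations(f"{ENDPOINT} unreachable: {exc} (possible single point of failure)")


if __name__ == "__main__":
    probe()
```

Run on a schedule, a probe like this surfaces slowdowns and outages from the user's point of view before real users ever see them.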

[ How does continuous improvement help? Read also: How to set up a CI/CD pipeline. ] 

Lesson 4: Don’t ignore the basics

Quantitative code analysis is typically a senior resource’s job. But due to unforeseen issues, this responsibility sometimes gets pushed down to less experienced developers, who must then judge whether the code is optimal. Lesson learned here: You always want a senior resource, one with an eye for detail, to review the software and ensure development keeps moving in the right direction.

A rule of thumb to follow: Detailed senior oversight keeps the basics from being overlooked, such as using configuration management for controlled software reviews, or capturing the necessary features and stories along with their acceptance criteria. This is basic planning, so don’t skip it.
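
One lightweight way to keep those basics in front of every review is a small quality gate that runs static analysis before the senior reviewer ever sees the change. The sketch below assumes the flake8 tool is installed and that source lives under src/; both are assumptions rather than requirements.

```python
#!/usr/bin/env python3
# quality_gate.py - a sketch of a pre-review quality gate. It assumes flake8
# is installed and that the source tree lives under "src/".
import subprocess
import sys


def run_static_analysis(path: str = "src") -> int:
    # flake8 exits nonzero when it finds style or correctness issues,
    # so the gate fails before the code ever reaches the senior reviewer.
    result = subprocess.run(["flake8", path], capture_output=True, text=True)
    if result.returncode != 0:
        print("Static analysis findings to resolve before review:")
        print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_static_analysis())
```

A gate like this doesn't replace the senior reviewer; it frees their attention for design and correctness instead of mechanical issues.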

Put it together for more DevOps success

From a software perspective, having a clear understanding of what constitutes success and what determines failure is critical. Everybody talks about SMART goals, but it’s really about smart acceptance criteria. Every developer needs to remember these items:

  • Criticality will be high or low depending on the app; push to hold the release if it’s only been 25 percent tested; provide thorough documentation.
  • Understand whether the stated development process is effective before development starts, and automate the mundane.
  • User telemetry always delivers vital data that improves the software’s user experience.
  • Senior developers have been given that title because they pay attention to the basics and small details.

And perhaps most importantly, if there’s a problem and you didn’t say anything, then it’s everyone’s problem — so speak up!

[ How can automation free up more staff time for innovation? Get the free Ebook: Managing IT with Automation. ] 

Sean Kenney is Managing Director of Application Services at Sparkhound. Mr. Kenney leads the organization’s custom software engineering, quality assurance, and enterprise data management functions.