How to align test automation with agile and devops

Along with CI/CD’s continuous integration and continuous delivery, you need continuous testing

One key devops best practice is implementing a continuous integration/continuous delivery (CI/CD) pipeline that automates the process of building software, packaging applications, deploying them to target environments, and making the service calls that bring the application online. This automation requires scripting the individual procedures and orchestrating the steps from code check-in to running application. Once the pipeline matures, devops teams use the automation to drive process change, striving for smaller, more frequent deployments that deliver new functionality to users and improve quality.

But there’s a significant assumption here: that automation coupled with frequent deployments will drive quality. For that to hold, test automation has to answer several questions: Have the application and code changes been tested thoroughly? Does the release meet the minimal acceptance criteria for deployment? Will the new release introduce production defects that affect users, are difficult to debug, and are disruptive for the organization to resolve? Has the performance of the application been evaluated sufficiently? Has the application been tested for known security vulnerabilities?

Defined: What is continuous testing?

To be truly running CI/CD, testing must be automated and integrated into the CI/CD pipeline. In other words, the development teams that target and achieve this goal are implementing continuous testing.

Continuous testing requires teams to have already automated a set of tests that can be plugged into the CI/CD pipeline. Although most organizations that develop software test their applications, putting in place a technical practice that evaluates risks, prioritizes quality assurance efforts, and automates the most critical application tests takes people, practices, and technology to establish.
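To make that concrete, the kind of test being plugged into the pipeline is usually nothing exotic: a small, fast, deterministic check that the build tool runs on every commit. Below is a minimal sketch using JUnit 5, one of the open source frameworks discussed later in this article; the discount calculation is inlined purely so the example is self-contained and stands in for real production code.

    // Minimal JUnit 5 unit test: the kind of fast, deterministic check a CI/CD
    // pipeline can run on every commit (for example, via "mvn test").
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountPolicyTest {

        // Hypothetical production logic, inlined here to keep the sketch self-contained
        static double discountedTotal(double subtotal) {
            return subtotal > 100.00 ? subtotal * 0.90 : subtotal;
        }

        @Test
        void appliesTenPercentDiscountOverOneHundredDollars() {
            assertEquals(108.00, discountedTotal(120.00), 0.001);
        }

        @Test
        void leavesSmallOrdersUndiscounted() {
            assertEquals(50.00, discountedTotal(50.00), 0.001);
        }
    }

When checks like this run automatically on every build, a failing assertion stops the pipeline before the change moves further downstream, which is the feedback loop continuous testing is after.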

Why test automation is more important today

Before starting down the path of automating tests and then running them in a continuous testing pipeline, it’s important to identify the target goals and benefits. To do this, consider some of the issues that manual testing creates. For organizations still running tests manually, automation brings a level of consistency to the testing process. As new tests are automated, they can be orchestrated into a regression suite that validates the application against all established criteria. And because the tests are automated, they can be run faster, more frequently, and at lower cost.

In addition to improving productivity and increasing test coverage, automation provides other benefits. Distributed computing architectures such as serverless computing make debugging issues harder and more time-consuming. Automation lets teams develop tests against individual application components and services, or groups of them, making it easier to isolate issues before they become production defects.

There is arguably more business importance placed on the overall quality and performance of applications today. Businesses investing in customer experiences as part of digital transformation programs should be looking for high-quality, fast-performing experiences. Mobile applications need to be tested on both iOS and Android. And organizations using more data and analytics from enterprise systems, customer-facing applications, and third-party data sources should be testing their data integration processes and data quality.

From a technology perspective, there are many good reasons to invest in automation. Security vulnerabilities and data protection are far more important today to organizations that are developing customer-facing applications. In addition, being able to test applications easily and consistently lets IT organizations patch and perform upgrades more frequently.

Continuous testing requires defining priorities around risk

It is no simple task to automate testing. The problem for most organizations is that they have too few people testing, too many changes going through the application development process, and too little time to perform tests and fix defects. So, while continuous testing has obvious importance, its implementation needs to be prioritized strategically by leaders. It then needs a business and technical strategy because most organizations cannot easily afford to test everything with equal priority.

To prioritize which areas of the application to focus testing efforts on, you need an assessment of critical business processes and user experiences. If an application has ten functions but the majority of user activity involves just three of them, testing should start with those three more-critical functions.

Then there is the question of which types of testing are most important. In a typical web or mobile application, development teams may want to implement unit testing at the code level, formalized API testing for any web services, user experience testing for primary user interactions, browser or device testing, code analysis, performance and load testing, and—most critical these days—security testing. That’s a lot of testing, especially for small testing teams.

The types of testing teams focus on should also align with business need and risk. For example, if the APIs are going to be commercialized, testing that layer becomes more critical. If an application sees significant user activity during peak periods, performance testing becomes important. If the application processes a large variety of data or delivers analytics capabilities, regression tests that cover a variety of data inputs are important.
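For teams whose risk assessment points at the API layer, automated checks can start at the level of the service contract. The sketch below uses the JDK’s built-in HttpClient (Java 11 and later) inside a JUnit 5 test; the endpoint URL and the response field it asserts on are placeholders rather than a real service.

    // Sketch of an API-level contract check against a hypothetical endpoint.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;

    class CatalogApiTest {

        private final HttpClient client = HttpClient.newHttpClient();

        @Test
        void catalogEndpointReturnsOkAndExpectedFields() throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/v1/catalog/items")) // placeholder URL
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Assert on the parts of the contract that consumers depend on
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("\"items\""));
        }
    }

Even a thin contract check like this, run on every build, can catch a breaking API change before paying consumers do.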

Key questions to answer about continuous testing

Once a business risk assessment and technology strategy are in place, organizations should look at the technologies and processes needed to develop and execute tests. Enterprise-grade tools from Microsoft, IBM, HP Enterprise, Micro Focus, and others compete with tools from vendors specializing in testing software such as Parasoft, Tricentis, and SmartBear. Then there is a large number of testing-type-specific and platform-specific tools that can be used for unit testing, functional testing, penetration testing, managing acceptance criteria, performance testing, and API testing. Many are open source testing tools and frameworks such as Selenium, JMeter, JUnit, Maven, and SoapUI that have long histories of being used for both small and large testing needs.
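To give a flavor of the open source options in practice, here is a minimal browser-level check written against Selenium’s Java WebDriver bindings. The URL, element locator, and expected title are placeholders, and a production suite would add explicit waits and assertions from a proper test framework.

    // Minimal Selenium WebDriver check: does the login page render at all?
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginPageSmokeCheck {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://app.example.com/login"); // placeholder URL

                // findElement throws NoSuchElementException if the form isn't rendered
                driver.findElement(By.id("username"));

                if (!driver.getTitle().contains("Sign in")) {
                    throw new AssertionError("Unexpected page title: " + driver.getTitle());
                }
                System.out.println("Login page rendered: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }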

There are several questions that should be considered when reviewing testing tools:

  • How well does the tool enable implementing the testing strategy around risks and focus areas?
  • Will the tool be used by developers, testers, or system engineers, or by a combination of all three?
  • When in the agile development process will tests be created and executed?
  • How easily does the tool integrate with CI/CD tools?
  • How can the tool group tests into different categories to be used at different stages of the CI/CD process?

The first question is somewhat obvious: If you’re focusing testing efforts on functional and browser-level testing, then you’ll likely want solutions that focus on this capability. The answer is less obvious when organizations support multiple applications with different testing needs. Deciding on a reasonable number of tools for the scale of application development and for the different application development needs is not trivial when there are few quality assurance engineers and growing expectations on testing capabilities.

As part of considering different tools, it’s important to discuss who will be operating them and how they will be used. For example, it’s most common for developers to be responsible for unit and API-level testing because these are usually integrated into the coding environment. Most functionality testing, browser testing, and device testing are left to quality assurance testers, while performance and penetration testing often require collaboration among developers, testers, and engineers.

What’s critical is to align on which types of testing are being done in each framework and whether test scripts from one framework can be reused in others. Functionality testing that replicates what unit tests already cover isn’t adding value. On the other hand, reusing functionality tests in performance testing can increase the scope of what’s tested.

The next consideration is where in the agile development process different types of tests will be created and executed. Most teams implement unit and API testing as part of developing the application. But when to implement functionality, load, and penetration testing is less obvious. It depends on how sprints and releases are managed, the frequency of releases, the overall risk of each deployment, and the level of automation achieved.

Factors in going from automated testing to continuous testing

The next set of considerations is whether, where, and how automated tests get integrated into the CI/CD pipeline. Devops teams should look for documented integrations, such as integrating Jenkins with Selenium or running Postman API tests from Travis. These integrations should make it easy to invoke the tests through service calls, but also to parse the responses and handle error conditions.
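As a rough sketch of what “make a service call, parse the response, and handle error conditions” can look like, the following Java program triggers a test job through Jenkins’s remote-access API and then polls the last build for its result. Authentication, CSRF crumbs, and queue tracking are deliberately omitted, the server URL and job name are placeholders, and a real integration would parse the JSON with a proper library instead of string matching.

    // Trigger a Jenkins test job and wait for its result via the remote-access API.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TriggerRegressionJob {

        private static final String JENKINS = "https://jenkins.example.com"; // placeholder
        private static final String JOB = "nightly-regression";              // placeholder

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // 1. Queue the job (Jenkins responds with HTTP 201 when the build is accepted)
            HttpRequest trigger = HttpRequest.newBuilder()
                    .uri(URI.create(JENKINS + "/job/" + JOB + "/build"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> queued =
                    client.send(trigger, HttpResponse.BodyHandlers.discarding());
            if (queued.statusCode() != 201) {
                throw new IllegalStateException("Could not queue job: HTTP " + queued.statusCode());
            }

            // 2. Poll the last build until it reports a result
            HttpRequest status = HttpRequest.newBuilder()
                    .uri(URI.create(JENKINS + "/job/" + JOB + "/lastBuild/api/json"))
                    .GET()
                    .build();
            for (int attempt = 0; attempt < 60; attempt++) {
                String body = client.send(status, HttpResponse.BodyHandlers.ofString()).body();
                if (body.contains("\"result\":\"SUCCESS\"")) {
                    System.out.println("Tests passed");
                    return;
                }
                if (body.contains("\"result\":\"FAILURE\"") || body.contains("\"result\":\"UNSTABLE\"")) {
                    throw new IllegalStateException("Tests failed for job " + JOB);
                }
                Thread.sleep(10_000); // still building; wait and poll again
            }
            throw new IllegalStateException("Timed out waiting for job " + JOB);
        }
    }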

With integrations in place, the team should consider which types of tests are optimal for continuous integration. Because devops teams often run CI/CD pipelines frequently, automated tests that require long run times are not optimal for continuous testing and should instead be run outside the CI/CD pipeline. On the other hand, running unit tests with every automated deployment to development, testing, or staging servers provides quick feedback to developers when new code breaks a regression test.

Mature teams organize their testing into different channels. Smoke tests need to run quickly and provide a fast, comprehensive view of whether the core functionality of the application is working. Unit and API tests can be aligned to software components and executed by rule-based triggers when code changes in the dependent components. End-to-end testing, which often includes more comprehensive functionality tests, performance testing, and penetration testing, can run on a defined schedule and still provide feedback to the CI/CD platform.
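One lightweight way to carve tests into those channels is tagging. The sketch below uses JUnit 5’s @Tag annotation so a pipeline can run only the fast smoke checks on every commit (for example, by filtering on the tag with Maven Surefire’s -Dgroups=smoke option) while the end-to-end scenarios run on a schedule; the test bodies are placeholders.

    // Grouping tests into channels with JUnit 5 tags so the pipeline can decide
    // which channel to run at which stage.
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CheckoutFlowTests {

        @Tag("smoke")
        @Test
        void homePageResponds() {
            // fast, shallow check suitable for every pipeline run
        }

        @Tag("end-to-end")
        @Test
        void completesPurchaseWithSavedCreditCard() {
            // slower, comprehensive scenario run on a defined schedule
        }
    }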

As with all technology practices, continuous testing takes a strategy to get started, new practices to consider, and an ongoing commitment to mature. Devops teams looking to implement CI/CD should factor in continuous testing as a third track of implementation and mature it along with continuous integration and delivery.

Copyright © 2018 IDG Communications, Inc.