Test Impact Analysis: A Hands-On Introduction for Faster Releases

How to use test impact analysis to accelerate testing cycles while reducing risk.

Key Takeaways

  • Test impact analysis is a way to accelerate the development cycle by only running the tests relevant to the changed source code.

  • If you don't have access to a tool or library specifically designed for test impact analysis, you can manually track changes to your codebase and use this information to determine which tests are likely to be affected.

  • Manual test impact analysis quickly gets unwieldy and even more challenging as devs add new features and refactor code over time.

  • With Launchable, you don’t need to guess which tests to run or constantly update your test impact analysis suite. Get hands-on with the Python example of how Launchable can work with the Pytest framework.

You're not alone if your organization has gone full speed ahead on your DevOps strategy in the past five years. Three-quarters of organizations have adopted a DevOps approach - a figure that’s more than doubled in the past five years.

Despite the explosive growth in DevOps, only 10% of Harvard Business Review Survey respondents describe their business as “very successful at rapid software development and deployment.”

DevOps transformation demands faster builds and deployments, but increasing build frequency means more test runs pile up, creating development bottlenecks.

For teams in a position where slow test cycles are blocking rapid development, test impact analysis is a way to accelerate the development cycle by only running the tests relevant to the changed source code.

What Is Test Impact Analysis?

Test impact analysis is a way to accelerate software testing by running only the tests that matter for a set of code changes. Performing test impact analysis allows teams to speed up their development and deployment cycle by reducing the overhead of shipping a change.

Traditionally, impact analysis in software testing relies on static source code analysis to build dependency graphs between code and tests.

If you don't have access to a tool or library specifically designed for test impact analysis, you can manually track changes to your codebase and use this information to determine which tests are likely to be affected. For example, you could maintain a list of the tests that are associated with each module or component in your system, and update this list as you make changes.

How To Perform Test Impact Analysis

To manually perform test impact analysis, you run each test and build a map of which code each test exercises. Once you create the map, you can write a small program that runs every time a developer pushes code. The program reviews modified files and looks up which tests you need to run for the change. 

You need to update the map regularly so the dependency graph stays accurate as the code changes over time.
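
For the step that reviews modified files, a small script like the sketch below could run on each push. It assumes the script executes inside a git checkout and simply diffs the latest commit; the command and example output are illustrative. The next section shows how to look the changed pieces up in a test map.

# A minimal sketch of the "review modified files" step: list the files
# touched by the most recent commit in a git checkout.
import subprocess

changed_files = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

print(changed_files)  # e.g. ["app/login.py"] (illustrative output)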

Hands-On Python Code Example: Manual Test Impact Analysis

The code snippet below maps test names to the components with which they are associated. In this example, we have three modules:

  • Login

  • Account Creation

  • Password Reset

For each modified component, we add its associated tests to a list that we can pass to the test execution framework.

# Define a dictionary that maps test names to the modules or components they are testing
tests_by_component = {
    "test_login": ["login_module"],
    "test_account_creation": ["account_creation_module"],
    "test_password_reset": ["password_reset_module"]
}

# Define a list of the components that have been modified
# This should be dynamically generated based on the code changes.
modified_components = ["login_module"]

# Determine which tests are likely to be affected by the changes
affected_tests = []
for test, components in tests_by_component.items():
    for component in components:
        if component in modified_components:
            affected_tests.append(test)

# Now, we can pass the affected tests to our test harness.
print(affected_tests)  # Output: ["test_login"]
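
If your tests are pytest test functions named like the keys in the map above, one simple way to hand the selected list to the framework is pytest's -k keyword filter. The snippet below is a minimal sketch of that idea; it assumes pytest is installed and reuses the affected_tests result from above.

# Run only the affected tests by passing a keyword expression to pytest.
# affected_tests carries over from the snippet above, e.g. ["test_login"].
import pytest

affected_tests = ["test_login"]

if affected_tests:
    # "test_login or test_account_creation" would select tests matching either name.
    exit_code = pytest.main(["-k", " or ".join(affected_tests)])
else:
    print("No affected tests to run.")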

Benefits and Challenges of Test Impact Analysis

When done efficiently, test impact analysis comes with a host of benefits, including:

  • Reducing the amount of time spent on re-testing

  • Improving the overall efficiency of the testing process

  • Better developer experience

Manual test impact analysis can be challenging to get right. Whether your project is a small microservice or a giant monolith, the amount of test data you need to work with can get large quickly. Manual analysis quickly becomes unwieldy and only grows more challenging as devs add new features and refactor code over time.

For every line of code added, you need to determine the potential impacts and which tests are relevant to that line. Many dev teams report that selecting the right tests is too much work to perform at scale.

Software Testing at a Growing Org: The Common State of Testing

Let’s walk through a very familiar scenario - a software development team at a midsized tech startup has enjoyed explosive growth over the past three years. They’ve reached their Series C in venture capital funding and have used the cash infusion to hire developers to build new features quickly. The company uses an agile, DevOps-centric model and prides itself on a robust set of tests.

Where do we go from here?

The rapid company expansion comes with growing pains for the dev team. The influx of new features means an influx of new tests and breaking changes, which in turn causes test flake and long runtimes.

Nobody at the startup trusts that failures are legitimate anymore, so developers repeatedly hit the “rerun” button until tests pass. They merge changes anyway when they can’t get the tests to succeed and assume that the problem is with the test, not their code.

Devs disable tests that take too long or don’t seem relevant to the code - they have a job to do and have started to see software testing as a barrier to completing their tasks.

The developers are in a scenario where they no longer trust the tests and arbitrarily disable or ignore them - essentially engaging in their own risky version of manual test selection.

The engineering team is starting to worry that this state of affairs is unsustainable. 

  • What happens if someone merges broken code because they ignored the test that would have caught the problem?

  • How much money is the team spending on cloud resources to continuously rerun flaky tests that they ultimately ignore?

  • How much time are they wasting waiting for tests to run?

The startup’s head of engineering decides it’s time to get ahead of the DevOps technical debt before it causes a costly incident.

Instead of ad-hoc test impact analysis driven by developers trying to speed up their workflow, they’ll figure out how to pick the tests that matter for code changes.

Advancing Test Impact Analysis with Predictive Test Selection

Predictive Test Selection is a branch of test impact analysis that uses data to predict which tests your CI system needs to run based on historic test results and code changes. Launchable is democratizing the Predictive Test Selection approach so that it is available to teams of all sizes at the push of a button. 

Launchable’s Predictive Test Selection tackles test impact analysis by harnessing the power of machine learning to streamline software development. Predictive Test Selection uses data-driven intelligence to determine which tests best suit each type of change. You can reduce the number of test runs and accelerate time to delivery with fewer wasted resources.

In the absence of this practice, teams have to manually create subsets of "smoke tests" or parallelize their tests.

In the previous scenario, the startup’s dev team could benefit from Predictive Test Selection. Their developers can focus on delivering the most important features, speed up their workflow, and trust the test suite again.

Hands-On Python Code Sample: Predictive Test Selection With Launchable and Pytest

With Launchable, you don’t need to guess which tests to run or constantly update your test impact analysis suite. Here’s a Python example of how Launchable can work with the Pytest framework.

Pytest Setup and Execution with Launchable

  1. Install pytest-launchable into your environment with pip3 install pytest-launchable.

  2. Generate a Launchable config file by running launchable-config --create.

  3. Generate a Launchable API key from https://app.launchableinc.com/.

    1. Set it as the LAUNCHABLE_TOKEN environment variable on the machine that will be running the tests.

  4. From the directory containing your Launchable config file, run pytest --launchable <your-pytest-project>, as sketched below.
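
For reference, here is a minimal Python sketch of step 4, assuming pytest-launchable is already installed and launchable-config --create has generated a config file in the working directory; the tests/ path and the placeholder token value are illustrative, not prescribed by Launchable.

# Rough equivalent of running `pytest --launchable <your-pytest-project>` from the shell,
# assuming the pytest-launchable plugin is installed so pytest accepts --launchable.
import os
import pytest

# The API key from app.launchableinc.com must be visible to the test run.
# In CI, set this as a secret rather than hard-coding a value.
os.environ.setdefault("LAUNCHABLE_TOKEN", "<your-api-key>")

# "tests/" stands in for <your-pytest-project>; adjust to your project layout.
exit_code = pytest.main(["--launchable", "tests/"])
raise SystemExit(exit_code)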

Your pytest results will be reported to Launchable. Launchable then trains a machine learning model on your test results over time. The model learns which tests are most likely to be useful for a given change, so it can select them within the shortest amount of testing time.

Final Thoughts on Test Impact Analysis, Predictive Test Selection, and Making Your Pipeline Data-Driven

With Launchable’s ML-driven Predictive Test Selection, teams typically see a 60-80% reduction in test times without impacting quality.

The primary reasons that organizations choose Launchable’s Predictive Test Selection feature are to:

  • Save developer time

  • Reduce infrastructure spending

  • Ship code faster

See how engineers across industries are succeeding with Launchable in these case studies.

Test impact analysis is an essential tool for improving the efficiency of the testing process. However, manual or static analysis can be cumbersome and might fail to deliver value. Proper implementation of test impact analysis with Predictive Test Selection can save time and improve testing quality by making your pipeline more data-driven.

Launchable integrates seamlessly with your CI, regardless of commit frequency or the number of Git branches you have. It supports all apps and languages, and teams report up to a 90% reduction in test times without any impact on quality.

Seeking Your Expert Feedback on Our AI-Driven Solution

Is quality a focus? Working with nightly, integration, or UI tests?
Our AI can help.