Predictive Test Selection

Keep your developers happy with intelligent testing that reduces idle time by 70%

Used by elite engineering teams

  • UKG
  • BMW
  • GoCardless
  • Infosys
  • Optim
  • LINE
  • Jenkins
  • Vitess
  • Delphix

Reduce infrastructure costs

Automating your pipelines often results in bulky test suites that are expensive to maintain. As you add new features, every new test compounds your infrastructure costs.

Get feedback earlier

Your most valuable assets are your people, and waiting for feedback is one of the most expensive parts of maintaining a team. Getting feedback earlier is critical to running a modern engineering organization.

Keep developers happy

More is expected from developers than ever before. Keep them happy and improve their quality of life by giving them the tools used by elite engineering teams.

What is Predictive Test Selection?

Predictive Test Selection uses machine learning to choose the highest-value tests to run for a specific change. This rapidly evolving technique is used by companies like Facebook to deliver with high confidence, without sacrificing quality.

[Video: What is Predictive Test Selection?]

AI-powered test automation

Today, many software projects have long-running test suites that run every test in no particular order. When you are working on a small change in a large project, this is wasteful: you know that only a few tests are relevant, yet there's no easy way to know exactly which tests to run.

[Image: AI-powered automation from Launchable]

Predictive Test Selection fits into your existing development pipeline

Unlock the ability to run a much smaller set of tests at various points in your software development lifecycle. With Launchable, tell your test runner exactly which tests to run based on the changes being tested:

[Diagram: Predictive Test Selection in the development pipeline]

How a machine learning model is trained

Every time tests run, your changes and test results are passed to Launchable to continuously train a model.

Model training looks at the changes associated with each build and at which tests failed when they ran. It builds associations between changed files and the tests that tend to fail, and can be thought of as an advanced frequency-counting algorithm that tracks associations between failures and source code.
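
To make the idea concrete, here is a minimal Python sketch of that frequency-counting approach. Launchable's actual model is proprietary, so the class name, data shapes, and scoring rule below are illustrative assumptions, not the production algorithm:

```python
# Minimal sketch of frequency counting between changed files and failing
# tests. Everything here is a hypothetical illustration.
from collections import defaultdict

class AssociationModel:
    def __init__(self):
        # (changed_file, test) -> number of runs where the test failed
        self.failure_counts = defaultdict(int)
        # changed_file -> number of runs that touched the file
        self.change_counts = defaultdict(int)

    def train(self, changed_files, failed_tests):
        """Update counts from one build: which files changed, which tests failed."""
        for f in changed_files:
            self.change_counts[f] += 1
            for t in failed_tests:
                self.failure_counts[(f, t)] += 1

    def failure_probability(self, changed_files, test):
        """Estimate how likely `test` is to fail given the current change."""
        scores = [
            self.failure_counts[(f, test)] / self.change_counts[f]
            for f in changed_files
            if self.change_counts[f] > 0
        ]
        return max(scores, default=0.0)
```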

Calculating test prioritization

One way to think about how Launchable prioritizes your tests is that with each successful test, Launchable's confidence grows that the entire run will be successful. The ideal algorithm optimizes for yielding the highest confidence as early as possible.

So confidence and individual test run time are the two primary determining factors for test prioritization.

Confidence is a function of the probability of failure for each individual test as tests run. Tests with a high probability of failure yield a higher confidence gain when successful. When tests with a low probability of failure pass, they yield smaller confidence gains.
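
As a rough illustration, one simple way to model this is to treat confidence as the share of total expected failures already checked, so passing a risky test buys more confidence than passing a safe one. The formula below is an assumed toy model, not Launchable's actual calculation:

```python
# Toy confidence model: each passing test "spends" its failure probability,
# so confidence is the share of total expected failures already checked.
# This is an illustrative assumption, not Launchable's actual formula.
def confidence_after(passed_probs, all_probs):
    """Confidence gained once the tests with `passed_probs` have passed."""
    total_risk = sum(all_probs)
    return 1.0 if total_risk == 0 else sum(passed_probs) / total_risk

probs = [0.30, 0.05, 0.01, 0.01]            # per-test failure probabilities
print(confidence_after(probs[:1], probs))   # risky test passed first: ~0.81
print(confidence_after(probs[2:], probs))   # two safe tests passed: ~0.05
```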

[Diagram: machine learning reordering tests]

Since the goal is to deliver as much confidence as quickly as possible, it makes sense for Launchable to deprioritize a long-running test if the confidence gain from that single test is not high enough to offset the gain of running shorter tests during the same period of time. This is exactly what the Launchable algorithm does.

For example, if test T8 has a high probability of failure and takes 3 minutes to run, and test T4 has a slightly lower probability of failure but only takes 300 milliseconds, Launchable will prioritize the shorter test (T4) before the longer test (T8) because it yields a higher confidence gain in a shorter period of time.
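
A sketch of that trade-off: scoring each test by failure probability per second of runtime reproduces the T4-before-T8 ordering from the example. The probabilities and the scoring rule here are illustrative assumptions, not Launchable's production algorithm:

```python
# Greedy prioritization by confidence gain per unit of runtime (assumed
# scoring rule for illustration).
def prioritize(tests):
    """Order tests by failure probability per second, highest first."""
    return sorted(tests, key=lambda t: t["p_fail"] / t["duration_s"], reverse=True)

tests = [
    {"name": "T8", "p_fail": 0.40, "duration_s": 180.0},  # 3 minutes
    {"name": "T4", "p_fail": 0.35, "duration_s": 0.3},    # 300 milliseconds
]
print([t["name"] for t in prioritize(tests)])  # ['T4', 'T8']
```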

Subsetting tests

If your tests take a very long time to run, you should consider running a subset of your tests earlier in your development cycle. We call this use case "shift-left." For example, if a test suite that runs after every pull request is merged takes 5 hours to run, a 30-minute version of the same suite could be run on every git push while the pull request is still open.

While you could accomplish this by manually selecting tests to run, this has the disadvantage that the tests most relevant to the changes present in a build may not be run until much later in the development cycle.

Launchable provides the ability to create a subset based on the changes present in the build every time you run tests. We call this a dynamic subset because the subset adapts to your changes in real-time.

Using AI to create a dynamic subset of tests

A dynamic subset prioritizes all of your tests and then returns the first part of the total sequence for your test runner to run. The cutoff point can be based on either the maximum length of time you specify (e.g. 30 minutes in the above example) or the minimum confidence level you wish to achieve.

Launchable also provides the ability to run the rest of the tests that were not selected in your dynamic subset in a separate run.
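
Putting the pieces together, a toy version of dynamic subsetting takes the prioritized list and cuts it off at either a time budget or a confidence target, returning the remainder for a separate run. This sketch reuses the toy confidence model above and is an illustrative assumption, not Launchable's implementation:

```python
# Toy dynamic subsetting: cut a prioritized test list at a time budget or a
# confidence target; the rest can run in a separate job later.
def dynamic_subset(prioritized, time_budget_s=None, confidence_target=None):
    """Split `prioritized` into (subset to run now, remainder to run later)."""
    total_risk = sum(t["p_fail"] for t in prioritized) or 1.0
    subset, elapsed, confidence = [], 0.0, 0.0
    for t in prioritized:
        over_time = time_budget_s is not None and elapsed + t["duration_s"] > time_budget_s
        confident = confidence_target is not None and confidence >= confidence_target
        if over_time or confident:
            break
        subset.append(t)
        elapsed += t["duration_s"]
        confidence += t["p_fail"] / total_risk
    return subset, prioritized[len(subset):]

tests = [
    {"name": "T4", "p_fail": 0.35, "duration_s": 0.3},
    {"name": "T8", "p_fail": 0.40, "duration_s": 180.0},
    {"name": "T2", "p_fail": 0.05, "duration_s": 60.0},
]
subset, rest = dynamic_subset(tests, time_budget_s=120.0)  # 2-minute budget
print([t["name"] for t in subset], [t["name"] for t in rest])  # ['T4'] ['T8', 'T2']
```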

You may be an amazing developer, but every test is making you slower

Test code faster, ship code faster, and shoot for the moon with Launchable

Start a 1-month trial