September 2022 Launchable Product Updates
Observe Predictive Test Selection Behavior, Track Time Savings, and Spot Integration Issues
Key Takeaways
You can now add --observation to your launchable subset command to observe how Predictive Test Selection would behave before you roll it out.
You can now view time savings values for every test session that used Predictive Test Selection (subsets).
The Subset Impact component now appears everywhere the test session row is shown, making it easier to spot integration issues.
Similar to the month's equinox – where daylight and night find seasonal balance – Launchable is helping development teams to find balance and offset testing bottlenecks with another batch of product updates and feature releases.
Launchable knows that having confidence in running your Predictive Test Selection subsets is critical. That’s why we are giving you the ability to test confidently with observation mode, where you can see what would happen if you ran a subset of tests in real life. Also new this month is a time savings display that means you never have to ask yourself, “Now, where did I find that information again?”
Read on to find out more.
Observe Predictive Test Selection behavior before you roll out Launchable
Sometimes teams want to observe the potential impact and behavior of running Predictive Test Selection subsets in a real environment before they enable subsetting for all test sessions. In other words, they want to measure subsets' real-world efficacy against the Confidence curve shown on the "Simulate" page.
For example, a workspace's Confidence curve might state that Launchable can find 90% of failing runs in only 40% of the time. Some teams might want to verify that statistic in real life.
Well, we now have a solution to that problem. Behold observation mode!
To enable observation mode, just add --observation to the launchable subset command you added to your pipeline after following Requesting and running a subset of tests:
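For example, in a pytest-based pipeline the request might look like the sketch below. The build name, confidence target, and test runner here are illustrative; keep whatever options your existing subset command already uses and simply add the flag:

    launchable subset \
      --observation \
      --build your-build-name \
      --confidence 90% \
      pytest tests/ > subset.txt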
When observation mode is enabled for a test session, the output of each launchable subset command made against that test session will always include all tests, but the recorded results will be presented separately so you can compare running the subset against running the full suite.
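Concretely, you run the returned list and record the results exactly as you do today. A minimal sketch, again assuming pytest with JUnit XML reports and an illustrative build name:

    # In observation mode the subset file contains the full suite, so this runs everything
    pytest --junit-xml=results.xml $(cat subset.txt)

    # Record the results so Launchable can analyze what the subset alone would have done
    launchable record tests --build your-build-name pytest results.xml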
Because you marked the session as an observation session, Launchable can analyze what would have happened if you had actually run a subset of tests, such as whether the subset would have caught a failing session and how much time you could have saved by running only the subset of tests.
🔍 Dive into observation mode.
Or you can reach out to your customer success manager for more info.
View time savings for Predictive Test Selection test sessions
As a team of developers, we know how incredibly important your time is. When it takes less time to find the information you need, we're saving you more than just test runtime.
You can now view time savings values for every test session that used Predictive Test Selection (subsets). The value is shown in the Subset Impact section of the test session summary row, which appears in various places across the Launchable webapp.
⏳ Explore the time savings feature.
Spot integration issues on the Test Sessions page
When we released the Predictive Test Selection - Analyze Test Session page, we introduced a new component called Subset Impact.
This component gives you a high-level view of how Predictive Test Selection impacted that test session while also highlighting any integration issues.
At first, this component was only available on the Analyze page. Now, we've updated the webapp to show this component everywhere that the test session row is shown. This makes it easy to spot integration issues while scanning the Test Sessions list.