Why are QA teams struggling with velocity and cost issues?

Your delivery pipeline comprises test suites that a commit must pass before it reaches production. Since tests form the bulk of your delivery process, any impact here radically affects your delivery times. As the application matures, test suites grow larger and more numerous, compounding the problem: every test suite run delays your release.

The gauntlet of tests that a commit must pass through

Unhealthy tests cause huge friction for your QA team

Entropy sets in as test suites grow over time. Developers lose trust in these unhealthy tests, shifting the burden of maintaining and running them to the QA team. Nobody knows why a specific test is still being run, because its original authors have left the company. Data-driven decisions are absent because test data is not being collected.

Long-running tests burn DX and your testing budget

Test execution times multiply as your team tests across multiple browsers, devices, and platforms. Your team re-runs these tests after every failure “just to be sure.” Every test run burns through your budget. Long execution times delay feedback to developers, creating friction between your team and theirs.

All this drudge work takes away from your team's ability to work on things that add business value, such as writing more tests and automating more test suites.

The Launchable AI-based Test Intelligence Platform

Launchable is an AI-based Test Intelligence Platform that brings advances in AI to your development teams. The platform analyzes code and test metadata to enable developers and team leads to:

  • Improve the dev-test loop by bringing test feedback into Slack and providing a test sessions dashboard that surfaces insights on test failures to developers

  • Get meaningful insights on test suites to eliminate tests causing friction

  • Radically reduce test times while maintaining quality

Code and test metadata are sent from your CI service to the Launchable SaaS. Your code itself is never sent to Launchable, and your tests continue to execute wherever they already run (on-premises or in the cloud).
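As a concrete, purely illustrative sketch of what “metadata, not code” means, the snippet below pulls only test names, durations, and statuses out of a JUnit-style XML report. The report contents and field values are invented for the example, and no actual Launchable API is shown.

```python
# Sketch: extracting only test *metadata* (names, durations, statuses)
# from a JUnit-style XML report -- the source code itself never leaves CI.
# The report below is a made-up example.
import xml.etree.ElementTree as ET

JUNIT_XML = """
<testsuite name="checkout" tests="2" time="3.5">
  <testcase classname="cart.CartTest" name="test_add_item" time="1.2"/>
  <testcase classname="cart.CartTest" name="test_remove_item" time="2.3">
    <failure message="expected 0 items"/>
  </testcase>
</testsuite>
"""

def extract_metadata(xml_text):
    """Return per-test metadata records; no application code is included."""
    root = ET.fromstring(xml_text)
    records = []
    for case in root.iter("testcase"):
        records.append({
            "test": f'{case.get("classname")}.{case.get("name")}',
            "duration": float(case.get("time", 0)),
            "status": "fail" if case.find("failure") is not None else "pass",
        })
    return records

metadata = extract_metadata(JUNIT_XML)
```

A CI job would ship records like these to the analysis service, while the repository contents stay where they are.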


The three benefits of the Launchable SaaS approach for engineering teams

  • Use AI & ML without having to manage and maintain massive AI infrastructure

  • Launchable constantly tunes the ML models, offloading that work from your engineering team

  • Results in weeks while the engineering teams focus on other efforts

Our secret sauce: a machine learning technique called Predictive Test Selection

Facebook pioneered predictive test selection, a pragmatic risk-based approach to testing. This new approach to Test Impact Analysis uses machine learning to dynamically select which tests to run based on the characteristics of a code change. Historical test results and information about the tested changes are used to train a machine-learning model to achieve this. The model learns the relationships between the characteristics of code changes and which tests passed and failed, enabling a high-quality prediction of which tests to run.
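To make the idea concrete, here is a deliberately tiny, hypothetical model in the same spirit (not Launchable's actual algorithm): it learns, from historical runs, how often each test failed when a given file was changed, and scores tests for a new change using that history. All file names, test names, and data are invented.

```python
# Toy sketch of predictive test selection: learn the relationship between
# which files changed and which tests failed, then score tests for a new
# change. A real system would use richer features and a trained ML model.
from collections import defaultdict

def train(history):
    """history: iterable of (changed_files, test_name, failed) tuples."""
    fails = defaultdict(int)   # (file, test) -> failure count
    runs = defaultdict(int)    # (file, test) -> total observations
    for changed_files, test, failed in history:
        for f in changed_files:
            runs[(f, test)] += 1
            fails[(f, test)] += int(failed)
    return fails, runs

def score(model, changed_files, test):
    """Estimated failure likelihood for `test` given the current change."""
    fails, runs = model
    rates = [fails[(f, test)] / runs[(f, test)]
             for f in changed_files if runs[(f, test)]]
    return max(rates, default=0.0)

# Invented history: cart.py changes have broken test_checkout before.
history = [
    ({"cart.py"}, "test_checkout", True),
    ({"cart.py"}, "test_checkout", True),
    ({"cart.py"}, "test_login", False),
    ({"auth.py"}, "test_login", True),
]
model = train(history)
```

Given this history, a new change touching `cart.py` makes `test_checkout` look risky and `test_login` look safe to deprioritize, which is the core intuition behind the approach.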

Launchable's Predictive Test Selection product has made the approach turn-key and accessible to every team.

The ML model for this test suite indicates that by cutting execution time by 50%, we can find 99% of failing builds.
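To illustrate how such a prediction can be turned into a shorter run, here is a hypothetical greedy subsetting sketch: given invented per-test risk scores and durations, it keeps the riskiest tests per second of runtime until a 50% time budget (echoing the figure above) is spent. This is an illustration of the concept, not Launchable's actual selection logic.

```python
# Sketch: turning per-test risk scores into a time-budgeted subset.
# Scores and durations below are invented numbers for illustration.

def subset(tests, budget_fraction=0.5):
    """tests: list of (name, risk_score, duration_seconds).
    Greedily keep the riskiest tests per second until the budget is spent."""
    total = sum(d for _, _, d in tests)
    budget = total * budget_fraction
    ranked = sorted(tests, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0.0
    for name, risk, duration in ranked:
        if used + duration <= budget:
            chosen.append(name)
            used += duration
    return chosen

tests = [
    ("test_checkout", 0.90, 30.0),
    ("test_login",    0.70, 10.0),
    ("test_search",   0.10, 40.0),
    ("test_admin",    0.05, 20.0),
]
picked = subset(tests)  # total time 100s, so the budget is 50s
```

With these made-up numbers, the 50% budget keeps the two high-risk tests and skips the two low-risk ones, halving execution time while retaining the tests most likely to fail.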

Launchable helps QA teams

Launchable saves us 25% of resources on the mainline build while going faster.
Mo Johnson

Insights to find Unhealthy tests

Most developers know there are unhealthy (e.g., flaky) tests in their test suites that cause friction in their development cycle, but they cannot quantify the impact of these tests for their management. Additionally, the picture is not shared across developers: one dev might perceive the impact differently than another. Consequently, insufficient resources are allocated to fixing these issues, increasing development friction. Our ML algorithm finds unhealthy tests, such as flaky tests, and developers can use this information to fix the problems that hurt their developer experience.


Read more on our docs page

Drastically reduce test execution and feedback times with Predictive Test Selection

Predictive Test Selection adds another dimension (test execution time) to testing efforts, one that can be used to reduce the cost of testing without sacrificing either quality or speed of delivery.

It can be used in a variety of use cases and applies across many verticals.

Post-merge UI tests shifted left, bringing faster feedback to devs
Launchable provides fast and smart subsets of tests, which are run by our developers on every commit, as well as during nightly and release regressions, benefitting us by giving targeted testing, and saving money on resources.
Roma M. Engineering Manager, Delphix

Auto-triaging to help find issues that need to be investigated

With personal notifications via Slack and rich test results and reports, Launchable provides a deeper view of test results, helping developers triage and fix failures quickly. Personal notifications mean that a developer is notified only about the test results that affect them. Paired with Predictive Test Selection, the dev-test loop becomes lightning-fast.


Works with your existing tools, languages, and processes

Results in weeks—no months-long DevOps transformations

Launchable's ML-based approach means it can work with existing languages and tools. Developers start seeing their dev cycles go faster without changing their processes.

Launchable works with your existing tools