In the business world, data is increasingly used to drive decisions and make processes more efficient, but software development hasn’t kept pace with this evolution. Developers use automated systems that produce data, yet they rarely feed that data back into the process to improve efficiency or inform engineering decisions, often relying on gut instinct instead.
Co-Founders and Co-CEOs of Launchable, Harpreet Singh and Kohsuke Kawaguchi (KK), sat down with DevOps Chat host Alan Shimel to share how development process inefficiencies and untapped data inspired Launchable.
“I was talking to somebody [...] who had this challenge where their delivery velocity was slow. [T]heir aim was to put a few engineers in and look at it and how to improve it. And having the DevOps background [...] I felt like there has to be a better way to do this than just, ‘Let’s allocate a few engineers and see what comes out of it,’” explained Singh.
Long test execution times constrain the pace of code changes, especially for large test suites. Launchable’s mission is to remove common, everyday developer-cycle barriers, helping teams ship faster by reducing the time developers spend waiting for tests to run.
“I was working on this Jenkins project, and it’s a sizable project with lots of test cases, which is great, but what that means is that every time when anybody wants to make, like, a one-line change, the first thing that happens is the CI system comes in and they run the whole one-hour test cycle before the code review happens, and then somebody makes some observations or a suggestion that, ‘Oh, you should change this part to something else’, and then you follow that with another whole hour of testing. [T]wo hours is a long time to wait if you get some changes in[.]” described Kohsuke.
Launchable helps developers release quality code faster, with confidence. We’re kicking DevOps into high gear, using machine learning to create dynamic subsets of the most important tests for each individual change so developers get feedback from the right tests sooner.
“[W]hat if we could figure out the right subset for that test to run? They’re most likely [to] detect problems, so they’ll not only finish quickly, but they also run cheap, and then you still get a pretty sizable, meaningful feedback. [Y]ou’re not going to get 100% coverage, but you might get 95%, and that’s plenty good, and that cuts the execution down in, let’s say, half,” said Kohsuke.
In the Launchable platform, Predictive Test Selection uses a machine learning algorithm that predicts the likelihood of failure for each test. Every time tests run, the results are passed to Launchable to continuously train the model, which makes predictions based on past runs and the source code changes under test.
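The idea can be sketched in a few lines: estimate each test’s likelihood of failing for a given change, rank the tests, and run only the top of the ranking. The sketch below is illustrative only, with hand-tuned weights standing in for Launchable’s trained model; the names (`TestRecord`, `failure_likelihood`, `select_subset`) and the two signals used (historical failure rate, overlap with changed files) are assumptions for the example, not the actual algorithm.

```python
# A minimal, hypothetical sketch of predictive test selection:
# rank tests by an estimated failure likelihood, run only the top subset.
from dataclasses import dataclass


@dataclass
class TestRecord:
    name: str
    recent_results: list   # past outcomes, True = failed, most recent last
    related_files: set     # source files this test has exercised before


def failure_likelihood(test: TestRecord, changed_files: set) -> float:
    """Estimate how likely this test is to fail for the given change."""
    history = test.recent_results or [False]
    fail_rate = sum(history) / len(history)            # historical failure rate
    overlap = len(test.related_files & changed_files)  # change-proximity signal
    proximity = overlap / max(len(changed_files), 1)
    # Illustrative fixed weights; a real model would learn these from past runs.
    return 0.6 * proximity + 0.4 * fail_rate


def select_subset(tests, changed_files, budget):
    """Return the `budget` tests most likely to catch a failure."""
    ranked = sorted(tests,
                    key=lambda t: failure_likelihood(t, changed_files),
                    reverse=True)
    return ranked[:budget]
```

The key design point is the feedback loop: every new run both benefits from the predictions and supplies fresh training data, so the ranking keeps adapting as the codebase and test suite evolve.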
The Launchable platform is also focusing on finding flaky tests faster, speeding up test failure analysis with Flaky Tests Insights. Analyzing your test runs, the platform scores each test on a flakiness scale, helping to identify which flaky tests to tackle first.
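One simple way to put tests on a flakiness scale, shown in the hedged sketch below, is to measure how often a test flips between pass and fail across runs of the same code: a healthy test flips only when the code changes. This transition-rate scoring is an assumption made for illustration, not Launchable’s actual algorithm.

```python
# Hypothetical flakiness scoring: a test that alternates between pass and
# fail on unchanged code is likely flaky. Score = rate of pass/fail
# transitions across consecutive runs, from 0.0 (stable) to 1.0 (maximally flaky).

def flakiness_score(results):
    """results: chronological outcomes (True = passed) for one test
    against unchanged code."""
    if len(results) < 2:
        return 0.0
    transitions = sum(1 for prev, cur in zip(results, results[1:]) if prev != cur)
    return transitions / (len(results) - 1)


def rank_flaky(named_results):
    """Sort (test_name, results) pairs so the flakiest tests come first,
    identifying which ones to tackle before the rest."""
    return sorted(named_results,
                  key=lambda item: flakiness_score(item[1]),
                  reverse=True)
```

Ranking by such a score turns “our tests are flaky” into a prioritized to-do list, which is the practical value the insight feature aims at.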
“[I]f you are able to get and solve a software team’s problems everywhere, to get their products in the hands of customers first—that’s success for me. [...] It’s really helping people who are struggling with delivering software.” shared Singh.
Launchable aims to help development teams become data-driven, increasing their velocity and letting them deliver software confidently. And the platform works with the tools you already have, so integration is quick and seamless.