A Better Developer Experience Requires Better Testing Tools

Bloated test suites damage developer experience. Applying machine learning to the testing process can help developers move faster.

Key Takeaways

• Developers spend an average of 35 hours of their typical workweek debugging bad code, tackling technical debt, and handling similar maintenance work. The time has come for organizations to focus more purposefully on developer experience.

• You can foster happier developers by making your existing pipelines data-driven rather than reinventing the wheel.

• Launchable speeds up the testing feedback loop using machine learning to identify and run tests with the highest probability of failing based on code and test metadata.

Originally published at The New Stack

Despite software development being more efficient, agile, and integrated than ever, we’re still missing an essential piece of the puzzle for efficient, high-quality pipelines. Most businesses aren’t considering their developer experience, and it shows, often in employee dissatisfaction surveys or, worse yet, in exit interviews.

Developer experience is the sum of the interactions and feelings software engineers encounter as they work toward a goal. All the positive and negative experiences your developers have with your organization’s architecture, tools, processes, and culture constitute developer experience.

Organizations fail to prioritize developer experience (and overall happiness), defaulting instead to a focus on output under the banner of developer productivity. Overtaxing development teams leads to attrition as unhappy developers leave, which in turn forces companies to prioritize hiring. No wonder developer productivity is a term developers don’t look forward to hearing.

The more turnover your company experiences, the more time is spent working and reworking processes. With four to six months typically passing between initial onboarding and a new developer becoming productive, you’re in a lose-lose cycle: your developers must train new team members rather than focus on innovation, not to mention the years’ worth of time lost between losing the last employee and getting the next one up to speed.

Luckily, most modern development practices are at least conducive to a better developer experience. Adopting DevOps and CI/CD methods leads to a more streamlined and efficient developer experience. These practices are about automation, shared responsibility, and testing early and often. Iterative testing means that developers no longer need to go back and fix code errors weeks or months after the fact. And automation means that development teams no longer have to perform many monotonous, manual tasks. So when comparing today’s developer experience to that of ten years ago, it’s advanced — but it still has a ways to go.

The time has come for organizations to focus more purposefully on developer experience. Despite our advancements in pipeline efficiency and quality, many developers still aren’t having a great experience, costing businesses immeasurable resources and time. 96% of upper management is still looking to increase developer productivity. A primary driver of this poor productivity and negative developer experience is the slow burn of long test feedback cycles.

Bloated Test Suites Negatively Impact DevEx

Automated testing is a familiar concept for anyone who has implemented CI/CD pipelines. Yet even with automated testing in place, developers face velocity bottlenecks without data-driven testing practices. These slowdowns stem from bloated test suites. The bloat happens because management isn’t taking the time to understand the why behind the tests they’re asking developers to run. Year over year, tests accumulate as more features and platforms are added. What used to be a good developer experience turns into long cycles: slow-burn friction.

Bloated test suites damage developer experience. At every small stage of the pipeline, developers need to run a growing number of tests. They know tests need to be part of their processes, but they aren’t efficiently selecting which tests should run or where in the pipeline they should execute.

A simple question like “how long do our tests take?” is often difficult to answer, let alone monitoring and measuring the impact of entropy in your test suites. As test suites grow, so does the likelihood of flaky tests, which consume additional developer resources. It becomes an endless loop of wasted time, inaccurate testing, and frustrated developers.

According to a recent TechStrong report, 72% of respondents said they waste more than 25% of their developers’ time waiting on test execution. In a typical 40-hour work week, that’s more than 10 hours per developer. Something has to change.

Using Machine Learning to Slash Testing Cycles

Large test suites produce a tsunami of data. Teams struggle to identify the riskiest tests among the noise and end up running more tests than necessary.

The team at Launchable saw this deluge of data and broken testing feedback loop as an opportunity to improve developer productivity and overall happiness.

Launchable speeds up the testing feedback loop by using machine learning to identify and run the tests with the highest probability of failing, based on code and test metadata. Trimming unnecessary feedback and getting the right signals to the right people drastically improves the developer experience. Launchable also helps teams identify critical tests to shift left, moving them earlier in the pipeline.
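To make the idea concrete, here is a minimal, hypothetical sketch of predictive test selection: rank tests by their historical failure rate, add a small boost for tests that exercise the changed code, and run only the riskiest slice. This is an illustration of the general technique under simplified assumptions, not Launchable’s actual model, data, or API.

```python
# Illustrative sketch only: a toy "predictive test selection" heuristic.
# Tests are ranked by historical failure rate, with a boost for tests that
# cover the files in the current change, and only the top slice is kept.
from dataclasses import dataclass

@dataclass
class TestStats:
    name: str
    runs: int                    # how many times the test has run recently
    failures: int                # how many of those runs failed
    touches_changed_code: bool   # whether the test covers files in this change

def failure_score(t: TestStats) -> float:
    """Naive probability-of-failure estimate; unknown tests get a middling score."""
    base = t.failures / t.runs if t.runs else 0.5
    return base + (0.3 if t.touches_changed_code else 0.0)

def select_subset(tests: list[TestStats], budget: float = 0.2) -> list[str]:
    """Return the top `budget` fraction of tests, ordered by predicted risk."""
    ranked = sorted(tests, key=failure_score, reverse=True)
    keep = max(1, int(len(ranked) * budget))
    return [t.name for t in ranked[:keep]]

if __name__ == "__main__":
    history = [
        TestStats("test_checkout", runs=50, failures=9, touches_changed_code=True),
        TestStats("test_login", runs=50, failures=1, touches_changed_code=False),
        TestStats("test_search", runs=10, failures=4, touches_changed_code=False),
    ]
    print(select_subset(history, budget=0.34))  # -> ['test_checkout']
```

A production model learns from far richer signals than this crude heuristic, but even the sketch shows how running a risk-ranked subset can surface likely failures much earlier in the cycle.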

Improve DevEx by Fixing Slow-Burn Issues with Data and Machine Learning

Developer experience is a difficult concept to quantify and measure. Understanding the pain points caused by testing bottlenecks and measuring the health of your test suite over time is the key to fixing the slow burn behind poor developer experience. Launchable helps you track your team’s bottlenecks and test suite health beyond the standard outputs.

Measure test suite entropy and improve developers’ lives with these deep insights into your tests:

  1. Use Flaky Tests Insights to determine whether your tests are returning accurate results or sending back false negatives and positives.

  2. Use Test Session Duration Insights to identify increases in test session time; longer sessions could mean developer cycle time is trending up.

  3. Use Test Session Frequency Insights to flag which tests are being run less often. Look for negative health metrics like increased cycle time, reduced quality, or fewer changes flowing through the pipeline.

  4. Use Test Session Failure Ratio Insights to track which tests fail most often. An increase in failed tests suggests a release is unstable, especially if the bump is in post-merge tests.

Using Test Suite Insights, development teams can understand how failed tests impact the development lifecycle and developer experience.
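As a rough illustration of what tracking these signals involves, the following sketch computes do-it-yourself versions of the four metrics above from a handful of hypothetical test session records. The data shape, field names, and logic are assumptions made for the example; they are not Launchable’s Test Suite Insights schema.

```python
# Illustrative sketch only: rough versions of the four test suite health
# signals, computed from hypothetical test session records.
from statistics import mean

sessions = [
    # each record: when it ran, how long it took, and per-test pass/fail results
    {"week": 1, "duration_min": 22, "results": {"test_checkout": "pass", "test_login": "pass"}},
    {"week": 1, "duration_min": 24, "results": {"test_checkout": "fail", "test_login": "pass"}},
    {"week": 2, "duration_min": 31, "results": {"test_checkout": "pass", "test_login": "fail"}},
]

# 1. Flakiness: tests whose results flip between pass and fail are flaky
#    candidates (a real system would also account for whether the code changed).
outcomes: dict[str, set[str]] = {}
for s in sessions:
    for test, result in s["results"].items():
        outcomes.setdefault(test, set()).add(result)
flaky = [t for t, seen in outcomes.items() if {"pass", "fail"} <= seen]

# 2. Session duration trend: is the average getting longer week over week?
by_week: dict[int, list[int]] = {}
for s in sessions:
    by_week.setdefault(s["week"], []).append(s["duration_min"])
duration_trend = {week: mean(durs) for week, durs in sorted(by_week.items())}

# 3. Session frequency: fewer sessions per week can mean fewer changes flowing through.
frequency = {week: len(durs) for week, durs in sorted(by_week.items())}

# 4. Failure ratio: share of sessions containing at least one failed test.
failure_ratio = mean(1 if "fail" in s["results"].values() else 0 for s in sessions)

print(flaky, duration_trend, frequency, failure_ratio)
```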

Developers spend an average of 35 hours of their typical workweek debugging bad code, tackling technical debt, and more. Focus on making your existing pipelines data-driven rather than reinventing the wheel, and give your developers more time to innovate. Successful organizations are already adopting solutions for improving developer experience; prioritize finding the tools and methods that will support your team’s long-term developer happiness.

Seeking Your Expert Feedback on Our AI-Driven Solution

Is quality a focus for your team? Are you working with nightly, integration, or UI tests?
Our AI can help.