When it comes to software testing, there are countless options at your disposal. Not all teams use every testing type, but each has its place in the software development world.
Smoke, sanity, and regression testing can help your teams get the most coverage (and the most results) out of your testing suite. But knowing why and when to use each of the three, so you get accurate results without bloating your test cycles, can be unclear. Today, we’ll cover these three software tests and how they fit into your testing lifecycle.
Smoke, Sanity, and Regression Testing: The Stormtroopers, Jedi, and Rebels of the STLC
Smoke testing is a process that determines whether a software build is stable. QA teams use smoke testing to decide whether to continue testing a build. Essentially, teams use smoke tests to confirm that all the major features work as intended and that the build is ready for further testing. Think of smoke tests as Stormtroopers - with their shiny white armor and hilariously bad spatial awareness - great in numbers, but they don’t usually get too far alone.
Sanity tests are like the Jedi, a voice of reason in the galaxy: they mediate between factions and planets and are armed when it matters. A subset of regression testing, sanity testing ensures that code changes work as intended. These tests run on code that is already stable, making sure it works within the context of the rest of the build. It’s a “sanity check” to ensure the software is ready for more in-depth tests.
Lastly, we have regression tests that are like the Rebels, a rag-tag group of do-gooders trying their best to cover the entirety of a system to reach a peaceful end. Regression testing is the practice of testing an entire application after a change has been made. And yes, this means the application as a whole, from end to end. These tests are performed to ensure that all aspects of an application work when a new update is released. All elements of a code base are tested here so that no parts of the software are affected by the new additions.
Smoke Testing: Verify Stability
These are the most basic tests run on parts of the software, making sure that major components work as intended - login forms submit, assets load, buttons respond to clicks. Smoke tests are among the first tests run to see how the software performs.
Smoke testing is generally performed after a build has been released to a QA team, allowing them to check it for any significant issues before proceeding. If smoke tests fail, devs know not to continue with more in-depth tests.
They verify “stability” so testing can continue.
Performed on critical functionalities of the application.
A subset of regression tests; if smoke testing fails, the build is instantly rejected.
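As a minimal sketch, a smoke suite can be a simple gate over a handful of critical checks. Here, `login` and `load_homepage` are hypothetical stand-ins for a build's real critical paths:

```python
def login(user, password):
    # Stand-in for the build's real login flow (hypothetical).
    return user == "admin" and password == "secret"

def load_homepage():
    # Stand-in for loading the main page and its assets (hypothetical).
    return "<html>home</html>"

def run_smoke_suite():
    """Return True only if every critical path works; a failure here
    means the build is rejected before deeper testing begins."""
    checks = [
        ("login works", lambda: login("admin", "secret")),
        ("homepage loads", lambda: bool(load_homepage())),
    ]
    for name, check in checks:
        if not check():
            print(f"SMOKE FAIL: {name} -> reject build")
            return False
    print("smoke suite passed: build is stable enough for further testing")
    return True
```

The point is the gate, not the checks themselves: a single failed critical path stops the cycle immediately instead of wasting a full regression run on a broken build.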
Sanity Testing: Verify Rationality
Unlike smoke testing, sanity testing is performed on stable builds after recent code changes or new functionality has been added. Sanity tests exist to make sure that these new functions work as intended and that the build is ready for further testing.
Sanity testing comes just before complete regression testing, mainly to check whether further regression testing is worthwhile. Smoke testing is the first step in verifying a build is stable, while sanity testing verifies recent changes and is usually done toward the end of the testing cycle.
They verify the “rationality” of the app for further testing.
Performed on more detailed functions of the app.
Also a subset of regression tests.
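To make the idea concrete, a sanity pass targets only the code that just changed. This sketch assumes a hypothetical `apply_discount` function was recently modified, and checks just that narrow behavior rather than the whole suite:

```python
def apply_discount(price, percent):
    # The recently changed code path under sanity check (hypothetical).
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def sanity_check_discount():
    """A quick 'rationality' pass over the changed function: if these
    fail, there is no point running the wider regression suite."""
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 0) == 80.0
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass  # invalid input was correctly rejected
    else:
        raise AssertionError("expected ValueError for percent > 100")
    return True
```

If this narrow pass fails, the build goes back to development; if it passes, the build moves on to the broader regression cycle.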
Regression Testing: Verify Every Existing Feature
Regression testing ensures that every aspect of a software build works as intended. These are end-to-end tests that check every bit of functionality to ensure that any new changes made to the codebase haven’t affected other parts of the application.
This is generally the last step of the testing lifecycle, after smoke and sanity testing have been completed. It’s the final step before the software is passed to users to be used or tested further by the user base.
They verify every existing detailed functionality and feature of the application.
Regression testing should be well documented.
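One way to picture regression testing and its documentation is a run over every recorded test case, compared against the last known-good results, so any feature a new change broke shows up explicitly. The case names and callables here are illustrative stand-ins:

```python
def run_regression(test_cases, baseline):
    """test_cases: {name: zero-arg callable returning True/False}
    baseline:   {name: previous pass/fail result}
    Returns the names of tests that regressed (passed before, fail now)."""
    results = {name: case() for name, case in test_cases.items()}
    regressions = [n for n, ok in results.items()
                   if baseline.get(n, False) and not ok]
    # Document the run: every result is recorded, not just failures.
    for name, ok in sorted(results.items()):
        print(f"{name}: {'pass' if ok else 'FAIL'}")
    return regressions

cases = {
    "login": lambda: True,
    "search": lambda: True,
    "checkout": lambda: False,   # broken by the latest change
}
previous = {"login": True, "search": True, "checkout": True}
broken = run_regression(cases, previous)
# `broken` now lists the features the new change regressed.
```

Keeping the baseline and full result log around is exactly the "well documented" part: the next run can be diffed against this one the same way.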
Bringing the Team Together: Combining Sanity, Smoke and Regression Testing
In an optimal workflow, development teams run smoke tests in the initial phase of the SDLC. These tests catch any significant bugs while development is still ongoing. Once they all pass, a build is ready. Sanity testing comes after smoke tests, ensuring that recently changed functionality still works within the build. If everything goes as planned, regression tests are run to ensure that the new changes work alongside the rest of the product.
You always have a more secure and stable application when you use more than one of these testing methods. Smoke testing is a reliable way to ensure that your application’s functions work in a sandbox. Still, you'll struggle without sanity or regression tests when the whole application is trying to work together.
Similarly, suppose you rely solely on regression or sanity tests. In that case, you’ll often be stuck with long testing cycles, trying to make sure that new functionality works while also verifying that pre-existing functionality still works.
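The combined workflow can be sketched as a simple staged pipeline: smoke, then sanity, then regression, stopping at the first failed stage. The stage callables here are hypothetical stand-ins for real suites:

```python
def run_pipeline(smoke, sanity, regression):
    """Each argument is a zero-arg callable returning True on success.
    Runs the stages in order and returns the name of the stage that
    failed, or 'done' if the whole cycle passed."""
    for name, stage in (("smoke", smoke),
                        ("sanity", sanity),
                        ("regression", regression)):
        if not stage():
            print(f"{name} stage failed -> stop the cycle here")
            return name
        print(f"{name} stage passed")
    return "done"

# A build whose smoke tests fail never reaches the longer suites:
outcome = run_pipeline(lambda: False, lambda: True, lambda: True)
```

Ordering the stages this way keeps the cheap gate in front of the expensive one, which is the whole argument of this section.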
Best Practices for Smoke, Sanity, and Regression Tests
Naturally, before any soldier valiantly charges into battle, they need a plan and to prepare. The same goes for your engineering org too. Setting up a test plan can help your devs and QA teams break up larger test suites into smaller, more manageable chunks.
Steps for planning and preparing your tests:
Identify the scope of the testing: The scope should detail the areas that need to be tested, the objectives that need to be met, and the criteria for success.
Define the testing environment: This includes specifics on the hardware and software that will be used, along with system requirements.
Create test cases: Your test case outlines should include details on expected results, input values, and output values.
Set up the testing environment: Install any necessary hardware and software and configure your system to meet your specific requirements.
Create a test plan: Outline the steps to be taken, document the test objectives, and establish your criteria for success.
Gather the necessary resources: This includes personnel, tools, and other resources.
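The "create test cases" step above, with its expected results and input/output values, lends itself to a table-driven layout. This is a sketch with a hypothetical `add` function as the code under test:

```python
def add(a, b):
    # Illustrative function under test (hypothetical).
    return a + b

# Each case records its inputs and expected output, so the test plan
# document and the executable cases stay in sync.
test_cases = [
    {"name": "adds positives", "input": (2, 3), "expected": 5},
    {"name": "adds negatives", "input": (-1, -4), "expected": -5},
]

def execute(cases, fn):
    """Run every case and return {name: passed} for later analysis."""
    return {c["name"]: fn(*c["input"]) == c["expected"] for c in cases}

results = execute(test_cases, add)
```

Keeping cases as data rather than code makes the later steps (executing, analyzing, documenting) mechanical.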
Now that you have a plan of action for your tests, you need to monitor them to see how they perform. Test management software can help here, giving you insight into your test suites so your teams can get all the data they need to improve the testing lifecycle.
Steps for managing and tracking your test results:
Execute the tests: Once the test cases are created, the next step is to execute the tests. This involves running the tests and collecting the results.
Analyze results: After the tests are executed, the results should be analyzed. This includes looking for any unexpected results or errors that occurred during the tests.
Track and document results: The best practice for tracking and documenting test results is to use test suite insight tools that automate this step and make results easy to analyze.
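The analyze-and-track steps above boil down to summarizing raw pass/fail results into the numbers a team would log after each run. A minimal sketch:

```python
def summarize(results):
    """results: {test_name: True/False}. Returns a small report dict
    (totals, pass rate, failing test names) suitable for tracking
    across runs."""
    total = len(results)
    passed = sum(results.values())
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "pass_rate": round(passed / total, 2) if total else 0.0,
        "failures": sorted(n for n, ok in results.items() if not ok),
    }

report = summarize({"login": True, "search": False, "checkout": True})
```

Logging a report like this per run is what turns "execute the tests" into data you can actually trend over time.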
And with tests running smoothly, your team will need to watch and update tests as the process goes along. Teams should use the data they get from these tests to update and improve the existing tests and eventually add automation to speed up the process.
Steps for maintaining and updating your tests:
Identify test cases that need to be updated: Before updating any existing smoke, sanity, or regression test cases, it is important to identify which tests need to be updated. This can be done by reviewing the existing test cases, looking at the changes that have been made to the system, and determining which tests may no longer be valid. Launchable helps identify the critical tests to run and gives visibility into your test suite health, making this step simple.
Analyze the system changes: Once the test cases that need to be updated have been identified, it is important to analyze the changes that have been made to the system. This will help to determine which tests need to be modified, added or removed.
Update the test cases: After analyzing the system changes, the test cases should be updated to reflect the changes. This may involve modifying existing test cases, creating new test cases or removing outdated test cases.
Execute the updated tests: Once the test cases have been updated, they should be executed to ensure that they are valid. If any tests fail, they should be corrected and re-executed.
Document the changes: After the tests have been updated and executed, it is important to document the changes that have been made. This will help ensure that any future changes to the system are tested properly.
The Importance of Smoke, Sanity, and Regression Testing for Ensuring Software Quality
Each testing type is an invaluable part of your overall testing lifecycle. However, testing always ends up being a lengthy, resource-intensive process. But thanks to Launchable, it doesn’t have to be.
We created Predictive Test Selection, a technique that leverages machine learning algorithms to dynamically select which tests to run based on the characteristics of a code change. It can be used for any step of your testing process, whether smoke, sanity, or regression testing.
With smoke tests, harness Predictive Test Selection to determine which tests are the most important to run based on their chance of failure. With that knowledge, your teams can ensure that the most critical aspects of your code are tested early and often for the best results.
For sanity testing, Predictive Test Selection shows which tests are likely to pass, helping your team see which parts of your software may be problematic. Then, they can use their time more effectively rather than re-testing functionality that is all but certain to pass.
Regression tests can benefit from Predictive Test Selection by identifying which of your pre-existing tests are affected by code changes, based on their past performance and on what was changed. Your testers can then ensure that all relevant tests are run after code changes, while avoiding unnecessary test runs.
And as with all tests, Launchable knows that context switching is a leading cause of alert fatigue and kills productivity. Checking your inbox or CI requires proactive interruptions in your tasks to see when tests have failed or passed. We’ve not only made it easier to intelligently select tests for faster test suites, but we’ve also developed a Slack integration so you can get your Test Notifications directly in your favorite messaging app.
Whether you’re waiting on sanity or regression test results, Test Notifications sends you personalized updates when tests are complete and lets you dive directly into results, pass or fail.