`raw` profile for custom test runners
This is a reference page. See Getting Started, Sending data to Launchable, and Predictive Test Selection for more comprehensive usage guidelines.
The raw CLI profile provides a low-level interface for interacting with Launchable. It is meant for use with custom-built test runners and requires additional integration steps compared to the native profiles built for each test runner.
Requirements:
You need to be able to programmatically create a list of the tests you are going to run, before running them
Your test runner needs to be able to accept a list of tests to run
You need to be able to convert your existing test results to JSON format, including creating Launchable-specific testPath identifiers for every test
The raw profile is an advanced feature. We strongly suggest you contact us if you plan to use it so that we can help!
Recording test results
The raw profile accepts JUnit XML files and JSON files. Depending on your setup you may want to use one or the other. Read the entire document before deciding.
JUnit format
When you pass JUnit files into the raw profile, the profile extracts the classname and name attributes from each testcase element. Each test is stored as class={classname}#testcase={name}. This is important because the subset service will return a list of tests in this same format. You need to confirm that your test runner can accept a list of classes and/or a list of classes+testcases to run/not run.
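For illustration only, here is a minimal Python sketch of that mapping, assuming a JUnit report file named report.xml (a hypothetical name). The CLI performs this extraction for you; the sketch just shows which attributes become which test path components:

import xml.etree.ElementTree as ET

root = ET.parse("report.xml").getroot()
for case in root.iter("testcase"):
    # Launchable stores each test as class={classname}#testcase={name}
    print(f"class={case.get('classname')}#testcase={case.get('name')}")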
If your test runner identifies tests using a different structure/hierarchy, you'll need to construct test paths and convert to JSON format instead. See the next section for those instructions.
Make sure you establish the correct format before you start sending production data!
To record test results, run the following command every time your tests finish:
launchable record tests --build <BUILD NAME> raw /path/to/xml/files
You can use launchable inspect tests --test-session-id [TEST SESSION ID] to inspect the list of test paths that were submitted.
You might need to take extra steps to make sure that launchable record tests always runs even if the build fails. See Ensuring record tests always runs.
JSON format
If your test runner identifies tests using a different structure than class+name (as described above), you'll need to convert your existing test results to JSON. This gives you total control over the stored data structure and the subsetting interface structure.
Converting to JSON format
After you run tests, convert your test results to a JSON document using this schema. In particular, you'll need to create a testPath value for every test (see below).
Example JSON document:
{
  "testCases": [
    {
      "testPath": "file=login/test_a.py#class=class1#testcase=testcase899",
      "duration": 42,
      "status": "TEST_PASSED",
      "stdout": "This is stdout",
      "stderr": "This is stderr",
      "createdAt": "2021-10-05T12:34:00"
    }
  ]
}
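As an illustration of that conversion, here is a minimal Python sketch. It assumes your runner exposes results as a simple in-memory list; the my_results structure is hypothetical, and only the fields shown in the example above are produced:

import json
import os
from datetime import datetime, timezone

# Hypothetical results from your own test runner:
# (file path, class name, testcase name, duration in seconds, passed?)
my_results = [
    ("login/test_a.py", "class1", "testcase899", 42.0, True),
    ("login/test_b.py", "class3", "testcase901", 14.8, False),
]

test_cases = []
for file_path, class_name, case_name, duration, passed in my_results:
    test_cases.append({
        "testPath": f"file={file_path}#class={class_name}#testcase={case_name}",
        "duration": duration,
        "status": "TEST_PASSED" if passed else "TEST_FAILED",
        "createdAt": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
    })

os.makedirs("test-results", exist_ok=True)
with open("test-results/results.json", "w") as f:
    json.dump({"testCases": test_cases}, f, indent=2)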
Constructing test paths
A testPath is a unique string that identifies a given test, such as:
file=login/test_a.py#class=class1#testcase=testcase899
This example declares a hierarchy of three levels: a testcase belongs to a class, which belongs to a file (path). Your hierarchy may be different, but you'll need to include enough hierarchy to uniquely identify every test.
When creating your testPath hierarchy, keep in mind that you'll also use this structure for subsetting tests. See the Subsetting hierarchy section below for examples.
Finally, include relative file paths instead of absolute ones where possible.
A note about file paths on Windows and Unix
If you include file paths in your testPath values and a given set of tests runs on both Unix and Windows, submit file paths with either forward slashes or backslashes, but not both. If you submit a test with forward slashes in the file path and then submit the same test with backslashes, you will record two separate tests.
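For example, a small helper like the following (hypothetical, not part of the CLI) keeps separators consistent when building test paths. It assumes backslashes only ever appear as path separators:

def test_path(file_path: str, class_name: str, testcase: str) -> str:
    # Normalize to forward slashes so Windows and Unix runs record the same test
    normalized = file_path.replace("\\", "/")
    return f"file={normalized}#class={class_name}#testcase={testcase}"

print(test_path("login\\test_a.py", "class1", "testcase899"))
# -> file=login/test_a.py#class=class1#testcase=testcase899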
Recording JSON test results with the CLI
Then, pass that JSON document (e.g. test-results/results.json) to the Launchable CLI for submission:
launchable record tests --build <BUILD NAME> raw test-results/results.json
You can use launchable inspect tests --test-session-id [TEST SESSION ID] to inspect the list of test paths that were submitted.
You might need to take extra steps to make sure that launchable record tests always runs even if the build fails. See Ensuring record tests always runs.
Subsetting your test runs
The high level flow for subsetting is:
Create a file containing a list of all the tests in your test suite that you would normally run
Pass that file to launchable subset with an optimization target
launchable subset will get a subset from the Launchable platform and output that list to a text file
Use that file to tell your test runner to run only those tests
Subset input file format
The input file is a text file that contains test paths (e.g. file=a.py#class=classA), one per line. Lines that start with a hash ('#') are treated as comments and are ignored.
For example:
file=login/test_a.py#class=class1#testcase=testcase899
file=login/test_a.py#class=class2#testcase=testcase900
file=login/test_b.py#class=class3#testcase=testcase901
file=login/test_b.py#class=class3#testcase=testcase902
If you recorded JUnit XML files, the input file format is fixed to class={classname}#testcase={testcase}. For example:
class=class1#testcase=testcase899
class=class2#testcase=testcase900
class=class3#testcase=testcase901
class=class3#testcase=testcase902
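Whichever format you recorded in, you can generate the input file programmatically. Here is a minimal sketch assuming full file/class/testcase paths; collect_tests() is a hypothetical placeholder for however your runner enumerates the tests it is about to run:

def collect_tests():
    # Hypothetical: yield (file, class, testcase) tuples from your test runner
    yield ("login/test_a.py", "class1", "testcase899")
    yield ("login/test_a.py", "class2", "testcase900")

with open("subset-input.txt", "w") as f:
    for file_path, class_name, case_name in collect_tests():
        f.write(f"file={file_path}#class={class_name}#testcase={case_name}\n")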
Subsetting hierarchy
One common scenario is that a test runner cannot subset tests at the same level of granularity used for reporting tests.
For example, if your tests are organized into a hierarchy of files, classes, and testcases, then your testPaths for recording tests will look like file=<filePath>#class=<className>#testcase=<testcaseName>, e.g.:
{
  "testCases": [
    {
      "testPath": "file=login/test_a.py#class=class1#testcase=testcase899",
      "duration": 10.051,
      "status": "TEST_PASSED"
    },
    {
      "testPath": "file=login/test_a.py#class=class2#testcase=testcase900",
      "duration": 13.625,
      "status": "TEST_FAILED"
    },
    {
      "testPath": "file=login/test_b.py#class=class3#testcase=testcase901",
      "duration": 14.823,
      "status": "TEST_PASSED"
    },
    {
      "testPath": "file=login/test_b.py#class=class3#testcase=testcase902",
      "duration": 29.784,
      "status": "TEST_FAILED"
    }
  ]
}
This creates four testPaths in Launchable:
file=login/test_a.py#class=class1#testcase=testcase899
file=login/test_a.py#class=class2#testcase=testcase900
file=login/test_b.py#class=class3#testcase=testcase901
file=login/test_b.py#class=class3#testcase=testcase902
However, perhaps your test runner can only subset at the class level, not the testcase level. In that case, send Launchable a list of testPaths that terminate at the class level, e.g.:
file=login/test_a.py#class=class1
file=login/test_a.py#class=class2
file=login/test_b.py#class=class3
Launchable will then return a list of prioritized classes for you to run.
Similarly, if your test runner can only subset at the file level, then submit a list of testPaths that terminate at the file level, e.g.:
file=login/test_a.py
file=login/test_b.py
Launchable will then return a list of prioritized files for you to run.
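If you record full file/class/testcase paths but can only subset at a coarser level, you can derive the coarser list programmatically. A minimal sketch (not part of the CLI) that truncates recorded paths to the class level:

def to_class_level(test_path: str) -> str:
    # Drop the testcase component, keeping file and class
    parts = test_path.split("#")
    return "#".join(p for p in parts if not p.startswith("testcase="))

full_paths = [
    "file=login/test_a.py#class=class1#testcase=testcase899",
    "file=login/test_a.py#class=class2#testcase=testcase900",
]
for path in sorted(set(to_class_level(p) for p in full_paths)):
    print(path)
# file=login/test_a.py#class=class1
# file=login/test_a.py#class=class2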
Request a subset of tests to run with the CLI
Once you've created a subset input file, use the CLI to get a subset of the full list from Launchable:
launchable subset \
--build <BUILD NAME> \
--confidence <TARGET> \
raw \
subset-input.txt > subset-output.txt
The --build option should use the same <BUILD NAME> value that you used before in launchable record build.
The --confidence option should be a percentage; we suggest 90% to start. You can also use --time or --target; see Requesting and running a subset of tests for more info.
This will output subset-output.txt, which contains a list of tests in the same format in which they were submitted. For example:
file=b.py#class=class4
file=b.py#class=class3
You can then process this file as needed for input into your test runner.
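For example, a hypothetical post-processing step might turn subset-output.txt back into whatever selection syntax your test runner expects. The file::class output format below is an assumption; adapt it to your runner:

selected = []
with open("subset-output.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = dict(part.split("=", 1) for part in line.split("#"))
        selected.append(f"{fields['file']}::{fields['class']}")

print(" ".join(selected))  # e.g. pass this string to your runner's CLI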
Zero Input Subsetting
To use Zero Input Subsetting with the raw profile:
Use the --get-tests-from-previous-sessions option
Use the --rest=<OUTPUT FILE> option to get a list of tests to exclude (instead of include) so that new tests always run
For example:
launchable subset \
--build <BUILD NAME> \
--confidence <TARGET> \
--get-tests-from-previous-sessions \
--rest=launchable-exclusion-list.txt \
raw > launchable-inclusion-list.txt
Then launchable-exclusion-list.txt will contain the tests you can exclude when running tests.
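A minimal sketch of applying that exclusion list, assuming your runner can enumerate its tests as full testPaths (all_tests() here is a hypothetical placeholder):

with open("launchable-exclusion-list.txt") as f:
    excluded = {line.strip() for line in f if line.strip() and not line.startswith("#")}

def all_tests():
    # Hypothetical: yield the full testPath of every test your runner discovered
    yield "file=login/test_a.py#class=class1#testcase=testcase899"
    yield "file=login/test_b.py#class=class3#testcase=testcase901"

tests_to_run = [t for t in all_tests() if t not in excluded]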