# Best Practices & FAQ

Best practices for automation integration, troubleshooting tips, and common questions.
## Best Practices
- **Always use `continue-on-error` or `allow_failure` on your test step.** Your tests may fail, but you still want the reporting step to run. Without this flag, a test failure would skip the reporting step and you would lose visibility into the results.
- **Run reporting with `if: always()` or `when: always`.** Ensure the reporting step runs regardless of whether earlier steps passed or failed. This is the most common misconfiguration: tests fail but results never reach TestKase.
- **Store tokens as CI/CD secrets.** Never hardcode PATs in pipeline files. Use platform-specific secret management (GitHub Secrets, GitLab CI/CD Variables, Jenkins Credentials, etc.). See the API Keys guide for platform-specific instructions.
- **Include build IDs for traceability.** Use `--build-id` with your CI build number (e.g., `$GITHUB_RUN_NUMBER`, `$CI_PIPELINE_IID`). This links each execution to a specific CI build, making it easy to trace results back to code changes.
- **Create a dedicated test cycle for CI results.** Keep automated CI results in their own test cycle, separate from manual testing cycles. This makes it clear which results are automated and which are manual.
- **Use dry run first.** Before enabling real reporting in your pipeline, run with `--dry-run` to verify Automation ID matching and configuration. Fix any issues before pushing real data.
- **Embed Automation IDs consistently.** Generate 5-digit Automation IDs in TestKase and embed them in your test names using the `[XXXXX]` bracket pattern (e.g., `test('[48271] should login with valid credentials', ...)`). Apply this pattern consistently across all test frameworks.
- **Handle skipped tests intentionally.** Decide how skipped tests should appear in TestKase. Use `--report-skipped-as blocked` if skipped tests indicate an environment issue, or `--report-skipped-as ignore` if they are intentionally excluded.
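Several of the practices above come together in a single pair of pipeline steps. The following GitHub Actions sketch shows one way to wire them up; the step names, file paths, `$CYCLE_ID`, and the `TESTKASE_TOKEN` secret/env name are illustrative placeholders, while the flags are the reporter flags described in this guide:

```yaml
# Test step: allowed to fail so the pipeline continues to the reporting step
- name: Run tests
  run: npx playwright test
  continue-on-error: true

# Reporting step: runs whether the tests passed or failed
- name: Report results to TestKase
  if: always()
  env:
    TESTKASE_TOKEN: ${{ secrets.TESTKASE_TOKEN }}   # stored as a GitHub Secret, never hardcoded
  run: |
    npx @testkase/reporter \
      --results-file playwright-report/results.json \
      --format playwright \
      --cycle-id "$CYCLE_ID" \
      --build-id "$GITHUB_RUN_NUMBER"
```

Run the same command with `--dry-run` first to verify Automation ID matching before pushing real data.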
## Troubleshooting
### Tests are reported but no test cases are matched

The reporter extracts 5-digit Automation IDs from test names using the regex `\[(\d{5})\]`. Steps to diagnose:

- Run with `--dry-run --verbose` to see which `[XXXXX]` IDs the reporter extracts from your test names.
- Verify that each extracted ID exists as an Automation ID on a test case in your TestKase project.
- If no IDs are extracted, ensure your test names include the `[XXXXX]` bracket pattern (e.g., `[48271] should login`).
- If IDs are extracted but not matched, check that the 5-digit ID in your test name matches the Automation ID linked to the test case in TestKase.
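For the first step, a diagnostic dry run in a pipeline (or locally) might look like the sketch below; the results file, format, and cycle ID are placeholders for your own values:

```yaml
- name: Verify Automation ID matching (writes no data)
  if: always()
  run: |
    npx @testkase/reporter \
      --results-file results.xml \
      --format junit \
      --cycle-id "$CYCLE_ID" \
      --dry-run --verbose
```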
### Authentication error (401)

Your PAT may be expired or invalid. Steps to resolve:
- Check the token expiration date in API Keys settings.
- Generate a new token if expired.
- Update the CI/CD secret with the new token value.
- Verify the secret name in your pipeline file matches exactly.
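As a sketch of the last two steps in GitHub Actions (assuming the secret is stored under the illustrative name `TESTKASE_TOKEN`; the name after `secrets.` must match the stored secret name exactly):

```yaml
- name: Report results to TestKase
  if: always()
  env:
    # secrets.TESTKASE_TOKEN must match the secret name configured in
    # your repository settings character for character
    TESTKASE_TOKEN: ${{ secrets.TESTKASE_TOKEN }}
  run: npx @testkase/reporter --results-file results.xml --format junit
```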
### Connection timeout or network error

Verify that the `--api-url` value is correct and accessible from your CI environment. Common causes:
- Incorrect API URL (it should end with `/api/v1`).
- Firewall or network policy blocking outbound requests from CI runners.
- For self-hosted TestKase, ensure the CI runner can reach your API server.
- Proxy configuration needed but not set.
### Skipped tests are reported as 'Blocked'

By default, skipped tests are mapped to "Blocked". To change this behavior:

- `--report-skipped-as blocked` marks them as Blocked (default behavior).
- `--report-skipped-as not-executed` marks them as Not Executed.
- `--report-skipped-as ignore` ignores them entirely (skipped tests are not reported).
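For example, a reporting step that excludes skipped tests from the report entirely (the file path and format are placeholders):

```yaml
- name: Report results (ignore skipped tests)
  if: always()
  run: |
    npx @testkase/reporter \
      --results-file results.xml \
      --format junit \
      --report-skipped-as ignore
```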
### 'Results file not found' error

The reporter cannot find the results file at the specified path. Common causes:
- Tests did not produce the expected output file (check test framework configuration).
- The path is relative and the working directory is different from expected.
- In multi-stage CI pipelines, artifacts may not be carried over to the reporting stage (e.g., GitLab needs `artifacts` configuration).
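For the GitLab case, the test job must publish the results file as an artifact so a later reporting stage can read it. A sketch using standard GitLab CI syntax, with job names and paths as placeholders:

```yaml
test:
  stage: test
  script: npx playwright test
  allow_failure: true
  artifacts:
    when: always                  # keep the results file even when tests fail
    paths:
      - playwright-report/results.json

report:
  stage: report
  when: always                    # run the reporter regardless of test outcome
  script: npx @testkase/reporter --results-file playwright-report/results.json --format playwright
```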
### Partial results: some tests matched, others didn't

This is normal if not all test names contain a `[XXXXX]` Automation ID, or if some IDs have not been linked to test cases in TestKase yet. To handle this:
- Check the dry-run output to see which tests are unmatched (missing `[XXXXX]` or unknown ID).
- For unmatched tests, generate Automation IDs in TestKase and embed them in the test names.
- Use `--missing-id-in-script skip` (the default) to silently ignore tests without embedded IDs.
- Or accept partial matching if some tests are intentionally not tracked in TestKase.
## FAQ
### Can I report results from multiple test frameworks in one pipeline?

Yes. Run the reporter multiple times in the same pipeline with different `--format` and `--results-file` flags. Each invocation can target the same or different test cycles.
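A sketch with two invocations in the same GitHub Actions job (the results paths are placeholders for wherever your frameworks write their output):

```yaml
- name: Report Playwright results
  if: always()
  run: npx @testkase/reporter --format playwright --results-file playwright-report/results.json --cycle-id "$CYCLE_ID"

- name: Report JUnit results
  if: always()
  run: npx @testkase/reporter --format junit --results-file target/surefire-reports/results.xml --cycle-id "$CYCLE_ID"
```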
### Can I report results to multiple test cycles?

Yes. Run the reporter once per cycle, changing the `--cycle-id` flag. For example, report to both a "CI Automated" cycle and a "Sprint 14 Regression" cycle.
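Sketched as two steps that differ only in `--cycle-id` (the environment variable names are placeholders):

```yaml
- name: Report to the CI Automated cycle
  if: always()
  run: npx @testkase/reporter --results-file results.xml --format junit --cycle-id "$CI_CYCLE_ID"

- name: Report to the Sprint 14 Regression cycle
  if: always()
  run: npx @testkase/reporter --results-file results.xml --format junit --cycle-id "$REGRESSION_CYCLE_ID"
```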
### What happens if I run the reporter twice for the same cycle?

When you re-run the reporter for the same automation ID, TestKase updates the existing automation run entry rather than creating a duplicate. The status, actual result, duration, error message, and executed-on fields all reflect the latest run.
The cycle history tab always records the full timeline of every execution — including older runs — so you retain the complete audit trail.
The test case's overall status uses worst-status-wins precedence: if any automation ID reports a failure, the overall status stays as Fail until that specific automation ID reports Pass again.
### How do I start a fresh test run after a build fix?

Use the Reset Executions feature on the execute page (available in the ⋯ more-actions menu). It clears all execution statuses, actual results, and automation runs for the cycle while keeping the test cases and cycle metadata intact. You can reset every test case in the cycle, or select specific test cases to reset only those. See Reset Executions for full details.
### Will I see retry details for flaky tests?

Yes, for Playwright JSON and JUnit XML (with Maven Surefire or Gradle retry plugin output). The reporter captures each attempt's status, duration (Playwright only), and error message. On the execute page, the Error Message field shows the full retry log and the Actual Result field labels flaky tests explicitly (e.g. "Flaky — passed after 2 retry(ies)").

For Cypress, TestNG, NUnit, and Cucumber, per-attempt retry data is not available in the output format, because the framework's own reporter only writes the final attempt. See Retry Details for the full support matrix.
### Does the reporter create test cases automatically?

No. The reporter only updates execution status on existing test cases. You must create test cases in TestKase first and set their Automation IDs. Unmatched results are logged but do not create new test cases.
### Can I use the reporter without npm/Node.js?

Currently, the `@testkase/reporter` CLI requires Node.js. If your CI environment does not have Node.js, use a Docker image that includes it (e.g., `node:20`) or add a Node.js setup step to your pipeline.
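In GitHub Actions, for example, a Node.js setup step before the reporter is enough. The `actions/setup-node` action is standard GitHub tooling; the reporter flags below are placeholders:

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 20

- name: Report results to TestKase
  if: always()
  run: npx @testkase/reporter --results-file results.xml --format junit
```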
### Is there a GitHub Action or GitLab CI template available?

The pipeline templates shown in the platform-specific guides are the recommended starting point. Copy the template for your platform, update the flags with your project details, and you are ready to go. There is no separate marketplace action at this time.
### How do I handle dynamic test cycle IDs?

If you create test cycles dynamically (e.g., one per PR), you can pass the cycle ID as an environment variable. Create the cycle via the TestKase API first, capture the ID, and pass it to the reporter with `--cycle-id`.
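A sketch of that flow in GitHub Actions. Note that the API endpoint path and response shape below are illustrative assumptions, not taken from this guide; consult the TestKase API reference for the real route and payload:

```yaml
- name: Create a per-PR test cycle and capture its ID
  run: |
    # HYPOTHETICAL endpoint and response shape; check the TestKase API docs
    CYCLE_ID=$(curl -s -X POST "$TESTKASE_API_URL/cycles" \
      -H "Authorization: Bearer $TESTKASE_TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"name\": \"PR $PR_NUMBER\"}" | jq -r '.id')
    echo "CYCLE_ID=$CYCLE_ID" >> "$GITHUB_ENV"

- name: Report results to TestKase
  if: always()
  run: npx @testkase/reporter --results-file results.xml --format junit --cycle-id "$CYCLE_ID"
```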
### Can I use the reporter in a Docker container?

Yes. Use any Node.js-based Docker image (e.g., `node:20`, `cimg/node:20.0`) and run `npx @testkase/reporter` in a script step. No additional Docker configuration is needed.
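For example, in GitLab CI (the job name and reporter flags are placeholders):

```yaml
report:
  image: node:20
  stage: report
  when: always
  script: npx @testkase/reporter --results-file results.xml --format junit
```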
