
CLI Reference

Complete reference for the @testkase/reporter CLI — all flags, supported formats, debugging, and usage examples.

Installation

Install the reporter as a dev dependency (npm):

npm install --save-dev @testkase/reporter

Or run it directly without installation:

npx @testkase/reporter report --help

Usage

npx @testkase/reporter report \
  --token <PAT> \
  --project-id <PROJECT_ID> \
  --org-id <ORG_ID> \
  --format <FORMAT> \
  --results-file <PATH> \
  [--cycle-id <CYCLE_ID>]        # optional, defaults to TCYCLE-1
  [--api-url <API_URL>]          # optional, defaults to https://api.testkase.com

Flag Reference

Flag | Required | Description
--token | Yes | Your TestKase Personal Access Token (PAT). Generate one from API Keys. Store as a CI/CD secret — never hardcode in pipeline files.
--api-url | No | TestKase API base URL. Defaults to https://api.testkase.com.
--project-id | Yes | TestKase project ID (e.g., PRJ-1). Found in your project settings or URL.
--org-id | Yes | TestKase organization ID. Found on your Profile page.
--cycle-id | No | Target test cycle ID (e.g., TCYCLE-5). Defaults to TCYCLE-1 (the default test cycle for the project). Override to report into a specific cycle.
--format | Yes | Test output format. Must match the format of your results file. One of: junit, playwright, cypress, testng, nunit, cucumber.
--results-file | Yes | Path to the test results file (e.g., test-results/junit.xml). Supports absolute paths, relative paths, and glob patterns (e.g., "test-results/junit-*.xml") to match multiple files — useful when a reporter creates one file per spec.
--build-id | No | Optional label to tag results with your CI build number (e.g., --build-id "#142"). Stored alongside each execution so you can tell which CI run produced a given result. Typically set from a CI variable like $GITHUB_RUN_NUMBER or $CI_PIPELINE_IID.
--dry-run | No | Preview results without pushing to TestKase. Shows matched/unmatched tests, status mapping, and skipped handling.
--verbose | No | Enable detailed logging output. Shows API request/response details, file parsing progress, and matching decisions.
--report-skipped-as | No | How to map skipped tests. Options: blocked (default — environment issues), not-executed, ignore (skip entirely).
--automation-id-format | No | Regex pattern used to extract Automation IDs from test names. Default: \[(\d{5})\]. The first capture group is used as the ID.
--missing-id-in-script | No | How to handle test results with no [XXXXX] Automation ID in the test name. Options: skip (default — ignore untagged tests), fail (exit with error), create (create a new test case in TestKase).
--unknown-id-in-testkase | No | How to handle extracted Automation IDs that do not match any test case in TestKase. Options: skip (default — log and continue), fail (exit with error).
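
For example, a single run can combine the matching and mapping flags above (the project, org, and build values are placeholders):

npx @testkase/reporter report \
  --token $TESTKASE_PAT \
  --project-id PRJ-1 \
  --org-id 1173 \
  --format junit \
  --results-file "test-results/junit-*.xml" \
  --build-id "#$GITHUB_RUN_NUMBER" \
  --report-skipped-as not-executed \
  --unknown-id-in-testkase fail

This reports every per-spec JUnit file, tags results with the CI build number, maps skipped tests to Not Executed, and fails the pipeline if any extracted ID is unknown to TestKase.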

Per-Automation-ID Run Tracking

Each test result sent by the reporter is tagged with the automation ID that produced it. TestKase stores a separate run entry per (test case, automation ID) combination, which powers the Automation Runs card on the execute page.

How It Works

  1. The reporter extracts [XXXXX] automation IDs from your test names.
  2. Each result is sent with its automation_id to the execute-bulk API.
  3. The backend creates (or updates) a run entry per test case, per automation ID.
  4. When the same automation ID reports again, the existing entry is updated (UPSERT) — no duplicates.
  5. The test case's overall status uses worst-status-wins precedence: fail > blocked > not_executed > pass.
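
A minimal TypeScript sketch of the step-5 precedence rule (illustrative only, not the reporter's actual implementation):

// Severity order: fail > blocked > not_executed > pass (the worst status wins).
type RunStatus = 'pass' | 'not_executed' | 'blocked' | 'fail';

const severity: Record<RunStatus, number> = { pass: 0, not_executed: 1, blocked: 2, fail: 3 };

// The overall status of a test case is the worst status across all of its automation runs.
function overallStatus(runs: RunStatus[]): RunStatus {
  return runs.reduce<RunStatus>((worst, s) => (severity[s] > severity[worst] ? s : worst), 'pass');
}

overallStatus(['pass', 'fail']); // 'fail': one failing automation ID outweighs a passing one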

When a Test Case Has Multiple Automation IDs

If a test case is linked to multiple automation IDs (e.g., one per CI pipeline or browser), the reporter sends one result per automation ID in the same bulk call. Each run is tracked independently:

  • If 12345 passes and 67890 fails → overall status is Fail (worst wins)
  • Both runs remain visible in the Automation Runs card with individual status, duration, and error details
  • Re-running 67890 as Pass updates that specific entry and re-computes the overall status

See the automation guide for the full behavior.

Retry Details

When tests run with retries enabled, the reporter captures the full retry log — status, duration, and error for each attempt — and stores it in the Error Message field of the automation run. Flaky tests (failed then eventually passed) are detected automatically and labeled as such in Actual Result.

See Retry Details in the execution docs for how this appears on the execute page.

Retry support by format:

Format | Support | Details
playwright | Full | Extracts every test.results[] entry from Playwright JSON with per-attempt status, duration, error message, and stack trace
junit | Partial | Detects Maven Surefire's <flakyFailure>, <flakyError>, <rerunFailure>, <rerunError> elements and Gradle's retry plugin output. Captures per-attempt error + stack but not per-attempt duration (format limitation)
cypress | None | Mochawesome JSON does not record per-attempt retry data
testng | None | TestNG XML uses a single <test-method> per result
nunit | None | NUnit XML uses a single <test-case> per result
cucumber | None | Cucumber JSON does not have a retry concept

When a format does not support retries, the reporter only sees the final attempt — previous attempts are discarded by the test framework before the result file is written. This is a framework/format limitation, not a TestKase limitation.

If your framework supports both a native format and JUnit (e.g. Playwright), prefer the native format for richer retry data. Playwright's native JSON captures per-attempt durations; its JUnit output does not.
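
For example, a Playwright config that enables retries together with the native JSON reporter (a sketch; adjust the output path to your project):

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Failed tests are retried up to twice; every attempt is recorded in test.results[]
  retries: 2,
  reporter: [['json', { outputFile: 'test-results/playwright.json' }]],
});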

Supported Formats

Format | File Type | Frameworks | Quick Start
junit | .xml | Jest, Mocha, pytest, Selenium, WebdriverIO, Robot Framework, Detox, Espresso, XCUITest, Playwright | Most universal — works with any framework that produces JUnit XML
playwright | .json | Playwright | Richer data than JUnit (attachments, retries, traces)
cypress | .json | Cypress (via mochawesome) | Native Cypress JSON output
testng | .xml | TestNG, Appium (with TestNG) | Java/Android test suites
nunit | .xml | NUnit | .NET test suites
cucumber | .json | Cucumber (JS/Java/Ruby) | BDD scenarios

Which format should I use?

  • Playwright supports both --format playwright (JSON) and --format junit (XML). Use playwright for richer data (attachments, retry details). Use junit if you want a universal format across multiple frameworks.
  • All other frameworks — start with junit unless a framework-specific format is listed above.
  • Use framework-specific formats (playwright, cypress, testng, nunit, cucumber) only when the framework has a native reporter that provides richer data than JUnit XML.

Format-Specific Guidance

JUnit XML (--format junit)

The most universal format. Most test frameworks can produce JUnit XML output, making this the recommended default for cross-framework compatibility.

Framework | Command / Setup
Jest | npx jest --reporters=jest-junit (requires the jest-junit package)
Mocha | npx mocha --reporter mocha-junit-reporter
pytest | pytest --junitxml=test-results/junit.xml
Playwright | npx playwright test --reporter=junit
Java (Maven) | Surefire plugin — outputs to target/surefire-reports/ by default
.NET | dotnet test --logger "junit;LogFileName=junit.xml"
Robot Framework | robot --xunit test-results/robot-junit.xml
WebdriverIO | wdio-junit-reporter package

See the Test Frameworks section for detailed setup guides.
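
As a worked example for the Jest row above (a sketch; jest-junit writes junit.xml to the project root by default, configurable via its JEST_JUNIT_OUTPUT_DIR environment variable):

# Install the JUnit reporter for Jest (one-time)
npm install --save-dev jest-junit

# Run tests with the console reporter plus JUnit XML output
npx jest --reporters=default --reporters=jest-junit

# Push the results to TestKase
npx @testkase/reporter report --format junit \
  --results-file junit.xml ...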


Playwright (--format playwright)

Use the JSON reporter for the richest data (attachments, retries, traces):

// playwright.config.ts
reporter: [['json', { outputFile: 'test-results/playwright.json' }]],

# 1. Run tests (generates test-results/playwright.json)
npx playwright test

# 2. Report to TestKase
npx @testkase/reporter report --format playwright \
  --results-file test-results/playwright.json ...

Alternatively, use --format junit with Playwright's JUnit reporter for cross-framework compatibility:

// playwright.config.ts
reporter: [['junit', { outputFile: 'test-results/playwright-junit.xml' }]],

# 1. Run tests (generates test-results/playwright-junit.xml)
npx playwright test

# 2. Report to TestKase
npx @testkase/reporter report --format junit \
  --results-file test-results/playwright-junit.xml ...

The reporter extracts 5-digit Automation IDs from test titles using the [XXXXX] bracket pattern. See the Playwright guide for full setup instructions.
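
For instance, a spec title carrying the bracketed ID (a hypothetical test; only the [12345] prefix is significant to the reporter):

// login.spec.ts (hypothetical)
import { test, expect } from '@playwright/test';

test('[12345] user can sign in with valid credentials', async ({ page }) => {
  await page.goto('/login'); // resolved against baseURL from your config
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});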


Cypress (--format cypress)

Use cypress-mochawesome-reporter for native Cypress JSON output:

// cypress.config.js
reporter: 'cypress-mochawesome-reporter',
reporterOptions: { reportDir: 'test-results', overwrite: false, json: true }

# Report to TestKase
npx @testkase/reporter report --format cypress \
  --results-file test-results/mochawesome.json ...

Alternatively, use --format junit with mocha-junit-reporter for cross-framework compatibility:

// cypress.config.js
reporter: 'mocha-junit-reporter',
reporterOptions: { mochaFile: 'test-results/junit-[hash].xml' }

# Report to TestKase
npx @testkase/reporter report --format junit \
  --results-file test-results/junit-*.xml ...

See the Cypress guide for full setup instructions.


TestNG (--format testng)

TestNG produces XML reports by default in the test-output/ directory:

# 1. Run tests (TestNG generates test-output/testng-results.xml automatically)
mvn test

# 2. Report to TestKase
npx @testkase/reporter report --format testng \
  --results-file test-output/testng-results.xml ...

See the TestNG guide for full setup instructions.


NUnit (--format nunit)

Generate NUnit XML reports with the --logger flag:

# 1. Generate results
dotnet test --logger "nunit;LogFileName=nunit-results.xml"

# 2. Report to TestKase
npx @testkase/reporter report --format nunit \
  --results-file nunit-results.xml ...

See the NUnit guide for full setup instructions.


Cucumber (--format cucumber)

Cucumber tests produce JSON reports. Configure the JSON formatter in your runner:

// cucumber.js
module.exports = { default: '--format json:test-results/cucumber.json' }

# Report to TestKase
npx @testkase/reporter report --format cucumber \
  --results-file test-results/cucumber.json ...

See the Cucumber guide for full setup instructions.

Additional Commands

create-run

Create a new test cycle run in TestKase before reporting results. Useful when you want each CI build to report into a fresh run:

npx @testkase/reporter create-run \
  --token $TESTKASE_PAT \
  --project-id PRJ-1 \
  --org-id 1173 \
  --api-url https://api.testkase.com \
  --title "CI Build #$GITHUB_RUN_NUMBER"

Returns the cycle ID on success.

Flag | Required | Description
--token | Yes | PAT token (or env: TESTKASE_PAT)
--project-id | Yes | Project ID (e.g., PRJ-1)
--org-id | Yes | Organization ID
--api-url | No | API base URL. Defaults to https://api.testkase.com.
--title | Yes | Cycle title (e.g., "CI Build #123")
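
Since create-run returns the cycle ID, a pipeline can capture it and feed it to report (a sketch, assuming the ID is the only stdout output):

# Create a run for this build and capture its cycle ID
CYCLE_ID=$(npx @testkase/reporter create-run \
  --token $TESTKASE_PAT \
  --project-id PRJ-1 \
  --org-id 1173 \
  --title "CI Build #$GITHUB_RUN_NUMBER")

# Report results into the newly created run
npx @testkase/reporter report \
  --token $TESTKASE_PAT \
  --project-id PRJ-1 \
  --org-id 1173 \
  --format junit \
  --results-file test-results/junit.xml \
  --cycle-id "$CYCLE_ID"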

list-projects

List all projects accessible with your PAT. Useful for verifying your token and finding project IDs:

npx @testkase/reporter list-projects \
  --token $TESTKASE_PAT \
  --org-id 1173 \
  --api-url https://api.testkase.com

Flag | Required | Description
--token | Yes | PAT token (or env: TESTKASE_PAT)
--org-id | Yes | Organization ID
--api-url | No | API base URL. Defaults to https://api.testkase.com.

Dry Run & Debugging

Before pushing results to TestKase for the first time, use the --dry-run flag to preview the matching without actually updating any data:

npx @testkase/reporter report \
  --token $TESTKASE_PAT \
  --project-id PRJ-1 \
  --org-id 1173 \
  --format junit \
  --results-file test-results/junit.xml \
  --dry-run \
  --verbose

What Dry Run Shows

  • Matched tests — Tests that were successfully matched to a TestKase test case via Automation ID.
  • Unmatched tests — Tests in the results file that have no corresponding Automation ID in TestKase.
  • Status mapping — What status each test would be reported as (Pass, Fail, Blocked, Not Executed).
  • Skipped handling — How skipped tests are being mapped based on your --report-skipped-as setting.

Verbose Logging

Add --verbose for detailed output during troubleshooting. This shows:

  • API request/response details
  • File parsing progress
  • Individual test case matching decisions
  • Error details with stack traces

Run with --dry-run --verbose as part of your initial setup to verify everything is configured correctly before enabling real reporting. This is especially useful for validating Automation ID matching.