
Test Execution

Execute tests, track progress with real-time metrics, and record results across your test cycles.

Execute Tests

Test execution is the core activity within a test cycle. For each test case in the cycle, testers review the test steps, perform the actions described, verify the expected results, and record an execution status. TestKase supports a streamlined execution workflow designed to minimize friction and maximize the information captured during testing.

Execution Statuses

Every test case in a cycle has one of the following five execution statuses. Each status carries a specific meaning and is color-coded for quick visual identification:

  • Not Executed (Gray) -- The test has not been run yet. This is the default status for all newly added test cases and indicates pending work.
  • Pass (Green) -- The test case met all expected results. Use when all steps passed and the actual behavior matches the expected behavior exactly.
  • Fail (Red) -- The test case did not produce the expected results. Use when one or more steps produced an unexpected result, a bug was found, or the feature is broken.
  • Blocked (Yellow) -- The test case cannot be executed due to an external dependency or blocker. Use when a prerequisite is not met, the environment is down, a related feature is broken, or test data is unavailable.
  • Skipped (Blue) -- The test case was intentionally skipped for this cycle. Use when the test is not applicable for this build, was descoped, or is being deferred to a future cycle.
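If you work with execution statuses programmatically (for example when processing reporter output), the five statuses and their colors can be modeled as a small enum. This is an illustrative sketch, not TestKase's actual data schema:

```python
from enum import Enum

class ExecutionStatus(Enum):
    """The five execution statuses and their display colors (an
    illustrative model, not TestKase's actual data schema)."""
    NOT_EXECUTED = "gray"
    PASS = "green"
    FAIL = "red"
    BLOCKED = "yellow"
    SKIPPED = "blue"

def is_actioned(status):
    """Anything other than Not Executed counts as actioned work
    (this is the definition the completion percentage uses)."""
    return status is not ExecutionStatus.NOT_EXECUTED

# Newly added test cases default to Not Executed, i.e. pending work.
print(is_actioned(ExecutionStatus.NOT_EXECUTED), is_actioned(ExecutionStatus.PASS))
```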

Step-by-Step Execution Workflow

  1. Open the test cycle and locate the test case you want to execute. You can use the search bar or filter by assignee to find your assigned tests quickly.
  2. Click on the test case row to open the test case detail view. This displays the full test case information including the title, description, preconditions, and all test steps with expected results.
  3. Read through the preconditions and ensure they are met before starting execution. If a precondition cannot be satisfied, consider marking the test as Blocked and adding a comment explaining which precondition is not met.
  4. Follow each test step sequentially. For each step, perform the described action on the system under test and compare the actual result against the expected result documented in the step.
  5. After completing all steps, click the status dropdown for the test case and select the appropriate status (Pass, Fail, Blocked, Skipped, or leave as Not Executed if deferring).
  6. If the test failed, add a comment describing what went wrong. Include specific details such as the step number that failed, the actual result observed, the expected result, and any error messages or screenshots. Clear failure descriptions help developers reproduce and fix the issue faster.
  7. Optionally, log a defect directly from the execution view when a failure is identified. TestKase allows you to create a new defect or link to an existing one. See Defects for more information on defect management.
  8. Move to the next test case and repeat the process. The cycle's progress bar updates in real time as you execute tests.

Adding Comments During Execution

Comments provide context that raw statuses cannot capture. Use comments to document:

  • Failure details -- What specifically went wrong, including the actual vs. expected result, the step number that failed, and any error messages or stack traces observed.
  • Environment observations -- Any environment-specific behavior, such as performance degradation on staging or intermittent failures related to network latency.
  • Workarounds discovered -- If a blocked test can be unblocked with a workaround, document it so other testers can proceed.
  • Partial results -- If a test case partially passed (e.g., 7 of 10 steps passed before a failure), note which steps succeeded to provide useful debugging context.
  • Data or configuration notes -- Specific test data used, configuration values, or environment variables that are relevant to reproducing the result.

Linking Defects During Execution

When a test fails and you identify a bug, you can link a defect directly from the test execution view. This creates a traceable connection between the failed test execution and the reported defect. You have two options:

  • Create a new defect -- Open the defect creation form directly from the execution view. The new defect is automatically linked to the failed test case execution.
  • Link an existing defect -- Search for and select a defect that has already been reported in the project. This is useful when the same bug affects multiple test cases.

If you have Jira integration configured, defects can be synced automatically to your Jira project, creating a seamless bridge between test management and issue tracking.

Execution Rules and Guidelines

  • A test case can only have one execution status at a time within a cycle. Changing the status overwrites the previous one (but the change is recorded in history).
  • Any project member with appropriate permissions can execute any test case in the cycle, regardless of whether it is assigned to them. Assignments are advisory, not restrictive.
  • Statuses can be changed as many times as needed. There is no lock-out after setting an initial status. This supports re-testing workflows where a failed test needs to be re-executed after a fix.
  • Execution comments and defect links are preserved in the cycle's history even if the status is changed later.

When marking a test as Fail, always add a comment explaining the failure. A status of "Fail" without context forces developers to re-run the test to understand what went wrong, which wastes time. Good failure comments include: the step that failed, the actual result, the expected result, and screenshots or logs if available.

Avoid using Blocked as a catch-all status. "Blocked" specifically means the test cannot be executed due to an external dependency. If you simply have not gotten to the test yet, leave it as Not Executed. If the test is intentionally being skipped (e.g., not applicable for this build), use Skipped. Using the correct status ensures accurate progress metrics.

Bulk Execution

Bulk execution allows you to update the execution status of multiple test cases simultaneously. This is significantly faster than updating each test case one by one, and is especially useful in scenarios where a large group of tests share the same outcome.

How to Perform a Bulk Status Update

  1. In the cycle view, select the test cases you want to update by clicking the checkbox in the leftmost column of each row.
  2. To select all visible test cases, click the select all checkbox in the table header. This selects all test cases on the current page.
  3. Once you have selected your test cases, the bulk action bar appears at the top of the table showing the count of selected items.
  4. Click Set Status in the bulk action bar and choose the status you want to apply (Pass, Fail, Blocked, Skipped, or Not Executed).
  5. All selected test cases are updated to the chosen status immediately. Each status change is recorded individually in the cycle history with a timestamp.

Filter Then Bulk Update

One of the most powerful patterns is combining filtering with bulk execution. This lets you target a specific subset of test cases precisely:

  1. Apply filters to narrow down the test case list. For example, filter by status "Not Executed" and assignee "Your Name" to see only your pending work.
  2. Use the "Select All" checkbox to select all filtered results.
  3. Apply the bulk status update. Only the filtered (and selected) test cases are affected.

Common Bulk Execution Scenarios

  • Environment down -- If the test environment goes offline, select all unexecuted tests and mark them as Blocked. Add a comment explaining the environment issue so the team knows why testing is paused.
  • Automated test results -- After reviewing automated test results from your CI pipeline, bulk-mark the passing automated tests as Pass to quickly update the cycle. For a fully automated flow, see Automation instead.
  • Descoping tests -- If a feature is descoped from the release, bulk-mark the related test cases as Skipped so they do not count against the cycle's pass rate.
  • Resetting for re-test -- After a bug fix is deployed, filter by "Fail" status, select all failed tests, and reset them to Not Executed so they can be re-run against the new build.
  • Smoke test pass -- After a quick smoke test confirms core functionality is working, bulk-mark all smoke test cases as Pass in a single action.

Combine filtering with bulk execution for maximum efficiency. For example, filter the cycle to show only "Not Executed" tests assigned to you, then select all to quickly process your assigned queue.

Bulk status updates are recorded in history and can always be changed again, but be careful when bulk-marking tests as "Pass". Ensure each test genuinely passed before using bulk pass. Bulk passing tests without actual verification undermines the integrity of your test results and can mask real defects.
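Bulk scenarios like "environment down" can also be scripted against the API (TestKase exposes an execute-bulk API; see Automation Runs). The sketch below only builds a request body; the field names and payload shape are assumptions for illustration, so consult the TestKase API reference for the real contract:

```python
def build_bulk_update(test_case_ids, status, comment=None):
    """Build a hypothetical request body for a bulk status update.

    The field names here are illustrative assumptions -- check the
    TestKase API reference for the actual execute-bulk payload shape.
    """
    allowed = {"pass", "fail", "blocked", "skipped", "not_executed"}
    if status not in allowed:
        raise ValueError(f"unknown status: {status}")
    body = {"testCaseIds": list(test_case_ids), "status": status}
    if comment:
        # e.g. explain why tests were blocked when the environment is down
        body["comment"] = comment
    return body

payload = build_bulk_update(
    [101, 102, 103], "blocked",
    comment="Staging environment offline since 09:00",
)
print(payload)
```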

Track Progress

Every test cycle in TestKase displays real-time progress metrics that help you understand how much testing has been completed, what the outcomes are, and whether the cycle is on track. Progress tracking is available at a glance from the cycle listing view and in more detail within the cycle detail view.

The Progress Bar

The progress bar is a horizontal, color-coded bar that visually represents the distribution of execution statuses across all test cases in the cycle. Each segment of the bar is proportional to the number of test cases in that status:

  • Green -- Passed test cases
  • Red -- Failed test cases
  • Yellow -- Blocked test cases
  • Blue -- Skipped test cases
  • Gray -- Not Executed test cases

As testers execute tests and update statuses, the bar updates in real time. This gives managers and stakeholders an immediate visual indicator of testing progress without needing to open detailed reports.

Status Counts and Completion Percentage

In addition to the progress bar, each cycle displays numeric counts for each execution status and an overall completion percentage. The completion percentage is calculated as:

Completion % = (Passed + Failed + Blocked + Skipped) / Total Test Cases x 100

This formula counts all test cases that have been actioned (i.e., any status other than "Not Executed") as complete. The idea is that even a failed or blocked test has been reviewed and its status is known. Only "Not Executed" tests represent truly pending work.

Pass Rate

The pass rate is a separate metric that indicates the quality of the test results:

Pass Rate % = Passed / (Passed + Failed) x 100

The pass rate only considers test cases with a definitive outcome (Pass or Fail). Blocked and Skipped tests are excluded because they do not indicate whether the feature works or not. A high completion percentage with a low pass rate signals that testing is progressing but the build has significant issues.

How Progress Is Calculated -- Example Scenario

Consider a cycle with 100 test cases and the following distribution:

  • Passed: 62
  • Failed: 8
  • Blocked: 5
  • Skipped: 3
  • Not Executed: 22

  • Completion: (62 + 8 + 5 + 3) / 100 = 78%
  • Pass Rate: 62 / (62 + 8) = 88.6%

This tells us that 78% of testing is done, with 22 tests still pending. Of the tests that have a definitive result, about 89% are passing. This indicates a relatively healthy build with some failures that need investigation, plus 5 blocked tests that may require environment fixes before they can be executed.
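The two formulas are easy to reproduce. This sketch recomputes the example distribution above:

```python
def completion_pct(counts):
    """Completion % = actioned test cases / total.

    Anything other than Not Executed counts as actioned."""
    total = sum(counts.values())
    actioned = total - counts.get("not_executed", 0)
    return 100.0 * actioned / total

def pass_rate_pct(counts):
    """Pass rate only considers definitive outcomes: Pass and Fail."""
    definitive = counts.get("passed", 0) + counts.get("failed", 0)
    return 100.0 * counts.get("passed", 0) / definitive

counts = {"passed": 62, "failed": 8, "blocked": 5,
          "skipped": 3, "not_executed": 22}
print(completion_pct(counts))            # 78.0
print(round(pass_rate_pct(counts), 1))   # 88.6
```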

Interpreting Results for Release Decisions

When using cycle progress to inform release decisions, consider these guidelines:

  • 100% completion with high pass rate (>95%) -- All tests executed and most passed. Review any failures to confirm they are known issues or low-severity. This is typically a green light for release.
  • High completion but significant failures -- Testing is complete but critical test cases failed. Investigate failures, prioritize defect fixes, and plan a re-test round before release.
  • Low completion -- Many test cases still at Not Executed. Testing is incomplete. Identify blockers, reassign resources, or extend the testing timeline.
  • Many blocked cases -- A large yellow segment indicates environmental or dependency issues. Resolve blockers before drawing conclusions about overall test quality.

Use the Reports tab for cross-cycle analytics. While the progress bar shows the current state of a single cycle, Reports let you compare pass rates across multiple cycles, identify flaky tests, and track quality trends over time.

Automation Runs

When a test case is executed through automation — via the TestKase Reporter or the execute-bulk API — each automation ID that reports a result creates a separate run entry on the test case's Automation Runs card on the execute page.

Each run shows its own:

  • Automation ID -- The 5-digit identifier of the script/pipeline that produced the result
  • Status -- Pass, Fail, or Blocked for that specific run
  • Actual Result -- A short summary of what the test produced (or the error message on failure)
  • Duration -- How long the automated test took to run
  • Executed On -- Timestamp of the run, shown in your profile timezone
  • Error Message -- Full stack trace and call log when the run failed

This gives you full visibility into which automation script or pipeline produced each result — useful when a single test case is covered by multiple automated tests (e.g., across CI pipelines, browsers, or retries).

Status Precedence (Worst-Status-Wins)

When a test case has multiple automation IDs reporting different results, the test case's overall status is the worst across all runs. The precedence is:

fail > blocked > not_executed > pass

Example: If automation ID 12345 reports Pass and 67890 reports Fail for the same test case, the overall status is Fail. Both runs remain visible individually in the Automation Runs card, so you can see exactly which automation path failed.

This behavior ensures that if any automation path detects a problem, the test case is flagged as failed regardless of how many other paths passed.
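The precedence rule amounts to a ranked lookup in which the lowest rank wins. A minimal sketch:

```python
# Lower rank = worse = higher precedence when aggregating runs.
PRECEDENCE = {"fail": 0, "blocked": 1, "not_executed": 2, "pass": 3}

def overall_status(run_statuses):
    """Aggregate automation-run statuses with worst-status-wins."""
    return min(run_statuses, key=PRECEDENCE.__getitem__)

# Automation 12345 reports Pass, 67890 reports Fail:
# the test case's overall status is Fail.
print(overall_status(["pass", "fail"]))  # fail
```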

Re-running the Same Automation

Running the reporter a second time for the same automation ID updates the existing run entry rather than creating a duplicate. The status, duration, result, and error fields all reflect the latest run.

The cycle history tab always shows the full timeline of every execution — including older runs that have since been updated.

Multi-Environment / Multi-Browser Runs

When the same automation ID runs in multiple environments — e.g. Cypress with --browser chrome + --browser firefox, or Playwright projects chromium / firefox / webkit — each environment gets its own run entry in the Automation Runs card. The status label shows the environment tag next to the run:

  • [97131] ✅ Pass · chromium
  • [97131] ❌ Fail · firefox

The overall test case status uses worst-status-wins across all environments — if Chrome passes but Firefox fails, the test case is marked Fail. Both runs stay visible so you can see exactly which environment failed.
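The bookkeeping described above (re-runs replace the existing entry, environments stay separate, and the overall status is the worst across all runs) can be sketched with a run store keyed by automation ID and environment. The storage shape is an assumption for illustration:

```python
PRECEDENCE = {"fail": 0, "blocked": 1, "not_executed": 2, "pass": 3}

class AutomationRuns:
    """Illustrative run store: one entry per (automation_id, environment)."""

    def __init__(self):
        self.runs = {}

    def report(self, automation_id, status, environment=None):
        # Re-running the same automation ID in the same environment
        # updates the existing entry instead of creating a duplicate.
        self.runs[(automation_id, environment)] = status

    def overall(self):
        # Worst-status-wins across every run.
        return min(self.runs.values(), key=PRECEDENCE.__getitem__)

runs = AutomationRuns()
runs.report(97131, "pass", "chromium")
runs.report(97131, "fail", "firefox")  # separate entry, same automation ID
runs.report(97131, "pass", "firefox")  # re-run updates the firefox entry
print(len(runs.runs), runs.overall())  # 2 pass
```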

The reporter auto-detects the environment from:

  • JUnit XML — [Browser] suffix in the <testsuite name="..."> attribute (e.g. "Sample Test Suite [Chrome]")
  • Playwright JSON — the projectName field on each test (configured via projects in playwright.config.ts)
  • Other formats — not auto-detected; same test across environments collapses into one run

Auto-detected environment tags (browser/project names) are stored per-run and do not need to be in the project's allowed environment list. The project's environment field is only set when the reported environment matches an allowed value.
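For JUnit XML, the [Browser]-suffix detection described above amounts to splitting a trailing bracketed tag off the <testsuite> name. A sketch of the idea (the reporter's exact parsing rules may differ):

```python
import re

def split_environment(suite_name):
    """Split a JUnit testsuite name into (base name, environment tag).

    "Sample Test Suite [Chrome]" -> ("Sample Test Suite", "Chrome")
    """
    match = re.fullmatch(r"(.*?)\s*\[([^\[\]]+)\]", suite_name)
    if match:
        return match.group(1), match.group(2)
    return suite_name, None  # no environment suffix detected

print(split_environment("Sample Test Suite [Chrome]"))
```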

Retry Details

When a test runs with retries enabled (e.g. Playwright's retries: 2, Maven Surefire's <reruns>2</reruns>), TestKase captures the full retry log in the Error Message field of the automation run — showing each attempt's status, duration, and error message. This makes flaky tests immediately identifiable on the execute page.

Flaky test (failed then eventually passed):

  • Actual Result: "Flaky — passed after 2 retry(ies)"
  • Error Message: Log of each failed attempt with its error + stack trace, followed by the successful final attempt
  • Status: Pass (worst-status-wins does not apply to retries within one run — only the final outcome counts)

Consistently failing test (failed every retry):

  • Actual Result: First attempt's error
  • Error Message: "Failed after 3 attempt(s) (3 failed)" followed by each attempt's error
  • Status: Fail
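Both summaries above follow from the attempt list alone. A sketch of the logic, assuming a simple list of (status, error) attempts in execution order:

```python
def summarize_retries(attempts):
    """Summarize a retried run from a list of (status, error) attempts.

    Field wording mirrors the flaky / consistently-failing examples;
    only the final attempt's outcome determines the run status.
    """
    final_status = attempts[-1][0]
    failed = [a for a in attempts if a[0] == "fail"]
    if final_status == "pass" and failed:
        # Flaky: failed, then eventually passed.
        actual = f"Flaky — passed after {len(failed)} retry(ies)"
    elif final_status == "fail":
        actual = failed[0][1]  # first attempt's error
    else:
        actual = ""
    return {"status": final_status, "actual_result": actual}

flaky = [("fail", "timeout"), ("fail", "timeout"), ("pass", None)]
print(summarize_retries(flaky))
```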

Retry support by format:

  • Playwright JSON (--format playwright) -- Full: per-attempt status, duration, error, and stack trace.
  • JUnit XML (--format junit) -- Partial: detects Maven Surefire's <flakyFailure>, <flakyError>, <rerunFailure>, and <rerunError> elements. Per-attempt error and stack trace are captured; per-attempt duration is not available in the JUnit format. Also works with Gradle's test-retry plugin.
  • Cypress / TestNG / NUnit / Cucumber -- Not available: these output formats do not expose per-attempt retry data, even when the underlying framework retries. Only the final attempt's result is captured.

This is a format-level limitation, not a TestKase limitation. If you need full retry details and your framework supports both JUnit and a native format, prefer the format with richer retry info (usually Playwright JSON for Playwright, or Surefire JUnit for Java).

Reset Executions

You can clear all execution data for a test cycle to start a fresh round of testing while keeping the same test cases linked. This is useful when re-running a full regression after a build fix, starting a new sprint with the same cycle, or clearing old automation results before a new CI run.

How to Reset

  1. Open the Execute page for the test cycle.
  2. Click the more-actions menu in the toolbar next to the search bar.
  3. Click Reset — the button reads Reset (N) when you have test cases selected, or just Reset to reset every test case in the cycle.
  4. Confirm in the dialog. The action cannot be undone.

Scope — All or Selected

  • No selection → resets every test case in the cycle
  • With selection → resets only the selected test cases

What Gets Cleared

  • All execution statuses are reset to Not Executed
  • Actual results, error messages, durations, build IDs, and executed-on timestamps
  • All automation run entries (the Automation Runs card becomes empty for reset test cases)

What Is Preserved

  • Test case links to the cycle (test cases stay in the cycle)
  • Test case content (steps, description, attachments)
  • Cycle metadata (title, environment, folder, assignments)
  • The cycle history — the reset itself is recorded as a history entry

Permissions

Available to all project roles except Guest. The underlying POST /projects/test-cycles/reset-executions/:projectId/:cycleId endpoint is PAT-accessible, so you can also trigger resets programmatically from a CI script before reporting new results.
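From a CI script, you could call the reset endpoint before reporting new results. The sketch below only constructs the request; the base URL and Authorization header format are assumptions for illustration (check the API reference for the actual PAT authentication scheme):

```python
import urllib.request

def build_reset_request(base_url, project_id, cycle_id, pat):
    """Build a POST request for the reset-executions endpoint.

    The Bearer Authorization header is an assumption for illustration.
    """
    url = (f"{base_url}/projects/test-cycles/"
           f"reset-executions/{project_id}/{cycle_id}")
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {pat}"},
    )

# Hypothetical base URL and IDs; urlopen(req) would send the request.
req = build_reset_request("https://testkase.example.com",
                          "proj-1", "cycle-9", "MY_PAT")
print(req.method, req.full_url)
```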

Reset vs Clone

  • Reset -- In place: keeps the same cycle and clears executions only.
  • Clone -- Creates a new cycle containing the same test cases.

Use Reset when you want metrics and history to accumulate in the same cycle across re-runs. Use Clone when you want a completely new cycle for a different build or test pass.