Test Execution
Execute tests, track progress with real-time metrics, and record results across your test cycles.
Execute Tests
Test execution is the core activity within a test cycle. For each test case in the cycle, testers review the test steps, perform the actions described, verify the expected results, and record an execution status. TestKase supports a streamlined execution workflow designed to minimize friction and maximize the information captured during testing.
Execution Statuses
Every test case in a cycle has one of the following five execution statuses. Each status carries a specific meaning and is color-coded for quick visual identification:
| Status | Color | Meaning | When to Use |
|---|---|---|---|
| Not Executed | Gray | The test has not been run yet. | Default status for all newly added test cases. Indicates pending work. |
| Pass | Green | The test case met all expected results. | All steps passed and the actual behavior matches the expected behavior exactly. |
| Fail | Red | The test case did not produce the expected results. | One or more steps produced an unexpected result, a bug was found, or the feature is broken. |
| Blocked | Yellow | The test case cannot be executed due to an external dependency or blocker. | A prerequisite is not met, the environment is down, a related feature is broken, or test data is unavailable. |
| Skipped | Blue | The test case was intentionally skipped for this cycle. | The test is not applicable for this build, was descoped, or is being deferred to a future cycle. |
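The five statuses and their color coding can be modeled as a simple lookup, for example in a custom reporting script. This is a sketch using the labels from the table above, not an official TestKase API:

```python
from enum import Enum

class ExecutionStatus(Enum):
    """The five execution statuses and their display colors."""
    NOT_EXECUTED = "gray"   # default for newly added test cases
    PASS = "green"          # all steps matched expected results
    FAIL = "red"            # one or more steps produced an unexpected result
    BLOCKED = "yellow"      # cannot run due to an external dependency
    SKIPPED = "blue"        # intentionally skipped for this cycle

# Every test case in a new cycle starts out as Not Executed.
default = ExecutionStatus.NOT_EXECUTED
print(default.name, default.value)  # NOT_EXECUTED gray
```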
Step-by-Step Execution Workflow
- Open the test cycle and locate the test case you want to execute. You can use the search bar or filter by assignee to find your assigned tests quickly.
- Click on the test case row to open the test case detail view. This displays the full test case information including the title, description, preconditions, and all test steps with expected results.
- Read through the preconditions and ensure they are met before starting execution. If a precondition cannot be satisfied, consider marking the test as Blocked and adding a comment explaining which precondition is not met.
- Follow each test step sequentially. For each step, perform the described action on the system under test and compare the actual result against the expected result documented in the step.
- After completing all steps, click the status dropdown for the test case and select the appropriate status (Pass, Fail, Blocked, Skipped, or leave as Not Executed if deferring).
- If the test failed, add a comment describing what went wrong. Include specific details such as the step number that failed, the actual result observed, the expected result, and any error messages or screenshots. Clear failure descriptions help developers reproduce and fix the issue faster.
- Optionally, log a defect directly from the execution view when a failure is identified. TestKase allows you to create a new defect or link to an existing one. See Defects for more information on defect management.
- Move to the next test case and repeat the process. The cycle's progress bar updates in real time as you execute tests.
Adding Comments During Execution
Comments provide context that raw statuses cannot capture. Use comments to document:
- Failure details -- What specifically went wrong, including the actual vs. expected result, the step number that failed, and any error messages or stack traces observed.
- Environment observations -- Any environment-specific behavior, such as performance degradation on staging or intermittent failures related to network latency.
- Workarounds discovered -- If a blocked test can be unblocked with a workaround, document it so other testers can proceed.
- Partial results -- If a test case partially passed (e.g., 7 of 10 steps passed before a failure), note which steps succeeded to provide useful debugging context.
- Data or configuration notes -- Specific test data used, configuration values, or environment variables that are relevant to reproducing the result.
Linking Defects During Execution
When a test fails and you identify a bug, you can link a defect directly from the test execution view. This creates a traceable connection between the failed test execution and the reported defect. You have two options:
- Create a new defect -- Open the defect creation form directly from the execution view. The new defect is automatically linked to the failed test case execution.
- Link an existing defect -- Search for and select a defect that has already been reported in the project. This is useful when the same bug affects multiple test cases.
If you have Jira integration configured, defects can be synced automatically to your Jira project, creating a seamless bridge between test management and issue tracking.
Execution Rules and Guidelines
- A test case can only have one execution status at a time within a cycle. Changing the status overwrites the previous one (but the change is recorded in history).
- Any project member with appropriate permissions can execute any test case in the cycle, regardless of whether it is assigned to them. Assignments are advisory, not restrictive.
- Statuses can be changed as many times as needed. There is no lock-out after setting an initial status. This supports re-testing workflows where a failed test needs to be re-executed after a fix.
- Execution comments and defect links are preserved in the cycle's history even if the status is changed later.
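The single-status-plus-history rule can be sketched in a few lines of Python. The dict fields here are illustrative, not TestKase's actual data model:

```python
from datetime import datetime, timezone

def record_execution(test_case, status, comment=None):
    """Set a test case's current status, preserving prior changes in history."""
    entry = {
        "status": status,
        "comment": comment,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A test case holds exactly one current status at a time;
    # every change, with its comment, is appended to the history.
    test_case.setdefault("history", []).append(entry)
    test_case["status"] = status
    return test_case

tc = {"id": "TC-101", "status": "Not Executed"}
record_execution(tc, "Fail", "Step 3: got HTTP 500, expected 200 OK")
record_execution(tc, "Pass", "Re-tested after fix; all steps passed")
print(tc["status"])        # Pass
print(len(tc["history"]))  # 2
```

Note how the earlier Fail, along with its comment, survives the re-test: the status is overwritten but the record is not lost.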
When marking a test as Fail, always add a comment explaining the failure. A status of "Fail" without context forces developers to re-run the test to understand what went wrong, which wastes time. Good failure comments include: the step that failed, the actual result, the expected result, and screenshots or logs if available.
Avoid using Blocked as a catch-all status. "Blocked" specifically means the test cannot be executed due to an external dependency. If you simply have not gotten to the test yet, leave it as Not Executed. If the test is intentionally being skipped (e.g., not applicable for this build), use Skipped. Using the correct status ensures accurate progress metrics.
Bulk Execution
Bulk execution allows you to update the execution status of multiple test cases simultaneously. This is significantly faster than updating each test case one by one, and is especially useful in scenarios where a large group of tests share the same outcome.
How to Perform a Bulk Status Update
- In the cycle view, select the test cases you want to update by clicking the checkbox in the leftmost column of each row.
- To select all visible test cases, click the "Select All" checkbox in the table header. This selects all test cases on the current page.
- Once you have selected your test cases, the bulk action bar appears at the top of the table showing the count of selected items.
- Click Set Status in the bulk action bar and choose the status you want to apply (Pass, Fail, Blocked, Skipped, or Not Executed).
- All selected test cases are updated to the chosen status immediately. Each status change is recorded individually in the cycle history with a timestamp.
Filter Then Bulk Update
One of the most powerful patterns is combining filtering with bulk execution. This lets you target a specific subset of test cases precisely:
- Apply filters to narrow down the test case list. For example, filter by status "Not Executed" and assignee "Your Name" to see only your pending work.
- Use the "Select All" checkbox to select all filtered results.
- Apply the bulk status update. Only the filtered (and selected) test cases are affected.
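The filter-then-bulk-update pattern boils down to applying one status to every test case matching a predicate. A minimal sketch (the dict fields and the in-memory list stand in for the real cycle data):

```python
def bulk_set_status(test_cases, new_status, predicate=lambda tc: True):
    """Apply new_status to every test case matching predicate; return the count."""
    updated = 0
    for tc in test_cases:
        if predicate(tc):
            tc["status"] = new_status
            updated += 1
    return updated

cycle = [
    {"id": 1, "status": "Not Executed", "assignee": "you"},
    {"id": 2, "status": "Pass", "assignee": "you"},
    {"id": 3, "status": "Not Executed", "assignee": "someone else"},
]
# Filter: only your pending work. Bulk action: mark it Blocked (environment down).
n = bulk_set_status(
    cycle, "Blocked",
    lambda tc: tc["status"] == "Not Executed" and tc["assignee"] == "you",
)
print(n)  # 1
```

Only the test case that matched both filters changed; the already-passed test and the other assignee's work are untouched.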
Common Bulk Execution Scenarios
- Environment down -- If the test environment goes offline, select all unexecuted tests and mark them as Blocked. Add a comment explaining the environment issue so the team knows why testing is paused.
- Automated test results -- After reviewing automated test results from your CI pipeline, bulk-mark the passing automated tests as Pass to quickly update the cycle. For a fully automated flow, consider using Automation instead.
- Descoping tests -- If a feature is descoped from the release, bulk-mark the related test cases as Skipped so they do not count against the cycle's pass rate.
- Resetting for re-test -- After a bug fix is deployed, filter by "Fail" status, select all failed tests, and reset them to Not Executed so they can be re-run against the new build.
- Smoke test pass -- After a quick smoke test confirms core functionality is working, bulk-mark all smoke test cases as Pass in a single action.
Combine filtering with bulk execution for maximum efficiency. For example, filter the cycle to show only "Not Executed" tests assigned to you, then select all to quickly process your assigned queue.
Bulk status updates are recorded in history and can always be changed again, but be careful when bulk-marking tests as "Pass". Ensure each test genuinely passed before using bulk pass. Bulk passing tests without actual verification undermines the integrity of your test results and can mask real defects.
Track Progress
Every test cycle in TestKase displays real-time progress metrics that help you understand how much testing has been completed, what the outcomes are, and whether the cycle is on track. Progress tracking is available at a glance from the cycle listing view and in more detail within the cycle detail view.
The Progress Bar
The progress bar is a horizontal, color-coded bar that visually represents the distribution of execution statuses across all test cases in the cycle. Each segment of the bar is proportional to the number of test cases in that status:
- Green -- Passed test cases
- Red -- Failed test cases
- Yellow -- Blocked test cases
- Blue -- Skipped test cases
- Gray -- Not Executed test cases
As testers execute tests and update statuses, the bar updates in real time. This gives managers and stakeholders an immediate visual indicator of testing progress without needing to open detailed reports.
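Each segment's width is simply proportional to its status count. A minimal sketch of that calculation (the status names and the 40-character bar width are illustrative):

```python
def bar_segments(counts, width=40):
    """Proportional segment widths for a cycle progress bar.

    Rounding per segment can leave the total off by a character or two."""
    total = sum(counts.values())
    return {status: round(width * n / total) for status, n in counts.items()}

segments = bar_segments(
    {"Pass": 30, "Fail": 5, "Blocked": 5, "Skipped": 0, "Not Executed": 10}
)
print(segments)  # {'Pass': 24, 'Fail': 4, 'Blocked': 4, 'Skipped': 0, 'Not Executed': 8}
```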
Status Counts and Completion Percentage
In addition to the progress bar, each cycle displays numeric counts for each execution status and an overall completion percentage. The completion percentage is calculated as:
Completion % = (Passed + Failed + Blocked + Skipped) / Total Test Cases x 100

This formula counts all test cases that have been actioned (i.e., any status other than "Not Executed") as complete. The idea is that even a failed or blocked test has been reviewed and its status is known. Only "Not Executed" tests represent truly pending work.
Pass Rate
The pass rate is a separate metric that indicates the quality of the test results:
Pass Rate % = Passed / (Passed + Failed) x 100

The pass rate only considers test cases with a definitive outcome (Pass or Fail). Blocked and Skipped tests are excluded because they do not indicate whether the feature works. A high completion percentage with a low pass rate signals that testing is progressing but the build has significant issues.
How Progress Is Calculated -- Example Scenario
Consider a cycle with 100 test cases and the following distribution:
| Status | Count |
|---|---|
| Passed | 62 |
| Failed | 8 |
| Blocked | 5 |
| Skipped | 3 |
| Not Executed | 22 |
- Completion: (62 + 8 + 5 + 3) / 100 = 78%
- Pass Rate: 62 / (62 + 8) = 88.6%
This tells us that 78% of testing is done, with 22 tests still pending. Of the tests that have a definitive result, about 89% are passing. This indicates a relatively healthy build with some failures that need investigation, plus 5 blocked tests that may require environment fixes before they can be executed.
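The two formulas can be checked against this scenario with a few lines of Python (the dict keys are just the labels from the table above):

```python
def completion_pct(counts):
    # All actioned statuses count as complete; only Not Executed is pending.
    actioned = sum(n for status, n in counts.items() if status != "Not Executed")
    return 100.0 * actioned / sum(counts.values())

def pass_rate_pct(counts):
    # Only definitive outcomes (Pass/Fail) enter the pass rate.
    definitive = counts["Passed"] + counts["Failed"]
    return 100.0 * counts["Passed"] / definitive

counts = {"Passed": 62, "Failed": 8, "Blocked": 5, "Skipped": 3, "Not Executed": 22}
print(round(completion_pct(counts)))     # 78
print(round(pass_rate_pct(counts), 1))   # 88.6
```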
Interpreting Results for Release Decisions
When using cycle progress to inform release decisions, consider these guidelines:
- 100% completion with high pass rate (>95%) -- All tests executed and most passed. Review any failures to confirm they are known issues or low-severity. This is typically a green light for release.
- High completion but significant failures -- Testing is complete but critical test cases failed. Investigate failures, prioritize defect fixes, and plan a re-test round before release.
- Low completion -- Many test cases still at Not Executed. Testing is incomplete. Identify blockers, reassign resources, or extend the testing timeline.
- Many blocked cases -- A large yellow segment indicates environmental or dependency issues. Resolve blockers before drawing conclusions about overall test quality.
Use the Reports tab for cross-cycle analytics. While the progress bar shows the current state of a single cycle, Reports let you compare pass rates across multiple cycles, identify flaky tests, and track quality trends over time.