
AI Agent

A built-in conversational AI sidebar for creating test cases, querying data, summarizing coverage, and managing tests through natural language.


The AI Agent is a conversational sidebar panel built directly into the TestKase dashboard. It understands your project context — test cases, cycles, folders, execution data, and requirements — and lets you interact with everything through natural language.

Unlike external AI integrations (such as MCP Server), the AI Agent runs inside your dashboard with zero setup. Open it, type a request, and the agent executes it in real time.


Opening the AI Agent

There are three ways to open the AI Agent:

| Method | Description |
| --- | --- |
| Keyboard shortcut | Press Ctrl+K (Windows/Linux) or Cmd+K (macOS) from anywhere in the dashboard |
| Sidebar icon | Click the AI Agent icon in the left navigation sidebar |
| Command palette | Open the command palette and select "Open AI Agent" |

The agent panel slides in from the right side of the screen. You can use it alongside your regular workflow — it does not take over the page.

The AI Agent is available on every page within a project. You must have a project selected for the agent to have context about your test data.


What the AI Agent Can Do

The agent supports a wide range of actions across the test management lifecycle:

Test Case Management

  • Create test cases — "Create 3 test cases for the login feature" generates structured test cases with titles, steps, priorities, and expected results.
  • Search and filter — "Show all high priority test cases" or "Find blocked test cases in the Payments folder" filters your test case list using natural language.
  • Query data — "How many test cases are in this project?" or "What is the pass rate?" returns real-time data from your project.

Coverage and Reporting

  • Summarize coverage — "Summarize test coverage for this project" returns a breakdown of pass rates, execution counts, and untested areas.
  • Execution insights — "How many test cases failed in the last cycle?" queries execution results and returns detailed answers.
  • Pass rate analysis — "What is the overall pass rate?" calculates metrics across all executed test cases.
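The pass-rate math behind these queries is simple division over executed results. A minimal sketch, assuming hypothetical execution records (the field names are illustrative, not TestKase's actual data model):

```python
# Hypothetical execution records; "status" values mirror the
# pass/fail/blocked/skipped statuses described in these docs.
executions = [
    {"status": "passed"},
    {"status": "failed"},
    {"status": "passed"},
    {"status": "blocked"},
]

def pass_rate(executions):
    """Pass rate as a percentage of executed (passed or failed) test cases."""
    executed = [e for e in executions if e["status"] in ("passed", "failed")]
    if not executed:
        return 0.0
    passed = sum(1 for e in executed if e["status"] == "passed")
    return 100.0 * passed / len(executed)

print(f"{pass_rate(executions):.1f}%")  # → 66.7%
```

Blocked and skipped cases are excluded from the denominator in this sketch; the agent's exact formula may treat them differently.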

Dashboard Interaction

  • Apply filters — "Apply filters for status: blocked and priority: critical" sets dashboard filters directly, saving you from manual filter configuration.
  • Navigate — The agent can direct you to specific areas of the dashboard based on your request.

Using Suggestion Cards

When you first open the AI Agent, you see prebuilt suggestion cards — common actions you can trigger with a single click:

| Suggestion Card | What It Does |
| --- | --- |
| Summarize test coverage | Get a quick overview of pass/fail/pending stats |
| Create test cases for login | Auto-generate cases from a feature description |
| Show high priority test cases | Filter and surface the most critical tests |
| How many test cases failed? | Count test cases by execution status |

Click any card to send that prompt to the agent instantly. This is a quick way to run common tasks without typing.

Suggestion cards are contextual — they reference your current project's data. The results will vary based on what test cases and cycles exist in the project.


Typing Your Own Prompts

For anything beyond the suggestion cards, type a free-form request in the input field at the bottom of the panel. The agent understands natural language — you do not need to use specific commands or syntax.

Example Prompts

| Prompt | What Happens |
| --- | --- |
| "Summarize test coverage for this project" | Returns a coverage breakdown with pass rates, execution counts, and untested areas |
| "Create 3 test cases for the login feature" | Generates three structured test cases with steps, priorities, and expected results |
| "Show all high priority test cases" | Filters the test case list to show only high-priority items |
| "How many test cases failed in the last cycle?" | Queries the most recent test cycle and returns the failure count with details |
| "Apply filters for status: blocked and priority: critical" | Sets dashboard filters to show only blocked, critical test cases |
| "What is the overall pass rate?" | Calculates and returns the pass rate percentage across all executed test cases |

Tips for Better Results

  1. Be specific — "Create 3 test cases for the password reset flow covering email validation, expired links, and successful reset" produces better results than "Create test cases."
  2. Reference existing data — The agent knows your folders, cycles, and labels. Use them: "Show test cases in the Checkout folder with status failed."
  3. Ask follow-up questions — The agent maintains conversation context within a session. You can say "Now filter those by high priority" after a previous query.
  4. Use action verbs — Start prompts with "Create," "Show," "Find," "Summarize," "Apply," or "Count" for clearer intent.

Real-Time Streaming

The AI Agent streams responses in real time. As it processes your request:

  1. Thinking indicator — A loading state shows while the agent analyzes your prompt.
  2. Tool execution — If the agent needs to call internal tools (e.g., creating a test case, querying data), you see which tools are being executed and their status.
  3. Streaming response — The answer appears progressively, word by word, so you can start reading before the full response is complete.

This transparency means you always know what the agent is doing and can verify that it took the correct actions.
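The three-phase flow above can be pictured as a stream of typed events. This is a sketch of what a client consuming such a stream might do; the event names and shapes are illustrative assumptions, not TestKase's actual streaming protocol:

```python
# Illustrative event stream: thinking → tool execution → token-by-token answer.
# Event types and fields are assumptions for this sketch.
def stream_events():
    yield {"type": "thinking"}
    yield {"type": "tool", "name": "search_test_cases", "status": "succeeded"}
    for token in ["3", " test", " cases", " failed", "."]:
        yield {"type": "token", "text": token}

answer = []
for event in stream_events():
    if event["type"] == "thinking":
        print("analyzing prompt...")          # loading state
    elif event["type"] == "tool":
        print(f"tool {event['name']}: {event['status']}")  # tool status line
    elif event["type"] == "token":
        answer.append(event["text"])          # render progressively

print("".join(answer))  # → 3 test cases failed.
```

Rendering tokens as they arrive is what lets you start reading before the full response is complete.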


Tool Execution Tracking

The AI Agent uses internal tools to perform actions. When a tool is invoked, you see:

  • Tool name — Which tool was called (e.g., "create_test_case", "search_test_cases", "get_reports")
  • Execution status — Whether the tool succeeded or failed
  • Result summary — A brief description of what the tool returned

This is particularly useful for complex requests that involve multiple steps. For example, "Create a test case for login and add it to the Sprint 42 cycle" may invoke multiple tools in sequence.
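One way to picture the record shown for each invocation, using the three fields listed above. The `add_to_cycle` tool name below is hypothetical; only `create_test_case`, `search_test_cases`, and `get_reports` are named in these docs:

```python
from dataclasses import dataclass

# Hypothetical record of one tool invocation, mirroring the three
# fields shown in the panel: tool name, execution status, result summary.
@dataclass
class ToolCall:
    name: str
    status: str   # "succeeded" or "failed"
    summary: str

# A multi-step request may produce a sequence of calls:
calls = [
    ToolCall("create_test_case", "succeeded", "Created test case for login"),
    ToolCall("add_to_cycle", "succeeded", "Added it to the Sprint 42 cycle"),
]
for c in calls:
    print(f"{c.name}: {c.status} - {c.summary}")
```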


Project Context

The AI Agent is project-scoped. It has access to:

  • Test cases — All test cases in the current project, including their steps, priorities, labels, statuses, and folder locations
  • Test cycles — Cycle names, statuses, linked test cases, and execution results
  • Folders — The full folder hierarchy for organizing test cases
  • Execution data — Pass/fail/blocked/skipped results across all cycles
  • Requirements — Linked requirements and their traceability
  • Defects — Reported defects and their associations

You must select an organization and project before using the AI Agent. If no project is selected, the agent will prompt you to select one before proceeding.


Credit Usage

Each message sent to the AI Agent consumes 1 AI credit from your organization's credit balance.

  • Credits are deducted when you send a message, regardless of the complexity of the response.
  • Clicking a suggestion card also consumes 1 credit (it sends a message on your behalf).
  • The agent maintains conversation context within a session, so follow-up questions do not need to repeat earlier context — this is more credit-efficient than starting a new conversation.
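Since every message costs exactly one credit, estimating a session's cost reduces to counting messages. A trivial sketch:

```python
# 1 credit per message sent; suggestion-card clicks each send one
# message on your behalf, so they count the same.
def session_cost(messages_typed: int, cards_clicked: int) -> int:
    return messages_typed + cards_clicked

print(session_cost(messages_typed=4, cards_clicked=2))  # → 6
```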

See Credits & Billing for details on purchasing credits and monitoring usage.


Requirements

| Requirement | Details |
| --- | --- |
| AI enabled | AI features must be enabled at the organization level (Organization Settings → AI Features) |
| Credits available | At least 1 AI credit in the organization balance |
| Project selected | A project must be selected for the agent to have context |
| Role | Any project member can use the AI Agent; no special permissions required |

Limitations

  • The AI Agent operates within the current project only. It cannot query data across multiple projects in a single request.
  • Complex multi-step operations (e.g., "Create a test plan, add three cycles, and generate a report") may not always execute in the expected order. Break these into separate prompts for reliability.
  • The agent does not modify or delete existing test cases unless explicitly asked. Read operations (search, filter, summarize) are always safe.
  • Conversation context is maintained within a session. Closing and reopening the panel starts a new session.

AI Agent vs. MCP Server

TestKase offers two ways to interact with AI:

| | AI Agent | MCP Server |
| --- | --- | --- |
| Where | Built into the TestKase dashboard | External AI tools (Claude, Copilot, Cursor) |
| Setup | Zero setup; just open the panel | Requires MCP server configuration |
| Access | Web browser | IDE, terminal, desktop app |
| Best for | Quick queries, dashboard interaction, on-the-fly test creation | Bulk operations, automation workflows, developer-centric testing |
| Credits | 1 credit per message | No credit cost (uses your own AI tool's API) |

Both can be used together. Use the AI Agent for quick tasks during QA workflows, and the MCP Server for deeper automation from your development environment.


Next Steps

  • AI Test Case Generation — Generate test cases from natural language descriptions using the dedicated generation modal.
  • AI Reports — ML-powered analytics including flaky test detection, risk heatmaps, and predictive failure analysis.
  • MCP Server Integration — Connect external AI agents to TestKase via the Model Context Protocol.
  • Credits & Billing — Purchase credits, monitor usage, and understand costs.