TestKase Docs
AI & Advanced AI Features

Best Practices & FAQ

Guidelines for effective AI usage and answers to common questions.

Best Practices

  1. Always review AI-generated content. AI output is a starting point, not a finished product. Review every generated test case for accuracy, completeness, and relevance to your specific application before saving.
  2. Write detailed descriptions for generation. The more specific your input, the better the output. Include feature details, user roles, edge cases, and expected behaviors in your description.
  3. Generate in iterations. Start with a broad description for general coverage, then run focused generations for specific areas (security, negative, and accessibility testing). This produces a more comprehensive test suite than a single generation.
  4. Use AI reports with sufficient data. AI reports require historical execution data to produce meaningful insights. Wait until you have completed at least 3-5 test cycles before relying on AI-powered analytics.
  5. Combine AI with human expertise. Use AI generation for routine and repetitive scenarios (form validation, CRUD operations), and write manual test cases for complex business logic, workflows, and domain-specific scenarios that require deep product knowledge.
  6. Monitor credit usage. Track which features consume the most credits and adjust usage patterns. Batch AI report generation to sprint boundaries rather than running them daily.
  7. Use the chat for exploration. The AI chat is most valuable for brainstorming test scenarios, identifying coverage gaps, and getting testing strategy advice. Use it as a thinking partner rather than a test case factory.
  8. Keep AI features disabled when not needed. If your team is in a documentation or planning phase with no testing activity, disable AI to prevent accidental credit usage.

FAQ

Do AI features work offline?

No. AI features require an active internet connection as they use cloud-based AI models for processing. An active subscription with available credits is also required.

Are my test cases and project data sent to external AI services?

When you invoke an AI feature, relevant data (e.g., your description for generation, project metrics for reports) is sent to the AI model via encrypted connections. The data is processed in real-time and is not stored or used for training by the AI provider.

Can I control which users can use AI features?

AI features are controlled at the organization level via the Enable AI toggle. When enabled, all users can use AI features. Role-based restrictions on specific AI actions (e.g., restricting report generation to admins) can be configured through the permissions matrix.

What happens when I run out of credits?

When your credit balance reaches zero, AI features show a "No credits available" message. The AI Generate button, chat, and AI reports become inactive. All other TestKase features continue to work normally. Purchase more credits to resume AI usage.

Can I generate test cases in different languages?

The AI generates test cases in the language of your input description. If you write the description in English, output is in English. You can write descriptions in other languages and the AI will attempt to generate test cases in that language, though English produces the best results.

How accurate are AI-generated test cases?

AI-generated test cases are generally high quality for common scenarios (form validation, CRUD, authentication flows). However, they should always be reviewed for domain-specific accuracy. The AI may not know your specific business rules, infrastructure constraints, or regulatory requirements.

Do AI credits expire?

No. Purchased credits do not expire and carry over indefinitely. There is no monthly reset. Credits are only consumed when AI features are actively used.

Can I get a refund on unused credits?

Please refer to the TestKase billing and refund policy or contact support for information about credit refunds.

How are AI report results different from regular reports?

Regular reports (Tier 1 and 2) show what happened — counts, percentages, trends. AI reports (Tier 3) go further: they analyze why things happened and predict what might happen next. They use ML models to identify patterns, anomalies, and risks that aren't visible in simple metrics.

How often should I run AI reports?

For most teams, running AI reports at the end of each sprint or test cycle provides the best balance of insights and credit cost. Reports like Flaky Tests and Suite Optimization are most valuable when run periodically (monthly or quarterly) rather than daily.
