You already know AI can help with QA. What you need is a prompt library you can bookmark, copy from, and use under pressure.
This page is prompts first. Explanations are collapsed.
If you want theory, read AI Prompts for QA Testing: What Actually Works.

Best Setup for Using These Prompts
Option 1: Project Folders (ChatGPT Plus or Claude Pro)
- Create a “QA Testing” project folder
- The AI remembers your formats and context across conversations

Option 2: Two-AI Cross-Check (My Workflow)
- ChatGPT Plus for initial analysis (uses memory)
- Claude for validation and a fresh perspective

Option 3: Free Tier
- Save your context (bug format, product details, severity guidelines) in a doc
- Paste it at the start of every conversation

Pick one and stick with it.
- ✔ Start AI session with context setup prompt
- ✔ Copy the exact prompt you need
- ✔ Review and correct AI output
- ✔ Never ship without verification
Setup Prompt: Train Your AI QA Partner First
Start every AI session with this:
You are a senior QA engineer with 5+ years of experience testing production systems.
Your job:
- Identify bugs (functional, UI, UX, edge cases)
- Think like an angry user, not a happy demo scenario
- Flag anything that's technically correct but will piss off customers
- Write clear bug reports following standard formats
- Test against acceptance criteria with skepticism
Critical mindset:
- "It works" ≠ "users will tolerate it"
- Technical accuracy doesn't mean good UX
- If something would frustrate YOU as a user, it's a bug
When you analyze screenshots, videos, or features:
- Look for what developers MISSED, not what they built
- Think about edge cases they didn't test
- Consider real user behavior (typos, impatience, confusion)
- Flag design inconsistencies even if functionality works
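If you drive these prompts from a script rather than the chat UI, a minimal sketch of reusing this setup prompt as a system message at the start of every session (the `openai` client usage is real, but the model name and the `qa_context.md` file are assumptions):

```python
from pathlib import Path
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

# Assumption: you keep the setup prompt above saved in qa_context.md,
# as in Option 3 of the setup section.
qa_context = Path("qa_context.md").read_text()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in whatever model your plan includes
    messages=[
        {"role": "system", "content": qa_context},  # persistent QA persona
        {"role": "user", "content": "Review this feature against the acceptance criteria below..."},
    ],
)
print(response.choices[0].message.content)  # always review before acting on it
```

In the chat UI, pasting the same text at the top of the conversation achieves the same thing.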
Category 1: Screenshot → Bug Reports
Prompt: Convert Screenshot to Bug Report
[Attach screenshot]
As a senior QA engineer, analyze this screen and document ALL issues.
For each issue, create a bug report using this EXACT format:
**Bug ID:** [Leave blank]
**Title:** [Clear, specific, one-line summary]
**Severity:** [Critical/High/Medium/Low - explain why]
**Steps to Reproduce:**
1. [Exact steps]
**Expected Result:**
**Actual Result:**
**Environment:** [Browser/OS/Build if visible]
**Additional Notes:**
For each issue, classify it as:
- Functional
- UI
- UX
- Edge case
Focus on issues that would frustrate or confuse real users, not just things that technically fail.
When to use:
- Reviewing staging features
- Documenting bugs from Slack or Jira screenshots
Customize this:
- Replace the bug report template with your team’s format
- Align fields with your Jira or ticketing workflow
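If you batch screenshots through an API instead of attaching them in the chat window, a minimal sketch of sending one screenshot alongside the prompt above (the model name, file name, and `openai` client setup are assumptions):

```python
import base64
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

BUG_REPORT_PROMPT = """As a senior QA engineer, analyze this screen and document ALL issues.
For each issue, create a bug report using this EXACT format: ..."""  # paste the full prompt above

with open("staging_checkout.png", "rb") as f:  # hypothetical screenshot file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable model works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": BUG_REPORT_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)  # review and correct before filing
```

Keep the prompt text identical across screenshots so the resulting reports stay comparable.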
Prompt: Validate Against Acceptance Criteria
Here are the acceptance criteria for this feature:
[Paste acceptance criteria]
I'm testing based on this screenshot or description:
[Attach screenshot or describe behavior]
As a senior QA engineer, evaluate each criterion:
- Pass / Fail / Unclear
- Explain WHY with evidence
- Flag UX issues even if technically passing
List edge cases NOT covered by the acceptance criteria.
Be specific. No "looks good."
Best used when tickets technically pass but feel off. This prompt is about gap detection, not confirmation.
Category 2: Video Analysis
Prompt: Document User Flows from Video
[Attach screen recording]
As a senior QA engineer, document:
1. Every user action in sequence
2. Expected vs actual behavior at each step
3. UI issues (flicker, layout shifts, broken states)
4. UX friction (confusing flows, unclear feedback)
5. Performance issues visible in the recording
6. Edge cases or error states shown
Create structured bug reports with:
- Clear titles
- Exact reproduction steps
- Timestamp where the issue occurs
Best for:
- Multi-step workflows
- Intermittent bugs
- Issues developers didn’t personally witness
This saves hours of back-and-forth.
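Some tools won't accept a video file directly. In that case, a minimal sketch (the file name and sampling interval are assumptions) for pulling frames out of the recording with OpenCV so you can attach them as images instead:

```python
import cv2  # pip install opencv-python

VIDEO_PATH = "checkout_flow_recording.mp4"  # hypothetical screen recording
FRAME_EVERY_N_SECONDS = 2

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
step = int(fps * FRAME_EVERY_N_SECONDS)

frame_index, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(f"frame_{saved:03d}.png", frame)  # attach these as images
        saved += 1
    frame_index += 1
cap.release()
print(f"Saved {saved} frames for analysis")
```

The frame number times the sampling interval gives you an approximate timestamp to cite in the bug report.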
Category 3: Test Case Generation
Prompt: Generate Test Cases from User Story
Here is the user story or feature description:
[Paste user story]
As a senior QA engineer, generate test cases covering:
- Happy paths
- Negative scenarios
- Edge cases
- UX validation
For each test case include:
- Test Scenario
- Preconditions
- Test Steps
- Expected Result
- Test Data
- Priority
Flag assumptions or gaps in the story.
These are starting points, not gospel. You still own relevance and priority.
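One way to keep that ownership is to normalize every AI-suggested case into the same structure before it enters your suite. A minimal sketch; the fields mirror the prompt above, and the lockout scenario and its data are purely illustrative:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    scenario: str
    preconditions: List[str]
    steps: List[str]
    expected_result: str
    test_data: str
    priority: str  # e.g. "High" / "Medium" / "Low"

# One AI-suggested case after human review and editing:
login_lockout = TestCase(
    scenario="Account locks after 5 failed login attempts",
    preconditions=["User account exists", "Account is not already locked"],
    steps=["Enter valid email", "Enter wrong password 5 times", "Attempt a 6th login"],
    expected_result="Account is locked and a reset email is sent; no generic 500 error",
    test_data="user@example.com with deliberately wrong passwords",
    priority="High",
)
```

A case you can't fill in cleanly usually means the AI invented a precondition or skipped one; send it back with the correction prompt in Category 7.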
Prompt: Generate Negative Test Scenarios
Feature: [Describe the feature]
As a QA focused on breaking things, generate negative tests for:
- Invalid inputs
- Missing required data
- Boundary violations
- Permission issues
- System failures (timeouts, backend down)
For each:
- What breaks
- Input or condition
- Expected system behavior
- Priority based on real-world risk
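To make the output concrete, here is what a few of those negative scenarios can look like as automated checks. A minimal pytest sketch; `validate_quantity` and its limits are stand-ins for whatever your feature actually validates:

```python
import pytest

def validate_quantity(quantity) -> int:
    """Stand-in for the real input validation under test (illustrative only)."""
    if not isinstance(quantity, int) or isinstance(quantity, bool):
        raise ValueError("quantity must be an integer")
    if quantity < 1 or quantity > 10_000:
        raise ValueError("quantity out of allowed range")
    return quantity

@pytest.mark.parametrize("bad_value", [0, -1, 10_001, None, "abc", 2.5])
def test_rejects_invalid_quantities(bad_value):
    # Negative scenarios: invalid input and boundary violations should fail loudly.
    with pytest.raises(ValueError):
        validate_quantity(bad_value)

@pytest.mark.parametrize("edge_value", [1, 10_000])
def test_accepts_boundary_values(edge_value):
    # Edge cases sitting exactly on the boundary should still pass.
    assert validate_quantity(edge_value) == edge_value
```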
Category 4: Code Review
Prompt: Review Test Script for Missing Assertions
[Paste test script]
As a senior QA reviewer:
1. Summarize what this test intends to validate
2. Identify missing assertions
3. Identify edge cases NOT covered
4. Flag flaky patterns or timing risks
5. Assess whether UX behavior is validated or ignored
Output:
- Missing assertions
- Recommended additional tests
- Flaky or brittle patterns
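For reference, this is the kind of pattern the review should surface. A minimal Selenium sketch; the `driver` fixture and element IDs are assumptions about your app under test:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_save_shows_confirmation(driver):  # assumes a pytest fixture provides the driver
    driver.find_element(By.ID, "save-button").click()

    # Flaky pattern the review should flag:
    #   time.sleep(5)                       # hard wait -> slow and still racy
    #   driver.find_element(By.ID, "toast") # no assertion -> "passes" even when broken

    # Better: explicit wait plus a real assertion on user-visible behavior.
    toast = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "save-confirmation"))
    )
    assert "Saved" in toast.text
```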
Category 5: Production Bug Analysis
Prompt: Root Cause Analysis from Production Error
Production bug details:
**Error message:** [Paste error]
**User report:** [What customer said]
**Known steps:** [If any]
**Environment:** [Prod details]
As a senior QA, analyze:
1. Likely root cause
2. Why testing missed this
3. Test cases to prevent recurrence
4. Related areas at risk
5. How to reproduce in staging
Category 6: Stop AI Mid-Generation
If AI goes off track, stop it immediately:
Stop.
Focus ONLY on [specific issue].
Ignore everything else.
The problem is [specific failure].
Don’t let AI waste your time finishing the wrong analysis. Interrupt early. Redirect fast.
Category 7: Correcting AI When It’s Wrong
That assessment is incorrect.
Here is why it's wrong:
[Explain briefly]
The correct assessment is:
[Your correction]
Update the report accordingly.
Final Reminder
AI will:
- Miss domain context
- Misjudge severity
- Sound confident while being wrong
That’s normal.
Your value as QA is judgment, not generation.
AI-generated code can pass tests but break in production. The same applies to AI-generated test cases. Verify everything.
Related Reading
QAJourney.net:
- AI Prompts for QA Testing: What Actually Works
- AI-Assisted Manual Testing
- How to Write Effective Bug Reports