Lesson 3 of 9
Bug Reports & Test Documentation
Test Cases · Regression · What is a defect · Bug report anatomy · Severity & Priority
→ next slide | ESC overview
Lesson Plan
Lesson 3: Bug Reports & Test Documentation
Focus: Finding, reporting, and documenting test work professionally
1. From User Story to Test Case — the chain from requirement to verification
2. Test Case Anatomy & Best Practices — components, writing good cases
3. Confirmation & Regression Testing — re-testing and regression
4. What is a defect — error, bug, failure chain
5. Bug Reports — anatomy, fields, severity & priority
6. Bug Lifecycle & RCA — from discovery to closure
Homework: Find a real bug on any website. Write a full bug report and submit as a GitHub Issue to the Bug-Reporting repo.
From Requirement to Test
User Story → Requirement → Test Case
Test cases are derived directly from requirements.
This chain makes them traceable — you can prove which tests cover which requirements.
User Story
"As an online shopper, I want to filter search results by price range, so that I can find products within my budget more easily."
Requirement (derived from user story)
The search results page shall support filtering by a minimum and maximum price, showing only products within that range.
Test Case (derived from requirement)
Verify that entering min $50 and max $200 in the price filter shows only products priced $50–$200.
Key Concept
One user story → many requirements. One requirement → many test cases (happy path, negative cases, edge cases). When a test case fails, you write a bug report — which we cover later in this lesson.
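The traceability idea can be sketched in code: a matrix mapping requirement IDs to the test cases that cover them. A minimal example, where all requirement and test case IDs are invented for illustration:

```python
# Hypothetical traceability matrix: requirement IDs mapped to the
# test case IDs that cover them. All IDs here are invented examples.
traceability = {
    "REQ-SEARCH-01": ["TC-014", "TC-015", "TC-016"],  # price filter: happy, negative, edge
    "REQ-SEARCH-02": ["TC-020"],                      # sort by price
    "REQ-SEARCH-03": [],                              # filter by brand -- no coverage yet!
}

# A coverage check: list every requirement with no test case at all.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # -> ['REQ-SEARCH-03']
```

This is exactly the "prove which tests cover which requirements" property: an empty list next to a requirement is a visible coverage gap.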
Test Case Anatomy
Anatomy of a Test Case
A test case is a precise, repeatable script that anyone on the team can execute and get the same result.
ID / Title: Unique identifier (e.g. TC-001) plus a short name that describes what is being tested
Preconditions: What must already be true before starting — logged in? Specific data in the system?
Test Steps: Numbered actions, each doing exactly one thing. Someone who has never used the app should be able to follow them.
Test Data: Exact values to use — which user account, which input, which environment
Expected Results: What the system should do or show at each step (or at the end). Reference the requirement.
Actual Results: Filled in during execution — what actually happened. If it differs from expected: log a bug.
Postconditions: What state the system should be in after the test is done
Status: Pass / Fail / Blocked / Not Executed — updated each test run
Test Case Example
Test Case in Practice
TC-014 · Price filter shows results within range · Preconditions: logged in as registered user, on /search
| # | Test Step | Expected Result | Actual Result |
|---|-----------|-----------------|---------------|
| 1 | Locate the "Price Range" filter on the page | Filter is visible with min and max input fields | Pass |
| 2 | Enter 50 in the min field and 200 in the max field | Both fields accept the values | Pass |
| 3 | Click "Apply" | Results list reloads. All shown products are priced $50–$200. | FAIL — products under $50 shown. → Bug PRF-115 |
| 4 | Click "Clear" to reset the filter | All products are shown again | Not executed (step 3 failed) |
When step 3 fails, you write bug report PRF-115. The connection between test cases and bug reports is exactly how professional QA tracking works — we build that bug report later in this lesson.
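A manual test case like TC-014 maps directly onto an automated check. A minimal Python sketch, where `filter_by_price` and the product list are hypothetical stand-ins for the real search backend:

```python
# A sketch of TC-014 as an automated check. filter_by_price and the
# product data are invented stand-ins, not the real application code.
def filter_by_price(products, min_price, max_price):
    """Return only products whose price is within [min_price, max_price]."""
    return [p for p in products if min_price <= p["price"] <= max_price]

products = [
    {"name": "Earbuds",  "price": 32},
    {"name": "Keyboard", "price": 50},    # boundary: exactly the minimum
    {"name": "Monitor",  "price": 200},   # boundary: exactly the maximum
    {"name": "Laptop",   "price": 899},
]

results = filter_by_price(products, 50, 200)

# Expected result of step 3: only products priced $50-$200 remain,
# and both boundaries are inclusive.
assert all(50 <= p["price"] <= 200 for p in results)
assert [p["name"] for p in results] == ["Keyboard", "Monitor"]
```

Note how the test data deliberately includes the $50 and $200 boundary values, so the inclusive range stated in the requirement is actually exercised.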
Best Practices
Writing Test Cases That Actually Help
✅ One test case = one scenario: Don't test login AND the price filter in the same case. When it fails, you won't know which feature broke.
✅ Include negative tests: What should NOT work is as important as what should. Always test invalid input, missing fields, unauthorized access.
✅ Be specific about expected results: "Works correctly" is not a valid expected result. Write exactly what should appear, what value, what message.
✅ Name tests descriptively: "TC-014 · Price filter shows results within range" tells you what passed or failed without opening it.
❌ Don't chain dependent tests: If test 3 requires the output of test 2, one failure cascades. Keep tests independent.
✅ Include edge cases: What happens at boundaries? Empty input? Maximum length? Special characters? These are where bugs hide.
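Negative and edge cases for the same kind of feature can be sketched like this (the `filter_by_price` function and its validation rules are hypothetical, for illustration):

```python
# Negative and edge cases for a hypothetical price filter:
# what should NOT work matters as much as the happy path.
def filter_by_price(products, min_price, max_price):
    if min_price < 0 or max_price < 0:
        raise ValueError("prices cannot be negative")
    if min_price > max_price:
        raise ValueError("min price cannot exceed max price")
    return [p for p in products if min_price <= p["price"] <= max_price]

products = [{"name": "Mug", "price": 12}]

# Edge case: empty product list -> empty result, no crash
assert filter_by_price([], 50, 200) == []

# Negative case: min > max must be rejected,
# not silently return everything
try:
    filter_by_price(products, 200, 50)
    assert False, "expected a ValueError"
except ValueError:
    pass
```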
Confirmation & Regression
Re-testing & Regression Testing
Confirmation (Re-testing)
A bug was fixed. Run the exact same test case again — same steps, same data, same environment.
Only mark as "fixed" when it passes identically. Testing only the fix is not enough — nearby code often breaks too.
Regression Testing
Every code change can break something that was working. After any change, re-run tests for related features.
- Full regression — run everything, very thorough, slow
- Selective — run tests related to changed areas, faster
Why This Matters
Running the same 200 tests manually after every code change is tedious and error-prone. This is the main reason testing gets automated — not because it's impossible manually, but because humans can't do it reliably at scale.
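The automation argument can be made concrete with a tiny sketch: a regression suite is just a list of checks that a machine re-runs identically after every change. The three tests here are trivial placeholders:

```python
# Why regression testing gets automated: a machine re-runs every check
# identically after each change. These tests are trivial placeholders.
def test_login():        assert "user" == "user"
def test_price_filter(): assert 50 <= 120 <= 200
def test_checkout():     assert 2 * 19.99 == 39.98

regression_suite = [test_login, test_price_filter, test_checkout]

def run_suite(tests):
    """Run every test; collect names of failures instead of stopping."""
    failures = []
    for test in tests:
        try:
            test()
        except AssertionError:
            failures.append(test.__name__)
    return failures

# After any code change the whole suite runs again. A machine does this
# the same way every time, which a human cannot at 200+ tests per change.
assert run_suite(regression_suite) == []
```

Real teams use a test runner such as pytest for this; selective regression then becomes a matter of running only the tests tagged for the changed area.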
Defects
What Is a Bug?
In everyday speech, defect / bug / issue all mean the same thing: the actual result differs from the expected result defined by requirements. It doesn't have to crash the app. Strictly, though, error, defect, and failure are distinct links in a chain:
Error
A human mistake — wrong assumption, misread requirement, typo in logic.
e.g. Developer misread "age ≥ 18" as "age > 18"
Defect / Bug
The flaw in code or documentation that results from the error.
e.g. The comparison in the code is written as > instead of ≥
Failure
The observable wrong behaviour when the defect is actually triggered.
e.g. An 18-year-old can't register even though they should be able to
Key Concept
Not every error causes a failure. A bug in code that never runs is a defect — but you'll only see a failure when that code path is triggered. That's why testing edge cases matters.
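The age example from this slide, written out as code, shows how the defect stays invisible until the boundary value triggers a failure:

```python
# The error -> defect -> failure chain as code.
# Error: the developer misread "age >= 18" as "age > 18".
def can_register_buggy(age):
    return age > 18      # Defect: '>' should be '>='

def can_register_fixed(age):
    return age >= 18

# The defect only becomes a visible failure on the boundary value:
assert can_register_buggy(25) is True    # defect present, but no failure yet
assert can_register_buggy(18) is False   # failure! an 18-year-old is rejected
assert can_register_fixed(18) is True    # the fix restores the requirement
```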
Cost of Bugs
Why Production Bugs Are Expensive
The later a bug is found, the more it costs to fix. Each stage multiplies the effort required.
Requirements phase: 1×
Design phase: 3×
Development: 10×
QA / Testing: 25×
Production: 100×
Why bugs escape to production:
- Can't be reproduced reliably
- Found too late in the cycle
- Team decided not to fix it
- Fix introduced a new bug
- "It's not a bug, it's a feature" 😄
Key Concept
This is why testers must be involved early in the process — not just at the end.
A tester reviewing requirements can prevent production incidents that would cost 100× more to fix.
Bug Report
Who Reads a Bug Report — and Why?
A good bug report serves multiple audiences at once. Understanding who reads it helps you write the right level of detail.
Developer: Needs exact steps to reproduce and environment details to know where to look. The more precise, the faster the fix.
Product Manager: Needs impact assessment — how many users affected, which features, how visible it is to customers.
Process Manager: Needs root cause info — what process failure allowed this bug to reach this stage?
The Whole Team: Needs clarity and professional tone. Bugs are system failures, not personal ones. Never write "you broke X".
Always check for duplicates before filing. A developer receiving the same bug 5 times from 5 reporters wastes everyone's time.
Report Fields (1/2)
Required Fields — Part 1
📋 Summary / Title
Short and descriptive: names which feature and what goes wrong. Written so it's useful in a bug list without opening the full report.
✅ "[Search] Price filter shows products below $50 minimum"
❌ "Filter broken" — unactionable, unsearchable
🖥️ Environment
OS · Browser + version · App version · Environment (dev / QA / staging / production) · Device type
A bug that only appears in Safari on iOS needs a very different fix than one in Chrome on Windows.
⚠️ Severity — How badly does this break the system?
- Critical — crash, data loss, system unusable
- Major — key feature broken, no workaround
- Minor — bug with a workaround
- Trivial — cosmetic only
🔴 Priority — How urgently must this be fixed?
- P0: Fix now, blocks all usage
- P1: Fix this sprint, key flow affected
- P2: Fix soon, app still usable
Report Fields (2/2)
Required Fields — Part 2
👣 Steps to Reproduce
Numbered. Exact. Starting from a known state (e.g. "logged in as regular user"). Every step should be reproducible by someone who has never seen this before.
Vague: "Go to search and filter"
Exact: "1. Open /search 2. Set min price $50, max $200 3. Click Apply 4. Scroll results"
✅ Expected Result
What the system should do at the last step — ideally referencing the requirement or design spec that defines this behaviour.
❌ Actual Result
What the system actually does. Be specific — not "it doesn't work" but "Products priced at $32 and $41 appear in the results list."
📸 Visuals / Logs
Screenshots, screen recordings. Browser console errors. Network tab response. Server logs with timestamps. The more evidence, the faster the fix.
Full Example
Bug Report — Complete Example
PRF-115 · [Search] Price filter shows products below the minimum price
Env: Chrome 124 · macOS 14.4 · QA env v2.3.1 · Severity: Major · Priority: P1
Steps to reproduce:
1. Log in as a registered user
2. Navigate to the product search page
3. Locate the "Price Range" filter
4. Enter min $50, max $200
5. Click "Apply"
6. Scroll through the results list
Expected:
All products in results are priced between $50 and $200 (inclusive).
Actual:
Products priced at $12, $32, and $41 appear in results. The minimum price filter is not being applied.
Notice
This is exactly the bug that TC-014 step 3 caught. The actual result names specific prices — not "filter doesn't work." The developer can immediately reproduce the exact failure case.
Severity vs Priority
Severity ≠ Priority
Severity is technical — how broken is it?
Priority is business — how urgently do we fix it?
High Severity + High Priority
Payment crashes on checkout for all users. Fix immediately — every minute costs money.
Low Severity + High Priority
CEO's name is misspelled on the homepage. Not breaking anything — but fix it before the morning press release.
High Severity + Low Priority
Crash when exporting a legacy report format used by 2 internal users once a year. Real crash — but low business impact.
Low Severity + Low Priority
Minor pixel misalignment in the footer on Firefox only. Nice to fix eventually.
Key Concept
QA sets severity based on technical impact. Product/business sets priority based on business context.
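One way to picture the two axes is as independent enums. A Python sketch; the `bugs` table mirrors the four quadrants on this slide, and the record format itself is invented for illustration:

```python
# Severity and priority as independent axes. The example bugs mirror
# the four quadrants above; the data structure is illustrative.
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3
    TRIVIAL = 4

class Priority(Enum):
    P0 = 0
    P1 = 1
    P2 = 2

bugs = {
    "checkout payment crash":  (Severity.CRITICAL, Priority.P0),  # high sev, high pri
    "CEO name misspelled":     (Severity.TRIVIAL,  Priority.P0),  # low sev, high pri
    "legacy export crash":     (Severity.CRITICAL, Priority.P2),  # high sev, low pri
    "footer pixel offset":     (Severity.TRIVIAL,  Priority.P2),  # low sev, low pri
}

# The axes vary independently: the same severity appears with
# different priorities, so severity alone never determines priority.
assert bugs["checkout payment crash"][0] == bugs["legacy export crash"][0]
assert bugs["checkout payment crash"][1] != bugs["legacy export crash"][1]
```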
Bug Lifecycle
Bug Lifecycle — From Discovery to Closure
Every bug follows a defined path.
Understanding the lifecycle helps you know what action to take at each step.
New → Assigned → In Progress → Fixed → Re-test → Closed ✓
Alternative paths (all valid):
- Re-opened — fix didn't work, or caused a regression
- Rejected / Won't Fix — not a bug, or intentional behaviour
- Deferred — real bug, fix scheduled for a later sprint
- Duplicate — already reported, link to original
- Cannot Reproduce — needs more info from reporter
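The lifecycle can be sketched as a small state machine. The states mirror this slide; the exact transition table is illustrative, since real tools let teams configure their own workflow:

```python
# The bug lifecycle as a state machine. States mirror the slide above;
# the transition table is an illustrative sketch, not any tool's workflow.
TRANSITIONS = {
    "New":         {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned":    {"In Progress", "Cannot Reproduce"},
    "In Progress": {"Fixed"},
    "Fixed":       {"Re-test"},
    "Re-test":     {"Closed", "Re-opened"},
    "Re-opened":   {"Assigned"},
}

def advance(state, new_state):
    """Move a bug to a new state only if the lifecycle allows it."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Happy path from discovery to closure:
state = "New"
for nxt in ["Assigned", "In Progress", "Fixed", "Re-test", "Closed"]:
    state = advance(state, nxt)
assert state == "Closed"
```

A failed re-test follows the Re-opened path instead: `advance("Re-test", "Re-opened")` is legal, while skipping straight from New to Closed raises an error.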
Bug management tools:
- Jira — industry standard, full lifecycle tracking
- GitHub Issues — used in this course
- Linear — modern, fast, developer-friendly
- YouTrack — JetBrains tool, great for QA workflows
Root Cause Analysis
RCA — Root Cause Analysis
Finding why a bug occurred — not just fixing it and moving on.
Good RCA prevents the same class of bug from appearing again.
Human Cause
Developer misunderstood the requirement. Wrong assumption about an edge case. Typo in logic.
Fix: better requirements, more review
Organizational Cause
No peer review process. Unclear specification. No testing environment. Unrealistic deadline.
Fix: improve process, not blame
Technical / Physical Cause
Infrastructure issue — server failure, network timeout, memory limit, third-party service.
Fix: monitoring, redundancy
The goal of RCA is to improve the system, not assign blame. The same bug happening twice means the process didn't learn.
Self-Check
✅ Can You Answer These?
Test your understanding before moving on. If any of these feel shaky, go back and re-read the relevant slide.
- What is the difference between a defect and a failure?
- A crash on a rarely-used admin page — what Severity? What Priority? Why might they differ?
- Name 3 fields that belong in a bug report and explain what each communicates to a developer.
- Draw the bug lifecycle from memory. What are two alternative paths a bug can take besides being fixed?
- A user story says "users can search by product name." Write one positive and one negative test case for this requirement.
- What is regression testing and why does it eventually need to be automated?
Homework
🐛 Lesson 3 Homework — Write a Real Bug Report
- Choose any app or website you use regularly
- Perform exploratory testing — look for UI issues, errors, unexpected behaviour, edge cases
- Find a real problem (broken feature, wrong error message, missing validation, UI glitch...)
- Submit it as a 🔗 GitHub Issue:
- Click New Issue → select the Bug Report Template — it has all the fields pre-filled for you
- Existing issues in the repo are real examples using the same template — use them as a reference
Also answer:
Why did you choose this bug?
What severity and priority would you assign, and why?