Lesson 5 of 9 · Automation begins
Refactor, Unit Testing & Code Coverage
Testing Pyramid · Clean Code · Refactoring · AAA Pattern · Code Coverage · TDD
→ next slide | ESC overview
Lesson Plan
Lesson 5: Refactor, Unit Testing & Code Coverage
Focus: Writing code you can actually test
1. The Testing Pyramid — why we automate and where unit tests fit in
2. Clean Code & Refactoring — principles from Robert Martin, safe refactoring with tests
3. Unit Testing — what it is, AAA pattern, FIRST principles
4. Code Coverage — line, branch, function coverage, thresholds in Jest
5. TDD — Red-Green-Refactor cycle, benefits, challenges
Teams Engagement: Code review activity — screen share a messy function; the class identifies all Clean Code violations in the chat, then refactors it together.
The Shift to Automation
The Testing Pyramid
Not all tests are equal. We automate the ones that are cheap to run and give fast feedback.
E2E / UI Tests
Integration / API Tests
Unit Tests
Why unit tests form the base:
- Run in milliseconds, not minutes
- Pinpoint exactly which function broke
- Run on every commit, in CI/CD
- No browser, no server, no database needed
Clean Code
What Is Clean Code?
Clean code is code that is easily understood by any team member — even 10 years from now.
- Simple to read and follow
- Easy to modify, extend, and maintain
- The same principles apply to test code as production code
Why it matters: Bad code can bankrupt a company.
Productivity vs time: messy code slows development exponentially.
Source: Robert C. Martin — Clean Code: A Handbook of Agile Software Craftsmanship
General Rules
Clean Code — General Rules
Follow standard conventions: Your team's agreed style is the right style.
Keep it simple, stupid (KISS): Simpler is always better. Reduce complexity as much as possible.
Boy Scout Rule: Leave the campground cleaner than you found it. Improve as you go.
Find root cause: Always look for the underlying cause, not just the symptom.
Don't Repeat Yourself (DRY): Duplication is the root of all software evil. Extract and reuse.
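A minimal sketch of DRY in action, using hypothetical rental quote functions (the names and the 10% long-rental rule here are illustrative, not from the codebase):

```javascript
// Hypothetical example: the same "10% off rentals longer than 7 days"
// rule is duplicated in two functions — a DRY violation.
function quoteCar(days, rate) {
  return days > 7 ? days * rate * 0.9 : days * rate;
}
function quoteVan(days, rate) {
  return days > 7 ? days * rate * 1.5 * 0.9 : days * rate * 1.5;
}

// After extracting the shared rule, the discount lives in ONE place:
function applyLongRentalDiscount(days, base) {
  return days > 7 ? base * 0.9 : base;
}
function quoteCarDry(days, rate) {
  return applyLongRentalDiscount(days, days * rate);
}
function quoteVanDry(days, rate) {
  return applyLongRentalDiscount(days, days * rate * 1.5); // vans cost 1.5×
}
```

If the discount rule ever changes, only `applyLongRentalDiscount` needs to change.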
Naming
Meaningful Names
// Bad ❌
let d;
let number;
let modymdhms;
let nameString;
// Bad ❌
function a(x, y) {
  return x - y;
}
// Good ✅
let elapsedTimeInDays;
let numberOfTasks;
let modificationTimestamp;
let name;
// Good ✅
function calculateDiscount(price, discount) {
  return price - discount;
}
Naming rules: · Descriptive and unambiguous
· Meaningful distinction (customer vs customerData)
· Pronounceable
· No magic numbers (use named constants)
Avoid: · Single-letter variables
· Encodings (nameString)
· Misleading abbreviations
· Numeric suffixes (data1, data2)
Functions
Function Rules
// Bad ❌ — validates, calculates AND logs
function handleRental(age, days, rate) {
  if (age < 21 || age > 75) {
    console.log("Driver not eligible");
    return null;
  }
  let total = days * rate;
  if (days > 7) total = total * 0.9;
  console.log(`Total: €${total}`);
  return total;
}
// Good ✅ — each function does ONE thing
function isEligibleDriver(age) {
  return age >= 21 && age <= 75;
}
function calculateTotal(days, rate) {
  const base = days * rate;
  return days > 7 ? base * 0.9 : base;
}
function handleRental(age, days, rate) {
  if (!isEligibleDriver(age)) return null;
  return calculateTotal(days, rate);
}
Rules:
· Small
· Do one thing
· Descriptive name
· Prefer fewer arguments
· No side effects
Clean Tests
Clean Code in Tests
Test code is code — it must be clean too.
Readable: Anyone should understand what is being tested and why.
One assertion per test: A failing test should point to exactly one issue.
Independent: Tests must not depend on each other's state.
Fast: Run in milliseconds — slow tests don't get run.
Repeatable: Same result every time, in any environment.
FIRST: Fast · Independent · Repeatable · Self-validating · Timely
Unit Testing
What Is Unit Testing?
Testing the smallest testable piece of code — a single function or method — in isolation from the rest of the system.
Benefits:
- Catch bugs early in development
- Safety net for refactoring
- Documents expected behaviour
- Promotes modular design
Limitations:
- Time-consuming to write initially
- Doesn't catch integration issues
- Requires ongoing maintenance
- Can create false sense of security
AAA Pattern
Arrange · Act · Assert
Arrange
Set up all necessary preconditions and inputs for the test.
Act
Execute the unit under test — call the function.
Assert
Verify the result matches the expected behaviour.
test('calculateDiscount returns correct price', () => {
  // Arrange
  const price = 100;
  const discountPercent = 20;
  // Act
  const result = calculateDiscount(price, discountPercent);
  // Assert
  expect(result).toBe(80);
});
Code Coverage
Code Coverage
The percentage of code lines/branches/paths executed during testing.
Line Coverage
% of code lines executed during tests. Simplest metric.
Branch Coverage
% of decision branches (if/else, loops) executed. More thorough.
function getDiscount(age) {
  if (age < 18) {        // Branch 1
    return 0.20;
  } else if (age > 65) { // Branch 2
    return 0.15;
  } else {               // Branch 3
    return 0.0;
  }
}
// Need 3 tests for 100% branch coverage
100% coverage doesn't mean 0 bugs. But <60% coverage is a red flag.
Code Coverage in Practice
Running Coverage with Jest
# Run tests with coverage report
npm test -- --coverage
# Output:
# -------------------|---------|----------|---------|---------|
# File | % Stmts | % Branch | % Funcs | % Lines |
# -------------------|---------|----------|---------|---------|
# rentalPrice.js | 92.31 | 83.33 | 100 | 92.31 |
# -------------------|---------|----------|---------|---------|
Set a threshold (package.json):
"jest": {
"coverageThreshold": {
"global": {
"branches": 90,
"functions": 100,
"lines": 90
}
}
}
What each metric means:
Statements — expressions executed
Branches — if/else paths taken
Functions — functions called
Lines — lines of code executed
Branch coverage is the most important — an uncovered branch is an untested scenario.
Unit Testing Tools
Unit Testing Frameworks
Jest (JS/TS): Most popular for JavaScript. Built-in mocking, coverage, snapshots.
Jasmine (JS): Behaviour-driven testing framework for JavaScript.
Mocha + Chai (JS): Flexible test runner + assertion library combo.
pytest (Python): Simple, powerful Python testing framework.
JUnit (Java): Standard Java unit testing framework.
NUnit (C#): .NET unit testing framework.
Teams Activity
🧹 Clean Code Review
Let's look at the Rental Car codebase.
- Spend 2 minutes reading the code silently
- List every Clean Code violation you see in Teams chat
TDD
Test-Driven Development
Write the test before writing the code. The test drives the implementation.
🔴 RED
Write a failing test.
Defines what you want to build.
→
🟢 GREEN
Write minimum code to make the test pass.
→
🔵 REFACTOR
Clean up the code.
Tests still pass.
TDD = small, safe steps. Each cycle is minutes, not hours. Fail fast, fix fast.
TDD in Practice
TDD Example
Step 1 — Write failing test (RED):
test('calculateRentalCost works', () => {
  expect(calculateRentalCost(3, 50)).toBe(150);
});
// Test fails — function doesn't exist yet
Step 2 — Write minimum code (GREEN):
function calculateRentalCost(days, dailyRate) {
  return days * dailyRate;
}
// Test passes!
Step 3 — Refactor:
// Add validation, handle edge cases,
// clean up names — tests still pass
function calculateRentalCost(days, dailyRate) {
  if (days <= 0 || dailyRate < 0) {
    throw new Error('Invalid input');
  }
  return days * dailyRate;
}
// Write another failing test for the
// edge case, then make it pass.
TDD Benefits & Challenges
TDD — Worth It?
Benefits:
- Bugs caught before they're written
- Forces thinking about requirements first
- Produces naturally testable, modular code
- Refactoring is safe — tests prevent regressions
- Fast feedback loop: fix bugs in seconds
Challenges:
- Requires a mindset shift — hard at first
- Upfront investment: tests take time to write
- Test suite needs maintenance as code evolves
- Complex dependencies are tricky to test in isolation
Topic 6
Using AI in Coding & Testing
AI tools (Copilot, ChatGPT, Cursor, Claude) are now part of the developer toolkit.
Use them — but use them with brains.
Good uses for AI:
- Generate boilerplate and scaffolding
- Draft a first set of test cases from a requirement
- Suggest edge cases you might have missed
- Explain code you didn't write
- Speed up refactoring — first pass
Where AI fails silently:
- It doesn't know your requirements
- It doesn't understand your system's context
- Generated tests are often shallow — happy path only
- Code looks clean but logic can be subtly wrong
- It confidently produces wrong answers
AI-generated code compiles. It might even pass basic tests. That doesn't make it correct.
Trust But Verify
TDD + AI — The Perfect Safety Net
TDD is the ideal framework for working with AI. You define correctness before AI touches the code.
🔴 YOU write the test
You define what "correct" means.
AI has no say here.
→
🟢 AI writes the code
Let it generate the implementation.
Read what it produced.
→
🔵 Tests judge the result
Does it pass? Are edge cases covered?
Trust the tests, not the AI.
AI Ground Rules
Using AI Responsibly
✅ Do:
- Use AI to speed up boilerplate and first drafts
- Use AI to suggest test cases — then review them
- Read everything AI writes before you commit it
- Run your full test suite against AI output
- Ask AI to explain its reasoning — gaps appear fast
❌ Don't:
- Submit AI output without understanding it
- Trust AI-generated tests as complete coverage
- Skip code review because "AI wrote it"
- Accept a passing test as proof of correctness
- Let AI define your requirements for you
AI is your junior developer who works fast and never complains — but always needs review.
Homework
💻 Homework — Rental Car
Task 1 — Refactor:
- Read the existing Rental Car code
- Apply Clean Code principles — meaningful names, small functions, etc.
- Add missing functionality
- All business requirements must work after the refactoring
Task 2 — Unit Tests:
- Write unit tests for all exported functions
- Use AAA pattern and descriptive test names
- Achieve 100% line and branch coverage
- Run to verify:
npm test -- --coverage
Task 3 — TDD:
- Work with a partner or on your own
- Use TDD to add one new functionality (e.g., weekend surcharge): write the failing test first, then implement it
- Verify the Red → Green → Refactor cycle
Submit a Pull Request with all the changes:
🔗 GitHub Link
Pull Request has automatic checks in place (Linter, Requirements Check, Test Coverage)