
Testing Mindset: AC Green or It Didn't Happen

15 min read · Team Onboarding Series #6
#onboarding #testing #quality

"It compiled. Ship it."

You're Kara. You just finished implementing a feature. You ran npm run build. No errors. You deployed to Vercel. The page loads. You tell the team it's ready.

But did you test it? Did you try signing up with an invalid email? Did you check if the OTP actually arrives? Did you verify the session expires after 7 days?

No. You assumed it works because it compiled.

Bigbosexf tests it. Finds 3 bugs. Client sees broken demo. You wasted 4 hours.

The Iron Law of ScopeLock

🔒 AC Green or It Didn't Happen

If the acceptance tests don't pass, the feature doesn't exist.

Not "mostly works." Not "works on my machine." Not "I manually tested it once."

Green tests = Feature exists.
Red/missing tests = Feature doesn't exist.

Why This Matters

  • Client trust: Green tests = objective proof. Client sees we delivered what we promised.
  • Payment: We only get paid when tests pass. No green = no money.
  • Future you: When you touch this code again in 3 months, tests prevent breaking existing features.
  • Team coordination: Green tests mean Bigbosexf can verify quickly without guessing what to test.
  • Reputation: Shipping broken code = client complains on Upwork = harder to win future jobs.

❌ False Equivalences (Don't Fall for These)

"It compiled" ≠ "It works"

TypeScript catches type errors. It doesn't verify business logic, API calls, or user flows.
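To make the gap concrete, here's a hypothetical snippet (the function name and validation rule are invented for illustration). It type-checks cleanly, yet the business logic is wrong:

```typescript
// Compiles with zero errors — TypeScript verifies the types, not the behavior.
function isValidEmail(email: string): boolean {
  // Bug: only checks for "@", so junk like "not-an-email@" passes.
  return email.includes("@");
}

// Type-checks fine, returns the wrong answer:
isValidEmail("not-an-email@"); // true, but this is not a valid address
```

Only a test that asserts on the actual result catches this; the compiler never will.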

"I tested it once" ≠ "It's tested"

Manual testing is unreliable. You forget edge cases. Automated tests run every time.

"The page loads" ≠ "The feature works"

The page can load even though the form doesn't submit, the email never sends, and the session doesn't persist.

"No errors in console" ≠ "It's correct"

Silent failures are common. Email might not send. Database write might fail. No error shown.
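A sketch of what a silent failure looks like in code — the provider error and handler shape below are invented for illustration, but the pattern is real: the error is swallowed, so the caller reports success and the console stays clean.

```typescript
// Simulated provider failure (stands in for a real email API call).
async function sendOtpEmail(_to: string): Promise<void> {
  throw new Error("SendGrid: 401 Unauthorized");
}

async function signup(email: string): Promise<{ ok: boolean }> {
  try {
    await sendOtpEmail(email);
  } catch {
    // Silent failure: the error is swallowed — nothing logged, nothing surfaced.
  }
  return { ok: true }; // reports success even though the email never sent
}
```

An E2E test that waits for the OTP to actually arrive is what exposes this; "no errors in console" never will.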

The 4 Testing Levels

Each level catches different types of bugs. You need all 4 for AC Green.

🔬 Unit Tests

Scope: Individual functions/components

Who: Rafael generates + Kara verifies

Example: Does signupWithEmail() return error for invalid email?

Speed: Fast (milliseconds)

When: After writing each function
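As a sketch of what that unit-level check verifies — the shape of signupWithEmail below is an assumption, and in vitest/jest the assertions would live inside a test() block:

```typescript
// Minimal sketch of the unit under test (your real signupWithEmail
// likely also talks to a database and an email provider).
type SignupResult = { ok: true } | { ok: false; error: string };

function signupWithEmail(email: string): SignupResult {
  // Stricter than "contains @": require name@domain.tld
  const valid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  return valid ? { ok: true } : { ok: false, error: "invalid_email" };
}

// The unit-level assertion: invalid input must return an error, not crash
// and not silently succeed.
const bad = signupWithEmail("not-an-email");
// bad.ok is false, bad.error is "invalid_email"
```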

🔗 Integration Tests

Scope: Multiple components working together

Who: Rafael generates + Kara verifies

Example: Does auth flow work with database + Redis?

Speed: Medium (seconds)

When: After connecting components
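A minimal sketch of the idea, with in-memory Maps standing in for the real database and Redis (all names and shapes here are assumptions): the test exercises signup and login together and checks that the two components agree.

```typescript
// Fakes standing in for the real stores.
const users = new Map<string, { email: string }>();                       // fake DB table
const sessions = new Map<string, { email: string; expiresAt: number }>(); // fake Redis

function signup(email: string): void {
  users.set(email, { email });
}

function login(email: string): string | null {
  if (!users.has(email)) return null; // login depends on what signup wrote
  const token = `sess_${email}`;
  const sevenDays = 7 * 24 * 60 * 60 * 1000;
  sessions.set(token, { email, expiresAt: Date.now() + sevenDays });
  return token;
}

// Integration check: signup → login produces a session with a future expiry.
signup("kara@example.com");
const token = login("kara@example.com");
```

A real integration test would run the same flow against a test database and Redis instance; the structure is the same.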

🎭 E2E Tests

Scope: Full user flows in real browser

Who: Inna defines + Rafael generates + Sofia runs

Example: Can user sign up → verify OTP → see dashboard?

Speed: Slow (10-30 seconds per test)

When: Before claiming AC Green

👤 Manual Testing

Scope: Human verification of UX/edge cases

Who: Bigbosexf (QA)

Example: Does the app feel responsive? Any visual bugs?

Speed: Slowest (minutes)

When: Final verification before delivery

The Testing Pyramid (Quantity)

👤 Manual (5-10 tests)
🎭 E2E (10-20 tests)
🔗 Integration (20-40 tests)
🔬 Unit (50-100 tests)

Bottom-heavy pyramid: Many fast unit tests, fewer slow E2E tests, minimal manual testing.

Real Scenarios: What Can Go Wrong

❌ Scenario 1: "It works on my machine"

❌ Scenario 2: "I tested the happy path"

❌ Scenario 3: "Tests pass but feature is broken"

✅ Scenario 4: "AC Green done right"

How Sofia Verifies Your Work

Sofia is your QA citizen. She verifies AC Green BEFORE you claim "ready for delivery."

1. Read AC.md

Sofia reads the acceptance criteria from Inna's BEHAVIOR_SPEC. She knows exactly what "done" means.

2. Run Automated Tests

Sofia runs npm run test (unit + integration) and npm run test:e2e (end-to-end). All must be green.

3. Verify Deployment

Sofia checks the production URL. Is it accessible? Does the feature exist? Are there console errors?

4. Check Performance Thresholds

Sofia verifies non-functional criteria from AC.md. Is API response p95 < 200ms? Is page load < 2s?
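For reference, "p95 < 200ms" means 95% of requests finish within 200ms. A sketch of the computation using the nearest-rank method (the sample latencies are invented for illustration):

```typescript
// Nearest-rank percentile: the value at rank ceil(p/100 * n), 1-indexed.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = [120, 130, 135, 140, 145, 150, 150, 155, 160, 450];
const p95 = percentile(latenciesMs, 95);
// With these 10 samples, nearest-rank p95 is the 10th value: 450.
// One outlier is enough to blow a 200ms budget — which is exactly why
// we measure p95 instead of the average.
```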

5. Verify DoD Checklist

Sofia checks Inna's DoD checklist. Are all items ✅? Documentation updated? Env vars set? Tests written?

6. Return Verdict

AC Green: All criteria met → Handoff to NLR for delivery.
Not Green: Specific issues listed → Back to developer for fixes.

Example Sofia Output

## Sofia's Verification Report

**Mission:** User Authentication (auth-feature-001)
**Developer:** Kara
**Date:** 2025-11-06

### AC.md Criteria (3/3 ✅)
✅ AC#1: User can signup with email (E2E test: auth/signup.spec.ts)
✅ AC#2: User can login with email (E2E test: auth/login.spec.ts)
✅ AC#3: Session expires after 7 days (Integration test: auth.test.ts:42)

### Non-Functional Criteria (3/3 ✅)
✅ OTP delivery p95 < 5s (measured: 2.3s avg via SendGrid logs)
✅ Login API p95 < 200ms (measured: 145ms p95 via Vercel analytics)
✅ No console errors (verified in Chrome DevTools)

### Test Results
✅ Unit tests: 12/12 passed (coverage: 94%)
✅ Integration tests: 8/8 passed
✅ E2E tests: 5/5 passed (production environment)

### DoD Checklist (6/6 ✅)
✅ All AC.md criteria verified
✅ Deployment accessible (https://client-project.vercel.app)
✅ Environment variables set (DATABASE_URL, REDIS_URL, SENDGRID_API_KEY, JWT_SECRET)
✅ Documentation updated (AC.md, GUIDE.md)
✅ Tests written and passing
✅ Performance thresholds met

### VERDICT: ✅ AC GREEN

Ready for delivery. No issues found.

— Sofia

Common Testing Mistakes (And How to Fix)

❌ Mistake: "I'll write tests later"

Reality: You won't. Later never comes. You ship untested code.

Fix: Write tests BEFORE claiming done. Make it part of your implementation workflow.

❌ Mistake: "Testing is QA's job"

Reality: Bigbosexf finds bugs you should have caught. Wastes 2+ hours.

Fix: YOU write automated tests. Bigbosexf does final manual verification.

❌ Mistake: "Tests are too slow"

Reality: Fixing bugs in production is 10x slower than writing tests.

Fix: Write fast unit tests first. Slow E2E tests only for critical flows.

❌ Mistake: "100% coverage = good tests"

Reality: You can have 100% coverage with useless tests that don't verify behavior.

Fix: Test behavior, not code. Ask: "Does this test catch real bugs?"
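A hypothetical illustration of the difference (applyDiscount and its contract are invented): both checks execute the function and count toward coverage, but only the second can fail when the logic breaks.

```typescript
function applyDiscount(priceCents: number, percent: number): number {
  return Math.round(priceCents * (1 - percent / 100));
}

// "Coverage" test: runs the code, asserts nothing meaningful.
// This stays true even if the discount math is completely wrong.
const coverageOnly = applyDiscount(1000, 10) !== undefined; // always true

// Behavioral test: pins down the actual contract.
// 10% off $10.00 must be $9.00 — break the math and this goes false.
const behavioral = applyDiscount(1000, 10) === 900;
```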

❌ Mistake: "Tests are passing, ship it"

Reality: Tests might be wrong. Or mocking too much. Or missing edge cases.

Fix: Run tests in production environment. Verify with manual testing too.

❌ Mistake: "I don't know how to write tests"

Reality: Rafael generates tests for you. You still have to run them and verify they pass.

Fix: Ask Rafael: "Generate unit + E2E tests per VALIDATION.md"

Quiz: Test Your Testing Mindset

Scenario 1: You deployed your feature. npm run build passed. The page loads. Are you done?

Scenario 2: You tested the signup flow manually. It works. Do you need automated tests?

Scenario 3: Your E2E tests pass locally but fail on Vercel. What's the problem?

Scenario 4: You're stuck writing tests. What do you do?

Testing Checklist (Print & Keep)

Before You Say "Ready for QA"

□ Automated Tests Written

  • □ Unit tests for all functions
  • □ Integration tests for component interactions
  • □ E2E tests for critical user flows
  • □ Tests match VALIDATION.md criteria

□ All Tests Passing

  • □ Local: npm run test → all green
  • □ Local: npm run test:e2e → all green
  • □ Production: E2E tests against deployed URL → all green
  • □ No console errors in browser DevTools

□ AC.md Criteria Verified

  • □ Functional criteria: All user flows work
  • □ Non-functional: Performance thresholds met
  • □ Edge cases: Error handling tested
  • □ Verification commands from AC.md executed

□ Deployment Verified

  • □ Production URL accessible
  • □ All environment variables set
  • □ Feature works in production (not just local)
  • □ No errors in production logs

□ DoD Checklist Complete

  • □ All items from Inna's DoD marked ✅
  • □ Documentation updated
  • □ Code reviewed (if required)
  • □ Performance measured and meets targets

GO / NO-GO Decision

GO (Hand off to Sofia): ALL checkboxes above are ✅

NO-GO (Keep working): ANY checkbox is ❌

If in doubt, it's NO-GO. Better to spend 30 more minutes testing than waste 4 hours fixing bugs later.

Next Steps

Now you understand AC Green testing mindset. Related resources:

  1. The Complete Mission Flow - Where testing fits in the delivery process
  2. What Good Documentation Looks Like - How VALIDATION.md defines test criteria
  3. How to Talk to AI Citizens - How to ask Rafael to generate tests

Practice: Take your next feature. Write tests BEFORE you claim done. Run Sofia's verification. See how much cleaner the handoff is.