Strong APIs are not safeguarded by flawless code alone—they are protected by well-designed tests that anticipate and prevent failure. Mature engineering teams understand that chasing 100% test coverage is a vanity metric: coverage alone says nothing about test quality. Instead, they invest in layered testing strategies that reflect real-world scenarios, mitigate risks, and provide confidence in every deployment.
This article presents a comprehensive testing workflow for NestJS applications using Jest (testing framework) and Supertest (HTTP request library). We will also explore database mocking, E2E testing practices, snapshot testing, and CI/CD integrations.
The objective is to establish a testing framework that is:
- Simple to write and maintain.
- Scalable as the system grows.
- Reliable enough to detect critical issues before reaching production.
Why Testing Strategy Matters
APIs fail most often in unpredictable real-world conditions, such as invalid input data, unexpected third-party behavior, or client misuse. Without a structured approach to testing:
- Bugs pass through because only “happy path” scenarios are validated.
- Test suites lose credibility due to flakiness.
- Development slows down as engineers spend more time troubleshooting than shipping features.
A modern testing strategy strikes a balance: utilizing fast unit tests for rapid feedback and slower, high-confidence tests for realistic end-to-end workflows.
Two Real-World Use Cases of Testing
Testing is not about validating that your code works at a single point in time. It is about ensuring it continues to work as requirements evolve. Consider the following scenarios:
Use Case 1: Introducing a New Feature Without Breaking Existing Ones
Imagine you are developing an e-commerce API. Initially, you define a placeOrder() service that:
- Stores orders in the database.
- Returns a confirmation response.
This works reliably for regular purchases. Later, the business team requests a new feature:
“Let’s introduce discount coupons to support promotional sales.”
You update your service logic to handle coupons, validate expiration dates, and recalculate totals. Suddenly, multiple risks appear:
- Incorrect discount rules may cause customers to be overcharged or undercharged.
- Existing non-coupon orders may break due to altered logic.
- A minor bug could interrupt the entire payment flow.
Without structured testing, such problems could slip into production, only to be discovered by customers.
However, with unit and integration tests already in place:
- Existing tests will immediately flag if the standard checkout flow is broken.
- New tests will validate discount scenarios, expired coupon handling, and valid coupon workflows.
This ensures that newly added functionality does not compromise existing, reliable features.
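To make the risks above concrete, here is a minimal sketch of the kind of coupon logic being tested. The names and shapes (`applyCoupon`, `Coupon`, `Order`) are hypothetical, not from a real e-commerce codebase; the point is that each risk listed above maps to a branch a unit test can pin down.

```typescript
// Hypothetical coupon logic -- names and shapes are illustrative assumptions.
interface Coupon {
  code: string;
  percentOff: number; // e.g. 10 means 10% off
  expiresAt: Date;
}

interface Order {
  total: number; // order total in cents
  coupon?: Coupon;
}

// Returns the payable total; throws on invalid coupons so the legacy
// non-coupon path is untouched by the new branch.
function applyCoupon(order: Order, now: Date = new Date()): number {
  if (!order.coupon) {
    return order.total; // legacy path: no coupon, no change
  }
  if (order.coupon.expiresAt < now) {
    throw new Error(`Coupon ${order.coupon.code} has expired`);
  }
  if (order.coupon.percentOff < 0 || order.coupon.percentOff > 100) {
    throw new Error('Discount must be between 0 and 100 percent');
  }
  const discount = Math.round(order.total * (order.coupon.percentOff / 100));
  return order.total - discount;
}
```

A unit test for the "existing orders may break" risk simply asserts that `applyCoupon({ total: 1000 })` still returns `1000`; the expired-coupon and over/under-charge risks each get their own assertion.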
Use Case 2: Detecting Unexpected Side Effects
Now consider a user authentication API. Initially, it verifies the username and password, authenticating users upon correct credentials.
After a security review, a new requirement is introduced:
“Lock an account for security reasons after three failed login attempts.”
This appears simple, but without testing, hidden risks emerge:
- Users may get locked out after a single failed attempt due to faulty logic.
- The account lock may never expire or reset properly.
- The new feature could interfere with password reset workflows.
Without effective testing, these issues can cause major disruptions. For a banking or healthcare application, this could result in financial losses, compliance breaches, or reputational damage.
With end-to-end (E2E) tests, you immediately validate that:
- Three incorrect login attempts trigger account lock.
- Correct login still functions prior to lockout.
- Password reset successfully restores account access.
Testing prevents regressions and preserves both security and user experience.
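The lockout rule above can be sketched as a small, self-contained class. The class name, threshold handling, and reset behavior are assumptions for illustration, not NestJS APIs; each of the three hidden risks listed earlier corresponds to one method here that a test can exercise directly.

```typescript
// Minimal sketch of the three-strikes lockout rule; all names are assumptions.
class LoginAttemptTracker {
  private failures = new Map<string, number>();

  constructor(private readonly maxAttempts = 3) {}

  // Record a failed attempt; returns true once the account is locked.
  recordFailure(username: string): boolean {
    const count = (this.failures.get(username) ?? 0) + 1;
    this.failures.set(username, count);
    return count >= this.maxAttempts;
  }

  isLocked(username: string): boolean {
    return (this.failures.get(username) ?? 0) >= this.maxAttempts;
  }

  // A successful login or password reset clears the counter,
  // covering the "lock never resets" risk above.
  reset(username: string): void {
    this.failures.delete(username);
  }
}
```

Testing `isLocked` after exactly one, two, and three failures catches the "locked after a single attempt" bug; testing `reset` catches the "lock never expires" bug.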
The Four-Layer Testing Strategy
Think of testing as building a safety net across layers. Each type of test is designed to mitigate risks from a different perspective.
| Layer | Purpose | Example Tools |
| --- | --- | --- |
| Unit | Validate small blocks of logic in isolation (pure functions, core services, edge cases). | Jest |
| Integration | Verify that services, repositories, and helpers interact correctly. | Jest, Supertest |
| E2E (End-to-End) | Simulate actual HTTP requests and validate full workflows. | Jest, Supertest |
| Snapshot | Track and compare API responses to avoid silent regressions. | Jest |
A balanced approach ensures effectiveness. Excessive mocking leads to tests with little real-world value, while a lack of tests results in fragile and slow pipelines.
Unit Testing: Beyond the Happy Path
Purpose: Validate business logic in isolation.
Speed: Extremely fast (milliseconds).
Focus: Catch incorrect behavior before services touch the database or external APIs.
Bad practice (happy path only):
```typescript
it('should process order', () => {
  expect(service.processOrder({ id: 1, items: [1, 2] })).toBe(true);
});
```
Improved practice (covering realistic scenarios):
```typescript
it('should throw error if items array is empty', () => {
  expect(() => service.processOrder({ id: 1, items: [] })).toThrow();
});

it('should process orders with a large number of items', () => {
  expect(service.processOrder({ id: 1, items: new Array(1000).fill(1) })).toBe(true);
});
```
Guidelines for Service Layer Unit Tests:
- 4–6 test cases per service function.
- Include happy path, failures, boundary values, and edge cases.
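To show what "happy path, failures, boundary values, and edge cases" can look like for a single function, here is a hypothetical `validateQuantity()` with one test case per category. The function, its limits, and its error messages are purely illustrative assumptions.

```typescript
// Hypothetical service helper used only to illustrate test-case categories.
function validateQuantity(qty: number, maxPerOrder = 100): number {
  if (!Number.isInteger(qty)) throw new Error('Quantity must be an integer');
  if (qty < 1) throw new Error('Quantity must be at least 1');
  if (qty > maxPerOrder) throw new Error(`Quantity may not exceed ${maxPerOrder}`);
  return qty;
}

// One case per category, matching the 4-6 cases the guideline suggests:
//   happy path:     validateQuantity(5)    -> returns 5
//   lower boundary: validateQuantity(1)    -> returns 1
//   upper boundary: validateQuantity(100)  -> returns 100
//   failure:        validateQuantity(0)    -> throws
//   edge case:      validateQuantity(2.5)  -> throws (non-integer input)
```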
Integration Testing: Validating Context
Purpose: Ensure services function when combined with repositories and helper utilities.
Speed: Moderate.
Focus: Interactions across the module layer.
Incorrect approach (real DB dependency):
```typescript
it('should get user profile', async () => {
  const result = await userService.getProfile(1);
  expect(result.name).toBe('Alex'); // depends on real data existing in a live database
});
```
Correct approach (using mocks):
```typescript
describe('UserService', () => {
  let userService: UserService;
  const mockRepo = {
    findOne: jest.fn().mockResolvedValue({ name: 'Alex' }),
  };

  beforeAll(async () => {
    const module = await Test.createTestingModule({
      providers: [{ provide: UserRepository, useValue: mockRepo }, UserService],
    }).compile();
    userService = module.get(UserService);
  });

  it('should return correct profile', async () => {
    const result = await userService.getProfile(1);
    expect(mockRepo.findOne).toHaveBeenCalledWith(1);
    expect(result.name).toBe('Alex');
  });
});
```
Guidelines for Service Layer Integration Tests:
- 5–8 test cases per service.
- Validate correct interactions, error propagation, and data consistency.
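The same mocking pattern can be shown without the NestJS testing module: because the repository is injected through the constructor, a test can substitute a hand-rolled in-memory fake. The names mirror the example above, but the implementations below are assumptions sketched for illustration.

```typescript
// Plain-TypeScript sketch of constructor-injected mocking; all bodies are assumptions.
interface UserRepository {
  findOne(id: number): Promise<{ name: string } | null>;
}

class UserService {
  constructor(private readonly repo: UserRepository) {}

  async getProfile(id: number): Promise<{ name: string }> {
    const user = await this.repo.findOne(id);
    if (!user) throw new Error(`User ${id} not found`);
    return user;
  }
}

// In a test, a fake repository records calls much like jest.fn() would:
const calls: number[] = [];
const mockRepo: UserRepository = {
  findOne: async (id) => {
    calls.push(id); // record the call for later assertions
    return id === 1 ? { name: 'Alex' } : null;
  },
};
const service = new UserService(mockRepo);
```

The test then asserts both the returned value and that the repository was called with the expected argument, which is exactly what `toHaveBeenCalledWith(1)` checks in the Jest version.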
E2E Testing: Recreating Real API Flows
Purpose: Validate that the entire application stack works as a real user would expect.
Speed: Slowest, but highest in scope and confidence.
Focus: Authentication flows, payment workflows, and permission enforcement.
Weak example:
```typescript
it('/login returns 200', () => {
  return request(app.getHttpServer())
    .post('/login')
    .send({ user: 'A' })
    .expect(200);
});
```
Improved example with setup and teardown:
```typescript
describe('Login E2E', () => {
  let app: INestApplication;

  beforeAll(async () => {
    const module = await Test.createTestingModule({
      imports: [AppModule],
    }).compile();
    app = module.createNestApplication();
    await app.init();
  });

  it('rejects login with invalid password', () => {
    return request(app.getHttpServer())
      .post('/login')
      .send({ username: 'userA', password: 'wrongpass' })
      .expect(403);
  });

  afterAll(async () => {
    await app.close();
  });
});
```
Guidelines for Controller E2E Tests:
- 3–5 tests per endpoint.
- Cover success cases, validation errors, authentication, authorization, and edge inputs.
Snapshot Testing: Preventing Silent Breaks
Purpose: Detect invisible alterations in response data structures.
Use Case: APIs returning complex or evolving responses.
Example:
```typescript
expect(response.body).toMatchSnapshot();
```
This ensures client integrations are not unexpectedly broken by unplanned changes.
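One practical wrinkle: snapshots break on every run if responses contain volatile fields such as timestamps or generated IDs. A common remedy is to normalize the body before snapshotting (Jest's property matchers are another option). The helper below is a sketch; the field names it redacts are examples, not a fixed convention.

```typescript
// Sketch: replace volatile fields with a stable placeholder before snapshotting.
// The default key list is an illustrative assumption.
function normalizeForSnapshot<T extends Record<string, unknown>>(
  body: T,
  volatileKeys: string[] = ['id', 'createdAt', 'updatedAt'],
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(body)) {
    out[key] = volatileKeys.includes(key) ? '<redacted>' : value;
  }
  return out;
}

// Usage inside a test:
//   expect(normalizeForSnapshot(response.body)).toMatchSnapshot();
```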
Automation: Bringing Testing Into CI/CD
Manual testing discipline is unreliable. Consistency comes from automation:
- Execute all tests during CI/CD pipeline runs (GitHub Actions, GitLab CI, Jenkins).
- Use pre-commit hooks for lightweight checks before code is merged.
- Maintain a separate .env.test configuration to ensure isolation.
- Monitor metrics such as coverage, test duration, and flakiness rate.
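As one possible way to wire up the `.env.test` isolation mentioned above, a dedicated Jest config for E2E runs can load the test environment before any test file executes. The file names, paths, and setup-script location below are assumptions, not a required layout.

```typescript
// Hypothetical jest.e2e.config.ts -- paths and file names are assumptions.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/*.e2e-spec.ts'],
  // A setup script here would load .env.test so E2E runs
  // never touch the development database.
  setupFiles: ['<rootDir>/test/load-test-env.ts'],
  maxWorkers: 1, // E2E suites often share a database; avoid parallel clashes
};

export default config;
```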
Common Pitfalls and How to Avoid Them
- Over-mocking: excessive reliance on mocks reduces test realism. Use integration and E2E tests for mission-critical features.
- Testing implementation details: Tests should validate behavior, not private code structure.
- Flaky tests: Reset databases, isolate state, seed data consistently.
- Slow test suites: Parallelize unit tests, optimize database containers for E2E runs.
How Many Tests to Write?
Focus on balance rather than arbitrary coverage.
- Controllers (API Endpoints):
- 3–5 E2E tests per endpoint.
- Cover success, validation errors, failed authentication, and edge cases.
- Services (Business Logic + Repository Interaction):
- 4–6 unit tests per method.
- 5–8 integration tests per service.
Rule of Thumb:
- Controllers validate requests and responses.
- Services validate business rules and workflows.
Final Takeaway
Testing APIs is not about achieving a perfect coverage metric—it is about building confidence.
With a layered approach of unit, integration, E2E, and snapshot tests, you gain:
- Reliability in core business logic.
- Assurance that components interact correctly.
- Confidence in real-world user flows.
- Protection from silent regressions.
For new teams or projects:
- Begin by writing unit tests for service logic.
- Add integration tests for critical workflows.
- Layer in E2E tests for controllers and primary endpoints.
- Automate everything within CI/CD pipelines to maintain consistency.
When executed well, your test suite becomes an unseen protector—allowing you to evolve, ship, and scale NestJS APIs with confidence.