Build Robust Tests with Vitest: A Modern Testing Guide

Learn how to write robust, maintainable tests with Vitest. From setup to advanced patterns, discover why this modern testing framework is changing how we approach JavaScript testing in 2026.

While I was looking over some legacy test suites the other day, I realized something that made me chuckle: I was spending more time waiting for tests to run than actually writing code. The feedback loop was so slow that I'd start checking emails between test runs. Sound familiar?

This is exactly why I decided to give Vitest a serious look, and I cannot stress this enough—it completely transformed how I approach testing. Let me show you why this matters and how you can start using it today.

Why Vitest Is the Testing Framework You Need in 2026

I was once guilty of accepting slow test suites as "just part of development." Little did I know that modern tooling could give me near-instant feedback. Vitest leverages Vite's transformation pipeline, which means your tests run at the same blazing speed as your dev server.

But speed isn't the only reason to care. Vitest gives you a Jest-compatible API, so if you're coming from Jest (like I was), you already know most of the syntax. No massive rewrite required. The migration path is smooth, and you get immediate benefits.

Here's what sold me: Vitest runs tests in parallel by default, supports ESM natively, and has watch mode that actually feels instant. When I finally decided to migrate one of my projects, test execution time dropped from 45 seconds to 3 seconds. That's not a typo.

Setting Up Vitest: From Zero to First Test

Let's get you up and running. I'm assuming you have a Node.js project ready. If you're starting fresh, create a new directory and run npm init -y.

First, install Vitest:

npm install -D vitest

Add a test script to your package.json:

{
  "scripts": {
    "test": "vitest",
    "test:ui": "vitest --ui"
  }
}

That's it for basic setup. Now create your first test file. I like to keep tests next to the code they're testing, but you can also use a __tests__ directory if that's your preference.
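
Vitest runs with zero config out of the box, but when you want to tweak defaults, a vitest.config.ts at the project root is the place. Here's a minimal sketch; every field shown is optional, and the option names come from Vitest's config API:

```typescript
// vitest.config.ts — a minimal sketch; all fields shown are optional
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Enable describe/it/expect as globals, Jest-style (off by default)
    globals: true,
    // 'node' is the default; switch to 'jsdom' for DOM-dependent tests
    environment: 'node',
    // Match tests wherever you keep them — next to code or in __tests__
    include: ['src/**/*.test.ts'],
  },
});
```

If you're already using Vite, you can put the test block in your existing vite.config.ts instead and keep one config file.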

Create src/calculator.ts:

export function add(a: number, b: number): number {
  return a + b;
}
 
export function divide(a: number, b: number): number {
  if (b === 0) {
    throw new Error('Cannot divide by zero');
  }
  return a / b;
}

Now create src/calculator.test.ts:

import { describe, it, expect } from 'vitest';
import { add, divide } from './calculator';
 
describe('Calculator', () => {
  it('adds two numbers correctly', () => {
    expect(add(2, 3)).toBe(5);
    expect(add(-1, 1)).toBe(0);
    expect(add(0, 0)).toBe(0);
  });
 
  it('divides two numbers correctly', () => {
    expect(divide(10, 2)).toBe(5);
    expect(divide(7, 2)).toBe(3.5);
  });
 
  it('throws error when dividing by zero', () => {
    expect(() => divide(10, 0)).toThrow('Cannot divide by zero');
  });
});

Run npm test and watch Vitest work its magic. You'll see results almost instantly, and if you keep the process running, it'll re-run tests as you save files.

Vitest test output showing passing tests

Writing Robust Unit Tests: Core Patterns and Assertions

The key to good tests isn't just getting them to pass—it's making them maintainable and meaningful. I came across this pattern early in my career: tests that checked implementation details rather than behavior. Those tests broke constantly during refactoring.

Vitest gives you a rich assertion library. Here are the ones I use most frequently:

  • toBe() for primitive values (numbers, strings, booleans)
  • toEqual() for objects and arrays (deep equality)
  • toThrow() for error handling
  • toBeTruthy() and toBeFalsy() for boolean-ish checks
  • toContain() for arrays and strings

Let me show you a real-world example. I was building a user validation service and needed to test various edge cases:

import { describe, it, expect } from 'vitest';
 
interface User {
  email: string;
  age: number;
  roles: string[];
}
 
function validateUser(user: User): { valid: boolean; errors: string[] } {
  const errors: string[] = [];
 
  if (!user.email.includes('@')) {
    errors.push('Invalid email format');
  }
 
  if (user.age < 18) {
    errors.push('User must be 18 or older');
  }
 
  if (user.roles.length === 0) {
    errors.push('User must have at least one role');
  }
 
  return {
    valid: errors.length === 0,
    errors,
  };
}
 
describe('validateUser', () => {
  it('validates a correct user', () => {
    const user: User = {
      email: 'test@example.com',
      age: 25,
      roles: ['user'],
    };
 
    const result = validateUser(user);
    expect(result.valid).toBe(true);
    expect(result.errors).toEqual([]);
  });
 
  it('catches multiple validation errors', () => {
    const user: User = {
      email: 'invalid-email',
      age: 15,
      roles: [],
    };
 
    const result = validateUser(user);
    expect(result.valid).toBe(false);
    expect(result.errors).toHaveLength(3);
    expect(result.errors).toContain('Invalid email format');
    expect(result.errors).toContain('User must be 18 or older');
  });
 
  it('validates email format specifically', () => {
    const user: User = {
      email: 'test.example.com',
      age: 20,
      roles: ['admin'],
    };
 
    const result = validateUser(user);
    expect(result.valid).toBe(false);
    expect(result.errors).toEqual(['Invalid email format']);
  });
});

Notice how each test focuses on one specific behavior. This makes failures easy to diagnose and tests resistant to refactoring.

Mocking Made Simple: Spies, Stubs, and Module Mocks

Mocking used to intimidate me. I avoided it until I absolutely had to test code with external dependencies. Luckily we can use Vitest's straightforward mocking API to handle these scenarios elegantly.

Here's a practical example I dealt with recently—testing a function that fetches user data from an API:

import { describe, it, expect, vi, beforeEach } from 'vitest';
 
// The actual implementation
async function fetchUserProfile(userId: string) {
  const response = await fetch(`https://api.example.com/users/${userId}`);
  if (!response.ok) {
    throw new Error('Failed to fetch user');
  }
  return response.json();
}
 
// Our tests
describe('fetchUserProfile', () => {
  beforeEach(() => {
    // Clear all mocks before each test
    vi.clearAllMocks();
  });
 
  it('fetches user data successfully', async () => {
    const mockUser = { id: '123', name: 'John Doe', email: 'john@example.com' };
 
    // Mock the global fetch function
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      json: async () => mockUser,
    });
 
    const result = await fetchUserProfile('123');
 
    expect(fetch).toHaveBeenCalledWith('https://api.example.com/users/123');
    expect(result).toEqual(mockUser);
  });
 
  it('throws error when fetch fails', async () => {
    global.fetch = vi.fn().mockResolvedValue({
      ok: false,
      status: 404,
    });
 
    await expect(fetchUserProfile('999')).rejects.toThrow('Failed to fetch user');
  });
});

Test coverage report in Vitest

The vi utility from Vitest gives you everything you need. vi.fn() creates a mock function, vi.spyOn() lets you spy on existing functions, and vi.mock() handles module-level mocking.

Vitest vs Jest: What Makes Vitest Different

I get asked about this constantly: "Should I migrate from Jest to Vitest?" Let me give you the honest answer based on my experience.

Jest is mature and battle-tested. It works great if you're in a Create React App setup or using CommonJS modules. But I was hitting walls with ESM support and waiting too long for tests to run.

Vitest shines in modern JavaScript projects. It's built on top of Vite, so if you're already using Vite for development, you get perfect alignment. Your test environment uses the same transformation pipeline as your dev server. No more "works in dev but not in tests" surprises.

The API compatibility means you're not learning a new framework from scratch. In other words, Jest knowledge transfers directly to Vitest. I migrated a medium-sized project in about two hours, and most of that time was updating imports.

Speed is the most noticeable difference. Vitest's watch mode uses Vite's HMR, giving you almost instant feedback. When I save a file, tests re-run before I can even switch windows.

Advanced Testing Patterns: Async Code, Timers, and Edge Cases

Real applications aren't simple functions—they deal with async operations, timers, and complex state. Let me show you patterns I wish I knew earlier.

Testing async code with Vitest is straightforward, but there's a gotcha I was once guilty of: forgetting to await promises in tests. This causes false positives:

import { describe, it, expect, vi } from 'vitest';
 
async function retryOperation(operation: () => Promise<any>, maxRetries = 3) {
  let lastError;
  
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Back off before the next attempt, but skip the delay after the final one
      if (i < maxRetries - 1) {
        await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)));
      }
    }
  }
  
  throw lastError;
}
 
describe('retryOperation', () => {
  it('succeeds on first attempt', async () => {
    const operation = vi.fn().mockResolvedValue('success');
    
    const result = await retryOperation(operation);
    
    expect(result).toBe('success');
    expect(operation).toHaveBeenCalledTimes(1);
  });
 
  it('retries on failure then succeeds', async () => {
    const operation = vi.fn()
      .mockRejectedValueOnce(new Error('fail'))
      .mockRejectedValueOnce(new Error('fail'))
      .mockResolvedValue('success');
 
    vi.useFakeTimers();
    
    const promise = retryOperation(operation);
    
    // Fast-forward through the retry delays
    await vi.runAllTimersAsync();
    
    const result = await promise;
    
    expect(result).toBe('success');
    expect(operation).toHaveBeenCalledTimes(3);
    
    vi.useRealTimers();
  });
 
  it('throws after max retries', async () => {
    const operation = vi.fn().mockRejectedValue(new Error('persistent failure'));
    
    vi.useFakeTimers();
    
    const promise = retryOperation(operation, 2);
    await vi.runAllTimersAsync();
    
    await expect(promise).rejects.toThrow('persistent failure');
    expect(operation).toHaveBeenCalledTimes(2);
    
    vi.useRealTimers();
  });
});

The vi.useFakeTimers() API is wonderful for testing time-dependent code without actually waiting. Just remember to clean up with vi.useRealTimers() after your test.

Test Coverage and CI Integration: Building a Complete Testing Pipeline

Coverage reports tell you what you're not testing. Vitest makes this trivial to set up. Add to your package.json:

{
  "scripts": {
    "test:coverage": "vitest run --coverage"
  }
}

Install the coverage provider:

npm install -D @vitest/coverage-v8

Now npm run test:coverage generates detailed reports. I aim for 80% coverage as a baseline, but I focus more on testing critical paths than hitting arbitrary numbers.
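
You can also make that baseline enforceable. This vitest.config.ts sketch (field names from Vitest's coverage options) fails the run whenever coverage drops below the thresholds:

```typescript
// vitest.config.ts — enforcing a coverage baseline (sketch)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html'],
      // Fail the test run if coverage drops below these percentages
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
      },
    },
  },
});
```

Enforcing the baseline in config means CI catches coverage regressions automatically instead of relying on someone reading the report.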

For CI integration, Vitest works beautifully with GitHub Actions, GitLab CI, or any other platform. Here's a GitHub Actions workflow I use:

name: Tests
on: [push, pull_request]
 
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx vitest run

The key is running vitest run instead of vitest in CI—this runs tests once and exits, rather than staying in watch mode.

Testing Best Practices That Will Save Your Project

Let me share mistakes I've made so you don't have to. First, test behavior, not implementation. If you're testing that a function calls another specific function, you're probably testing the wrong thing. Test the outcome instead.

Second, keep tests independent. Each test should set up its own state and not rely on execution order. Use beforeEach for setup, not global state that accumulates.

Third, write tests that fail for the right reasons. When I finally decided to review my test suite, I found tests that passed even when the implementation was broken. That's worse than no tests at all.

Make your test names descriptive. "should work" tells me nothing. "throws error when user is under 18" tells me exactly what to expect.

Finally, don't skip edge cases. Test empty arrays, null values, and boundary conditions. These are where bugs hide in production.

And that concludes this post! I hope you found it valuable, and look out for more in the future!