
Optimistic vs Pessimistic Locking Explained

Learn the critical differences between optimistic and pessimistic locking strategies, with practical TypeScript examples for handling database concurrency in Node.js applications.

While I was looking over some production logs the other day, I came across a fascinating bug that had been silently corrupting user data for weeks. Two users were editing the same record simultaneously, and our application was happily saving both changes—with the last write winning and obliterating the first user's work. I was once guilty of thinking "this probably won't happen often enough to matter." Little did I know that even with modest traffic, race conditions are practically guaranteed.

This experience pushed me to finally understand database locking strategies properly. The two main approaches—optimistic and pessimistic locking—solve the same problem in fundamentally different ways, and choosing the wrong one can cost you in performance, user experience, or data integrity.

Understanding Database Locking: Why Concurrency Control Matters

When multiple users or processes try to modify the same data simultaneously, you need a strategy to prevent conflicts. Without proper concurrency control, you risk lost updates where one transaction overwrites another's changes, dirty reads where uncommitted data gets exposed, or phantom reads where query results change mid-transaction.

I cannot stress this enough! These aren't theoretical problems. In real applications with real users, concurrent access is the norm, not the exception. Even a small e-commerce site will have multiple customers viewing and updating inventory at the same time.
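To make the lost-update problem concrete, here's a small simulation — no real database, just an in-memory record and two concurrent read-modify-write operations (an illustrative sketch, not production code):

```typescript
// An in-memory "row" standing in for a database record (illustrative only).
const row = { stock: 10 };

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Naive read-modify-write: read the value, do some work, write it back.
// Nothing stops two callers from reading the same starting value.
async function decrementStock(quantity: number): Promise<void> {
  const current = row.stock;       // both callers read 10
  await sleep(10);                 // simulate app logic / network latency
  row.stock = current - quantity;  // last write wins, clobbering the other
}

async function demonstrateLostUpdate(): Promise<number> {
  // Two "users" buy concurrently. The correct result is 10 - 3 - 4 = 3,
  // but both writes are based on the same stale read of 10.
  await Promise.all([decrementStock(3), decrementStock(4)]);
  return row.stock; // ends at 6 or 7, never the correct 3
}
```

Run this and the final stock is never 3 — one of the two purchases is silently erased, which is exactly the bug from the production logs above.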

The choice between optimistic and pessimistic locking fundamentally comes down to this: Do you assume conflicts will happen (pessimistic), or do you assume they're rare (optimistic)?

What Is Pessimistic Locking? Lock First, Update Later

Pessimistic locking takes a defensive stance. It assumes conflicts are likely, so it locks the data before making any changes. While a transaction holds the lock, no other transaction can modify that data or acquire its own lock on it (in MVCC databases like PostgreSQL, plain reads still see the last committed version). It's like putting a "Do Not Disturb" sign on your database row.

This approach guarantees consistency because it prevents conflicts from happening in the first place. But it comes with a cost—other transactions must wait, which can create bottlenecks and reduce throughput.

When I finally decided to implement pessimistic locking for a critical payment processing system, the performance impact was noticeable. But you know what? The peace of mind knowing that financial transactions would never conflict was worth every millisecond of latency.

[Figure: Pessimistic locking flow diagram]

Implementing Pessimistic Locking in Node.js with PostgreSQL

Let's look at a practical example using PostgreSQL's SELECT ... FOR UPDATE statement. This locks the selected rows until the transaction commits:

import { Pool } from 'pg';
 
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});
 
async function updateInventoryPessimistic(productId: string, quantity: number) {
  const client = await pool.connect();
  
  try {
    await client.query('BEGIN');
    
    // Lock the row for this transaction
    const result = await client.query(
      'SELECT stock FROM products WHERE id = $1 FOR UPDATE',
      [productId]
    );
    
    if (result.rows.length === 0) {
      throw new Error('Product not found');
    }
    
    const currentStock: number = result.rows[0].stock;
    
    if (currentStock < quantity) {
      throw new Error('Insufficient stock');
    }
    
    // Update the stock - no one else can modify this row until we commit
    await client.query(
      'UPDATE products SET stock = stock - $1 WHERE id = $2',
      [quantity, productId]
    );
    
    await client.query('COMMIT');
    return { success: true, remainingStock: currentStock - quantity };
    
  } catch (error) {
    await client.query('ROLLBACK');
    throw error;
  } finally {
    client.release();
  }
}

The FOR UPDATE clause is the magic here. It tells PostgreSQL to lock those rows so no other transaction can modify them. Other transactions attempting to read with FOR UPDATE will wait until our transaction completes.
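If waiting isn't acceptable, PostgreSQL also lets you fail fast with FOR UPDATE NOWAIT, which raises an error immediately when the row is already locked. A hedged sketch (reusing a pool like the one above; the error-code check relies on PostgreSQL's documented 55P03 lock_not_available code):

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Try to lock the row, but error immediately instead of waiting
// if another transaction already holds the lock.
async function tryLockProduct(productId: string): Promise<boolean> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'SELECT stock FROM products WHERE id = $1 FOR UPDATE NOWAIT',
      [productId]
    );
    // ... perform the update here, then commit ...
    await client.query('COMMIT');
    return true;
  } catch (error: any) {
    await client.query('ROLLBACK');
    // 55P03 = lock_not_available: another transaction holds the lock
    if (error.code === '55P03') return false;
    throw error;
  } finally {
    client.release();
  }
}
```

Returning false here lets the caller decide whether to retry, queue the request, or tell the user the item is busy — instead of silently blocking.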

This works wonderfully for high-stakes operations where conflicts would be catastrophic. But what if conflicts are rare and you don't want to sacrifice throughput?

What Is Optimistic Locking? Version-Based Validation

Optimistic locking takes the opposite approach. It assumes conflicts are rare, so it doesn't lock anything upfront. Instead, it uses a version number (or timestamp) to detect whether the data changed between reading and writing.

Here's the flow: You read a record along with its version number. You make your changes locally. When you're ready to save, you attempt to update the record only if the version number matches what you read. If someone else modified the record in the meantime, the version number will have changed, and your update fails.

Luckily we can handle this gracefully by retrying the operation or notifying the user. The key advantage? No locks, no waiting, better throughput. The trade-off? You need to handle conflicts when they occur.
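The read–compare–write cycle boils down to a tiny mechanism, sketched here with an in-memory store (illustrative only — in a real database the version check happens atomically inside the UPDATE's WHERE clause):

```typescript
// A record paired with the version number it was read at.
interface VersionedRecord<T> {
  value: T;
  version: number;
}

// Minimal in-memory sketch of version-based optimistic locking.
class OptimisticStore<T> {
  private record: VersionedRecord<T>;

  constructor(initial: T) {
    this.record = { value: initial, version: 0 };
  }

  read(): VersionedRecord<T> {
    // Return a snapshot of the current value and its version.
    return { ...this.record };
  }

  // The write succeeds only if the caller's version still matches the
  // stored one; otherwise someone else updated the record since our read.
  write(newValue: T, expectedVersion: number): boolean {
    if (this.record.version !== expectedVersion) {
      return false; // conflict detected — caller should re-read and retry
    }
    this.record = { value: newValue, version: expectedVersion + 1 };
    return true;
  }
}
```

Two writers that both read version 0 can't both succeed: the first write bumps the version to 1, so the second write's stale version is rejected and it must re-read before retrying.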

When I was working on a collaborative document editor, optimistic locking was the clear winner. Most of the time, users were editing different parts of the document, so conflicts were genuinely rare. Pessimistic locking would have created unnecessary bottlenecks.

[Figure: Optimistic locking version-control flow]

Implementing Optimistic Locking with TypeScript and Prisma

Here's how you can implement optimistic locking using Prisma, which makes version management elegant:

import { PrismaClient } from '@prisma/client';
 
const prisma = new PrismaClient();
 
// Your Prisma schema would include:
// model Product {
//   id      String @id @default(uuid())
//   name    String
//   stock   Int
//   version Int    @default(0)
// }
 
async function updateInventoryOptimistic(
  productId: string, 
  quantity: number,
  maxRetries: number = 3
): Promise<{ success: boolean; message?: string }> {
  
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      // Read the current state
      const product = await prisma.product.findUnique({
        where: { id: productId },
      });
      
      if (!product) {
        throw new Error('Product not found');
      }
      
      if (product.stock < quantity) {
        return { success: false, message: 'Insufficient stock' };
      }
      
      // Attempt to update only if version hasn't changed
      const updated = await prisma.product.updateMany({
        where: {
          id: productId,
          version: product.version, // This is the key condition
        },
        data: {
          stock: product.stock - quantity,
          version: product.version + 1, // Increment version
        },
      });
      
      // If count is 0, someone else modified the record
      if (updated.count === 0) {
        if (attempt < maxRetries - 1) {
          // Wait a bit before retrying
          await new Promise(resolve => setTimeout(resolve, 50 * (attempt + 1)));
          continue;
        }
        return { success: false, message: 'Conflict detected, please retry' };
      }
      
      return { success: true };
      
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
    }
  }
  
  return { success: false, message: 'Max retries exceeded' };
}

The beauty of this approach is in the where clause. We're updating the record only if the version matches. If updateMany returns a count of 0, we know someone beat us to it, and we can retry with fresh data.

Optimistic vs Pessimistic: Performance, Trade-offs, and When to Use Each

So which should you use? The answer comes down to a few factors.

Use pessimistic locking when:

  • Conflicts are likely (multiple users frequently modify the same data)
  • The cost of a conflict is high (financial transactions, critical state changes)
  • You can tolerate reduced throughput for guaranteed consistency
  • Your transactions are short-lived

Use optimistic locking when:

  • Conflicts are rare (users typically work on different records)
  • You need maximum throughput and responsiveness
  • You can gracefully handle retry logic
  • Read operations vastly outnumber writes

Performance-wise, optimistic locking usually wins in low-contention scenarios. There's no lock overhead, no blocking, and reads are fast. But when conflicts become frequent, all those retries add up, and pessimistic locking can actually become more efficient.

I've seen applications try to use optimistic locking for inventory management in flash sales. Wonderful idea in theory! But when thousands of users are fighting over limited stock, the retry storms can bring your system to its knees. That's when pessimistic locking's blocking behavior actually becomes a feature, not a bug.

Real-World Scenarios: E-commerce Inventory and Collaborative Editing

Let me give you two concrete scenarios from projects I've worked on.

E-commerce inventory during normal traffic: We used optimistic locking. Most customers were browsing different products, and even when multiple people viewed the same item, they rarely checked out simultaneously. The occasional retry was invisible to users, and we maintained excellent throughput.

E-commerce inventory during flash sales: We switched to pessimistic locking with FOR UPDATE SKIP LOCKED. This PostgreSQL feature lets transactions skip locked rows instead of waiting. In other words, if a row is locked, the next transaction grabs the next available item. This prevented retry storms and gave us predictable performance under extreme load.
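Here's a hedged sketch of that SKIP LOCKED pattern — it assumes a hypothetical inventory_units table (one row per sellable unit, with a status column), which is not from the original project but illustrates the shape of the query:

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Claim one available inventory unit, skipping rows that other
// transactions have already locked instead of waiting on them.
async function claimInventoryUnit(productId: string): Promise<string | null> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await client.query(
      `SELECT id FROM inventory_units
       WHERE product_id = $1 AND status = 'available'
       LIMIT 1
       FOR UPDATE SKIP LOCKED`,
      [productId]
    );
    if (result.rows.length === 0) {
      await client.query('ROLLBACK');
      return null; // nothing available (or every candidate row is locked)
    }
    const unitId: string = result.rows[0].id;
    await client.query(
      `UPDATE inventory_units SET status = 'claimed' WHERE id = $1`,
      [unitId]
    );
    await client.query('COMMIT');
    return unitId;
  } catch (error) {
    await client.query('ROLLBACK');
    throw error;
  } finally {
    client.release();
  }
}
```

Each concurrent transaction locks a different row, so thousands of buyers drain the stock in parallel without ever queuing behind one another.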

Collaborative document editing: Pure optimistic locking. Users edited different sections most of the time. When conflicts did occur, we showed a friendly message: "Someone else edited this section. Please review their changes." Users understood this, and it was far better than locking the entire document.

Choosing the Right Locking Strategy for Your Application

The real skill isn't memorizing which approach is "better"—it's recognizing which pattern fits your specific use case. I've learned to ask these questions:

What's the likelihood of simultaneous updates to the same record? If you're building a commenting system, probably low. If you're building a seat reservation system, probably high.

What's the user experience impact of a conflict? Can you retry transparently, or does the user need to be involved?

What's your performance profile? Are you optimizing for throughput or for consistency guarantees?

And here's something I wish someone had told me earlier: You don't have to choose just one strategy for your entire application. Use pessimistic locking for your checkout flow and optimistic locking for user profile updates. Match the tool to the job.

And that concludes this post! I hope you found it valuable, and look out for more in the future!