
5 Advanced Async/Await Patterns for Complex Workflows

Master advanced async/await patterns for building resilient, high-performance JavaScript applications. Learn retry strategies, rate limiting, concurrent task management, and more.

While I was looking over some production code the other day, I came across a function that was making API calls in a loop without any error handling, rate limiting, or retry logic. The function would occasionally fail and bring down the entire data pipeline. I was once guilty of this myself—thinking that slapping async/await on everything was enough to handle asynchronous workflows.

Little did I know that basic async/await only scratches the surface of what's possible. When you're building production applications that need to handle unreliable networks, rate-limited APIs, or complex data pipelines, you need more sophisticated patterns in your toolkit.

In this post, I'm going to share five advanced async/await patterns that I've used to build resilient, high-performance applications. These aren't theoretical exercises—these are battle-tested solutions to real problems you'll encounter in production code.

Pattern 1: Retry with Exponential Backoff

The first pattern addresses one of the most common issues in production: transient failures. Networks hiccup, APIs temporarily go down, and services get overwhelmed. Instead of failing immediately, we want to retry with increasing delays between attempts.

Here's the naive approach I used to write (and I cringe looking back at it):

async function fetchData(url: string) {
  try {
    return await fetch(url);
  } catch (error) {
    // Try one more time
    return await fetch(url);
  }
}

The problem? No delay between retries, and only one retry attempt. When I finally decided to implement proper retry logic with exponential backoff, everything changed:

async function fetchWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number = 3,
  baseDelay: number = 1000
): Promise<T> {
  let lastError: Error;
 
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error as Error;
 
      if (attempt === maxRetries) {
        throw new Error(
          `Failed after ${maxRetries} retries: ${lastError.message}`
        );
      }
 
      // Exponential backoff: 1s, 2s, 4s, 8s...
      const delay = baseDelay * Math.pow(2, attempt);
      // Add jitter to prevent thundering herd
      const jitter = Math.random() * 1000;
 
      console.log(
        `Attempt ${attempt + 1} failed. Retrying in ${Math.round(delay + jitter)}ms...`
      );
 
      await new Promise((resolve) => setTimeout(resolve, delay + jitter));
    }
  }
 
  throw lastError!;
}
 
// Usage
const data = await fetchWithRetry(() => fetch('https://api.example.com/data'));

The exponential backoff gives the service time to recover, and the jitter prevents multiple clients from retrying simultaneously. I cannot stress this enough! This simple pattern has saved me from countless production incidents.
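One subtlety with the usage above: `fetch` only rejects on network failures. An HTTP 500 still resolves successfully, so the retry logic never fires for server errors. Here's a minimal sketch that converts error statuses into rejections (the helper name `fetchOkOrThrow` is my own, not part of the pattern above):

```typescript
// fetch resolves even on HTTP 4xx/5xx responses; throw explicitly so a
// retry wrapper like fetchWithRetry sees those statuses as failures.
async function fetchOkOrThrow(url: string): Promise<Response> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response;
}

// const data = await fetchWithRetry(() => fetchOkOrThrow('https://api.example.com/data'));
```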

Pattern 2: Promise Queue for Rate-Limited APIs

Many APIs have rate limits—say, 10 requests per second. If you naively fire off 100 async requests, you'll hit that limit immediately and get throttled or banned. I learned this the hard way when I accidentally DDoS'd my own API during a data migration.


The solution? A promise queue that respects rate limits:

class PromiseQueue {
  private queue: Array<() => Promise<any>> = [];
  private running: number = 0;
 
  constructor(
    private concurrency: number,
    private minInterval: number = 0
  ) {}
 
  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          const result = await fn();
          resolve(result);
        } catch (error) {
          reject(error);
        }
      });
 
      this.process();
    });
  }
 
  private async process(): Promise<void> {
    if (this.running >= this.concurrency || this.queue.length === 0) {
      return;
    }
 
    this.running++;
    const fn = this.queue.shift()!;
 
    try {
      await fn();
    } finally {
      // Hold the slot through the cooldown so the interval is actually
      // enforced; decrementing first would let another task start early
      if (this.minInterval > 0) {
        await new Promise((resolve) => setTimeout(resolve, this.minInterval));
      }

      this.running--;
      this.process();
    }
  }
}
 
// Usage: Limit to 5 concurrent requests, minimum 100ms between each
const queue = new PromiseQueue(5, 100);
 
const userIds = Array.from({ length: 100 }, (_, i) => i);
const promises = userIds.map((id) =>
  queue.add(() => fetch(`https://api.example.com/users/${id}`))
);
 
const results = await Promise.all(promises);

This pattern ensures you never exceed your rate limits while still maximizing throughput. In other words, you get the best of both worlds: speed and compliance with API constraints.
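One caveat with `Promise.all` in the usage above: a single failed request rejects the entire batch, even though the other 99 may have succeeded. If partial results are acceptable, `Promise.allSettled` lets everything finish and reports each outcome. A small sketch (the `collectResults` helper is my own, not part of the queue):

```typescript
// Promise.allSettled never rejects; it reports each promise's outcome,
// so one failed request doesn't discard the successful ones.
async function collectResults<T>(
  promises: Promise<T>[]
): Promise<{ fulfilled: T[]; failed: unknown[] }> {
  const settled = await Promise.allSettled(promises);
  const fulfilled: T[] = [];
  const failed: unknown[] = [];
  for (const outcome of settled) {
    if (outcome.status === 'fulfilled') fulfilled.push(outcome.value);
    else failed.push(outcome.reason);
  }
  return { fulfilled, failed };
}
```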

Pattern 3: Concurrent Task Pool with Worker Management

When you need to process a large number of tasks but want to limit concurrency (maybe to avoid overwhelming your database or memory), you need a worker pool. I came across this need while building a thumbnail generator that processed thousands of images.

Here's the pattern I developed:

async function processWithPool<T, R>(
  items: T[],
  poolSize: number,
  processor: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  const executing = new Set<Promise<void>>();

  for (const [index, item] of items.entries()) {
    const promise: Promise<void> = processor(item)
      .then((result) => {
        results[index] = result;
      })
      // Each promise removes itself from the pool when it settles
      .finally(() => executing.delete(promise));

    executing.add(promise);

    if (executing.size >= poolSize) {
      // Wait for the fastest in-flight task before starting another
      await Promise.race(executing);
    }
  }

  await Promise.all(executing);
  return results;
}
 
// Usage
const imageUrls = [...]; // Array of 1000 URLs
const thumbnails = await processWithPool(
  imageUrls,
  10, // Process 10 at a time
  async (url) => {
    const image = await loadImage(url);
    return generateThumbnail(image);
  }
);

This approach processes items concurrently up to your pool size, then waits for the fastest worker to finish before starting a new task, so you can keep memory usage bounded while maintaining good throughput.
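To convince yourself the cap actually holds, you can instrument a toy workload and record peak concurrency. This sketch inlines a compact variant of the pool so it runs standalone:

```typescript
// Compact standalone variant of the worker pool, plus instrumentation that
// records how many tasks were ever in flight at the same time.
async function poolMap<T, R>(
  items: T[],
  poolSize: number,
  processor: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  const executing = new Set<Promise<void>>();
  for (const [index, item] of items.entries()) {
    const promise: Promise<void> = processor(item)
      .then((result) => { results[index] = result; })
      .finally(() => executing.delete(promise));
    executing.add(promise);
    if (executing.size >= poolSize) await Promise.race(executing);
  }
  await Promise.all(executing);
  return results;
}

let active = 0;
let peak = 0;
const doubleSlowly = async (n: number): Promise<number> => {
  active++;
  peak = Math.max(peak, active);
  await new Promise((resolve) => setTimeout(resolve, 10));
  active--;
  return n * 2;
};
```

Running `poolMap` over 20 items with a pool size of 5 should report a peak of 5, never 20.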

Pattern 4: Graceful Timeout and Cancellation

Sometimes operations take too long and you need to bail out. But here's the catch: JavaScript doesn't have built-in promise cancellation. When I finally decided to implement proper timeouts, I realized I needed AbortController:

async function withTimeout<T>(
  promise: Promise<T>,
  timeoutMs: number,
  abortController?: AbortController
): Promise<T> {
  let timeoutId: ReturnType<typeof setTimeout>;
 
  const timeoutPromise = new Promise<never>((_, reject) => {
    timeoutId = setTimeout(() => {
      abortController?.abort();
      reject(new Error(`Operation timed out after ${timeoutMs}ms`));
    }, timeoutMs);
  });
 
  try {
    return await Promise.race([promise, timeoutPromise]);
  } finally {
    clearTimeout(timeoutId!);
  }
}
 
// Usage with fetch
const controller = new AbortController();
 
try {
  const response = await withTimeout(
    fetch('https://api.example.com/slow-endpoint', {
      signal: controller.signal,
    }),
    5000 // 5 second timeout
  );
  const data = await response.json();
} catch (error) {
  if (error instanceof Error && error.message.includes('timed out')) {
    console.log('Request took too long, moving on...');
  }
}


The AbortController integration is crucial—it actually cancels the underlying network request instead of just ignoring the result. This prevents resource leaks and wasted bandwidth.
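Worth knowing: newer runtimes (Node 17.3+, current browsers) ship `AbortSignal.timeout()`, which produces a signal that aborts itself after the given delay. For the common fetch-with-timeout case it replaces the manual wiring above:

```typescript
// AbortSignal.timeout(ms) auto-aborts after ms milliseconds, so fetch
// rejects with a TimeoutError and no manual setTimeout bookkeeping:
// const response = await fetch('https://api.example.com/slow-endpoint', {
//   signal: AbortSignal.timeout(5000),
// });

// Standalone demonstration of the signal's behavior:
async function demoTimeoutSignal(): Promise<boolean> {
  const signal = AbortSignal.timeout(10);
  await new Promise((resolve) => setTimeout(resolve, 50));
  return signal.aborted; // the signal fired while we waited
}
```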

Pattern 5: Waterfall vs Parallel Execution Strategy

One mistake I see constantly is running operations sequentially when they could run in parallel, or vice versa. Let me show you the difference.

Waterfall (sequential) - use when operations depend on each other:

// Each step needs the previous result
const user = await fetchUser(userId);
const preferences = await fetchPreferences(user.id);
const recommendations = await generateRecommendations(preferences);

Parallel - use when operations are independent:

// All can run simultaneously
const [user, orders, reviews] = await Promise.all([
  fetchUser(userId),
  fetchOrders(userId),
  fetchReviews(userId),
]);
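The difference is easy to measure with a toy benchmark, where `sleep` stands in for a network call:

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Three 50ms "requests" one after another: total is the sum (~150ms)
async function sequentialDemo(): Promise<number> {
  const start = Date.now();
  await sleep(50);
  await sleep(50);
  await sleep(50);
  return Date.now() - start;
}

// The same three "requests" at once: total is the longest single one (~50ms)
async function parallelDemo(): Promise<number> {
  const start = Date.now();
  await Promise.all([sleep(50), sleep(50), sleep(50)]);
  return Date.now() - start;
}
```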

But here's where it gets interesting: sometimes you have a mix. Some operations can run in parallel, but others need to wait:

async function loadDashboard(userId: string) {
  // Phase 1: Get user data (required for everything)
  const user = await fetchUser(userId);
 
  // Phase 2: Fetch multiple independent resources in parallel
  const [profile, settings, stats] = await Promise.all([
    fetchProfile(user.id),
    fetchSettings(user.id),
    fetchStats(user.id),
  ]);
 
  // Phase 3: Generate data that depends on phase 2
  const [summary, charts] = await Promise.all([
    generateSummary(stats),
    generateCharts(stats),
  ]);
 
  return { user, profile, settings, stats, summary, charts };
}

This hybrid approach minimizes total execution time while respecting dependencies. In my case, it reduced dashboard load time from 8 seconds to under 2 seconds.

Real-World Application: Building a Resilient Data Pipeline

Let me tie these patterns together with a real example. I recently built a data pipeline that fetches data from multiple APIs, transforms it, and stores it in a database. Here's how I combined these patterns:

class DataPipeline {
  private queue = new PromiseQueue(5, 200);
 
  async processRecords(records: string[]) {
    const results = await processWithPool(
      records,
      10,
      async (recordId) => {
        return await this.queue.add(async () => {
          const controller = new AbortController();
 
          return await withTimeout(
            fetchWithRetry(
              () =>
                fetch(`https://api.example.com/records/${recordId}`, {
                  signal: controller.signal,
                }),
              3
            ),
            10000,
            controller
          );
        });
      }
    );
 
    return results;
  }
}

This pipeline:

  • Limits concurrent processing to 10 records (worker pool)
  • Rate limits API calls to 5 concurrent with a 200ms cooldown (promise queue)
  • Retries failed requests with exponential backoff
  • Times out requests after 10 seconds
  • Actually cancels timed-out requests

The result? A bulletproof pipeline that handles network issues, rate limits, and slow responses without breaking or overwhelming external services.

Choosing the Right Pattern for Your Workflow

Not every situation needs every pattern. Here's my rule of thumb:

Use retry with backoff when dealing with unreliable networks or services. Use promise queues when APIs have rate limits. Use worker pools when you need to limit memory or resource usage. Use timeouts when operations might hang indefinitely. Mix sequential and parallel execution based on your dependencies.

The key is understanding your constraints and choosing patterns that address your specific bottlenecks. I've seen developers over-engineer simple tasks and under-engineer complex ones. Start simple, measure performance, and add complexity only where it provides real value.

And that concludes this post! I hope you found it valuable, and look out for more in the future!