JavaScript Patterns
13 min read

Async Queue Pattern in JavaScript

Build resilient, scalable asynchronous task processing systems—from basic in-memory queues to advanced distributed patterns—using Node.js. This article covers the design reasoning behind queue architectures, concurrency control mechanisms, and resilience patterns for production systems.

[Diagram: Asynchronous task queue distributing work across multiple executors with bounded concurrency]

Async queues solve a fundamental tension: how to maximize throughput while respecting resource constraints. The core mental model:

Producer → Queue (buffer) → Consumer(s)
    ↑                            ↓
    └────── Backpressure ←── Concurrency Control

Key design decisions:

Decision | In-Memory Queue | Distributed Queue
---------|-----------------|------------------
Persistence | None—process crash loses all jobs | Redis/DB-backed—survives restarts
Scalability | Single process only | Competing consumers across nodes
Failure handling | Caller's responsibility | Built-in retries, DLQ, stalled detection
When to use | Local concurrency control, rate limiting | Cross-process coordination, durability required

Resilience fundamentals:

  • Idempotency: Design consumers to handle duplicate delivery safely
  • Backpressure: Bound queue size to prevent memory exhaustion
  • Exponential backoff + jitter: delay = min(cap, base × 2^attempt) + random() prevents thundering herd
  • Dead Letter Queue (DLQ): Isolate poison messages that exceed retry attempts

Node.js uses a single-threaded, event-driven architecture. This model excels at I/O-bound operations but blocks the main thread during CPU-intensive work. Understanding the event loop phases is essential for designing queue processors that remain responsive.

[Diagram: Node.js event loop phases with the nextTick and Promise microtask queues processed between each phase]

Event Loop Phases (as of Node.js 20+):

The event loop executes in six phases: timers → pending callbacks → idle/prepare → poll → check → close callbacks. Each phase has a FIFO queue of callbacks.

  • Microtask processing: Node.js drains the nextTick queue first, then the Promise microtask queue. Since Node.js 11 this happens after each individual callback, not only between phases; both queues drain fully before the event loop continues.
  • Priority order: process.nextTick() > Promise microtasks > macrotasks (setTimeout, setImmediate); see the ordering demo below
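
A minimal demo of that ordering (the numbered comments show the expected output order):

microtask-order.ts
setTimeout(() => console.log("4: setTimeout (macrotask, timers phase)"), 0)
Promise.resolve().then(() => console.log("3: Promise microtask"))
process.nextTick(() => console.log("2: process.nextTick"))
console.log("1: synchronous code runs first")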

Node.js 20 change (libuv 1.45.0): Timers now run only after the poll phase, not before and after as in earlier versions. This affects callback ordering for code that depends on precise timer/I/O interleaving.

Why this matters for queues: A queue processor that performs synchronous CPU work starves the event loop, preventing lock renewals (in distributed queues) and incoming I/O from being handled. Design processors to yield control regularly via await or setImmediate().
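
For instance, a processor that loops over a large batch can yield between chunks so that timers, incoming I/O, and lock renewals still get serviced. A minimal sketch (the chunk size of 100 is an arbitrary starting point):

yield-to-event-loop.ts
// Defer continuation to the check phase so pending I/O callbacks can run
const yieldToEventLoop = () => new Promise<void>((resolve) => setImmediate(resolve))

async function processLargeBatch<T>(items: T[], handle: (item: T) => void) {
  const CHUNK_SIZE = 100 // tune per workload
  for (let i = 0; i < items.length; i++) {
    handle(items[i]) // synchronous CPU work
    if ((i + 1) % CHUNK_SIZE === 0) {
      await yieldToEventLoop()
    }
  }
}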

In-memory queues throttle async operations within a single process—useful for rate limiting API calls or controlling database connection usage.

Library comparison (as of January 2025):

Library | Weekly Downloads | Design Focus | Concurrency Control
--------|------------------|--------------|--------------------
fastq v1.20 | 60M+ | Raw performance, minimal overhead | Simple concurrency count
p-queue v9.1 | 10M+ | Feature-rich, priority support | Fixed/sliding window rate limiting

p-queue provides priority scheduling, per-operation timeouts, and sliding-window rate limiting. fastq uses object pooling via reusify to reduce GC pressure and, with roughly 6x the weekly downloads, is the common choice for performance-critical scenarios.
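
A quick feel for both APIs, as an illustrative sketch only (check each library's docs for current signatures):

library-comparison.ts
import PQueue from "p-queue"
import fastq from "fastq"

const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

// p-queue: concurrency cap plus priority scheduling (higher priority runs sooner)
const pq = new PQueue({ concurrency: 2 })
pq.add(() => wait(100), { priority: 0 })
pq.add(() => wait(100), { priority: 1 }) // jumps ahead of waiting lower-priority tasks

// fastq: one worker function for all tasks, minimal overhead
const fq = fastq.promise(async (ms: number) => wait(ms), 2)
await fq.push(100)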

Naive implementation (for understanding the pattern):

code-sample.ts
type Task<T = void> = () => Promise<T>

export class AsyncTaskQueue {
  private queue = new TaskQueue<Task>()
  private activeCount = 0
  private concurrencyLimit: number

  constructor(concurrencyLimit: number) {
    this.concurrencyLimit = concurrencyLimit
  }

  // Wrap the factory so callers get a promise that settles with the task's result
  addTask<T>(promiseFactory: Task<T>): Promise<T> {
    const { promise, resolve, reject } = Promise.withResolvers<T>()
    const task: Task = async () => {
      try {
        const result = await promiseFactory()
        resolve(result)
      } catch (error) {
        reject(error)
      }
    }
    this.queue.enqueue(task)
    this.processQueue()
    return promise
  }

  // Start the next task unless the concurrency limit is reached;
  // each completed task triggers another processQueue() call
  private async processQueue(): Promise<void> {
    if (this.activeCount >= this.concurrencyLimit || this.queue.isEmpty()) {
      return
    }
    const task = this.queue.dequeue()
    if (task) {
      this.activeCount++
      try {
        await task()
      } finally {
        this.activeCount--
        this.processQueue()
      }
    }
  }
}
// Queue implementation using a linked list
class TaskNode<T> {
  value: T
  next: TaskNode<T> | null

  constructor(value: T) {
    this.value = value
    this.next = null
  }
}

export class TaskQueue<T> {
  head: TaskNode<T> | null
  tail: TaskNode<T> | null
  size: number

  constructor() {
    this.head = null
    this.tail = null
    this.size = 0
  }

  // Enqueue: add an element to the end of the queue
  enqueue(value: T) {
    const newNode = new TaskNode(value)
    if (this.tail) {
      this.tail.next = newNode
    }
    this.tail = newNode
    if (!this.head) {
      this.head = newNode
    }
    this.size++
  }

  // Dequeue: remove an element from the front of the queue
  dequeue(): T {
    if (!this.head) {
      throw new Error("Queue is empty")
    }
    const value = this.head.value
    this.head = this.head.next
    if (!this.head) {
      this.tail = null
    }
    this.size--
    return value
  }

  isEmpty() {
    return this.size === 0
  }
}
// Example usage
const queue = new AsyncTaskQueue(3)

const createTask = (id: number, delay: number) => () =>
  new Promise<void>((resolve) => {
    console.log(`Task ${id} started`)
    setTimeout(() => {
      console.log(`Task ${id} completed`)
      resolve()
    }, delay)
  })

queue.addTask(createTask(1, 1000))
queue.addTask(createTask(2, 500))
queue.addTask(createTask(3, 1500))
queue.addTask(createTask(4, 200))
queue.addTask(createTask(5, 300))

Note: This uses Promise.withResolvers() (Baseline 2024; available in Node.js 22+).
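
On older runtimes, the same shape can be built manually; a sketch of an equivalent helper:

with-resolvers-fallback.ts
// Manual stand-in for Promise.withResolvers<T>() on runtimes that lack it
function withResolvers<T>() {
  let resolve!: (value: T | PromiseLike<T>) => void
  let reject!: (reason?: unknown) => void
  const promise = new Promise<T>((res, rej) => {
    resolve = res
    reject = rej
  })
  return { promise, resolve, reject }
}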

Critical limitations of in-memory queues:

Limitation | Impact | Mitigation
-----------|--------|-----------
No persistence | Process crash loses all queued jobs | Use distributed queue for durability
Single process | Cannot scale horizontally | Distributed queue with competing consumers
No backpressure by default | Unbounded memory growth under load | Monitor queue size, reject new tasks when saturated
Caller handles errors | Silent failures if not awaited | Always handle returned promises

Backpressure pattern with p-queue:

backpressure.ts
import PQueue from "p-queue"

const queue = new PQueue({ concurrency: 10 })

async function addWithBackpressure<T>(task: () => Promise<T>): Promise<T> {
  // Block producer when queue exceeds threshold
  if (queue.size > 100) {
    await queue.onSizeLessThan(50) // Wait until queue drains
  }
  return queue.add(task)
}

For persistence, horizontal scaling, and cross-process coordination, tasks must be managed by a distributed queue system.

[Diagram: Distributed queue with producers (API server, cron job, event handler), a Redis message broker, competing workers, and a Dead Letter Queue for jobs that exceed max retries]

Component responsibilities:

  • Producers: Enqueue jobs with payload, priority, delay, and retry configuration
  • Message Broker: Persistent store (Redis for BullMQ) providing at-least-once delivery
  • Workers (Competing Consumers): Dequeue and process jobs; multiple workers increase throughput
  • Dead Letter Queue: Captures jobs exceeding retry attempts for manual inspection

Why Redis for BullMQ? Redis provides atomic operations (MULTI/EXEC), sorted sets for delayed jobs, and pub/sub for worker coordination—all with sub-millisecond latency.
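
Conceptually, delayed jobs map onto a sorted set scored by the timestamp at which each job becomes ready. The sketch below illustrates the idea with ioredis; it is not BullMQ's actual internal schema, which performs these steps atomically in Lua scripts:

delayed-jobs-concept.ts
import Redis from "ioredis"

const redis = new Redis() // assumes Redis on localhost:6379

// Schedule a job: score = the time it becomes ready
async function scheduleDelayed(jobId: string, delayMs: number) {
  await redis.zadd("delayed-jobs", Date.now() + delayMs, jobId)
}

// Promote ready jobs onto a wait list for workers to pick up
async function promoteReadyJobs() {
  const ready = await redis.zrangebyscore("delayed-jobs", 0, Date.now())
  for (const jobId of ready) {
    await redis.lpush("wait", jobId)
    await redis.zrem("delayed-jobs", jobId)
  }
}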

Library | Backend | Design Philosophy | Best For
--------|---------|-------------------|---------
BullMQ v5.x | Redis | Modern, feature-rich, production-grade | Most distributed queue needs
Agenda | MongoDB | Cron-style scheduling | Recurring jobs with complex schedules
Temporal | PostgreSQL/MySQL | Workflow orchestration with state machines | Long-running, multi-step workflows

BullMQ (v5.67+) is the production standard for Node.js distributed queues. Key design decisions:

Stalled job detection: Workers acquire a lock when processing a job. If the lock expires (default: 30s) without renewal, BullMQ marks the job as stalled and either requeues it or moves it to failed. This handles worker crashes but also triggers if CPU-bound work starves the event loop.
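
Stalled jobs can be observed through worker and queue events. A monitoring sketch (event names per the BullMQ docs; verify the payloads against your version):

stalled-monitoring.ts
import { Worker, QueueEvents } from "bullmq"

const connection = { host: "localhost", port: 6379 }

const worker = new Worker("email-processing", async (job) => { /* ... */ }, { connection })

// This worker's job lock expired without renewal
worker.on("stalled", (jobId) => {
  console.warn(`Job ${jobId} stalled: check for CPU-bound work or a crashed worker`)
})

// Queue-level stream of stalled events across all workers
const queueEvents = new QueueEvents("email-processing", { connection })
queueEvents.on("stalled", ({ jobId }) => {
  console.warn(`Stalled (queue-wide): ${jobId}`)
})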

producer.ts
import { Queue } from "bullmq"

const connection = { host: "localhost", port: 6379 }
const emailQueue = new Queue("email-processing", { connection })

async function queueEmailJob(userId: number, template: string) {
  await emailQueue.add(
    "send-email",
    { userId, template },
    {
      attempts: 5,
      backoff: { type: "exponential", delay: 1000 },
      removeOnComplete: { count: 1000 }, // Keep last 1000 completed jobs
      removeOnFail: { age: 7 * 24 * 3600 }, // Keep failed jobs for 7 days
    },
  )
}

Worker with concurrency tuning:

worker.ts
import { Worker } from "bullmq"

const emailWorker = new Worker(
  "email-processing",
  async (job) => {
    const { userId, template } = job.data
    // I/O-bound: email API call
    await sendEmail(userId, template)
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 100, // High concurrency for I/O-bound work
    lockDuration: 30000, // 30s lock, must complete or renew
  },
)

Concurrency tuning guidance:

  • I/O-bound jobs (API calls, DB queries): concurrency 100–300
  • CPU-bound jobs: Use sandboxed processors (separate process) with low concurrency
  • Mixed: Start low, measure, increase—monitor for stalled jobs

Sandboxed processors for CPU-intensive work:

sandboxed-worker.ts
// Main process
const worker = new Worker("cpu-intensive", "./processor.js", {
  connection,
  useWorkerThreads: true, // Node.js Worker Threads (BullMQ 3.13+)
})

// processor.js (separate file)
export default async function (job) {
  // CPU work here doesn't block the main process
  return heavyComputation(job.data)
}

Job flows for dependencies (parent waits for all children):

flow.ts
import { FlowProducer } from "bullmq"

const flowProducer = new FlowProducer({ connection })

await flowProducer.add({
  name: "process-order",
  queueName: "orders",
  data: { orderId: "123" },
  children: [
    { name: "validate", queueName: "validation", data: { orderId: "123" } },
    { name: "reserve", queueName: "inventory", data: { orderId: "123" } },
    { name: "charge", queueName: "payments", data: { orderId: "123" } },
  ],
})
// Parent job waits in 'waiting-children' state until all children complete

Rate limiting (global across all workers):

rate-limited-worker.ts
const worker = new Worker("api-calls", processor, {
connection,
limiter: {
max: 10, // 10 jobs
duration: 1000, // per second
},
})

Naive immediate retries cause a thundering herd—all failed jobs retry simultaneously, overwhelming the recovering service.

[Diagram: Exponential backoff with jitter (1s, 2s, 4s, then capped at 8s, each plus a random offset) desynchronizes retries, preventing load spikes]

Formula: delay = min(cap, base × 2^attempt) + random(0, delay × jitterFactor)

BullMQ implementation:

retry-config.ts
await queue.add("api-call", payload, {
attempts: 5,
backoff: {
type: "exponential",
delay: 1000, // Base: 1s, 2s, 4s, 8s, 16s
},
})

Why jitter? Without jitter, jobs that failed at the same time retry at the same time. Jitter spreads retries uniformly, smoothing load on downstream services.
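
BullMQ's built-in exponential type does not add jitter. One option is to apply the formula above as a custom backoff strategy. A sketch assuming the Worker settings.backoffStrategy hook (check the exact signature for your BullMQ version):

jittered-backoff.ts
import { Worker } from "bullmq"

const connection = { host: "localhost", port: 6379 }

// delay = min(cap, base × 2^attempt) + random jitter
function backoffWithJitter(attempt: number, base = 1000, cap = 30_000): number {
  const exponential = Math.min(cap, base * 2 ** attempt)
  return exponential + Math.random() * exponential * 0.5
}

const worker = new Worker("api-calls", async (job) => { /* call the flaky API */ }, {
  connection,
  settings: {
    backoffStrategy: (attemptsMade) => backoffWithJitter(attemptsMade),
  },
})
// Producers opt in with: attempts: 5, backoff: { type: "custom" }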

Some jobs are inherently unprocessable: malformed payloads, missing dependencies, or bugs in consumer logic. These “poison messages” must be isolated.

[Diagram: Worker outcomes (success, retry, or move to the DLQ once max attempts are exceeded) with manual review and fix & replay; the DLQ isolates poison messages, allowing main queue processing to continue]

BullMQ automatically moves jobs to “failed” state after exhausting retries. Query failed jobs for manual inspection:

dlq-inspection.ts
const failedJobs = await queue.getFailed(0, 100) // Get first 100 failed jobs

for (const job of failedJobs) {
  console.log(`Job ${job.id} failed: ${job.failedReason}`)
  // Optionally: fix data and retry
  await job.retry()
}

Distributed queues provide at-least-once delivery. Network partitions, worker crashes, or lock expiration can cause duplicate delivery. Consumers must handle this safely.

Idempotency strategies:

Strategy | Implementation | Trade-off
---------|----------------|----------
Unique constraint | DB unique index on job ID | Simple; fails fast on duplicate
Idempotency key | Store processed keys in Redis/DB with TTL | Allows explicit duplicate check
Conditional write | UPDATE ... WHERE version = ? | Handles concurrent execution
idempotent-consumer.ts
import { Worker } from "bullmq"
import { db } from "./database"

const worker = new Worker("user-registration", async (job) => {
  const { userId, userData } = job.data

  // Check if already processed using job ID
  const existing = await db.processedJobs.findByPk(job.id)
  if (existing) {
    console.log(`Job ${job.id} already processed, skipping`)
    return
  }

  // Atomic: create user + mark job processed
  await db.transaction(async (t) => {
    await db.users.create(userData, { transaction: t })
    await db.processedJobs.create({ jobId: job.id, processedAt: new Date() }, { transaction: t })
  })
})

Problem: How do you atomically update a database AND publish an event? If you publish first and the DB write fails, you’ve sent an invalid event. If you write first and publishing fails, the event is lost.

[Diagram: Transactional outbox: the service writes the business table and the outbox table in a single transaction; a message relay polls the outbox, publishes to the message broker, and marks events as sent]

Solution: Write events to an “outbox” table in the same transaction as business data. A separate relay process polls the outbox and publishes to the message broker.

transactional-outbox.ts
async function createUserWithEvent(userData: UserData) {
  return await db.transaction(async (t) => {
    const user = await db.users.create(userData, { transaction: t })
    // Event written atomically with business data
    await db.outbox.create(
      {
        eventType: "USER_CREATED",
        aggregateId: user.id,
        payload: JSON.stringify({ userId: user.id, ...userData }),
        status: "PENDING",
      },
      { transaction: t },
    )
    return user
  })
}

// Separate relay process (runs continuously)
async function relayOutboxEvents() {
  const pending = await db.outbox.findAll({ where: { status: "PENDING" }, limit: 100 })
  for (const event of pending) {
    await messageBroker.publish(event.eventType, JSON.parse(event.payload))
    await event.update({ status: "SENT" })
  }
}

Trade-offs:

  • Pros: Guarantees consistency between DB and events; survives broker outages
  • Cons: Adds latency (polling interval); requires relay process; eventual consistency only

When a business operation spans multiple services (each with its own database), traditional ACID transactions don’t apply. The Saga pattern coordinates distributed operations through a sequence of local transactions with compensating actions for rollback.

[Diagram: Order saga: happy path (reserve inventory, charge payment, ship order, confirm) and compensation on failure (cancel shipment, refund payment, release inventory); each step has a corresponding rollback action]

Choreography vs. Orchestration:

Aspect | Choreography | Orchestration
-------|--------------|--------------
Coordination | Services react to events autonomously | Central orchestrator directs steps
Coupling | Loose—services don't know about each other | Tighter—orchestrator knows all steps
Debugging | Harder—flow distributed across services | Easier—single point of visibility
Failure point | Distributed | Orchestrator (must be resilient)
saga-orchestrator.ts
interface SagaStep {
  action: () => Promise<void>
  compensate: () => Promise<void>
}

class OrderSagaOrchestrator {
  async execute(orderData: OrderData) {
    const steps: SagaStep[] = [
      { action: () => this.reserveInventory(orderData), compensate: () => this.releaseInventory(orderData) },
      { action: () => this.chargePayment(orderData), compensate: () => this.refundPayment(orderData) },
      { action: () => this.createShipment(orderData), compensate: () => this.cancelShipment(orderData) },
    ]
    const completed: SagaStep[] = []
    try {
      for (const step of steps) {
        await step.action()
        completed.push(step)
      }
    } catch (error) {
      // Compensate in reverse order
      for (const step of completed.reverse()) {
        await step.compensate()
      }
      throw error
    }
  }

  private async reserveInventory(order: OrderData) {
    /* ... */
  }
  private async releaseInventory(order: OrderData) {
    /* ... */
  }
  // ... other steps: chargePayment, refundPayment, createShipment, cancelShipment
}
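
For contrast, a choreography sketch: each service subscribes to the previous service's event and emits its own, with no central coordinator. Queue names and the reserveInventory/chargePayment helpers are illustrative:

saga-choreography.ts
import { Queue, Worker } from "bullmq"

// Illustrative service handlers; assume each talks to its own datastore
declare function reserveInventory(orderId: string): Promise<void>
declare function chargePayment(orderId: string): Promise<void>

const connection = { host: "localhost", port: 6379 }
const inventoryReserved = new Queue("inventory-reserved", { connection })
const paymentCharged = new Queue("payment-charged", { connection })

// Inventory service: reacts to order creation, then emits its own event
new Worker("order-created", async (job) => {
  await reserveInventory(job.data.orderId)
  await inventoryReserved.add("event", job.data)
}, { connection })

// Payment service: reacts to the inventory event
new Worker("inventory-reserved", async (job) => {
  await chargePayment(job.data.orderId)
  await paymentCharged.add("event", job.data)
}, { connection })
// Compensation events (e.g. "payment-failed") propagate the same way in reverse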

Event Sourcing stores state changes as an immutable sequence of events rather than current state. Queues distribute these events to consumers that build read models (CQRS).

[Diagram: Event Sourcing with CQRS: the command handler appends events to the event store, which publishes them through a message queue to multiple read models and analytics consumers]

Why use queues with Event Sourcing?

  • Decoupling: Read model consumers don’t need to poll the event store
  • Replay: Consumers can re-read events from an earlier offset (e.g., Kafka's retained, replayable log)
  • Scaling: Multiple consumer groups process events independently

Kafka is commonly used because its durable, replayable log naturally fits the event store pattern. Log compaction retains the last event per key, enabling efficient state reconstruction.
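
A read-model projection consumer might look like the following kafkajs sketch; the topic name and the applyToReadModel helper are illustrative, not part of any specific event store:

read-model-consumer.ts
import { Kafka } from "kafkajs"

// Illustrative: idempotent projection of an event into the read store
declare function applyToReadModel(event: unknown): Promise<void>

const kafka = new Kafka({ clientId: "read-model-builder", brokers: ["localhost:9092"] })
const consumer = kafka.consumer({ groupId: "user-read-model" })

async function run() {
  await consumer.connect()
  await consumer.subscribe({ topic: "user-events", fromBeginning: true }) // replay full history
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString())
      await applyToReadModel(event)
    },
  })
}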

Async queue patterns form the backbone of scalable Node.js systems. The progression from in-memory queues to distributed systems follows increasing requirements:

  1. In-memory (p-queue, fastq): Local concurrency control, no durability
  2. Distributed (BullMQ): Cross-process coordination, persistence, at-least-once delivery
  3. Workflow orchestration (Temporal, Sagas): Complex multi-step operations with compensation

Key design principles apply at every level:

  • Bound your queues: Implement backpressure to prevent memory exhaustion
  • Design for failure: Exponential backoff, idempotent consumers, dead letter queues
  • Monitor everything: Queue depth, processing latency, retry rate, stalled job count
  • Match concurrency to workload: High for I/O-bound, low (with sandboxing) for CPU-bound

Prerequisites:

  • Node.js event loop fundamentals (phases, microtask queue)
  • JavaScript Promises and async/await
  • Redis basics (for BullMQ sections)
  • Database transactions (for outbox pattern)

Glossary:

  • Backpressure: Mechanism to slow producers when consumers can't keep up
  • Competing Consumers: Pattern where multiple workers pull from the same queue
  • DLQ (Dead Letter Queue): Queue for messages that fail processing repeatedly
  • Idempotency: Property where repeated operations produce the same result
  • Jitter: Random delay added to prevent synchronized retries
  • Stalled Job: Job whose lock expired without completion (worker crash or CPU starvation)

Key takeaways:

  • In-memory queues (p-queue v9.x, fastq v1.20.x) control local concurrency with no persistence
  • BullMQ v5.x provides Redis-backed distributed queues with retries, rate limiting, flows, and stalled detection
  • Exponential backoff with jitter prevents thundering herd on transient failures
  • Idempotent consumers are mandatory for at-least-once delivery systems
  • Transactional outbox ensures atomic DB writes and event publishing
  • Saga pattern coordinates distributed transactions through compensating actions
  • Monitor queue depth, latency, retry rate, and stalled job count in production
