Publish-Subscribe Pattern in JavaScript
Architectural principles, implementation trade-offs, and production patterns for event-driven systems. Covers the three decoupling dimensions, subscriber ordering guarantees, error isolation strategies, and when Pub/Sub is the wrong choice.
Note
This article is the in-process JavaScript view of the pattern — building, embedding, and operating an event bus inside a single Node.js or browser runtime. For the distributed messaging view (Kafka, RabbitMQ, Redis Streams, delivery semantics, partitioning), see Queues and Pub/Sub: Decoupling and Backpressure. For the work-queue view (concurrency caps, retries, DLQs with p-queue, fastq, and BullMQ), see Async Queue Pattern in JavaScript.
Abstract
Mental Model: Pub/Sub trades explicit control flow for decoupling. Publishers fire events into a broker without knowing who (if anyone) receives them. Subscribers register interest without knowing who produces events. The broker is the single point of coupling.
Core Trade-off: Loose coupling enables independent component evolution and many-to-many communication. The cost is implicit control flow—debugging requires tracing events across the system rather than following function calls.
Key Implementation Decisions:
| Decision | Recommendation | Why |
|---|---|---|
| Subscriber storage | `Map<string, Set<Function>>` | O(1) add/delete, prevents duplicates, preserves insertion order (ES6+ guarantee) |
| Error isolation | try/catch per subscriber | One failing subscriber must not break others |
| Async publish | Return `Promise.allSettled()` | Await completion without short-circuiting on errors |
| Cleanup | Return unsubscribe function | Prevents memory leaks; ties to component lifecycle |
When NOT to Use:
- Request-response patterns (use direct calls)
- Single known recipient (Observer pattern suffices)
- When you need delivery guarantees (Pub/Sub doesn’t guarantee delivery)
- Simple systems unlikely to scale (unnecessary indirection)
The Three Dimensions of Decoupling
Eugster et al.’s foundational paper “The Many Faces of Publish/Subscribe” defines pub/sub as providing “full decoupling of the communicating entities in time, space, and synchronization”:
- Space Decoupling: Publishers emit events without knowledge of which subscribers (if any) will receive them. Subscribers register without knowing which publishers produce events. Neither knows the other's identity, location, or count.
- Time Decoupling: Publishers and subscribers need not be active simultaneously. In distributed systems (MQTT, AMQP), a subscriber can receive messages published while it was offline. In-process implementations typically lack this—events are lost if no subscriber exists at publish time.
- Synchronization Decoupling: Publishing is non-blocking. The publisher hands the event to the broker and continues immediately. This contrasts with RPC, which the paper notes has "synchronous nature… [which] introduces a strong time, synchronization, and also space coupling."
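In-process, synchronization decoupling can be approximated by deferring dispatch to a microtask so the publisher never waits on subscribers. A minimal sketch; the `DeferredBus` class here is illustrative, not this article's production implementation:

```typescript
type Handler<T> = (data: T) => void

// Broker that defers every dispatch to a microtask: publish() returns
// before any subscriber runs, giving synchronization decoupling in-process.
class DeferredBus<T> {
  private handlers = new Set<Handler<T>>()

  subscribe(fn: Handler<T>) {
    this.handlers.add(fn)
    return () => this.handlers.delete(fn)
  }

  publish(data: T) {
    for (const fn of this.handlers) {
      queueMicrotask(() => fn(data)) // non-blocking: runs after current sync code
    }
  }
}

const bus = new DeferredBus<string>()
const seen: string[] = []
bus.subscribe((msg) => seen.push(msg))
bus.publish("hello")
// The publisher continues immediately; the subscriber has not run yet.
console.log(seen.length) // 0 here; becomes 1 once the microtask queue drains
```

The trade-off of deferral is that exceptions and slowdowns in subscribers surface outside the publisher's stack, which is exactly the debugging cost discussed later.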
Subscription Schemes
Eugster et al. classify pub/sub variants by how subscribers express interest, not by transport. The three schemes are not mutually exclusive — production systems usually pick one as the primary model and layer the others on top.
| Scheme | Subscriber declares | Routing cost | Examples |
|---|---|---|---|
| Topic-based | A topic / channel name (string) | O(1) hash lookup | Redis Pub/Sub, Kafka topics, NATS subjects, Node EventEmitter |
| Content-based | A predicate over event payload (`price > 100`) | O(N) match unless an index exists | RabbitMQ headers exchange, Solace, Siena, Gryphon |
| Type-based | A class / schema; matches by structural inheritance | O(depth of type hierarchy) | CORBA Notification, Scala Akka typed channels, distributed-OO RT |
Topic-based is the dominant production model because the broker can route a publish in constant time — every other scheme either evaluates a predicate per subscriber (content-based) or walks a type lattice (type-based). The broker compares strings; that is the whole reason the model scales.[^1]
The implementation in this article is topic-based. Most of what you build in JavaScript — EventEmitter, mitt, the production class below — is topic-based with optional wildcard matching layered on top. Reach for content-based filtering only when subscribers genuinely need server-side selection (e.g., “trades > $1M from US exchanges”) and the broker supports an index for the predicate; otherwise filter on the consumer.
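Filtering on the consumer can be layered over any topic-based broker without touching its routing. A sketch in which the `TopicBus` interface and `subscribeWhere` helper are hypothetical, standing in for whatever broker API you use:

```typescript
type Unsubscribe = () => void

// Hypothetical minimal broker surface: any topic-based bus that
// returns an unsubscribe function fits this shape.
interface TopicBus {
  subscribe<T>(topic: string, cb: (data: T) => void): Unsubscribe
}

interface Trade {
  amountUsd: number
  exchangeCountry: string
}

// Consumer-side content filter: the broker still routes by topic in O(1);
// the predicate runs on the consumer, once per delivered event.
function subscribeWhere<T>(
  bus: TopicBus,
  topic: string,
  predicate: (data: T) => boolean,
  cb: (data: T) => void
): Unsubscribe {
  return bus.subscribe<T>(topic, (data) => {
    if (predicate(data)) cb(data)
  })
}

// "trades > $1M from US exchanges", selected client-side:
// subscribeWhere<Trade>(bus, "market.trade",
//   (t) => t.amountUsd > 1_000_000 && t.exchangeCountry === "US",
//   handleBigTrade)
```

This keeps the broker simple at the cost of delivering (then discarding) non-matching events; server-side content-based routing is worth it only when that discarded traffic is significant.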
Pub/Sub vs Observer Pattern
| Aspect | Observer | Pub/Sub |
|---|---|---|
| Coupling | Subject knows observers directly | Publishers and subscribers unknown to each other |
| Intermediary | None—Subject IS the dispatcher | Broker/Event Bus required |
| Cardinality | One-to-many | Many-to-many |
| Typical scope | In-process, single component | Cross-component, potentially distributed |
| Use case | UI state binding, reactive streams | Module communication, microservices |
Observer is appropriate when a single subject owns the state and notifies dependents. Pub/Sub is appropriate when multiple independent components need to communicate without direct references.
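For contrast, a minimal Observer: the subject holds its observers directly, so there is no broker and no topic routing. The `Subject` class and the push-state-on-attach convention below are illustrative choices, not a fixed specification of the pattern:

```typescript
type Observer<T> = (state: T) => void

// The subject IS the dispatcher: it owns both the state and the observer list.
class Subject<T> {
  private observers = new Set<Observer<T>>()

  constructor(private state: T) {}

  attach(observer: Observer<T>): () => void {
    this.observers.add(observer)
    observer(this.state) // common convention: push current state on attach
    return () => this.observers.delete(observer)
  }

  setState(next: T) {
    this.state = next
    for (const observer of this.observers) observer(next) // direct notification
  }
}

const temperature = new Subject(20)
temperature.attach((t) => console.log(`display: ${t}°C`)) // prints "display: 20°C"
temperature.setState(25) // prints "display: 25°C"
```

Note the coupling: callers hold a reference to `temperature` itself. In pub/sub they would hold only a topic name and a reference to the broker.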
Basic Implementation
A minimal implementation using ES6+ Set for O(1) subscriber management:
```typescript
type Callback<T> = (data: T) => void

export class PubSub<T> {
  #subscribers = new Set<Callback<T>>()

  subscribe(callback: Callback<T>) {
    this.#subscribers.add(callback)
    return () => this.#subscribers.delete(callback)
  }

  publish(data: T) {
    for (const subscriber of this.#subscribers) {
      subscriber(data)
    }
  }
}
```

Why Set over Array:
- O(1) add/delete vs O(n) for array splice
- Automatic deduplication—same callback added twice is stored once
- Insertion order guaranteed per ECMA-262: “Set objects iterate through elements in insertion order”
If you need the same callback to fire multiple times per event (rare), use an array instead.
Usage:
```typescript
const events = new PubSub<{ userId: string; action: string }>()

const unsubscribe = events.subscribe((data) => console.log(data.action))

events.publish({ userId: "123", action: "login" })
events.publish({ userId: "123", action: "logout" })

unsubscribe() // Critical: prevents memory leaks
```

DOM EventTarget Alternative
The browser’s EventTarget provides native pub/sub via CustomEvent:
```typescript
// Publisher
const event = new CustomEvent("user:login", { detail: { userId: "123" } })
document.dispatchEvent(event)

// Subscriber
document.addEventListener("user:login", (e: CustomEvent) => {
  console.log(e.detail.userId)
})
```

Limitations:
- Tied to `document`, which exists only on the browser main thread—unavailable in Web Workers and Node.js
- No TypeScript generics—`detail` is `any`
- Global namespace—all events share `document`, risking collisions
Custom implementation adds ~16 lines but works across all JavaScript runtimes.
Production-Grade Implementation
Production systems require: topic-based routing, error isolation, async support, and proper typing. Key design decisions explained inline:
```typescript
type Callback<T = unknown> = (data: T) => void | Promise<void>

interface Subscription {
  token: number
  unsubscribe: () => void
}

// Using Map<string, Map<number, Callback>> for:
// - O(1) topic lookup
// - O(1) subscriber add/remove by token
// - Stable iteration order (subscribers called in registration order)
class PubSub {
  private topics = new Map<string, Map<number, Callback>>()
  private nextToken = 0

  subscribe<T>(topic: string, callback: Callback<T>): Subscription {
    const token = this.nextToken++
    if (!this.topics.has(topic)) {
      this.topics.set(topic, new Map())
    }
    this.topics.get(topic)!.set(token, callback as Callback)
    return {
      token,
      unsubscribe: () => {
        const subscribers = this.topics.get(topic)
        if (subscribers) {
          subscribers.delete(token)
          // Clean up empty topics to prevent memory growth
          if (subscribers.size === 0) this.topics.delete(topic)
        }
      },
    }
  }

  // Synchronous publish with error isolation
  // Returns: whether any subscribers existed
  publish<T>(topic: string, data: T): boolean {
    const subscribers = this.topics.get(topic)
    if (!subscribers || subscribers.size === 0) return false
    for (const callback of subscribers.values()) {
      // Critical: try/catch per subscriber
      // One failing subscriber must not break others
      try {
        callback(data)
      } catch (error) {
        console.error(`[PubSub] Error in subscriber for "${topic}":`, error)
        // Optional: emit to error topic for centralized handling
        // this.publish('pubsub:error', { topic, error, data })
      }
    }
    return true
  }

  // Async publish: waits for all subscribers, doesn't short-circuit on errors
  async publishAsync<T>(topic: string, data: T): Promise<PromiseSettledResult<void>[]> {
    const subscribers = this.topics.get(topic)
    if (!subscribers || subscribers.size === 0) return []
    // Promise.allSettled (ES2020) ensures all subscribers complete
    // even if some reject—critical for reliable event handling
    const promises = Array.from(subscribers.values()).map(async (callback) => {
      await callback(data)
    })
    return Promise.allSettled(promises)
  }
}

// Singleton for cross-module communication
export const pubsub = new PubSub()
```

Design Decisions:
| Choice | Rationale |
|---|---|
| `Map<string, Map<number, Callback>>` | Nested maps give O(1) operations at both topic and subscriber level |
| Numeric tokens | Monotonic IDs avoid collision; simpler than UUID |
| `Promise.allSettled` over `Promise.all` | Doesn't short-circuit on first rejection—all subscribers complete |
| Empty topic cleanup | Prevents unbounded memory growth from stale topics |
| Per-subscriber try/catch | Isolates failures; one bad subscriber doesn't break others |
Memory Leak Prevention
Memory leaks in pub/sub arise when subscribers outlive their intended scope. Common patterns:
- Anonymous functions can't be removed — `removeEventListener` requires the same function reference
- Closures capture component state — the subscriber holds references that prevent garbage collection
- Missing cleanup on unmount — React components, Angular services, etc.
Node.js warns at 11+ listeners per event: MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
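When the warning is a false positive (one emitter legitimately fanning out to many components), raise the limit explicitly per emitter rather than silencing it globally. `setMaxListeners` is Node's documented API for this:

```typescript
import { EventEmitter } from "node:events"

const emitter = new EventEmitter()

// Default threshold is 10; adding an 11th listener for one event name
// triggers MaxListenersExceededWarning. Raise it deliberately:
emitter.setMaxListeners(50)

// Passing 0 disables the check entirely. Use with care: the warning
// exists to surface listeners that are added but never removed.
// emitter.setMaxListeners(0)

console.log(emitter.getMaxListeners()) // 50
```

Treat the raised limit as documentation of intent: it tells the next reader that the fan-out is expected, not a leak.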
React Pattern (useEffect cleanup):
```tsx
import { useEffect, useState } from "react"
import { pubsub } from "./pubsub"

interface StatusEvent {
  userId: string
  isOnline: boolean
}

function UserStatus({ userId }: { userId: string }) {
  const [isOnline, setIsOnline] = useState(false)

  useEffect(() => {
    const { unsubscribe } = pubsub.subscribe<StatusEvent>("user:status", (data) => {
      if (data.userId === userId) setIsOnline(data.isOnline)
    })
    // Critical: cleanup on unmount or userId change
    return unsubscribe
  }, [userId])

  return <span>{isOnline ? "Online" : "Offline"}</span>
}
```

Key points:
- Return `unsubscribe` directly from `useEffect` — it's already a cleanup function
- Include `userId` in the deps array — re-subscribes when the prop changes
- A named function isn't needed since we use the returned `unsubscribe`
Subscriber Ordering and Error Handling
Execution Order Guarantees
Are subscribers called in registration order? It depends on the implementation.
Per ECMA-262, Map and Set iterate in insertion order. Our implementation using Map<number, Callback> guarantees subscribers execute in registration order.
Per Node.js EventEmitter docs: “All listeners attached to it at the time of emitting are called in order.”
Per WHATWG DOM Standard: EventTarget listeners are called in registration order within each phase.
Caveat: Not all libraries guarantee order. If order matters for your use case, either:
- Use a single subscriber that orchestrates the sequence
- Verify your library’s implementation
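The first option can be sketched concretely: one subscriber owns the sequence, so ordering no longer depends on broker iteration order. The step functions below are hypothetical stand-ins for real handlers:

```typescript
type OrderCreated = { orderId: string }

const trace: string[] = [] // records execution order for illustration

// Hypothetical step functions: stand-ins for your real handlers.
async function reserveInventory(_e: OrderCreated) { trace.push("inventory") }
async function chargePayment(_e: OrderCreated) { trace.push("payment") }
async function sendConfirmation(_e: OrderCreated) { trace.push("email") }

// Single orchestrating subscriber: each step awaits the previous,
// making the sequence explicit in one place.
async function onOrderCreated(event: OrderCreated) {
  await reserveInventory(event)
  await chargePayment(event) // never starts before inventory is reserved
  await sendConfirmation(event)
}

// With the production class above:
// pubsub.subscribe<OrderCreated>("order.created", onOrderCreated)
```

The cost is that the orchestrator now knows all three steps, trading some decoupling for a guaranteed order; that is usually the right trade when the steps genuinely depend on each other.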
Async Subscriber Pitfalls
Node.js docs warn: “Using async functions with event handlers is problematic, because it can lead to an unhandled rejection in case of a thrown exception.”
```typescript
// Dangerous: unhandled rejection if async handler throws
pubsub.subscribe("data", async (data) => {
  await saveToDatabase(data) // If this throws, rejection is unhandled
})
```

Solutions:

- Use `publishAsync` with `Promise.allSettled` (shown in the production implementation)
- Wrap async subscribers in try/catch:

```typescript
pubsub.subscribe("data", async (data) => {
  try {
    await saveToDatabase(data)
  } catch (error) {
    // Handle locally or emit to error topic
    pubsub.publish("error", { source: "data", error })
  }
})
```

- For Node's built-in `EventEmitter`, enable `captureRejections`. Node ≥ 13.4 ships an opt-in mode that auto-installs `.then(undefined, handler)` on the promise an async listener returns and forwards the rejection to the emitter's `'error'` event (or `Symbol.for('nodejs.rejection')` if defined). It is opt-in because changing the default would break existing emitters whose handlers rely on rejections silently bubbling out.

```typescript
import { EventEmitter } from "node:events"

const ee = new EventEmitter({ captureRejections: true })

ee.on("data", async () => {
  throw new Error("kaboom") // Now routed to 'error' below, not unhandled
})

ee.on("error", (err) => console.error("[ee]", err))
```

Do not use an async function for the `'error'` listener itself — Node deliberately leaves that path uncaught to avoid infinite error loops.
Advanced Capabilities
Hierarchical Topics and Wildcard Subscriptions
Topic naming convention: domain.entity.action (e.g., user.profile.updated, cart.item.added).
Important
Wildcard syntax differs across protocols. The example below follows the AMQP 0-9-1 topic exchange convention (. separator, * for one word, # for zero or more). MQTT 5.0 §4.7 is different: / separator, + for a single level, # for multi-level (must be the trailing token). NATS uses * for one token and > for the rest. Pick one convention per system and document it; do not mix.
| Protocol | Separator | Single-level | Multi-level | Multi-level placement |
|---|---|---|---|---|
| AMQP 0-9-1 (RabbitMQ topics) | `.` | `*` | `#` | Anywhere |
| MQTT 5.0 | `/` | `+` | `#` | Trailing token only |
| NATS | `.` | `*` | `>` | Trailing token only |
A minimal AMQP-style matcher in JavaScript:
```typescript
// Matches subscription patterns like "user.*.login" or "user.#"
// against published topics like "user.123.login"
function topicMatches(pattern: string, topic: string): boolean {
  const patternParts = pattern.split(".")
  const topicParts = topic.split(".")
  for (let i = 0; i < patternParts.length; i++) {
    const p = patternParts[i]
    if (p === "#") return true // Multi-level: match rest
    if (p !== "*" && p !== topicParts[i]) return false
  }
  return patternParts.length === topicParts.length
}
```

Backpressure Considerations
Standard pub/sub has no built-in backpressure. Publishers emit as fast as they can regardless of subscriber capacity. The naïve in-process broker has no upper bound on queue depth, no consumer credit, and no signal back to the publisher when subscribers are slow.
In-process strategies, in increasing order of safety:
- Debounce / throttle at the publisher — lossy but bounds publish rate. Use when individual events are fungible (mouse-move, scroll position, dirty-marker).
- Bounded buffer with drop policy — wrap the broker in a ring buffer; on overflow drop oldest, drop newest, or drop random. Match the policy to the consumer (`drop-oldest` for telemetry, `drop-newest` for command-like signals you must not reorder).
- `Promise.allSettled` with a concurrency cap — gate `publishAsync` behind a semaphore (e.g. `p-limit`) so the publisher actually waits when subscribers stall. This converts implicit unbounded fan-out into explicit cooperative backpressure.
- Pull-based async iterators — switch the contract: subscribers `for await` events, the broker only computes the next event on demand. Libraries like `event-iterator` wrap a callback API with a `highWaterMark`; once the buffer fills, the broker stops calling the source until the consumer drains. This is the pattern Node uses for readable streams in object mode.
When in-process broker usage moves beyond UI events into work that must not be lost, escalate out of pub/sub to a real broker with acknowledgments and durable storage — see the Async Queue Pattern and the Queues and Pub/Sub articles for the operational model.
Broker vs Brokerless
The classical pub/sub model puts a broker in the middle: every publish lands at the broker, which owns subscription state and dispatch. This is what every example in this article assumes, what EventEmitter does in-process, and what Kafka, RabbitMQ, NATS, and Redis Pub/Sub do over the network.
A brokerless model removes the dispatcher: publishers send directly to known subscribers, often via multicast, gossip, or a distributed routing table. ZeroMQ's PUB/SUB sockets are the canonical example — there is no central process; each peer maintains its own subscription table and the publisher writes once per subscriber socket. The trade: operational simplicity (no broker to deploy, scale, or operate) in exchange for weaker guarantees (no durability, no replay, no central observability) and a heavier client.[^2]
In practice, choose brokerless only when you control all participants, the network is trusted, and persistence is genuinely not needed. Everything else benefits from the broker — even if it is only a single-node Redis or NATS server.
Broker Comparison: Redis Pub/Sub vs Kafka vs NATS Core
When the in-process broker is no longer enough, pick the distributed broker that matches your delivery model. The three most common topic-based defaults differ in exactly the dimensions that bite in production:
| Property | Redis Pub/Sub | Kafka | NATS Core |
|---|---|---|---|
| Persistence | None (fire-and-forget) | Durable, replicated commit log | None in core; JetStream adds durability |
| Delivery semantics | At-most-once | At-least-once (with offsets / commits) | At-most-once core; at-least-once via JS |
| Replay / time-travel | No | Yes (offset seek, retention.ms) | JetStream only |
| Slow-consumer behavior | Drops messages above `client-output-buffer-limit pubsub` | Reader lags, log retains data | Disconnects slow client (`max_pending`) |
| Ordering | Per-channel, best-effort | Per-partition, strict | Per-subject, best-effort |
| Routing model | Topic + glob patterns | Topic + partition key | Subject + token wildcards (*, >) |
| Throughput target | Low-mid (single-node memory) | High (sequential disk IO, batches) | Very high (in-memory, no ack default) |
Pick Redis Pub/Sub for ephemeral notifications inside a system that already runs Redis (cache invalidation, websocket fan-out) and where loss on restart is fine.[^3] Pick Kafka when consumers need replay, exactly-once semantics, or independent scaling on a durable log — the Kafka docs describe it as "a distributed, replicated commit log", which is exactly what changes the operational model. Pick NATS Core for the lowest-latency fan-out path; add JetStream when you need persistence on top, accepting the extra ops cost.[^4]
Caution
Redis Pub/Sub and NATS Core both drop messages for slow consumers — Redis disconnects the subscriber once client-output-buffer-limit pubsub is exceeded; NATS disconnects when max_pending is hit. If your system cannot tolerate loss, you need Redis Streams, Kafka, NATS JetStream, or an MQTT broker with QoS ≥ 1, not bare pub/sub.
When NOT to Use Pub/Sub (Antipatterns)
Understanding when to avoid pub/sub is as important as knowing when to use it. Five common antipatterns:
1. Forcing Commands into Pub/Sub
Per CodeOpinion: “Commands do not use the publish-subscribe pattern. Trying to force everything into publish-subscribe when that’s not the pattern you want will lead you to apply more patterns incorrectly.”
Command vs Event:
- Command: “CreateOrder” — directed to a specific handler, expects execution
- Event: “OrderCreated” — notification of something that happened, no expectation of specific handler
2. Request-Reply Disguised as Events
If your publisher expects a specific response event back, you’ve recreated synchronous RPC with extra complexity. Use direct function calls or actual RPC instead.
3. CRUD Events Lacking Intent
CustomerChanged doesn’t indicate why something changed. Consumers must diff the data to infer intent. Prefer intent-revealing events: CustomerAddressUpdated, CustomerDeactivated.
4. Simple Systems
Pub/sub adds indirection. For a small app with straightforward component communication, direct function calls or context/props are simpler and more debuggable.
5. When Delivery Guarantees Matter
In-process pub/sub doesn’t guarantee delivery. If no subscriber exists when an event fires, it’s lost. For critical events, use message queues with persistence (Redis Streams, RabbitMQ, Kafka).
Debugging Challenges
Martin Fowler in What do you mean by “Event-Driven”? names this directly: “It can become problematic if there really is a logical flow that runs over various event notifications. The problem is that it can be hard to see such a flow as it’s not explicit in any program text.”
The implicit control flow that provides loose coupling also makes debugging harder. Distributed tracing and careful logging are essential for production pub/sub systems.
Real-World Applications
Frontend: Cross-Component Communication
Pub/sub solves “prop drilling” when unrelated components need to react to the same events.
E-commerce cart example:
- `ProductCard` publishes `cart.item.added` on button click
- `CartIcon` subscribes → updates badge count
- `Toast` subscribes → shows confirmation
- `Analytics` subscribes → tracks conversion
Each subscriber is independent. Adding a new reaction requires no changes to ProductCard.
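Wired up, the fan-out looks like this. To keep the snippet self-contained, a minimal stand-in bus replaces the article's `pubsub` singleton, and the component bodies are illustrative stubs:

```typescript
// Minimal stand-in for the production PubSub singleton (topic → handlers).
const topics = new Map<string, Set<(data: unknown) => void>>()
const pubsub = {
  subscribe<T>(topic: string, cb: (data: T) => void) {
    if (!topics.has(topic)) topics.set(topic, new Set())
    const wrapped = cb as (data: unknown) => void
    topics.get(topic)!.add(wrapped)
    return () => topics.get(topic)!.delete(wrapped)
  },
  publish<T>(topic: string, data: T) {
    topics.get(topic)?.forEach((cb) => cb(data))
  },
}

interface CartItemAdded {
  sku: string
  qty: number
}

// Each consumer subscribes independently; none of them knows about ProductCard.
pubsub.subscribe<CartItemAdded>("cart.item.added", (e) => console.log(`[CartIcon] badge +${e.qty}`))
pubsub.subscribe<CartItemAdded>("cart.item.added", (e) => console.log(`[Toast] added ${e.sku}`))
pubsub.subscribe<CartItemAdded>("cart.item.added", (e) => console.log(`[Analytics] add_to_cart ${e.sku}`))

// ProductCard's click handler publishes once; all three react.
pubsub.publish<CartItemAdded>("cart.item.added", { sku: "SKU-123", qty: 1 })
```

Adding a fourth reaction (say, a recommendations refresh) is one more `subscribe` call; `ProductCard` and the existing subscribers are untouched.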
Backend: Event-Driven Microservices
External brokers (Redis Pub/Sub, RabbitMQ, Google Cloud Pub/Sub, Kafka) provide distributed pub/sub with durability.
User registration flow:
- `UserService` publishes `user.created`
- `EmailService` → sends welcome email
- `AnalyticsService` → tracks signup metrics
- `OnboardingService` → queues tutorial sequence
Services deploy independently. Adding a new reaction (e.g., CRM sync) requires no changes to UserService.
Build vs Buy Decision
For production, prefer battle-tested libraries unless you need custom semantics.
| Library | Size | Wildcards | TypeScript | API | Maintained |
|---|---|---|---|---|---|
| mitt | ~200 B | `*` (all events) | Yes | `on()`, `off()`, `emit()` | Yes (11k+ stars) |
| nanoevents | ~108 B | No | Yes | Returns unbind from `on()` | Yes |
| EventEmitter3 | ~1.5 KB | No | Yes | Node.js-compatible | Yes |
| EventEmitter2 | Larger | Yes (`*`, `**`) | Yes | Extended EE API | Yes |
Recommendations:
- Minimal footprint: mitt or nanoevents (both under 200B)
- Node.js API compatibility: EventEmitter3
- Wildcards needed: EventEmitter2 (or MQTT/AMQP for distributed)
- Learning: Build your own—the 20-line implementation teaches the core
mitt caveat: The * handler receives all events but is a listener, not a wildcard pattern. Publishing to * directly causes double-triggering issues.
Best Practices Summary
Implementation Checklist
| Practice | Why |
|---|---|
| Return unsubscribe function | Enables cleanup; prevents memory leaks |
| Use `Map`/`Set`, not plain objects | O(1) operations, guaranteed iteration order |
| try/catch per subscriber | Isolates failures; one bad handler doesn't break others |
| `Promise.allSettled` for async | Waits for all handlers; doesn't short-circuit on rejection |
| Clean up empty topics | Prevents unbounded memory growth |
| Hierarchical topic names | `domain.entity.action` enables wildcards, organization |
Error Handling Strategies
- Per-subscriber isolation: Wrap each callback in try/catch (shown in the production implementation)
- Error topic: Emit to `pubsub:error` for centralized handling
- Dead-letter queue: For distributed systems, track repeatedly failing messages
- Dev mode exceptions: Optionally rethrow in development for stack traces
Conclusion
Pub/Sub trades explicit control flow for decoupling. Publishers emit without knowing receivers; subscribers react without knowing sources. This enables independent component evolution and many-to-many communication at the cost of implicit, harder-to-trace control flow.
Use pub/sub when:
- Multiple independent components need to react to the same events
- Components should evolve independently (add/remove reactions without changing publishers)
- You’re building event-driven architecture (frontend cross-component communication, backend microservices)
Avoid pub/sub when:
- You need request-response semantics
- There’s a single known recipient (use Observer or direct calls)
- Delivery guarantees are required (use message queues)
- The system is simple and unlikely to need the decoupling
Implementation is straightforward: Map<string, Set<Function>>, return unsubscribe, try/catch per subscriber. The challenge is architectural—knowing when the indirection is worth the debuggability cost.
Appendix
Prerequisites
- JavaScript ES6+ (Map, Set, Promise, async/await)
- Basic understanding of event-driven programming
- Familiarity with React hooks (for framework examples)
Terminology
| Term | Definition |
|---|---|
| Publisher | Component that emits events; unaware of subscribers |
| Subscriber | Component that registers callbacks for events; unaware of publishers |
| Broker / Event Bus | Intermediary that routes events from publishers to subscribers |
| Topic / Channel | Named category for events; subscribers register interest by topic |
| Backpressure | Mechanism for consumers to signal producers to slow down |
Summary
- Pub/Sub provides three-dimensional decoupling: space, time, synchronization
- Use `Map<string, Set<Function>>` for O(1) operations and guaranteed order
- Always return an unsubscribe function; tie cleanup to component lifecycle
- Isolate subscriber errors with per-callback try/catch
- Use `Promise.allSettled` for async publish to avoid short-circuiting
- Avoid for request-response, single recipients, or when delivery guarantees matter
- Libraries: mitt (~200 B), nanoevents (~108 B) for minimal footprint
References
Specifications & Standards:
- The Many Faces of Publish/Subscribe - Eugster et al., ACM Computing Surveys 2003. Foundational paper defining the three dimensions of decoupling.
- ECMA-262: Map and Set Objects - Guarantees insertion order for iteration.
- WHATWG DOM Standard: EventTarget - Browser’s native pub/sub mechanism.
- OASIS MQTT 5.0 Specification - Distributed pub/sub protocol.
Official Documentation:
- Node.js Events Documentation - EventEmitter ordering guarantees, async handler pitfalls.
- MDN: CustomEvent - Browser native event API.
- MDN: WeakRef - Memory management considerations.
Libraries:
- mitt - 200B functional event emitter.
- nanoevents - 107B with TypeScript support.
- EventEmitter3 - Node.js-compatible API.
- EventEmitter2 - Wildcard support.
Architectural Guidance:
- Enterprise Integration Patterns: Publish-Subscribe Channel - Hohpe & Woolf, the standard messaging-patterns reference.
- Martin Fowler — What do you mean by “Event-Driven”? - On event notification, event-carried state transfer, event sourcing, CQRS, and the implicit-flow trade-off.
- CodeOpinion: Antipatterns in Event-Driven Architecture - Practitioner walkthrough of when not to use pub/sub.
Footnotes
[^1]: Eugster, Felber, Guerraoui, Kermarrec — "The Many Faces of Publish/Subscribe", ACM Computing Surveys 35(2), §2.3 ("Subscription schemes"). The paper formalises the three schemes and the routing-cost argument.
[^2]: ZeroMQ Guide — Chapter 5: Advanced Pub-Sub Patterns describes the brokerless `PUB`/`SUB` model, the slow-subscriber problem, and the lack of persistence.
[^3]: Redis docs — Pub/Sub explicitly state Redis Pub/Sub is fire-and-forget and that "messages that were published while the client was disconnected are lost". The slow-consumer behavior is governed by the `client-output-buffer-limit pubsub` directive in redis.conf.
[^4]: NATS docs — Core NATS covers at-most-once semantics and slow-consumer disconnects; JetStream layers durable streams, replay, and at-least-once on top.