# Event Loop

## Libuv Internals

35 min read

Explore libuv’s event loop architecture, asynchronous I/O capabilities, thread pool management, and how it enables Node.js’s non-blocking, event-driven programming model.

### TLDR

Libuv is a cross-platform asynchronous I/O library that provides Node.js with its event-driven, non-blocking architecture through a sophisticated event loop, thread pool, and platform abstraction layer.

#### Core Architecture Components

- Event Loop: Central orchestrator managing all I/O operations and event notifications in phases
- Handles: Long-lived objects representing persistent resources (TCP sockets, timers, file watchers)
- Requests: Short-lived operations for one-shot tasks (file I/O, DNS resolution, custom work)
- Thread Pool: Worker threads for blocking operations that can’t be made asynchronous

#### Event Loop Phases

- Timers: Execute expired setTimeout/setInterval callbacks
- Pending: Handle I/O callbacks deferred from the previous iteration
- Idle/Prepare: Low-priority background tasks and pre-I/O preparation
- Poll: Block waiting for I/O events or until the next timer is due (the most critical phase)
- Check: Execute setImmediate callbacks and post-I/O tasks (see the phase-ordering sketch after this summary)
- Close: Handle cleanup for closed resources

#### Asynchronous I/O Strategies

- Network I/O: True kernel-level asynchronicity using epoll (Linux), kqueue (macOS), IOCP (Windows)
- File I/O: Thread pool emulation for blocking filesystem operations
- DNS Resolution: Thread pool for getaddrinfo/getnameinfo calls
- Custom Work: User-defined CPU-intensive tasks via uv_queue_work

#### Platform Abstraction Layer

- Linux (epoll): Readiness-based model with efficient file descriptor polling
- macOS/BSD (kqueue): Expressive event notification for files, signals, timers
- Windows (IOCP): Completion-based model with native async file I/O support
- Unified API: Consistent callback-based interface across all platforms

#### Thread Pool Architecture

- Global Shared Pool: Single pool shared across all event loops in a process
- Configurable Size: UV_THREADPOOL_SIZE environment variable (default: 4, max: 1024); see the thread pool sketch below
- Work Distribution: Automatic load balancing across worker threads
- Performance Tuning: Size optimization based on CPU cores and workload characteristics

#### Advanced Features

- Inter-thread Communication: uv_async_send for thread-safe event loop wakeup
- Synchronization Primitives: Mutexes, read-write locks, semaphores, condition variables
- Signal Handling: Cross-platform signal abstraction with event loop integration
- Memory Management: Reference counting with uv_ref/uv_unref for loop lifecycle control

#### Performance Characteristics

- Network Scalability: A single thread can handle thousands of concurrent connections
- File I/O Bottlenecks: Thread pool saturation can limit disk-bound applications
- Context Switching: Minimal overhead for network operations, higher for file operations
- Memory Efficiency: External buffer allocation to reduce V8 GC pressure

#### Future Evolution

- Dynamic Thread Pool: Runtime resizing capabilities for better resource management
- io_uring Integration: Linux completion-based I/O for unified network and file operations
- Performance Optimization: Continued platform-specific enhancements and optimizations
- API Extensions: New primitives for emerging use cases and requirements
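
The phase ordering can be observed from the Node.js side rather than through libuv’s C API. A minimal TypeScript sketch: inside an I/O callback the loop is in the poll phase, so the check-phase setImmediate callback always runs before a zero-delay timer, whereas at the top level the relative order of the two is not guaranteed.

```typescript
import { readFile } from "node:fs";

// At the top level, the relative order of these two is not guaranteed:
// it depends on whether the 0 ms timer has expired when the loop first
// enters the timers phase.
setTimeout(() => console.log("top-level timeout"), 0);
setImmediate(() => console.log("top-level immediate"));

// Inside an I/O callback we are in the poll phase, so the check phase
// (setImmediate) always runs before the loop wraps back around to the
// timers phase (setTimeout).
// Assumes a CommonJS build where __filename is available.
readFile(__filename, () => {
  setTimeout(() => console.log("timeout scheduled from I/O callback"), 0);
  setImmediate(() => console.log("immediate scheduled from I/O callback")); // logs first
});
```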
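
The thread pool is likewise visible from Node.js: crypto.pbkdf2 (like most fs operations) is dispatched to libuv workers, so queuing more jobs than UV_THREADPOOL_SIZE makes the completions arrive in waves. A minimal sketch, assuming the default pool size of 4; the iteration count and file name are illustrative.

```typescript
import { pbkdf2 } from "node:crypto";

// Each pbkdf2 call is queued as libuv "work" and runs on a thread pool
// worker. With the default pool size of 4, jobs 5-8 only start once an
// earlier job frees a worker, so completion times cluster in batches of
// roughly four.
const start = Date.now();

for (let i = 1; i <= 8; i++) {
  pbkdf2("secret", "salt", 300_000, 64, "sha512", (err) => {
    if (err) throw err;
    console.log(`job ${i} done after ${Date.now() - start} ms`);
  });
}

// Run with e.g. `UV_THREADPOOL_SIZE=8 node pool-demo.js` to see all
// eight jobs finish in a single batch instead.
```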

## JavaScript Event Loop

11 min read

Master the JavaScript event loop architecture across browser and Node.js environments, understanding task scheduling, microtasks, and performance optimization techniques.

### TLDR

The JavaScript event loop is the core concurrency mechanism that enables single-threaded JavaScript to handle asynchronous operations through a sophisticated task scheduling system with microtasks and macrotasks.

#### Core Architecture Principles

- Single-threaded Execution: JavaScript runs on one thread with a call stack and a run-to-completion guarantee
- Event Loop: Central mechanism orchestrating asynchronous operations around the engine
- Two-tier Priority System: Microtasks (high priority) and macrotasks (lower priority) with a strict execution order
- Host Environment Integration: Different implementations for browsers (UI-focused) and Node.js (I/O-focused)

#### Universal Priority System

- Synchronous Code: Executes immediately on the call stack
- Microtasks: Promise callbacks, queueMicrotask, MutationObserver (processed after each macrotask)
- Macrotasks: setTimeout, setInterval, I/O operations, user events (processed in event loop phases)
- Execution Order: Synchronous → nextTick → Microtasks → Macrotasks → Event Loop Phases (see the ordering sketch after this summary)

#### Browser Event Loop

- Rendering Integration: Works within the ~16.7 ms frame budget required for 60 fps
- Task Source Prioritization: User interaction (high) → DOM manipulation (medium) → networking (medium) → timers (low)
- requestAnimationFrame: Executes before repaint for smooth animations
- Microtask Starvation: Potential issue where microtasks block macrotasks indefinitely

#### Node.js Event Loop (libuv)

- Phased Architecture: Six phases (timers → pending → idle → poll → check → close)
- Poll Phase Logic: Blocks for I/O or timers, exits early when setImmediate callbacks are queued
- Thread Pool: Blocking and CPU-intensive operations (fs, crypto, DNS lookups) run on libuv’s worker threads
- Direct I/O: Network operations are handled asynchronously on the main thread
- Node.js-specific APIs: process.nextTick (highest priority), setImmediate (check phase)

#### Performance Optimization

- Keep Tasks Short: Avoid blocking the event loop with long synchronous operations
- Proper Scheduling: Choose microtasks vs. macrotasks based on priority needs
- Avoid Starvation: Prevent microtask flooding that blocks macrotasks
- Environment-specific: Use requestAnimationFrame for animations and worker_threads for CPU-intensive tasks

#### True Parallelism

- Worker Threads: Independent event loops for CPU-bound tasks
- Memory Sharing: Structured clone, transferable objects, SharedArrayBuffer
- Communication: Message passing with explicit coordination
- Safety: Thread isolation prevents race conditions

#### Monitoring & Debugging

- Event Loop Lag: Measure the time between event loop iterations (see the lag-monitoring sketch below)
- Bottleneck Identification: CPU-bound vs. I/O-bound vs. thread pool issues
- Performance Tools: Event loop metrics, memory usage, CPU profiling
- Best Practices: Environment-aware scheduling, proper error handling, resource management
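
A minimal TypeScript sketch of the priority order in Node.js (process.nextTick is Node-specific; in a browser the nextTick step simply does not exist). Synchronous code runs first, then the nextTick queue, then the Promise microtask queue, and only then the timer macrotask:

```typescript
// Expected output:
//   sync
//   nextTick
//   microtask (promise)
//   microtask (queueMicrotask)
//   macrotask (timer)

setTimeout(() => console.log("macrotask (timer)"), 0);

Promise.resolve().then(() => console.log("microtask (promise)"));
queueMicrotask(() => console.log("microtask (queueMicrotask)"));

process.nextTick(() => console.log("nextTick"));

console.log("sync");
```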
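
For the event-loop-lag measurement mentioned under Monitoring & Debugging, Node.js ships a histogram-based monitor in node:perf_hooks. A sketch follows; the sampling resolution, reporting interval, and the simulated 50 ms blocking task are arbitrary values chosen for illustration.

```typescript
import { monitorEventLoopDelay } from "node:perf_hooks";

// Samples event loop delay at a 20 ms resolution and reports percentiles.
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  // Values are reported in nanoseconds; convert to milliseconds.
  const p50 = histogram.percentile(50) / 1e6;
  const p99 = histogram.percentile(99) / 1e6;
  console.log(`event loop delay p50=${p50.toFixed(1)} ms p99=${p99.toFixed(1)} ms`);
  histogram.reset();
}, 5_000);

// Simulate an occasional blocking task so the lag becomes visible.
setInterval(() => {
  const end = Date.now() + 50;
  while (Date.now() < end) {
    // Busy-wait for ~50 ms, blocking the event loop.
  }
}, 1_000);
```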