Unveiling NGINX's Proxy Cache Lock for SSR-Generated Content

Saves compute cost by significantly reducing origin server load

Nginx Proxy with proxy_cache_lock enabled

Introduction

In the complex landscape of web applications, optimizing server performance while keeping costs in check remains an ongoing challenge. Recently, I faced the task of streamlining the delivery of both static and Server-Side Rendered (SSR) content. In this blog post, I’ll delve into how I harnessed the power of NGINX’s Proxy Cache Lock to significantly reduce server loads and cut costs, extending its capabilities from static content to SSR-generated responses.

Caching Process Overview

Let’s briefly review the caching process facilitated by NGINX’s Proxy Cache Lock:

1. A request arrives, and NGINX looks up the cache key for the requested resource.
2. On a cache hit, the cached response is served immediately, with no upstream call.
3. On a cache miss, the first request acquires the cache lock and is proxied to the origin server.
4. Concurrent requests for the same resource wait for the lock instead of being sent upstream.
5. When the origin responds, the result is written to the cache, and the waiting requests (and all later ones, until the entry expires) are served from it.

Understanding the Challenge

In modern web applications that use Server-Side Rendering (SSR), each user request can trigger significant server-side computation. Without careful optimization, these computations become redundant, resource-intensive processes that hurt server response times and drive up operational costs.

Consider the scenario where NGINX receives multiple simultaneous requests for the same content, all resulting in cache misses. Every one of them is forwarded upstream, which can overload the origin server, especially for content that is expensive to generate, such as SSR pages.
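To make the problem concrete, here is a minimal sketch in Python, a toy in-process stand-in for NGINX and an SSR origin rather than real NGINX behavior. Ten simultaneous cache misses for the same URL each trigger their own expensive render:

```python
import threading
import time

cache = {}
origin_calls = 0
counter_lock = threading.Lock()
barrier = threading.Barrier(10)     # makes all requests arrive simultaneously

def render_ssr(key):
    """Toy stand-in for an expensive SSR render on the origin server."""
    global origin_calls
    with counter_lock:
        origin_calls += 1
    time.sleep(0.05)                # simulate slow server-side rendering
    return f"<html>page for {key}</html>"

def handle_request(key):
    barrier.wait()                  # all ten requests hit the proxy at once
    if key in cache:                # cache hit: serve directly
        return cache[key]
    body = render_ssr(key)          # cache miss: every request goes upstream
    cache[key] = body
    return body

threads = [threading.Thread(target=handle_request, args=("/home",)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(origin_calls)                 # 10: one render per request, all redundant
```

All ten requests do the same work; with a real SSR origin, that is ten renders' worth of CPU for one page.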

Expanding NGINX’s Proxy Cache Lock

The proxy_cache_lock directive plays a pivotal role in addressing this challenge. It ensures that when a cache entry is being populated or refreshed, only one request at a time is sent to the upstream server. This control prevents sudden overloads on the origin server when it is handling multiple requests for the same content.

NGINX’s Proxy Cache Lock, initially designed for static content, proves to be adaptable for efficiently handling SSR-generated content. By preventing simultaneous requests for the same SSR-generated resource from reaching the origin server, NGINX optimizes server resources.

“When enabled, only one request at a time will be allowed to populate a new cache element…” (NGINX proxy_cache_lock documentation)
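The same toy Python model with a lock added shows the effect of the directive (again, a sketch of the idea, not NGINX’s actual implementation): the first miss takes the lock and renders, while the other nine wait and then find the entry in the cache.

```python
import threading
import time

cache = {}
origin_calls = 0
cache_lock = threading.Lock()       # plays the role of proxy_cache_lock

def render_ssr(key):
    """Toy stand-in for an expensive SSR render on the origin server."""
    global origin_calls
    origin_calls += 1               # safe: only called while holding cache_lock
    time.sleep(0.05)                # simulate slow server-side rendering
    return f"<html>page for {key}</html>"

def handle_request(key):
    if key in cache:                # cache hit: serve directly
        return cache[key]
    with cache_lock:                # only one request may populate the entry
        if key not in cache:        # waiters re-check after the lock is released
            cache[key] = render_ssr(key)
    return cache[key]

threads = [threading.Thread(target=handle_request, args=("/home",)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(origin_calls)                 # 1: a single render serves all ten requests
```

The origin does the expensive work exactly once per cache key, no matter how many clients ask at the same moment.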

Implementation

Implementing Proxy Cache Lock for SSR-generated content requires a nuanced configuration to account for dynamic responses. Below is a simplified NGINX configuration demonstrating how to extend this feature for SSR-generated content:

http {
    # Cache storage: 10 MB of keys in shared memory, up to 10 GB on disk,
    # entries evicted after 60 minutes without access
    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

    upstream backend_server {
        server 127.0.0.1:3000;  # SSR application server (example address)
    }

    server {
        location / {
            proxy_cache my_cache;
            proxy_cache_lock on;        # only one request populates a missing entry
            proxy_cache_lock_age 5s;    # after 5s, let another request try
            proxy_cache_valid 200 302 10m;
            proxy_pass http://backend_server;
        }
    }
}
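To verify the lock is working in practice, two optional additions help, shown below as an extension of the location block above: proxy_cache_lock_timeout bounds how long a request waits for the lock before being passed upstream uncached, and the $upstream_cache_status variable exposes the per-response cache status (HIT, MISS, EXPIRED, and so on) in a debug header. The header name X-Cache-Status is an arbitrary choice.

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_lock on;
    proxy_cache_lock_age 5s;
    # Requests waiting longer than this are passed to the origin,
    # but their responses are not cached
    proxy_cache_lock_timeout 5s;
    proxy_cache_valid 200 302 10m;
    # Expose cache status for debugging and load testing
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://backend_server;
}
```

Firing concurrent requests at a cold URL should then show one MISS and the rest HIT in the X-Cache-Status header.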

Benefits and Cost Savings

The implementation of NGINX’s Proxy Cache Lock for SSR-generated content brings several advantages:

- Reduced origin load: during a burst of cache misses, only one request per resource reaches the origin, so redundant SSR renders are eliminated.
- Lower compute costs: fewer renders mean fewer CPU cycles spent on the origin, which translates directly into infrastructure savings.
- More predictable latency: waiting requests are served from the freshly populated cache instead of queuing behind an overloaded origin.

Further Enhancements (Planned)

As part of ongoing development, we plan to move the locking mechanism from the cache layer to the proxy layer. In that design, the first non-cached request for a file establishes a mutex lock, and subsequent requests for the same file block on that mutex. Once the origin responds, all waiting requests are attached to the origin’s response stream.
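The planned mechanism can be sketched the same way, again in Python as a rough PoC-style illustration rather than our actual implementation: the first request for a key becomes the leader and contacts the origin, later requests find the in-flight entry and block on its event, and everyone is released as soon as the origin responds.

```python
import threading
import time

cache = {}
in_flight = {}                       # key -> Event: an origin fetch is in progress
registry_lock = threading.Lock()
origin_calls = 0

def render_ssr(key):
    """Toy stand-in for the origin's SSR response."""
    global origin_calls
    origin_calls += 1                # only ever called by the leader for a key
    time.sleep(0.05)                 # simulate slow server-side rendering
    return f"<html>page for {key}</html>"

def handle_request(key):
    with registry_lock:
        if key in cache:
            return cache[key]
        event = in_flight.get(key)
        leader = event is None
        if leader:                   # first request: start the origin fetch
            event = threading.Event()
            in_flight[key] = event
    if leader:
        cache[key] = render_ssr(key)
        with registry_lock:
            del in_flight[key]
        event.set()                  # release every request waiting on this key
    else:
        event.wait()                 # attach to the in-flight response
    return cache[key]

threads = [threading.Thread(target=handle_request, args=("/home",)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(origin_calls)                  # 1: waiters share the leader's response
```

The difference from cache-layer locking is that waiters are handed the response as soon as the origin produces it, rather than each re-checking the cache after the lock is released.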

Note: The above changes are not yet deployed live and were implemented as a Proof of Concept (PoC). Active development is underway, and deployment is planned for the coming months, when we go live with SSR.
