
CSP Violation Reporting Pipeline at Scale

CSP-Sentinel is a centralized system designed to collect, process, and analyze Content Security Policy (CSP) violation reports from web browsers at scale. The system handles a baseline of 50k RPS with burst capacity beyond 500k RPS while maintaining sub-millisecond response times and near-zero impact on client browsers.

[Architecture diagram] Browsers → Load Balancer → WebFlux API → (async) Kafka → Consumer (Redis dedup check) → (batch write) Snowflake.

CSP-Sentinel high-level architecture: browsers send violation reports through a non-blocking API to Kafka, where consumers deduplicate and batch-write to Snowflake.

CSP violation reporting presents a specific scaling challenge: browsers generate reports unpredictably, often in bursts during incidents, and the data is high-volume but low-value per individual event. The core insight is to treat reports as a streaming analytics problem, not a transactional one.

Mental Model

Browser Reports (unpredictable bursts) → Fire-and-Forget API (sub-ms response) → Kafka Buffer (absorbs spikes) → Deduplication (70-90% noise reduction) → OLAP Storage (analytics, not OLTP)

Mental model: CSP reports flow through a pipeline optimized for eventual consistency and noise reduction, not transactional guarantees.

Key design trade-offs:

| Decision | Optimizes For | Sacrifices |
| --- | --- | --- |
| acks=1 (leader only) | Latency (~1 ms vs ~10 ms) | Durability (rare message loss on leader failure) |
| 10-minute dedup window | Storage cost, query noise | Precision (identical violations merged) |
| 24-hour Kafka retention | Cost | Replay capability beyond 24h |
| Stateless API | Horizontal scaling | Session-based rate limiting |

When this design fits: High-volume telemetry where individual events are expendable, eventual consistency is acceptable, and the goal is aggregate insights rather than per-event guarantees.

When it doesn’t: Audit logs requiring strict durability, real-time alerting on individual violations, or compliance scenarios requiring guaranteed delivery.

Modern browsers send CSP violation reports as JSON payloads when a webpage violates defined security policies. Aggregating these reports allows our security and development teams to:

  • Identify misconfigurations and false positives.
  • Detect malicious activity (XSS attempts).
  • Monitor policy rollout health across all properties.

Key Objectives:

  • High Throughput: Handle massive bursts of report traffic during incidents.
  • Low Latency: Return 204 No Content immediately to clients.
  • Noise Reduction: Deduplicate repetitive reports from the same user/browser.
  • Actionable Insights: Provide dashboards and alerts for developers.
  • Future-Proof: Built on the latest LTS technologies available for Q1 2026.
  • Ingestion API: Expose a POST /csp/report endpoint accepting both CSP report formats:
    • Legacy format (report-uri): Content-Type: application/csp-report with csp-report wrapper object
    • Modern format (report-to + Reporting API): Content-Type: application/reports+json with array of report objects

Browser support note: Firefox does not support report-to as of early 2026. Safari has partial support. Chromium-based browsers fully support both. Accept both formats to maximize coverage.

  • Immediate Response: Always respond with HTTP 204 without waiting for processing.
  • Deduplication: Suppress identical violations from the same browser within a short window (e.g., 10 minutes) using Redis.
  • Storage: Store detailed violation records (timestamp, directive, blocked URI, etc.) for querying.
  • Analytics: Support querying by directive, blocked host, and full-text search on resource URLs.
  • Visualization: Integration with Grafana for trends, top violators, and alerting.
  • Retention: Retain production data for 90 days.
  • Scalability: Horizontal scaling from 50k RPS to 1M+ RPS.
  • Reliability: “Fire-and-forget” ingestion with durable buffering in Kafka. At-least-once delivery.
  • Flexibility: Plug-and-play storage layer (Snowflake for Prod, Postgres for Dev).
  • Security: Stateless API, standardized TLS, secure access to dashboards.

We have selected the latest Long-Term Support (LTS) and stable versions projected for the build timeframe.

| Component | Choice | Version (Target) | Justification |
| --- | --- | --- | --- |
| Language | Java | 25 LTS | Latest LTS as of late 2025. Performance & feature set. |
| Framework | Spring Boot | 4.0 (Framework 7) | Built for Java 25. Native support for Virtual Threads & Reactive. |
| API Style | Spring WebFlux | — | Non-blocking I/O essential for high-concurrency ingestion. |
| Messaging | Apache Kafka | 3.8+ (AWS MSK) | Durable buffer, high throughput, decoupling. |
| Caching | Valkey | 8.x (ElastiCache) | Redis-compatible, low-latency deduplication (AWS migrated to Valkey after Redis licensing changes). |
| Primary Storage | Snowflake | SaaS | Cloud-native OLAP, separates storage/compute, handles massive datasets. |
| Dev Storage | PostgreSQL | 18.x | Easy local setup, sufficient for dev/test volumes. |
| Visualization | Grafana | 12.x | Rich ecosystem, native Snowflake plugin. |

The system follows a Streaming Data Pipeline pattern.

[Deployment diagram] Browsers POST /csp/report → Load Balancer → Ingestion Service (Spring WebFlux, on EKS) → async produce → Kafka / MSK (topic: csp-violations) → consume batch → Consumer Service (Spring Boot, on EKS) → dedup check against Redis / ElastiCache → batch write to Snowflake DW (Postgres in dev).

Ingestion Service

  • Role: Entry point for all reports.
  • Implementation: Spring WebFlux (Netty).
  • Behavior:
    • Validates JSON format.
    • Asynchronously sends to Kafka (csp-violations).
    • Returns 204 immediately.
    • No DB interaction to ensure sub-millisecond response time.
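
A minimal sketch of that behavior, assuming Spring WebFlux plus spring-kafka's ReactiveKafkaProducerTemplate; the class name and error handling are illustrative, and JSON validation is omitted for brevity:

CspReportController.java
import org.springframework.http.HttpStatus;
import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class CspReportController {

    private final ReactiveKafkaProducerTemplate<String, String> kafka;

    public CspReportController(ReactiveKafkaProducerTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // One endpoint, both report formats: legacy application/csp-report and the
    // Reporting API's application/reports+json.
    @PostMapping(
            path = "/csp/report",
            consumes = {"application/csp-report", "application/reports+json", "application/json"})
    @ResponseStatus(HttpStatus.NO_CONTENT)
    public Mono<Void> ingest(@RequestBody String rawReport) {
        // Fire-and-forget: publish asynchronously, swallow failures, and return 204 immediately.
        kafka.send("csp-violations", rawReport)
                .onErrorResume(e -> Mono.empty())
                .subscribe();
        return Mono.empty();
    }
}

Reading the body as a raw String keeps the endpoint agnostic of the two content types; parsing and normalization are deferred to the consumer.
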
Kafka Buffer

  • Topic: csp-violations.
  • Partitions: Scaled to throughput (e.g., 48 partitions for 50k RPS).
  • Role: Buffers spikes. If the database is slow, Kafka holds the data, preventing data loss and keeping API latency flat.
Consumer Service

  • Role: Processor.
  • Implementation: Spring Boot (Reactor Kafka).
  • Logic:
    1. Polls batch from Kafka.
    2. Computes Dedup Hash (e.g., SHA1(document + directive + blocked_uri + ua)).
    3. Checks Redis: If exists, skip. If new, set in Redis (EXPIRE 10m).
    4. Buffers unique records.
    5. Batch writes to Storage (Snowflake/Postgres).
    6. Commits Kafka offsets.
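
A condensed sketch of steps 2-3, assuming Spring Data Redis's ReactiveStringRedisTemplate; the Deduplicator class name and key prefix are illustrative:

Deduplicator.java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Duration;
import java.util.HexFormat;

import org.springframework.data.redis.core.ReactiveStringRedisTemplate;
import reactor.core.publisher.Mono;

public class Deduplicator {

    private static final Duration WINDOW = Duration.ofMinutes(10);
    private final ReactiveStringRedisTemplate redis;

    public Deduplicator(ReactiveStringRedisTemplate redis) {
        this.redis = redis;
    }

    // SHA-1 over the fields that define an "identical" violation (document + directive + blocked_uri + ua).
    static String violationHash(String documentUri, String directive, String blockedUri, String userAgent) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            String key = documentUri + '|' + directive + '|' + blockedUri + '|' + userAgent;
            return HexFormat.of().formatHex(sha1.digest(key.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // SET NX with TTL: true => first occurrence in the window, keep the record.
    Mono<Boolean> isFirstOccurrence(String hash) {
        return redis.opsForValue().setIfAbsent("csp:dedup:" + hash, "1", WINDOW);
    }
}

The setIfAbsent call issues a single SET ... NX EX command, so the existence check and the 10-minute expiry cost one Redis round trip per record.
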
Storage Layer

  • Production (Snowflake): Optimized for OLAP query patterns. Table clustered by date/directive.
  • Development (Postgres): Standard relational table with GIN (trigram) indexes to approximate the production text-search behavior.

The ingestion API must parse two distinct JSON formats. Per W3C CSP3, the report-uri directive is deprecated but remains widely used due to Firefox’s lack of report-to support.

Legacy format (Content-Type: application/csp-report):

legacy-csp-report.json
{
  "csp-report": {
    "document-uri": "https://example.com/page",
    "blocked-uri": "https://evil.com/script.js",
    "violated-directive": "script-src 'self'",
    "effective-directive": "script-src",
    "original-policy": "script-src 'self'; report-uri /csp",
    "disposition": "enforce",
    "status-code": 200
  }
}

Modern format (Content-Type: application/reports+json):

modern-reporting-api.json
[
  {
    "type": "csp-violation",
    "age": 53531,
    "url": "https://example.com/page",
    "user_agent": "Mozilla/5.0 ...",
    "body": {
      "documentURL": "https://example.com/page",
      "blockedURL": "https://evil.com/script.js",
      "effectiveDirective": "script-src-elem",
      "originalPolicy": "script-src 'self'; report-to csp-endpoint",
      "disposition": "enforce",
      "statusCode": 200,
      "sample": "console.log(\"ma"
    }
  }
]

Key differences:

| Aspect | Legacy (report-uri) | Modern (Reporting API) |
| --- | --- | --- |
| Wrapper | csp-report object | Array of report objects |
| Field naming | kebab-case | camelCase |
| Directive field | violated-directive | effectiveDirective |
| Code sample | Not included | sample (first 40 chars, requires 'report-sample') |
| Batching | Single report per POST | May batch multiple reports |

The consumer normalizes both formats to a unified internal schema before deduplication.
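
One way to sketch that unified schema with Jackson; the record name, field set, and mapper methods are illustrative rather than the system's actual classes:

NormalizedReport.java
import com.fasterxml.jackson.databind.JsonNode;
import java.time.Instant;

// Unified internal schema both report formats are mapped to before deduplication.
public record NormalizedReport(
        Instant eventTs,
        String documentUri,
        String effectiveDirective,
        String blockedUri,
        String originalPolicy,
        String disposition,
        int statusCode,
        String userAgent) {

    // Legacy report-uri payload: kebab-case fields under a "csp-report" wrapper.
    static NormalizedReport fromLegacy(JsonNode root, String userAgent) {
        JsonNode r = root.path("csp-report");
        return new NormalizedReport(
                Instant.now(),                     // legacy payloads carry no age/timestamp
                r.path("document-uri").asText(),
                r.path("effective-directive").asText(r.path("violated-directive").asText()),
                r.path("blocked-uri").asText("inline"),
                r.path("original-policy").asText(),
                r.path("disposition").asText("enforce"),
                r.path("status-code").asInt(0),
                userAgent);
    }

    // Modern Reporting API payload: one element of the reports array, camelCase body.
    static NormalizedReport fromModern(JsonNode report) {
        JsonNode b = report.path("body");
        return new NormalizedReport(
                Instant.now().minusMillis(report.path("age").asLong(0)),
                b.path("documentURL").asText(),
                b.path("effectiveDirective").asText(),
                b.path("blockedURL").asText("inline"),
                b.path("originalPolicy").asText(),
                b.path("disposition").asText("enforce"),
                b.path("statusCode").asInt(0),
                report.path("user_agent").asText());
    }
}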

| Field | Type | Description |
| --- | --- | --- |
| EVENT_ID | UUID | Unique event ID |
| EVENT_TS | TIMESTAMP | Time of violation |
| DOCUMENT_URI | STRING | Page where the violation occurred |
| VIOLATED_DIRECTIVE | STRING | e.g., script-src |
| BLOCKED_URI | STRING | The resource that was blocked |
| BLOCKED_HOST | STRING | Domain of the blocked resource (derived) |
| USER_AGENT | STRING | Browser user agent |
| ORIGINAL_POLICY | STRING | Full CSP string |
| VIOLATION_HASH | STRING | Deduplication key |

snowflake/csp_violations.sql
CREATE TABLE CSP_VIOLATIONS (
    EVENT_ID            STRING DEFAULT UUID_STRING(),
    EVENT_TS            TIMESTAMP_LTZ NOT NULL,
    EVENT_DATE          DATE NOT NULL,   -- populated at load time as CAST(EVENT_TS AS DATE)
    DOCUMENT_URI        STRING,
    VIOLATED_DIRECTIVE  STRING,
    BLOCKED_URI         STRING,
    BLOCKED_HOST        STRING,
    USER_AGENT          STRING,
    -- ... other fields (ORIGINAL_POLICY, etc.)
    VIOLATION_HASH      STRING
)
CLUSTER BY (EVENT_DATE, VIOLATED_DIRECTIVE);
postgres/csp_violations.sql
-- Trigram matching for text search on blocked URIs requires the pg_trgm extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE csp_violations (
    event_id    UUID PRIMARY KEY,
    event_ts    TIMESTAMPTZ NOT NULL,
    -- ... same fields as Snowflake
    blocked_uri TEXT
);

-- Trigram index for text search on blocked URIs
CREATE INDEX idx_blocked_uri_trgm ON csp_violations USING gin (blocked_uri gin_trgm_ops);

The system is designed to scale horizontally. The following formulas estimate the infrastructure required for a given target throughput under strict SLAs.

To avoid bottlenecks, the partition count ($P$) is calculated from the slower of the per-partition producer throughput ($T_p$) and consumer throughput ($T_c$):

$$P = \max\left( \frac{T_{target}}{T_p}, \frac{T_{target}}{T_c} \right) \times \text{GrowthFactor}$$

  • Target ($T_{target}$): 50 MB/s (50k RPS × 1 KB average message size).
  • Producer limit ($T_p$): ~10 MB/s (standard Kafka producer on commodity hardware).
  • Consumer limit ($T_c$): ~5 MB/s (accounting for deserialization + dedup logic).
  • Growth factor: 1.5x-2x.

Calculation for 50k RPS: $P = \max(5, 10) \times 1.5 = 15$ partitions (minimum). Recommendation: We will provision 48 partitions to allow for massive burst capacity (up to ~240k RPS without resizing) and to match the parallelism of our consumer pod fleet.

Pod counts follow from the target RPS, the measured per-pod throughput, and a headroom factor:

$$N_{pods} = \left\lceil \frac{RPS_{target}}{RPS_{per\_pod}} \times \text{Headroom} \right\rceil$$

  • 50k RPS target (consumers, at ~5,000 RPS per pod): $\lceil \frac{50{,}000}{5{,}000} \times 1.3 \rceil = 13$ pods.

| Tier | RPS | Throughput | API Pods | Consumer Pods | Kafka Partitions |
| --- | --- | --- | --- | --- | --- |
| Baseline | 50k | ~50 MB/s | 4 | 12-14 | 48 |
| Growth | 100k | ~100 MB/s | 8 | 24-28 | 96 |
| High Scale | 500k | ~500 MB/s | 36 | 130+ | 512 |

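The arithmetic behind these numbers, as a small sketch using the document's assumptions (10 MB/s producer and 5 MB/s consumer per partition, ~5,000 RPS per consumer pod); the class name is illustrative:

CapacityPlanner.java
public final class CapacityPlanner {

    // Minimum partition count: size to the slower of producer/consumer throughput, then add growth headroom.
    static int minPartitions(double targetMBps, double producerMBps, double consumerMBps, double growthFactor) {
        double p = Math.max(targetMBps / producerMBps, targetMBps / consumerMBps) * growthFactor;
        return (int) Math.ceil(p);
    }

    // Pod count: target RPS over per-pod RPS, with headroom, rounded up.
    static int pods(double targetRps, double perPodRps, double headroom) {
        return (int) Math.ceil(targetRps / perPodRps * headroom);
    }

    public static void main(String[] args) {
        // 50k RPS x 1 KB ~= 50 MB/s
        System.out.println(minPartitions(50, 10, 5, 1.5)); // 15 (minimum; 48 provisioned for burst)
        System.out.println(pods(50_000, 5_000, 1.3));       // 13 consumer pods
    }
}
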
  • API: CPU-bound (JSON parsing) and Network I/O bound. Scale HPA based on CPU usage (Target 60%).
  • Consumers: Bound by DB write latency and processing depth. Scale HPA based on Kafka Consumer Lag.
  • Storage:
    • Continuous Loading: Use Snowpipe for steady streams.
    • Batch Loading: Use COPY INTO with file sizes between 100MB - 250MB (compressed) for optimal warehouse utilization.
  • Dashboards (Grafana):
    • Overview: Total violations/min, Breakdown by Directive.
    • Top Offenders: Top Blocked Hosts, Top Violating Pages.
    • System Health: Kafka Lag, API 5xx rates, End-to-end latency.
  • Alerting:
    • Spike Alert: > 50% increase in violations over 5m moving average.
    • Lag Alert: Consumer lag > 1 million messages (indication of stalled consumers).

To ensure durability while maintaining high throughput:

  • Replication Factor: 3 (Survives 2 broker failures).
  • Min In-Sync Replicas (min.insync.replicas): 2 (with acks=all, at least 2 replicas must acknowledge a write).
  • Producer Acks: acks=1 (leader only) for lowest latency, or acks=all for strict durability (the min.insync.replicas setting takes effect only in that case). Recommended: acks=1 for CSP reports; occasional loss on leader failover is an acceptable trade for latency.
  • Compression: lz4 or zstd (Low CPU overhead, high compression ratio for JSON).
  • Log Retention: 24 Hours (Cost optimization; strictly a buffer).
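
A sketch of how those choices map to producer properties; the linger and batch values are illustrative starting points, not tuned recommendations:

CspProducerConfig.java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public final class CspProducerConfig {

    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.ACKS_CONFIG, "1");                 // leader-only ack: lowest latency
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");  // or "lz4"; JSON compresses well
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);              // small linger to fill batches under load
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);     // 64 KB batches (illustrative)
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return props;
    }
}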

Optimizing the Netty engine for 50k+ RPS:

  • Memory Allocation: Enable Pooled Direct ByteBufs to reduce GC pressure.
    • -Dio.netty.leakDetection.level=DISABLED (Production only)
    • -Dio.netty.allocator.type=pooled
  • Threads: Limit event-loop threads to the CPU core count to avoid unnecessary context switching.
  • Garbage Collection: Use Generational ZGC, which is optimized for sub-millisecond pauses on large heaps. Generational mode has been the ZGC default since Java 23, so on Java 25 the explicit -XX:+ZGenerational flag is no longer required.
    • -XX:+UseZGC
  • File Sizing: Snowflake micro-partitions are most efficient when loaded from files sized 100MB - 250MB (compressed).
  • Batch Buffering: Consumers should buffer writes to S3 until this size is reached OR a time window (e.g., 60s) passes.
  • Snowpipe vs COPY:
    • For < 50k RPS: Direct Batch Inserts (JDBC) or small batch COPY.
    • For > 50k RPS: Write to S3 -> Trigger Snowpipe. This decouples consumer logic from warehouse loading latency.
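
A sketch of the size-or-time flush rule; thresholds are taken from the guidance above, the flush target (e.g., an S3 upload that triggers Snowpipe) is left abstract, and raw JSON length is used as a rough proxy for compressed file size:

BatchBuffer.java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Flush the buffer when either the size target or the time window is reached.
public class BatchBuffer {

    private static final long MAX_BYTES = 150L * 1024 * 1024; // within the 100-250 MB guidance (adjust for compression ratio)
    private static final long MAX_AGE_MS = 60_000;            // 60 s window

    private final List<String> rows = new ArrayList<>();
    private long bytes = 0;
    private long openedAt = System.currentTimeMillis();
    private final Consumer<List<String>> flushTarget;          // e.g. "write a file to S3"

    public BatchBuffer(Consumer<List<String>> flushTarget) {
        this.flushTarget = flushTarget;
    }

    public synchronized void add(String jsonRow) {
        rows.add(jsonRow);
        bytes += jsonRow.length();
        if (bytes >= MAX_BYTES || System.currentTimeMillis() - openedAt >= MAX_AGE_MS) {
            flush();
        }
    }

    public synchronized void flush() {
        if (rows.isEmpty()) return;
        flushTarget.accept(List.copyOf(rows));
        rows.clear();
        bytes = 0;
        openedAt = System.currentTimeMillis();
    }
}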

Browsers implement CSP reporting with subtle differences. The pipeline must handle:

| Variation | Source | Handling |
| --- | --- | --- |
| Missing blocked-uri | Some inline violations | Default to "inline" |
| Truncated sample | Reporting API limit | Accept up to 40 chars per spec |
| Extension violations | blocked-uri starts with moz-extension://, chrome-extension:// | Filter out (not actionable) |
| Empty referrer | Privacy settings | Normalize to null |
| Query strings in URIs | Standard behavior | Strip for deduplication, preserve for storage |
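
A sketch of how those handling rules might be applied before hashing; the helper names are illustrative:

ReportSanitizer.java
final class ReportSanitizer {

    // Browser-extension violations are not actionable; drop them.
    static boolean isExtensionViolation(String blockedUri) {
        return blockedUri != null
                && (blockedUri.startsWith("moz-extension://")
                    || blockedUri.startsWith("chrome-extension://"));
    }

    // Missing blocked-uri (e.g., inline violations) defaults to "inline".
    static String normalizeBlockedUri(String blockedUri) {
        return (blockedUri == null || blockedUri.isBlank()) ? "inline" : blockedUri;
    }

    // Strip query strings for deduplication; the stored record keeps the full URI.
    static String stripQuery(String uri) {
        int q = uri.indexOf('?');
        return q < 0 ? uri : uri.substring(0, q);
    }
}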

Kafka unavailable: The WebFlux API returns 204 regardless—reports are dropped silently. This is acceptable for CSP telemetry but would need circuit breakers and local buffering for stricter guarantees.

Redis unavailable: Consumer continues without deduplication. All reports flow to storage, increasing costs but preserving data. Alert on Redis connection failures.

Snowflake ingestion lag: Kafka acts as a buffer (24-hour retention). If lag exceeds retention, oldest messages are lost. Monitor consumer lag as primary SLI.

Dedup hash collisions: SHA-1 collisions are theoretically possible but practically irrelevant at this scale (~10^-18 probability). If strict deduplication is required, use SHA-256.

  • No per-user rate limiting: Stateless API cannot throttle abusive clients without external infrastructure (e.g., WAF)
  • No replay capability: Once Kafka retention expires, data cannot be reprocessed from source
  • Browser timing variability: Reporting API does not guarantee batch delivery—expect single-report POSTs in many cases
  • Cross-origin restrictions: Report endpoints must be HTTPS and cannot redirect to different origins

CSP-Sentinel balances throughput, cost, and operational simplicity through three key design decisions:

  1. Fire-and-forget ingestion - WebFlux returns 204 without database interaction, ensuring sub-millisecond response times regardless of downstream load
  2. Kafka as a buffer - Decouples ingestion from processing, allowing the system to absorb 10x traffic spikes without backpressure reaching browsers
  3. Redis deduplication - Reduces storage costs and noise by 70-90% in typical scenarios where browsers retry identical violations

The technology choices (Java 25 with ZGC, Valkey for Redis compatibility, Snowflake for OLAP) prioritize operational simplicity over marginal performance gains. The system handles 50k-500k+ RPS with infrastructure that a single engineer can maintain.

The design explicitly trades durability for latency—acceptable for telemetry where aggregate trends matter more than individual events. For audit-grade reporting, add acks=all, increase Kafka retention, and implement client-side retry with idempotency keys.

Prerequisites

  • Familiarity with streaming data pipelines (Kafka producer/consumer model)
  • Understanding of CSP headers and browser security policies
  • Basic knowledge of OLAP vs OLTP workloads
  • Experience with Kubernetes horizontal pod autoscaling

Glossary

| Term | Definition |
| --- | --- |
| CSP | Content Security Policy - browser security mechanism that restricts resource loading |
| Fire-and-forget | Pattern where the sender does not wait for acknowledgment |
| HPA | Horizontal Pod Autoscaler - Kubernetes component for scaling based on metrics |
| OLAP | Online Analytical Processing - optimized for aggregate queries over large datasets |
| Snowpipe | Snowflake's continuous data ingestion service |
| ZGC | Z Garbage Collector - low-latency GC for the JVM with sub-millisecond pauses |

Key Takeaways

  • CSP violation reports are high-volume, low-value telemetry best handled as a streaming analytics problem
  • Fire-and-forget ingestion (204 immediate response) isolates browser impact from backend performance
  • Kafka buffers absorb traffic spikes and decouple ingestion from processing
  • Redis-based deduplication reduces storage by 70-90% using hash keys with TTL
  • Snowflake clustering by date/directive optimizes the dominant query pattern
  • Scale consumers on Kafka lag, not CPU—processing depth is the bottleneck
