Web Video Playback Architecture: HLS, DASH, and Low Latency

The complete video delivery pipeline from codecs and compression to adaptive streaming protocols, DRM systems, and ultra-low latency technologies. Covers protocol internals, design trade-offs, and production failure modes for building resilient video applications.

[Figure] End-to-end video playback pipeline from source encoding through CDN delivery to adaptive client playback: Content Preparation (Source Video → Encoding: H.264/H.265/AV1 → Packaging: fMP4/CMAF → DRM Encryption: CENC) → Delivery (CDN Edge: Global PoPs; Manifest: HLS/DASH) → Playback (ABR Selection → Decryption + Decoding)

Video streaming solves a fundamental tension: high-quality video requires high bitrates, but network conditions are unpredictable and heterogeneous. The solution is Adaptive Bitrate (ABR) streaming—pre-encode multiple quality variants, segment them into small chunks, and let the client dynamically select based on real-time conditions.

The core mental model:

  1. Codecs compress by removing spatial/temporal redundancy. The trade-off is compression efficiency vs. decode complexity and hardware support. H.264 is universal but inefficient; AV1 achieves 30% better compression than HEVC (High Efficiency Video Coding) but requires modern hardware.

  2. Containers package compressed streams into addressable segments. CMAF (Common Media Application Format) unifies HLS and DASH delivery by standardizing fragmented MP4 (fMP4) structure, eliminating the need for duplicate storage.

  3. Manifests map available segments and quality levels. HLS uses text-based .m3u8 playlists; DASH uses XML-based MPD (Media Presentation Description) files. Both describe the same underlying content differently.

  4. The client drives quality selection. The player monitors buffer levels, network throughput, and device capabilities, then requests appropriate segments. The server is stateless—it serves whatever is requested.

  5. Latency is a spectrum. Traditional HLS/DASH: 6-15 seconds (segment duration × buffer depth). Low-Latency HLS/DASH: 2-4 seconds (partial segments + preload hints). WebRTC: <500ms (UDP + no buffering). Each step down in latency trades scalability and error resilience.

  6. DRM (Digital Rights Management) is ecosystem-fragmented. Widevine (Chrome/Android), FairPlay (Apple), PlayReady (Windows). CENC (Common Encryption) allows a single encrypted file to work with multiple DRM systems, but license acquisition remains platform-specific.

Latency trade-off summary:

| Approach | Latency | Why |
| --- | --- | --- |
| Traditional HLS/DASH | 6-15s | Segment duration (4-6s) × buffer depth (2-3 segments) |
| LL-HLS/LL-DASH | 2-4s | Partial segments (~200ms) + HTTP long-polling |
| WebRTC | <500ms | UDP transport + minimal buffering + no HTTP overhead |

Initial attempts at web video playback were straightforward but deeply flawed. The most basic method involved serving a complete video file, such as an MP4, directly from a server. While modern browsers can begin playback before the entire file is downloaded (progressive download), this approach is brittle. It offers no robust mechanism for seeking to un-downloaded portions of the video, fails completely upon network interruption, and locks the user into a single, fixed quality.

A slightly more advanced method, employing HTTP Range Requests, addressed the issues of seekability and resumability by allowing the client to request specific byte ranges of the file. This enabled a player to jump to a specific timestamp or resume a download after an interruption.

However, both of these early models shared a fatal flaw: they were built around a single, monolithic file with a fixed bitrate. This “one-size-fits-all” paradigm was economically and experientially unsustainable. Serving a high-quality, high-bitrate file to a user on a low-speed mobile network resulted in constant buffering and a poor experience, while simultaneously incurring high bandwidth costs for the provider.

This pressure gave rise to ABR streaming, the foundational technology of all modern video platforms. ABR inverted the delivery model. Instead of the server pushing a single file, the video is pre-processed into multiple versions at different quality levels. Each version is then broken into small, discrete segments. The client player is given a manifest file—a map to all available segments—and is empowered to dynamically request the most appropriate segment based on its real-time assessment of network conditions, screen size, and CPU capabilities.

Design rationale for segmentation: Small segments (typically 2-10 seconds) enable several critical capabilities:

  • Fast quality switching: Client can change bitrate at segment boundaries without seeking
  • Parallel CDN caching: Each segment is independently cacheable with unique URLs
  • Error recovery: A corrupted segment affects only a few seconds of playback
  • HTTP compatibility: Standard web servers and CDNs handle segments as regular files

The trade-off is manifest overhead and increased request count. For a 2-hour movie with 6-second segments, the player must fetch ~1,200 segments plus periodic manifest updates.

At the most fundamental layer of the video stack lies the codec (coder-decoder), the compression algorithm that makes transmission of high-resolution video over bandwidth-constrained networks possible. Codecs work by removing spatial and temporal redundancy from video data, dramatically reducing file size.

How codecs achieve compression:

  • Intra-frame (I-frames): Compress individual frames independently using spatial redundancy (similar adjacent pixels)
  • Inter-frame (P/B-frames): Encode only the differences between frames using temporal redundancy (motion vectors)
  • Transform coding: Convert pixel blocks to frequency domain (DCT/Discrete Cosine Transform), quantize, and entropy-code

A typical 1080p raw video at 30fps requires ~1.5 Gbps. H.264 compresses this to 5-10 Mbps (150-300x reduction). HEVC achieves similar quality at 3-5 Mbps; AV1 at 2-3 Mbps.
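A quick sanity check of those numbers (a sketch; it assumes uncompressed 24-bit RGB frames, while production pipelines usually use 4:2:0 chroma subsampling, which lowers the raw rate):

// Uncompressed bitrate for 1080p30 video at 24 bits per pixel
const rawBps = 1920 * 1080 * 30 * 24 // ≈ 1.49 Gbps
// Compression ratio for an 8 Mbps H.264 encode
const ratio = rawBps / 8_000_000 // ≈ 187x
console.log(`${(rawBps / 1e9).toFixed(2)} Gbps raw, ~${Math.round(ratio)}x reduction`)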

Standardized in 2003 by ITU-T/ISO (ITU-T H.264 | ISO/IEC 14496-10), H.264 remains the most widely deployed video codec. Its dominance is not due to superior compression but to unparalleled compatibility. For two decades, hardware manufacturers have built dedicated H.264 decoding chips into virtually every device.

Key Characteristics:

| Attribute | Value |
| --- | --- |
| Compression Efficiency | Baseline (reference point) |
| Ideal Use Case | Universal compatibility, live streaming, ads |
| Licensing | MPEG LA patent pool (reasonable rates) |
| Hardware Support | Ubiquitous (100% of devices) |
| Typical Bitrate (1080p30) | 5-10 Mbps |

Design trade-off: H.264 prioritized decode simplicity over compression efficiency. The spec defines multiple profiles (Baseline, Main, High) with increasing complexity. Baseline profile can be decoded on the most constrained hardware; High profile enables better compression but requires more capable decoders.

Gotcha: The avc1 codec string in manifests encodes the profile and level (e.g., avc1.640028 = High profile, level 4.0). Mismatched codec strings cause playback failures even when the container is valid.
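A defensive player can catch such mismatches before buffering anything by asking the browser whether it can decode the exact codec string from the manifest. A minimal sketch (the codec strings are examples; the Media Capabilities query is available in modern browsers):

// Ask the browser whether it can decode a given codec string before selecting a variant
const h264High = 'video/mp4; codecs="avc1.640028"' // High profile, level 4.0
const hevc = 'video/mp4; codecs="hev1.1.6.L93.B0"'

console.log("H.264 High:", MediaSource.isTypeSupported(h264High))
console.log("HEVC:", MediaSource.isTypeSupported(hevc))

// mediaCapabilities gives richer answers (supported / smooth / powerEfficient)
navigator.mediaCapabilities
  .decodingInfo({
    type: "media-source",
    video: { contentType: h264High, width: 1920, height: 1080, bitrate: 8_000_000, framerate: 30 },
  })
  .then((info) => console.log("supported:", info.supported, "smooth:", info.smooth))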

Standardized in 2013 (ITU-T H.265 | ISO/IEC 23008-2), HEVC was designed for 4K and HDR (High Dynamic Range) content. Version 10 of the standard was approved in July 2024. As of late 2024, roughly 92% of browsers in use can play HEVC where the underlying device provides a hardware decoder.

Key Characteristics:

| Attribute | Value |
| --- | --- |
| Compression Efficiency | ~50% better than H.264 |
| Ideal Use Case | 4K/UHD & HDR streaming |
| Licensing | Multiple patent pools (MPEG LA, HEVC Advance, Velos Media) |
| Hardware Support | Widespread (NVIDIA, AMD, Intel, Apple Silicon, Qualcomm) |
| Typical Bitrate (1080p30) | 3-5 Mbps |

Design trade-off: HEVC achieves better compression through larger coding tree units (CTU, up to 64×64 vs. H.264’s 16×16 macroblocks) and more sophisticated prediction modes. This increases encoder complexity by 5-10x, making real-time encoding expensive.

Licensing complexity: Three separate patent pools with inconsistent terms drove the industry toward AV1. A single 4K stream may owe royalties to all three pools, making cost calculation non-trivial.

Released in 2018 by the Alliance for Open Media (AOM)—Google, Netflix, Amazon, Microsoft, Meta, and others—AV1 was a direct response to HEVC’s licensing fragmentation.

Key Characteristics:

| Attribute | Value |
| --- | --- |
| Compression Efficiency | ~30% better than HEVC |
| Ideal Use Case | High-volume VOD, bandwidth savings |
| Licensing | Royalty-free (AOM patent commitment) |
| Hardware Support | ~10% of smartphones (Q2 2024), 88% of Netflix-certified TVs |
| Typical Bitrate (1080p30) | 2-3 Mbps |

Adoption status (2024-2025):

  • YouTube: 75%+ of videos encoded in AV1
  • Netflix: 30% of streams
  • Hardware decode: iPhone 15 Pro (A17 Pro chip), Snapdragon 8 Gen 2+, Intel Arc, NVIDIA RTX 40-series

Design trade-off: AV1’s superior compression comes from computationally expensive encoding. Software encoding is 10-20x slower than H.264. Hardware encoders (NVIDIA NVENC, Intel QuickSync) are now available but still slower than HEVC hardware encoding.

When to use AV1: High-volume VOD where encoding cost amortizes across many views. Not yet practical for low-latency live streaming without hardware acceleration.

AAC is the de facto standard for audio in video streaming. Standardized in MPEG-2 Part 7 and MPEG-4 Part 3, it’s the default audio codec for MP4 containers and supported by nearly every device.

| Attribute | Value |
| --- | --- |
| Primary Use Case | VOD, music streaming |
| Low Bitrate (<96kbps) | Fair; quality degrades noticeably |
| High Bitrate (>128kbps) | Excellent; industry standard |
| Latency | ~100-200ms (not ideal for real-time) |
| Compatibility | Near-universal |
| Licensing | Via Licensing (now Via LA) patent pool |

Profile variants: AAC-LC (Low Complexity) is most common. HE-AAC (High Efficiency) adds spectral band replication for better low-bitrate performance. HE-AACv2 adds parametric stereo.

Opus (IETF RFC 6716) is a royalty-free codec developed for real-time communication. Its standout feature is exceptional performance at low bitrates while maintaining sub-20ms algorithmic latency.

| Attribute | Value |
| --- | --- |
| Primary Use Case | WebRTC, VoIP, low-latency streaming |
| Low Bitrate (<96kbps) | Excellent; maintains intelligibility |
| High Bitrate (>128kbps) | Competitive with AAC |
| Latency | 2.5-60ms (configurable) |
| Compatibility | All modern browsers, limited hardware support |
| Licensing | Royalty-free, BSD-licensed |

Design rationale: Opus combines two compression technologies—SILK (speech-optimized, from Skype) and CELT (music-optimized). The encoder dynamically switches based on content characteristics, achieving good quality across voice, music, and mixed content.

Once the audio and video have been compressed by their respective codecs, they must be packaged into a container format and segmented into small, deliverable chunks. This intermediate stage is critical for enabling adaptive bitrate streaming.

MPEG-TS is the traditional container for HLS. Its origins lie in digital broadcast (DVB), where its structure of small, fixed-size 188-byte packets was designed for resilience against transmission errors over unreliable networks.

Design characteristics:

  • Fixed packet size enables recovery from partial data loss
  • Self-synchronizing (sync byte 0x47 every 188 bytes)
  • Designed for continuous streams, not random access

Limitation: MPEG-TS has significant overhead (~8-15%) compared to fMP4, and each segment requires redundant metadata. This is why modern deployments prefer fMP4/CMAF.

Fragmented MP4 is the modern, preferred container for both HLS (since 2016) and DASH. It’s a variant of the ISO Base Media File Format (ISOBMFF, ISO/IEC 14496-12).

Box structure for streaming:

fMP4 File Structure:
├── ftyp (file type)
├── moov (movie metadata)
│   ├── mvhd (movie header)
│   └── trak (track info, codec config)
├── moof (movie fragment header)  ─┐
│   └── traf (track fragment)      │  Repeats for
└── mdat (media data)             ─┘  each segment

The moov box must appear before any mdat for playback to begin without full download (“fast start”). For live streaming, the moov contains initialization data, and each segment is a moof + mdat pair.

Initialization segment: Contains ftyp + moov with codec configuration but no media samples. Must be fetched before any media segments. Typically 1-5 KB.

CMAF (ISO/IEC 23000-19:2024) is not a new container but a standardization of fMP4 for streaming. Its introduction was a watershed moment for the industry.

The problem CMAF solves: Before CMAF, supporting both Apple devices (HLS with .ts) and other devices (DASH with .mp4) required encoding, packaging, and storing two complete sets of video files. This doubled storage costs and reduced CDN cache efficiency.

CMAF architecture:

Single Source → CMAF Segments (fMP4) ─┬─→ HLS Manifest (.m3u8)
                                      └─→ DASH Manifest (.mpd)

A provider creates one set of CMAF segments and serves them with two different manifest files. Storage cost: 1x instead of 2x.

CMAF chunks for low-latency: CMAF defines “chunks”—the smallest addressable unit containing a moof + mdat pair. For low-latency streaming, each chunk can be independently transferred via HTTP chunked encoding as soon as it’s encoded, without waiting for the full segment.

Example: A 4-second segment at 30fps contains 120 frames. With one frame per CMAF chunk, the first chunk is available ~33ms after encoding starts, instead of roughly 4 seconds later when the full segment completes.

ffmpeg is the workhorse of video processing. Here’s a multi-bitrate HLS encoding pipeline:

hls.bash
# Create output directory for HLS files
mkdir -p video/hls
# HLS Transcoding for multiple resolutions and fps
ffmpeg -i ./video/big-buck-bunny.mp4 \
-filter_complex \
"[0:v]split=7[v1][v2][v3][v4][v5][v6][v7]; \
[v1]scale=640:360[v1out]; \
[v2]scale=854:480[v2out]; \
[v3]scale=1280:720[v3out]; \
[v4]scale=1920:1080[v4out]; \
[v5]scale=1920:1080[v5out]; \
[v6]scale=3840:2160[v6out]; \
[v7]scale=3840:2160[v7out]" \
-map "[v1out]" -c:v:0 h264 -r 30 -b:v:0 800k \
-map "[v2out]" -c:v:1 h264 -r 30 -b:v:1 1400k \
-map "[v3out]" -c:v:2 h264 -r 30 -b:v:2 2800k \
-map "[v4out]" -c:v:3 h264 -r 30 -b:v:3 5000k \
-map "[v5out]" -c:v:4 h264 -r 30 -b:v:4 8000k \
-map "[v6out]" -c:v:5 h264 -r 15 -b:v:5 16000k \
-map "[v7out]" -c:v:6 h264 -r 30 -b:v:6 20000k \
-map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 \
-c:a aac -b:a 128k \
-var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3 v:4,a:4 v:5,a:5 v:6,a:6" \
-master_pl_name master.m3u8 \
-f hls \
-hls_time 6 \
-hls_list_size 0 \
-hls_segment_filename "video/hls/v%v/segment%d.ts" \
video/hls/v%v/playlist.m3u8

Key parameters explained:

| Parameter | Purpose |
| --- | --- |
| split=7 | Creates 7 parallel encoding pipelines from one input |
| scale=WxH | Resizes to target resolution |
| -c:v:N h264 -b:v:N Xk | Sets codec and target bitrate for variant N |
| -r:v:N 30 | Sets frame rate for variant N |
| -hls_time 6 | Target segment duration (actual duration varies by keyframe) |
| -var_stream_map | Groups video/audio variants for ABR playlist generation |
| -master_pl_name | Generates master playlist referencing all variants |

Gotcha: Keyframe alignment. Segments can only split at keyframes (I-frames). If your source has keyframes every 10 seconds but you request 6-second segments, actual segment duration will be 10 seconds. Always pin the keyframe interval at encode time, e.g. -g 180 for a 6-second GOP (Group of Pictures) at 30fps, and consider disabling scene-cut keyframe insertion (-sc_threshold 0 with libx264) so segment boundaries stay aligned across all variants.

Gotcha: Segment duration consistency. The HLS spec requires segment duration to match EXT-X-TARGETDURATION ±0.5 seconds. Variable segment durations (common with scene-change keyframes) can cause buffer underruns if the player’s buffer model assumes consistent duration.

The protocols for adaptive bitrate streaming define the rules of communication between the client and server. They specify the manifest format and segment addressing scheme.

Created by Apple and documented in RFC 8216 (with the draft rfc8216bis describing protocol version 13), HLS is the most widely deployed streaming protocol. Its dominance stems from mandatory support on Apple’s ecosystem—on iOS, Safari supports adaptive streaming only through native HLS playback.

Design philosophy: HLS was designed to work with standard HTTP infrastructure. Segments are regular files; playlists are text files. Any HTTP server or CDN can serve HLS content without modification.

HLS uses a two-level playlist structure:

Master Playlist (entry point, lists all variants):

master.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=2200291,AVERAGE-BANDWIDTH=995604,RESOLUTION=640x360,CODECS="avc1.640016,mp4a.40.2"
v0/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3366885,AVERAGE-BANDWIDTH=1598631,RESOLUTION=854x480,CODECS="avc1.64001e,mp4a.40.2"
v1/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5901087,AVERAGE-BANDWIDTH=3070517,RESOLUTION=1280x720,CODECS="avc1.64001f,mp4a.40.2"
v2/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=10987497,AVERAGE-BANDWIDTH=5306897,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
v3/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=16867619,AVERAGE-BANDWIDTH=8399048,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
v4/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=33220766,AVERAGE-BANDWIDTH=16615212,RESOLUTION=3840x2160,CODECS="avc1.640033,mp4a.40.2"
v5/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=41078907,AVERAGE-BANDWIDTH=20719432,RESOLUTION=3840x2160,CODECS="avc1.640033,mp4a.40.2"
v6/playlist.m3u8

Key tags explained:

| Tag | Purpose |
| --- | --- |
| EXT-X-VERSION | HLS protocol version (3 is widely compatible; 7+ for fMP4) |
| EXT-X-STREAM-INF | Describes a variant stream |
| BANDWIDTH | Peak bitrate in bits/second (used for ABR selection) |
| AVERAGE-BANDWIDTH | Average bitrate (more accurate for buffer estimation) |
| CODECS | RFC 6381 codec string (critical for playback capability check) |
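To make the selection concrete, here is a minimal, hypothetical variant-selection sketch: parse BANDWIDTH and CODECS from each EXT-X-STREAM-INF entry, filter out codecs the device cannot decode, and pick the highest variant that fits the measured throughput with a safety margin. It assumes fMP4 segments for the isTypeSupported check:

// Pick the highest playable variant whose peak BANDWIDTH fits the measured throughput
function selectVariant(masterPlaylist, throughputBps, safetyFactor = 0.8) {
  const lines = masterPlaylist.split("\n")
  const variants = []
  for (let i = 0; i < lines.length; i++) {
    if (!lines[i].startsWith("#EXT-X-STREAM-INF:")) continue
    const attrs = lines[i].slice("#EXT-X-STREAM-INF:".length)
    const bandwidth = Number(/(?:^|,)BANDWIDTH=(\d+)/.exec(attrs)?.[1] ?? 0)
    const codecs = /CODECS="([^"]+)"/.exec(attrs)?.[1] ?? ""
    variants.push({ bandwidth, codecs, uri: (lines[i + 1] ?? "").trim() })
  }
  const playable = variants
    .filter((v) => MediaSource.isTypeSupported(`video/mp4; codecs="${v.codecs}"`))
    .sort((a, b) => a.bandwidth - b.bandwidth)
  // Highest variant under throughput × safety margin; fall back to the lowest one
  return playable.filter((v) => v.bandwidth <= throughputBps * safetyFactor).pop() ?? playable[0]
}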

Media Playlist (lists segments for one variant):

playlist.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:15
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:12.000000,
segment0.ts
#EXTINF:3.850000,
segment1.ts
#EXTINF:7.300000,
segment2.ts
#EXTINF:12.500000,
segment3.ts
#EXTINF:12.200000,
segment4.ts
#EXTINF:8.350000,
segment5.ts
#EXTINF:4.600000,
segment6.ts
#EXTINF:3.100000,
segment7.ts
#EXTINF:5.650000,
segment8.ts
#EXTINF:1.850000,
segment9.ts
.... More segments
#EXT-X-ENDLIST

Critical tags for playback:

| Tag | Purpose |
| --- | --- |
| EXT-X-TARGETDURATION | Maximum segment duration (player uses for buffer calculations) |
| EXT-X-MEDIA-SEQUENCE | Sequence number of first segment (critical for live edge tracking) |
| EXTINF | Actual segment duration |
| EXT-X-ENDLIST | Indicates VOD content (no more segments will be added) |

Live streaming behavior: For live streams, EXT-X-ENDLIST is absent. The player periodically re-fetches the playlist to discover new segments. Per the spec, the client waits roughly one target duration between reloads when the playlist changed, and half the target duration before retrying when it did not.
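A minimal live-refresh loop might look like the sketch below (assumptions: a fetchable playlist URL and an update callback; a real player also tracks EXT-X-MEDIA-SEQUENCE and backs off when the playlist has not changed):

// Re-fetch a live media playlist roughly every half target duration
async function pollLivePlaylist(url, onUpdate) {
  for (;;) {
    const text = await (await fetch(url, { cache: "no-store" })).text()
    const target = Number(/#EXT-X-TARGETDURATION:(\d+)/.exec(text)?.[1] ?? 6)
    onUpdate(text)
    if (text.includes("#EXT-X-ENDLIST")) return // stream ended or became VOD
    await new Promise((resolve) => setTimeout(resolve, (target / 2) * 1000))
  }
}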

Gotcha: Sequence number discontinuities. If the origin server restarts and resets EXT-X-MEDIA-SEQUENCE to 0, players may incorrectly seek to the wrong position or refuse to play. Production systems must persist sequence numbers across restarts.

Dynamic Adaptive Streaming over HTTP (DASH) is standardized as ISO/IEC 23009-1 (5th edition: 2022). Unlike HLS, which was created by Apple, DASH was developed through an open, international standardization process.

Key differentiator: DASH is codec-agnostic. The spec defines manifest structure and segment addressing but makes no assumptions about codecs. This enables delivery of any format: H.264, HEVC, AV1, VP9, and future codecs.

Design philosophy: DASH prioritizes flexibility and expressiveness. The XML-based MPD can describe complex presentations (multiple periods, dynamic ad insertion, multiple audio languages, accessibility tracks) more precisely than HLS’s text-based format.

<MPD type="static" mediaPresentationDuration="PT600S"
profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
<Period duration="PT600S">
<AdaptationSet contentType="video" mimeType="video/mp4" codecs="avc1.640028">
<Representation id="video-1080p" bandwidth="5000000" width="1920" height="1080">
<BaseURL>video/1080p/</BaseURL>
<SegmentTemplate media="segment-$Number$.m4s" initialization="init.mp4" startNumber="1" />
</Representation>
<Representation id="video-720p" bandwidth="2800000" width="1280" height="720">
<BaseURL>video/720p/</BaseURL>
<SegmentTemplate media="segment-$Number$.m4s" initialization="init.mp4" startNumber="1" />
</Representation>
</AdaptationSet>
<AdaptationSet contentType="audio" mimeType="audio/mp4" codecs="mp4a.40.2" lang="en">
<Representation id="audio-en" bandwidth="128000">
<BaseURL>audio/en/</BaseURL>
<SegmentTemplate media="segment-$Number$.m4s" initialization="init.mp4" startNumber="1" />
</Representation>
</AdaptationSet>
</Period>
</MPD>

Hierarchy: MPD → Period → AdaptationSet → Representation

  • Period: Content divisions (pre-roll ad, main content, mid-roll ad). Enables seamless ad insertion.
  • AdaptationSet: Groups switchable representations (all 1080p variants, or all audio languages).
  • Representation: One specific variant (1080p at 5Mbps H.264).

Segment addressing methods:

  1. SegmentTemplate: Constructs URLs from templates with variable substitution ($Number$, $Time$). Most common.
  2. SegmentList: Explicit enumeration of segment URLs. Useful for variable-duration segments.
  3. SegmentBase: Single segment with byte-range addressing via sidx box. Used for VOD with in-file index.
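As an illustration of the SegmentTemplate case, URL construction is plain string substitution. A sketch that handles only $RepresentationID$ and $Number$ (real implementations also support $Time$, $Bandwidth$, and printf-style width specifiers such as $Number%05d$):

// Expand a DASH SegmentTemplate media attribute into a concrete segment URL
function expandTemplate(template, representationId, number) {
  return template
    .replace("$RepresentationID$", representationId)
    .replace("$Number$", String(number))
}

// "segment-$Number$.m4s" with number 42 → "segment-42.m4s" (resolved against the BaseURL)
console.log(expandTemplate("segment-$Number$.m4s", "video-1080p", 42))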

Gotcha: Period transitions. Playback can stutter at period boundaries if the transition isn’t handled correctly. The periodContinuity signaling in DASH-IF guidelines enables seamless appends, but short periods (<5 seconds) with network latency can still cause stalls.

| Feature | HLS | MPEG-DASH |
| --- | --- | --- |
| Spec Owner | Apple (RFC 8216) | ISO/IEC 23009-1 |
| Manifest Format | Text-based (.m3u8) | XML-based (.mpd) |
| Manifest Size | Smaller (10-50 KB live) | Larger (20-100 KB complex) |
| Codec Support | H.264, HEVC, limited others | Any codec (agnostic) |
| Container Support | MPEG-TS, fMP4/CMAF | fMP4/CMAF, WebM |
| Native DRM | FairPlay | Widevine, PlayReady |
| Safari/iOS | Native | Not supported |
| Manifest Expressiveness | Limited (extensions for advanced features) | Rich (periods, segment timelines, etc.) |
| Low-Latency Extension | LL-HLS | LL-DASH |
| Industry Adoption | Dominant (Apple ecosystem requirement) | Strong (Android, smart TVs, web) |

Practical implication: Most production systems support both. CMAF enables shared media segments; only the manifest differs. The choice often comes down to DRM requirements and target platform mix.

For premium content, preventing unauthorized copying and distribution is a business necessity. DRM provides content protection through encryption and controlled license issuance.

Three major DRM systems dominate, each tied to a specific ecosystem:

| DRM System | Ecosystem | Browser | Mobile | TV/STB |
| --- | --- | --- | --- | --- |
| Widevine (Google) | Chrome, Android | Chrome, Firefox, Edge | Android | Android TV, Chromecast |
| FairPlay (Apple) | Apple | Safari | iOS | Apple TV |
| PlayReady (Microsoft) | Windows | Edge | — | Xbox, Smart TVs |

Why three systems? Each platform vendor controls the secure execution environment (TEE/Trusted Execution Environment) on their hardware. DRM requires hardware-backed security for premium content (4K, HDR). No vendor will trust another vendor’s TEE implementation.

DRM systems define security levels based on where decryption occurs:

Widevine Levels:

  • L1 (Hardware): Decryption in TEE; keys never exposed to main CPU. Required for HD/4K on most services.
  • L2 (Hybrid): Partial hardware protection. Uncommon in practice.
  • L3 (Software): Decryption in browser process. No hardware protection. Limited to SD resolution on premium services.

FairPlay: Leverages Apple’s Secure Enclave for hardware-backed security on all modern devices.

PlayReady Levels:

  • SL3000: TEE-based, hardware protection (introduced with PlayReady 3.0 in 2015)
  • SL2000: Software-based protection

Content policy implication: Netflix, Disney+, and other premium services enforce L1/SL3000 for 4K content. A Chrome user on Linux typically gets L3 Widevine and is limited to 720p.

CENC (ISO/IEC 23001-7:2023) enables a single encrypted file to work with multiple DRM systems. This is the key technology making multi-DRM practical.

How CENC works:

  1. Content is encrypted once with AES-128 (same encrypted bytes for all DRM systems)
  2. Each DRM system’s metadata (PSSH box) is embedded in the init segment
  3. At playback, the player detects available DRM systems, requests a license from the appropriate server, and decrypts using the returned key

Encryption modes:

  • cenc (CTR mode): Default. Parallelizable, suitable for streaming. No padding required.
  • cbcs (CBC mode with subsample patterns): Required by FairPlay. Encrypts only a subset of bytes, leaving NAL headers in plaintext for codec inspection.

Subsample encryption: For NAL-structured video (H.264, HEVC), only the coded slice data is encrypted. NAL headers remain in plaintext, allowing the player to parse codec configuration without decryption.

Encrypted Media Extensions (EME) is the W3C API that connects JavaScript to the platform’s Content Decryption Module (CDM).

Flow:

  1. Player detects encrypted content (via encrypted event)
  2. Calls navigator.requestMediaKeySystemAccess() to check DRM availability
  3. Creates MediaKeys and MediaKeySession
  4. Sends license request to server (challenge from CDM)
  5. Passes license response to CDM
  6. CDM decrypts content; HTMLMediaElement plays
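A condensed sketch of that flow for Widevine (assumes an HTMLVideoElement named video, an async context, and a placeholder license server URL; production code also handles renewal, errors, and persistent sessions):

const config = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  audioCapabilities: [{ contentType: 'audio/mp4; codecs="mp4a.40.2"' }],
}]

const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", config)
const mediaKeys = await access.createMediaKeys()
await video.setMediaKeys(mediaKeys)

video.addEventListener("encrypted", async (event) => {
  const session = mediaKeys.createSession()
  // The CDM emits a license challenge; forward it to the license server
  session.addEventListener("message", async (msg) => {
    const license = await fetch("https://license.example.com/widevine", {
      method: "POST",
      body: msg.message,
    }).then((r) => r.arrayBuffer())
    await session.update(license) // the CDM can now decrypt incoming samples
  })
  await session.generateRequest(event.initDataType, event.initData)
})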

Gotcha: EME is not DRM. EME is the API; Widevine/FairPlay/PlayReady are the DRM systems. EME standardizes the interface but doesn’t define the security properties. A browser can implement EME with a software CDM (low security) or hardware TEE (high security).

Traditional HLS/DASH has 6-15+ seconds of latency (segment duration × buffer depth). For live sports, auctions, and interactive content, this is unacceptable. Two approaches address this: Low-Latency HLS/DASH (HTTP-based, 2-4 seconds) and WebRTC (UDP-based, <500ms).

| Component | Traditional | Low-Latency | WebRTC |
| --- | --- | --- | --- |
| Encoding | 1-2s (segment duration) | 200-500ms (partial segment) | 20-100ms (per-frame) |
| Packaging | ~1s | <100ms | N/A |
| CDN Edge | 1-2s | 500ms-1s | N/A (SFU direct) |
| Manifest Update | 2-3s | 200-500ms (blocking reload) | N/A |
| Player Buffer | 6-12s (2-3 segments) | 1-3s (parts + safety) | 50-200ms (jitter buffer) |
| Total | 10-20s | 2-5s | 100-500ms |

Apple introduced LL-HLS in 2019 (WWDC 2019) to reduce latency while preserving HTTP scalability. It achieves 2-4 second latency through three mechanisms:

1. Partial segments: Instead of waiting for a full 6-second segment, LL-HLS publishes smaller “parts” (200ms-2s) as soon as they’re encoded.

#EXTM3U
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=0.5
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-PART:DURATION=0.5,URI="segment100_part0.m4s"
#EXT-X-PART:DURATION=0.5,URI="segment100_part1.m4s"
#EXT-X-PART:DURATION=0.5,URI="segment100_part2.m4s"
#EXTINF:2.0,
segment100.m4s

Design rationale: Parts are published incrementally but roll up into full segments. Legacy players ignore EXT-X-PART tags and fetch full segments. LL-HLS-aware players fetch parts for lower latency.

2. Blocking playlist reload: Traditional HLS requires the player to poll for playlist updates on a fixed schedule (roughly once per target duration). This polling introduces latency—the player might check just after an update and wait until the next poll.

LL-HLS uses HTTP long-polling: the player requests the playlist with a query parameter indicating the last seen media sequence. The server holds the connection until new content is available, then responds immediately.

GET /playlist.m3u8?_HLS_msn=100&_HLS_part=3

The server blocks until segment 100, part 3 (or later) exists, then responds with the updated playlist.

3. Preload hints: The server tells the player the URI of the next part before it exists:

#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment100_part4.m4s"

The player can issue a request immediately. The server holds the request open until the part is ready. This eliminates the round-trip between playlist update and segment request.

Latency budget:

  • Part generation: ~500ms
  • CDN propagation: ~200ms
  • Blocking playlist: ~0ms (pre-positioned)
  • Player buffer: ~1-2 seconds (safety margin)
  • Total: 2-3 seconds

LL-DASH achieves similar latency through different mechanisms:

  1. CMAF Chunked Transfer: Each CMAF chunk is transferred via HTTP chunked encoding as it’s generated. The client receives data before the segment is complete.

  2. Service Description: The MPD includes ServiceDescription with target latency and playback rate adjustments:

<ServiceDescription>
  <Latency target="3500" min="2000" max="6000" />
  <PlaybackRate min="0.96" max="1.04" />
</ServiceDescription>

  3. CMSD (Common Media Server Data): Server-to-client signaling of real-time latency targets, enabling dynamic adjustment.
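On the client side, chunked transfer (item 1 above) just means consuming the response body as it streams. A sketch using the Fetch streaming API (simplified; a production player also validates chunk boundaries and handles SourceBuffer errors):

// Append a CMAF segment into MSE as it arrives, instead of waiting for the full download
async function appendChunked(url, sourceBuffer) {
  const reader = (await fetch(url)).body.getReader()
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    sourceBuffer.appendBuffer(value) // chunks may contain one or more moof+mdat pairs
    await new Promise((resolve) =>
      sourceBuffer.addEventListener("updateend", resolve, { once: true })
    )
  }
}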

L3D Profile (2024): The 6th edition of DASH introduces Low Latency, Low Delay DASH with discretely addressable partial segments, aligning more closely with LL-HLS’s partial segment model.

WebRTC is fundamentally different from HTTP streaming. It’s designed for true real-time, bidirectional communication with sub-second latency.

Key architectural differences:

| Aspect | HLS/DASH | WebRTC |
| --- | --- | --- |
| Transport | TCP (HTTP) | UDP (SRTP) |
| Connection Model | Stateless request/response | Stateful peer connections |
| Error Handling | Retransmit on loss | Skip or interpolate lost data |
| Buffering | Seconds of buffer | Milliseconds (jitter buffer) |
| Scaling | CDN (millions of viewers) | SFU/MCU (hundreds per server) |

Why UDP for low latency? TCP’s reliability (retransmission, ordering) introduces head-of-line blocking. A lost packet blocks all subsequent packets until retransmitted. For live video, it’s better to skip a frame than delay the entire stream.

SFU (Selective Forwarding Unit) architecture: For group calls and broadcasting, WebRTC uses SFUs. Each participant sends once to the SFU; the SFU forwards streams to all receivers without transcoding. This scales better than mesh (N² connections) or MCU (transcoding bottleneck).

| Use Case | Technology | Latency | Trade-off |
| --- | --- | --- | --- |
| VOD | Traditional HLS/DASH | 6-15s | Maximum compatibility, CDN caching |
| Live Sports | LL-HLS/LL-DASH | 2-5s | Scalable, some latency |
| Live Auctions | LL-HLS/LL-DASH | 2-3s | HTTP infrastructure |
| Video Conferencing | WebRTC | <500ms | Requires SFU infrastructure |
| Gaming/Interactive | WebRTC | <200ms | Limited scale per server |

Hybrid approaches are emerging: Some systems use WebRTC for the first hop (ingest) and LL-HLS for distribution. This combines low-latency capture with CDN scalability.

The browser’s native <video> element can only handle progressive download or, in browsers with native support (chiefly Safari), a single HLS stream. For adaptive streaming with quality switching, players use Media Source Extensions (MSE).

MSE (W3C Recommendation, actively updated through 2024) provides a JavaScript API to feed media data to <video>:

const mediaSource = new MediaSource()
video.src = URL.createObjectURL(mediaSource)

mediaSource.addEventListener("sourceopen", () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"')

  // Append segments as they're fetched
  fetch("segment.m4s")
    .then((r) => r.arrayBuffer())
    .then((data) => {
      sourceBuffer.appendBuffer(data)
    })
})

Key components:

  • MediaSource: Container that connects to HTMLMediaElement
  • SourceBuffer: Buffer for a single track (video, audio, or text). Handles segment parsing and decode.
  • SourceBufferList: Collection of active buffers

The player must balance competing concerns:

  1. Sufficient buffer for smooth playback (2-30 seconds depending on use case)
  2. Responsive quality switching (smaller buffer = faster adaptation)
  3. Memory constraints (5-20 MB typical; mobile is more constrained)
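A common way to drive these decisions is to measure how much media is buffered ahead of the playhead and compare it against watermarks (a sketch; the thresholds are illustrative):

// Seconds of media buffered ahead of the current playback position
function bufferAhead(video) {
  const { buffered, currentTime } = video
  for (let i = 0; i < buffered.length; i++) {
    if (buffered.start(i) <= currentTime && currentTime <= buffered.end(i)) {
      return buffered.end(i) - currentTime
    }
  }
  return 0 // playhead is outside every buffered range (about to stall)
}

// Illustrative policy: below ~5s switch down, above ~30s stop fetching to bound memory
function bufferPolicy(video) {
  const ahead = bufferAhead(video)
  if (ahead < 5) return "switch-down"
  if (ahead > 30) return "pause-fetching"
  return "steady"
}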

ManagedMediaSource (2024): A new addition where the browser manages buffer eviction automatically. The player receives bufferedchange events when the browser evicts data, enabling simpler player implementations.

Gotcha: Buffer eviction timing. If the player doesn’t track eviction, it may request already-evicted segments, causing playback stalls. Production players must handle the bufferedchange event or implement their own eviction tracking.

MSE’s changeType() method enables mid-stream codec changes without replacing the SourceBuffer:

sourceBuffer.changeType('video/mp4; codecs="hev1.1.6.L93.B0"')

Use case: Start with H.264 for immediate playback (universal decode), then switch to HEVC for bandwidth savings once the player confirms hardware support.

| Failure | Symptom | Cause |
| --- | --- | --- |
| QuotaExceededError | appendBuffer fails | Buffer full; implement eviction |
| Decode error | Video freezes | Corrupted segment or codec mismatch |
| Gap in timeline | Audio/video desync | Discontinuity not handled |
| Infinite buffering | Never starts | Init segment missing or codec string wrong |
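For the first failure mode, the usual recovery is to evict already-played media and retry the append, as in this sketch (the 30-second retention window is an arbitrary choice):

// Retry an append after evicting everything more than 30s behind the playhead
function appendWithEviction(sourceBuffer, video, data) {
  try {
    sourceBuffer.appendBuffer(data)
  } catch (err) {
    if (err.name !== "QuotaExceededError") throw err
    const evictEnd = video.currentTime - 30
    if (evictEnd <= 0) throw err // nothing safe to evict yet
    sourceBuffer.addEventListener("updateend", () => sourceBuffer.appendBuffer(data), { once: true })
    sourceBuffer.remove(0, evictEnd) // asynchronous; the retry runs after removal completes
  }
}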

Building a production-grade video streaming service requires robust system design. A modern video pipeline should be viewed as a high-throughput, real-time data pipeline with strict latency and availability requirements.

A CDN is non-negotiable for any streaming service at scale:

Origin offload: Without a CDN, a live stream to 1 million concurrent viewers at 5 Mbps would require 5 Tbps of egress from the origin. With a CDN, each edge PoP (Point of Presence) fetches once, caches, and serves thousands of viewers locally. Origin load drops to roughly the number of fetching PoPs (or shield tiers) multiplied by the bitrate ladder: hundreds of Mbps to a few Gbps instead of terabits.

Latency reduction: Physical distance determines minimum latency (~1ms per 200km for fiber). A global CDN with 100+ PoPs ensures most users are within 50ms of an edge server.

Manifest caching considerations: Live manifests change every few seconds. CDN TTL must be shorter than update frequency, or viewers see stale playlists. Common pattern: TTL = 1 second for live manifests, origin-controlled cache-control headers.

The bitrate ladder determines which quality levels are available. Poor ladder design causes:

  • Wasted bandwidth: Steps too small for perceptible quality difference
  • Unnecessary buffering: Steps too large, causing oscillation
  • Poor mobile experience: No low-bitrate option for constrained networks

Example production ladder (1080p max):

| Resolution | Bitrate | Use Case |
| --- | --- | --- |
| 1920×1080 | 8 Mbps | High-quality fixed connections |
| 1920×1080 | 5 Mbps | Good broadband |
| 1280×720 | 3 Mbps | Average broadband |
| 1280×720 | 1.8 Mbps | Mobile on good LTE |
| 854×480 | 1.1 Mbps | Mobile on average LTE |
| 640×360 | 600 kbps | Constrained mobile |
| 426×240 | 300 kbps | Edge case fallback |

Per-title encoding: Advanced platforms analyze each video’s complexity and generate custom ladders. A static talking-head video needs lower bitrates than an action scene. Netflix’s per-title encoding reduced bandwidth by 20% without quality loss.

Player-side metrics (client telemetry):

  • Startup time (time to first frame)
  • Rebuffering ratio (time buffering / time playing)
  • Average bitrate
  • Quality switches per minute
  • Error rate by type
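The first two metrics can be collected with nothing more than standard media element events. A minimal telemetry sketch (real players also distinguish seek-induced stalls from rebuffering and batch-report the results):

// Track startup time and rebuffering ratio from HTMLMediaElement events
function attachTelemetry(video) {
  const created = performance.now()
  let startupMs = null // time to first frame
  let stallMs = 0 // total time spent rebuffering
  let stallStart = null

  video.addEventListener("playing", () => {
    const now = performance.now()
    if (startupMs === null) startupMs = now - created
    if (stallStart !== null) {
      stallMs += now - stallStart
      stallStart = null
    }
  })
  video.addEventListener("waiting", () => {
    stallStart = performance.now()
  })

  // Rebuffering ratio ≈ stall time / watch time since the first frame
  return () => {
    const watched = performance.now() - created - (startupMs ?? 0)
    return { startupMs, stallMs, rebufferRatio: stallMs / Math.max(watched - stallMs, 1) }
  }
}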

Server-side metrics:

  • Origin request rate and latency
  • CDN cache hit ratio
  • Manifest generation latency
  • Segment availability latency (time from encode to CDN edge)

Alerting thresholds (example):

  • Rebuffering ratio > 1%: Investigate
  • Startup time p95 > 3 seconds: Investigate
  • CDN cache hit ratio < 90%: Check TTL configuration
  • Origin 5xx rate > 0.1%: Incident

Problem: WebVTT captions drift out of sync during playback.

Cause: X-TIMESTAMP-MAP header maps WebVTT cue times to media timestamps. Different calculation bases for offset (PTS vs. wall clock) cause drift. Works correctly with MPEG-TS but problematic with fMP4.

Mitigation: Use EXT-X-PROGRAM-DATE-TIME for wall-clock synchronization instead of direct PTS mapping.

Problem: Player requests a segment that doesn’t exist yet.

Cause: Aggressive manifest caching (CDN TTL too long) or clock skew between origin and player.

Mitigation:

  • Set manifest TTL shorter than segment duration
  • Include EXT-X-PROGRAM-DATE-TIME for clock synchronization
  • Implement retry with exponential backoff for 404s on expected segments
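A sketch of the retry behavior from the last point (attempt count and delays are illustrative):

// Retry a segment fetch with exponential backoff on 404/5xx near the live edge
async function fetchSegmentWithRetry(url, attempts = 4, baseDelayMs = 250) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url)
    if (res.ok) return res.arrayBuffer()
    if (res.status !== 404 && res.status < 500) throw new Error(`fatal HTTP ${res.status} for ${url}`)
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i)) // 250ms, 500ms, 1s, 2s
  }
  throw new Error(`segment still unavailable after ${attempts} attempts: ${url}`)
}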

Problem: Playback stops mid-stream when license expires.

Cause: License expiration not handled, or renewal request blocked by ad blocker.

Mitigation:

  • Request license renewal before expiration (typically at 80% of license duration)
  • Implement graceful degradation (continue playing cached content while renewing)
  • Monitor license acquisition failures as a key metric

Problem: Multiple devices watching the same “live” stream diverge over time.

Cause: Clock drift, variable network latency, different buffer depths.

Mitigation:

  • Use EXT-X-PROGRAM-DATE-TIME as sync anchor
  • Implement periodic re-sync (every 30 seconds) with allowed drift tolerance
  • Consider server-side time signaling (CMSD in DASH)

Video streaming architecture is a study in layered abstractions, each solving a specific problem:

  1. Codecs solve compression (H.264 for compatibility, AV1 for efficiency)
  2. Containers solve addressability (CMAF unifies HLS/DASH delivery)
  3. Manifests solve discovery (client-driven quality selection)
  4. DRM solves content protection (CENC enables multi-platform encryption)
  5. Low-latency extensions solve real-time delivery (LL-HLS/LL-DASH for HTTP, WebRTC for UDP)

The key architectural insight: the client drives everything. The server provides options (quality variants, segments, licenses); the client makes decisions based on local conditions (bandwidth, buffer, capability). This stateless server model enables CDN scaling—every request is independent.

Production systems must support the full matrix: multiple codecs × multiple protocols × multiple DRM systems × multiple latency profiles. CMAF and CENC make this tractable by sharing the encoded content and encryption. Only manifests and license servers differ per platform.

The future is hybrid: AV1 for bandwidth efficiency where hardware supports it, H.264 as universal fallback; LL-HLS for scalable low-latency, WebRTC for interactive applications; hardware DRM for premium content, software for broader reach. The winning architecture is the one that dynamically selects the right tool for each user’s context.

Prerequisites:

  • Understanding of HTTP request/response model
  • Familiarity with video concepts (resolution, bitrate, frame rate)
  • Basic knowledge of encryption (symmetric encryption, key exchange)
  • Understanding of CDN caching principles

Glossary:

| Term | Definition |
| --- | --- |
| ABR | Adaptive Bitrate Streaming—dynamically selecting quality based on conditions |
| CDM | Content Decryption Module—browser component that handles DRM decryption |
| CMAF | Common Media Application Format—standardized fMP4 for HLS/DASH |
| CENC | Common Encryption—standard for multi-DRM file encryption |
| CTU | Coding Tree Unit—basic processing unit in HEVC (up to 64×64 pixels) |
| EME | Encrypted Media Extensions—W3C API connecting JavaScript to CDM |
| fMP4 | Fragmented MP4—streaming-optimized MP4 with separate init and media segments |
| GOP | Group of Pictures—sequence of frames from one I-frame to the next |
| I-frame | Intra-frame—independently decodable frame (keyframe) |
| ISOBMFF | ISO Base Media File Format—foundation for MP4/fMP4 containers |
| LL-HLS | Low-Latency HLS—Apple’s extension for 2-4 second latency |
| MPD | Media Presentation Description—DASH’s XML manifest format |
| MSE | Media Source Extensions—W3C API for feeding media to HTMLMediaElement |
| NAL | Network Abstraction Layer—framing structure in H.264/HEVC bitstreams |
| PoP | Point of Presence—CDN edge location |
| PSSH | Protection System Specific Header—DRM metadata in MP4 files |
| PTS | Presentation Timestamp—when a frame should be displayed |
| SFU | Selective Forwarding Unit—WebRTC server that routes without transcoding |
| TEE | Trusted Execution Environment—hardware-secured processing area |

Key takeaways:

  • Codecs trade compression efficiency vs. hardware support. H.264 is universal; AV1 offers 30% better compression but requires modern hardware (10% smartphone coverage as of 2024).
  • CMAF unifies HLS and DASH delivery. Single encoded segments, different manifests. Reduces storage and improves CDN cache efficiency.
  • HLS dominates due to Apple ecosystem. Safari/iOS require HLS; most services support both HLS and DASH.
  • DRM is platform-fragmented. Widevine (Google), FairPlay (Apple), PlayReady (Microsoft). CENC enables single encryption for all three.
  • Low-latency is achievable with HTTP. LL-HLS/LL-DASH achieve 2-4 seconds via partial segments and blocking playlist reload.
  • WebRTC is for true real-time. Sub-500ms latency via UDP transport, but requires SFU infrastructure and doesn’t scale like CDNs.
