Design a File Uploader

Building robust file upload requires handling browser constraints, network failures, and user experience across a wide spectrum of file sizes and device capabilities. A naive approach—form submission or single XHR (XMLHttpRequest)—fails at scale: large files exhaust memory, network interruptions lose progress, and users see no feedback. Production uploaders solve this through chunked uploads, resumable protocols, and careful memory management.

[Flow diagram: on the client, file selection → validation → size check; files under 5 MB take a single upload, files of 5 MB or more take a chunked upload with per-chunk progress, resuming from the server-reported offset after a failure until complete. On the server, each chunk is received and stored; the current offset is returned until all chunks have arrived, then the file is assembled.]
Chunked uploads enable resumability and progress tracking while keeping memory usage constant regardless of file size.

File upload design centers on three architectural decisions:

  1. Upload method: Single request (FormData/XHR) vs chunked (Blob.slice + sequential requests). Single is simpler; chunked enables resumability and progress.

  2. Resumability protocol: tus (open standard), proprietary (Google/AWS), or none. Resumability adds server-side state but eliminates re-upload on failure.

  3. Memory strategy: Load entire file (simple, fails at ~1-2GB) vs stream chunks (constant memory, handles any size).

Key browser constraints shape the design:

  • No Fetch upload progress: XHR’s upload.progress event is the only native progress mechanism
  • Blob.slice is O(1): Slicing creates a view, not a copy—safe for huge files
  • URL.createObjectURL leaks memory: Must call revokeObjectURL() explicitly
  • File type detection is unreliable: file.type comes from extension, not content

Production implementations (Dropbox, Google Drive, Slack) all use chunked uploads with resumability for files above a threshold (typically 5-20 MB), with chunk sizes between 5 and 256 MB depending on network assumptions.

The File API provides the foundation for file handling in browsers. Per the W3C File API specification, a File extends Blob with name and lastModified properties. The critical constraint: accessing file data requires either loading it entirely into memory (FileReader) or streaming it chunk-by-chunk (Blob.stream() or Blob.slice()).
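For illustration, a minimal sketch of the streaming path: Blob.stream() hands the file over chunk by chunk, so only one chunk is resident in memory at a time (the onChunk callback is a placeholder for whatever consumes the bytes):

async function readInChunks(
  file: File,
  onChunk: (bytes: Uint8Array) => Promise<void>,
): Promise<void> {
  const reader = file.stream().getReader()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    if (value) await onChunk(value) // only this chunk is held in memory
  }
}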

Memory limits: FileReader’s readAsArrayBuffer() or readAsDataURL() loads the entire file into memory. On mobile devices, where 50-100 MB is a practical in-memory budget, files over a few hundred MB can crash the tab. Desktop browsers handle more but still fail at 1-2 GB for single reads.

Main thread budget: Image processing (thumbnail generation, dimension validation) can block the main thread. A 20MP JPEG decode takes 100-200ms on mid-range mobile devices—enough to cause noticeable jank.

Blob URL lifecycle: URL.createObjectURL() creates a reference that persists until document unload or explicit revokeObjectURL(). In long-running SPAs (Single Page Applications) with frequent file selection, leaked URLs accumulate memory.

| Method | Progress Events | Streaming | Memory Usage | Browser Support |
| --- | --- | --- | --- | --- |
| Form submission | None | N/A | Low | Universal |
| XMLHttpRequest | upload.progress | No | Entire body in memory | Universal |
| Fetch + FormData | None | No | Entire body in memory | Universal |
| Fetch + Stream | None (unreliable) | Yes | O(chunk) | Chrome 105+ only |

The critical gap: Fetch API has no upload progress events. As Jake Archibald documented, using Streams to measure Fetch upload progress gives inaccurate results due to browser buffering—bytes enqueued don’t equal bytes sent. XHR remains the only reliable progress mechanism.
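For reference, a sketch of what stream-based upload measurement looks like with Fetch (Chrome 105+ over HTTP/2 or HTTP/3; the duplex option is required). The counter tracks bytes handed to the browser's buffer, not bytes delivered to the server, which is exactly why it misleads:

async function fetchStreamUpload(file: File, url: string): Promise<Response> {
  let enqueued = 0
  const body = file.stream().pipeThrough(
    new TransformStream<Uint8Array, Uint8Array>({
      transform(chunk, controller) {
        enqueued += chunk.byteLength
        console.log(`enqueued ${enqueued} bytes (buffered, not necessarily sent)`)
        controller.enqueue(chunk)
      },
    }),
  )
  return fetch(url, {
    method: "PUT",
    body,
    duplex: "half", // required for streaming request bodies
  } as RequestInit)
}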

| Factor | Simple Uploader | Production Uploader |
| --- | --- | --- |
| File size | < 10 MB | Any size |
| Network reliability | Stable connection | Intermittent, mobile |
| Concurrent uploads | Single file | Multiple, queued |
| Failure recovery | Restart from beginning | Resume from last chunk |
| Memory usage | O(file size) | O(chunk size) |

Single request upload (XHR + FormData)

How it works:

The entire file is sent in one HTTP request using FormData. XHR provides progress events.

single-upload.ts
interface UploadOptions {
file: File
url: string
onProgress?: (loaded: number, total: number) => void
onComplete?: (response: unknown) => void
onError?: (error: Error) => void
}
function uploadFile({ file, url, onProgress, onComplete, onError }: UploadOptions): () => void {
const xhr = new XMLHttpRequest()
xhr.upload.addEventListener("progress", (e) => {
if (e.lengthComputable) {
onProgress?.(e.loaded, e.total)
}
})
xhr.addEventListener("load", () => {
if (xhr.status >= 200 && xhr.status < 300) {
onComplete?.(JSON.parse(xhr.responseText))
} else {
onError?.(new Error(`Upload failed: ${xhr.status}`))
}
})
xhr.addEventListener("error", () => onError?.(new Error("Network error")))
const formData = new FormData()
formData.append("file", file)
xhr.open("POST", url)
xhr.send(formData)
// Return abort function
return () => xhr.abort()
}

Why FormData over raw Blob:

FormData automatically sets the correct Content-Type: multipart/form-data with boundary. Setting this header manually is error-prone—the boundary must match the body encoding exactly.
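A quick illustration of the pitfall (assuming a File named file from the picker and a hypothetical /upload endpoint):

const form = new FormData()
form.append("file", file)
const xhr = new XMLHttpRequest()
xhr.open("POST", "/upload")
// ❌ Don't set the header yourself: the boundary in a hand-written header won't match the encoded body
// xhr.setRequestHeader("Content-Type", "multipart/form-data")
// ✅ Omit the header; XHR derives "multipart/form-data; boundary=..." from the FormData body
xhr.send(form)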

Performance characteristics:

| Metric | Value |
| --- | --- |
| Memory usage | O(file size) |
| Progress granularity | ~50 ms events |
| Resume capability | None |
| Implementation complexity | Low |

Best for:

  • Files under 5-10 MB
  • Stable network connections
  • Simple use cases without resume requirements

Trade-offs:

  • Pro: Simple implementation, minimal code
  • Pro: Native progress events from XHR
  • Pro: Works in all browsers
  • Con: No resume on failure (restart from beginning)
  • Con: Memory usage scales with file size
  • Con: Large files may time out

Chunked upload (Blob.slice + sequential requests)

How it works:

The file is sliced into chunks using Blob.slice(). Each chunk uploads sequentially, with the server tracking received bytes. On failure, upload resumes from the last successful chunk.

chunked-upload.ts
interface ChunkedUploadOptions {
file: File
uploadUrl: string
chunkSize?: number // Default 5MB
onProgress?: (uploaded: number, total: number) => void
onComplete?: () => void
onError?: (error: Error) => void
}
async function chunkedUpload({
file,
uploadUrl,
chunkSize = 5 * 1024 * 1024,
onProgress,
onComplete,
onError,
}: ChunkedUploadOptions): Promise<void> {
let offset = 0
// Query server for existing offset (resume support)
try {
const headResponse = await fetch(uploadUrl, { method: "HEAD" })
const serverOffset = headResponse.headers.get("Upload-Offset")
if (serverOffset) {
offset = parseInt(serverOffset, 10)
}
} catch {
// No existing upload, start from 0
}
while (offset < file.size) {
const chunk = file.slice(offset, offset + chunkSize)
// Use XHR for per-chunk progress
await new Promise<void>((resolve, reject) => {
const xhr = new XMLHttpRequest()
xhr.upload.addEventListener("progress", (e) => {
if (e.lengthComputable) {
onProgress?.(offset + e.loaded, file.size)
}
})
xhr.addEventListener("load", () => {
if (xhr.status >= 200 && xhr.status < 300) {
resolve()
} else {
reject(new Error(`Chunk upload failed: ${xhr.status}`))
}
})
xhr.addEventListener("error", () => reject(new Error("Network error")))
xhr.open("PATCH", uploadUrl)
xhr.setRequestHeader("Content-Type", "application/offset+octet-stream")
xhr.setRequestHeader("Upload-Offset", String(offset))
xhr.send(chunk)
})
offset += chunk.size
}
onComplete?.()
}

Why Blob.slice() doesn’t copy data:

Per the W3C File API specification, slice() returns a new Blob that references the same underlying data with different start/end positions. This is O(1) regardless of file size—critical for multi-gigabyte files.

Chunk size selection:

| Network | Recommended Chunk Size | Rationale |
| --- | --- | --- |
| Fiber/broadband | 50-100 MB | Maximize throughput, reduce overhead |
| Mobile 4G/5G | 5-10 MB | Balance between progress and recovery |
| Unstable connection | 1-5 MB | Minimize data loss per failure |

AWS S3 requires a minimum chunk size of 5 MB (except the final chunk). Google Drive requires a 256 KB minimum. The tus protocol has no minimum but recommends 5 MB.
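As a rough sketch (the thresholds here are illustrative, not taken from any provider), chunk size can be adapted to measured throughput while respecting provider minimums:

const MIN_CHUNK = 5 * 1024 * 1024 // e.g. the S3 multipart minimum
const MAX_CHUNK = 100 * 1024 * 1024
const TARGET_SECONDS_PER_CHUNK = 10 // caps how much one failed chunk can cost

function nextChunkSize(lastChunkBytes: number, lastChunkMs: number): number {
  const bytesPerSecond = lastChunkBytes / Math.max(lastChunkMs / 1000, 0.001)
  const target = Math.round(bytesPerSecond * TARGET_SECONDS_PER_CHUNK)
  return Math.min(MAX_CHUNK, Math.max(MIN_CHUNK, target))
}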

Performance characteristics:

| Metric | Value |
| --- | --- |
| Memory usage | O(chunk size) |
| Progress granularity | Per-chunk + intra-chunk |
| Resume capability | Full (from last chunk) |
| Implementation complexity | Medium |

Best for:

  • Files over 10 MB
  • Unreliable networks (mobile, international)
  • Applications requiring resume capability

Trade-offs:

  • Pro: Constant memory regardless of file size
  • Pro: Resume from last successful chunk
  • Pro: Fine-grained progress
  • Con: More HTTP requests (overhead)
  • Con: Server must track upload state
  • Con: More complex client and server implementation

tus protocol

How it works:

The tus protocol (resumable uploads) is an open standard with three phases: creation, upload, and completion. The server returns a unique upload URL that persists across sessions.

tus-client.ts
interface TusUploadOptions {
file: File
endpoint: string // Server endpoint for creating uploads
chunkSize?: number
metadata?: Record<string, string>
onProgress?: (uploaded: number, total: number) => void
onComplete?: (uploadUrl: string) => void
onError?: (error: Error) => void
}
class TusUpload {
private uploadUrl: string | null = null
private offset = 0
private aborted = false
constructor(private options: TusUploadOptions) {}
async start(): Promise<void> {
const { file, endpoint, metadata, chunkSize = 5 * 1024 * 1024 } = this.options
// Phase 1: Create upload resource
if (!this.uploadUrl) {
const encodedMetadata = metadata
? Object.entries(metadata)
.map(([k, v]) => `${k} ${btoa(v)}`)
.join(",")
: undefined
const createResponse = await fetch(endpoint, {
method: "POST",
headers: {
"Tus-Resumable": "1.0.0",
"Upload-Length": String(file.size),
...(encodedMetadata && { "Upload-Metadata": encodedMetadata }),
},
})
if (createResponse.status !== 201) {
throw new Error(`Failed to create upload: ${createResponse.status}`)
}
this.uploadUrl = createResponse.headers.get("Location")
if (!this.uploadUrl) {
throw new Error("Server did not return upload URL")
}
}
// Phase 2: Query current offset (for resume)
const headResponse = await fetch(this.uploadUrl, {
method: "HEAD",
headers: { "Tus-Resumable": "1.0.0" },
})
const serverOffset = headResponse.headers.get("Upload-Offset")
this.offset = serverOffset ? parseInt(serverOffset, 10) : 0
// Phase 3: Upload chunks
while (this.offset < file.size && !this.aborted) {
const chunk = file.slice(this.offset, this.offset + chunkSize)
const patchResponse = await fetch(this.uploadUrl, {
method: "PATCH",
headers: {
"Tus-Resumable": "1.0.0",
"Upload-Offset": String(this.offset),
"Content-Type": "application/offset+octet-stream",
},
body: chunk,
})
if (patchResponse.status !== 204) {
throw new Error(`Chunk upload failed: ${patchResponse.status}`)
}
const newOffset = patchResponse.headers.get("Upload-Offset")
this.offset = newOffset ? parseInt(newOffset, 10) : this.offset + chunk.size
this.options.onProgress?.(this.offset, file.size)
}
if (!this.aborted) {
this.options.onComplete?.(this.uploadUrl)
}
}
abort(): void {
this.aborted = true
}
getUploadUrl(): string | null {
return this.uploadUrl
}
}

tus protocol headers:

| Header | Purpose | Required |
| --- | --- | --- |
| Tus-Resumable | Protocol version (1.0.0) | All requests except OPTIONS |
| Upload-Length | Total file size | POST (creation) |
| Upload-Offset | Current byte position | PATCH, HEAD response |
| Upload-Metadata | Key-value pairs (base64 encoded) | Optional |
| Upload-Expires | RFC 9110 datetime | Server response |

tus status codes:

| Code | Meaning |
| --- | --- |
| 201 Created | Upload resource created |
| 204 No Content | Chunk accepted |
| 409 Conflict | Offset mismatch |
| 412 Precondition Failed | Unsupported protocol version |
| 460 | Checksum mismatch (extension) |

Adopters: Cloudflare, Vimeo, Supabase, Transloadit

Trade-offs:

  • Pro: Standardized protocol with multiple server implementations
  • Pro: Cross-session resume (upload URL persists)
  • Pro: Optional checksum verification (extension)
  • Con: No native progress events (Fetch-based)
  • Con: Server must implement the tus protocol
  • Con: More HTTP round-trips than proprietary protocols

| Factor | Single Request | Chunked | tus Protocol |
| --- | --- | --- | --- |
| File size limit | ~100 MB practical | Unlimited | Unlimited |
| Resume capability | None | Same session | Cross-session |
| Progress tracking | Native XHR | Per-chunk | Per-chunk (no intra-chunk) |
| Server complexity | Minimal | Medium | tus implementation |
| Standardization | N/A | Custom | Open standard |
| Implementation effort | Low | Medium | Low (use library) |

[Decision tree: for files under 10 MB, use a single XHR upload. For larger files, use a chunked upload; if cross-session resume is needed, use the tus protocol when the server supports it, otherwise a custom resumable protocol. Within chunked uploads, use XHR per chunk when native progress events are needed, and Fetch per chunk otherwise.]

Standard file input:

<input type="file" accept="image/*,.pdf" multiple />

The accept attribute filters the file picker but is advisory—users can still select any file. Always validate server-side.

Directory selection (non-standard but widely supported):

<input type="file" webkitdirectory />

This selects entire directories. Each File object includes webkitRelativePath with the path relative to the selected directory. Supported in Chrome 6+, Firefox 50+, Safari 11.1+.
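A small sketch of reading those relative paths after a directory selection (the selector and logging are illustrative):

const dirInput = document.querySelector<HTMLInputElement>("input[webkitdirectory]")!
dirInput.addEventListener("change", () => {
  for (const file of Array.from(dirInput.files ?? [])) {
    // e.g. "photos/2024/img_001.jpg", relative to the selected directory
    console.log(file.webkitRelativePath, file.size)
  }
})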

drop-zone.ts
function createDropZone(element: HTMLElement, onFiles: (files: File[]) => void): () => void {
const handleDragOver = (e: DragEvent) => {
e.preventDefault()
e.dataTransfer!.dropEffect = "copy"
element.classList.add("drag-over")
}
const handleDragLeave = () => {
element.classList.remove("drag-over")
}
const handleDrop = (e: DragEvent) => {
e.preventDefault()
element.classList.remove("drag-over")
const files: File[] = []
// DataTransferItemList for directory support
if (e.dataTransfer?.items) {
for (const item of e.dataTransfer.items) {
if (item.kind === "file") {
const file = item.getAsFile()
if (file) files.push(file)
}
}
} else if (e.dataTransfer?.files) {
// Fallback for older browsers
files.push(...Array.from(e.dataTransfer.files))
}
onFiles(files)
}
element.addEventListener("dragover", handleDragOver)
element.addEventListener("dragleave", handleDragLeave)
element.addEventListener("drop", handleDrop)
return () => {
element.removeEventListener("dragover", handleDragOver)
element.removeEventListener("dragleave", handleDragLeave)
element.removeEventListener("drop", handleDrop)
}
}

DataTransfer security:

Per the WHATWG specification, drag data is not available to scripts until the drop event fires. During dragover, dataTransfer.files is empty; you can only check dataTransfer.types to see whether files are present.
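A minimal illustration of that constraint, assuming the same drop-zone element as above:

element.addEventListener("dragover", (e: DragEvent) => {
  // Only the presence of files is knowable here, not their names or contents
  if (e.dataTransfer?.types.includes("Files")) {
    e.preventDefault() // signal that dropping is allowed
  }
})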

File type validation (magic bytes):

The file.type property comes from file extension mapping, not actual content. A .jpg renamed to .png reports image/png. For security-sensitive applications, read the file header:

magic-bytes-validation.ts
const MAGIC_SIGNATURES: Record<string, number[]> = {
"image/jpeg": [0xff, 0xd8, 0xff],
"image/png": [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a],
"image/gif": [0x47, 0x49, 0x46, 0x38], // "GIF8"
"image/webp": [0x52, 0x49, 0x46, 0x46], // "RIFF" (check offset 8 for "WEBP")
"application/pdf": [0x25, 0x50, 0x44, 0x46], // "%PDF"
}
async function detectFileType(file: File): Promise<string | null> {
const slice = file.slice(0, 12) // Read first 12 bytes
const buffer = await slice.arrayBuffer()
const bytes = new Uint8Array(buffer)
for (const [mimeType, signature] of Object.entries(MAGIC_SIGNATURES)) {
if (signature.every((byte, i) => bytes[i] === byte)) {
// Special case: WebP requires additional check at offset 8
if (mimeType === "image/webp") {
const webpMarker = new TextDecoder().decode(bytes.slice(8, 12))
if (webpMarker !== "WEBP") continue
}
return mimeType
}
}
// Fallback to extension-based type
return file.type || null
}

Image dimension validation:

dimension-validation.ts
async function validateImageDimensions(
file: File,
maxWidth: number,
maxHeight: number,
): Promise<{ width: number; height: number }> {
const url = URL.createObjectURL(file)
try {
const img = await new Promise<HTMLImageElement>((resolve, reject) => {
const image = new Image()
image.onload = () => resolve(image)
image.onerror = () => reject(new Error("Failed to load image"))
image.src = url
})
if (img.width > maxWidth || img.height > maxHeight) {
throw new Error(`Image ${img.width}x${img.height} exceeds max ${maxWidth}x${maxHeight}`)
}
return { width: img.width, height: img.height }
} finally {
URL.revokeObjectURL(url) // Always clean up
}
}

Per OWASP File Upload Cheat Sheet:

SVG files can execute JavaScript:

<!-- Malicious SVG -->
<svg xmlns="http://www.w3.org/2000/svg">
<script>alert('XSS')</script>
</svg>

SVGs can contain <script> tags, event handlers (onload, onclick), <foreignObject> with HTML, and xlink:href with javascript: URLs. Mitigation: sanitize SVGs server-side or convert them to raster formats.

Filename attacks:

  • Double extensions: malware.php.jpg
  • Null bytes: file.php%00.jpg
  • Path traversal: ../../../etc/passwd

Always sanitize filenames server-side—generate UUIDs rather than preserving user-provided names.
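A sketch of that idea, expressed in TypeScript with illustrative rules (the exact sanitization policy is application-specific and belongs on the server):

// Store under a generated UUID; keep only a sanitized name for display
function storedFileName(originalName: string): { storedName: string; displayName: string } {
  const ext = originalName.includes(".") ? originalName.split(".").pop()!.toLowerCase() : ""
  const safeExt = /^[a-z0-9]{1,10}$/.test(ext) ? `.${ext}` : ""
  const displayName = originalName.replace(/[^\w.\- ]+/g, "_").slice(0, 255)
  return { storedName: `${crypto.randomUUID()}${safeExt}`, displayName }
}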

Content-Type mismatch:

A file can have .jpg extension, image/jpeg Content-Type header, but actually be a PHP script. Validate content on the server by checking magic bytes, not just headers.

| Aspect | createObjectURL | readAsDataURL |
| --- | --- | --- |
| Speed | Synchronous, instant | Asynchronous, slower |
| Memory | URL reference only | Full base64 in memory |
| Output | blob:origin/uuid | data:mime;base64,... |
| Cleanup | Manual revokeObjectURL() | Automatic (GC) |
| Large files | Better | Memory intensive |

Recommendation: use createObjectURL for previews, and always revoke the URL once the preview is no longer needed.

image-preview.ts
function createImagePreview(file: File, imgElement: HTMLImageElement): () => void {
const url = URL.createObjectURL(file)
imgElement.src = url
// Return cleanup function
return () => URL.revokeObjectURL(url)
}
// Usage with cleanup
const cleanup = createImagePreview(file, previewImg)
// Later, when preview no longer needed:
cleanup()

For large images, generate thumbnails in a Web Worker to avoid blocking the main thread:

thumbnail-worker.ts
// thumbnail-worker.ts
self.addEventListener("message", async (e: MessageEvent<File>) => {
const file = e.data
const maxSize = 200
// createImageBitmap doesn't block and works in workers
const bitmap = await createImageBitmap(file)
// Calculate scaled dimensions
let { width, height } = bitmap
if (width > height) {
if (width > maxSize) {
height = (height * maxSize) / width
width = maxSize
}
} else {
if (height > maxSize) {
width = (width * maxSize) / height
height = maxSize
}
}
// OffscreenCanvas enables canvas operations in workers
const canvas = new OffscreenCanvas(width, height)
const ctx = canvas.getContext("2d")!
ctx.drawImage(bitmap, 0, 0, width, height)
const blob = await canvas.convertToBlob({ type: "image/jpeg", quality: 0.8 })
self.postMessage(blob)
})
main-thread.ts
// main-thread.ts
// Bundlers resolve the worker module from this URL; without one, point at the compiled .js file
const worker = new Worker(new URL("./thumbnail-worker.ts", import.meta.url), { type: "module" })
function generateThumbnail(file: File): Promise<Blob> {
return new Promise((resolve) => {
worker.onmessage = (e) => resolve(e.data)
worker.postMessage(file)
})
}

Performance impact:

  • Main-thread canvas decode of a 20 MP JPEG: ~100-200 ms (causes jank)
  • Web Worker with OffscreenCanvas: same decode time, but non-blocking

For non-image files, use file type icons based on MIME type or extension:

file-icon.ts
const FILE_ICONS: Record<string, string> = {
"application/pdf": "pdf-icon.svg",
"application/zip": "archive-icon.svg",
"application/x-zip-compressed": "archive-icon.svg",
"text/plain": "text-icon.svg",
"video/": "video-icon.svg",
"audio/": "audio-icon.svg",
}
function getFileIcon(file: File): string {
// Check exact MIME type
if (FILE_ICONS[file.type]) {
return FILE_ICONS[file.type]
}
// Check MIME type prefix (video/, audio/)
for (const [prefix, icon] of Object.entries(FILE_ICONS)) {
if (prefix.endsWith("/") && file.type.startsWith(prefix)) {
return icon
}
}
return "generic-file-icon.svg"
}
progress-tracking.ts
interface UploadProgress {
loaded: number
total: number
percent: number
speed: number // bytes/second
remaining: number // seconds
}
class ProgressTracker {
private startTime = Date.now()
private samples: Array<{ time: number; loaded: number }> = []
constructor(private total: number) {}
update(loaded: number): UploadProgress {
const now = Date.now()
this.samples.push({ time: now, loaded })
// Keep last 5 seconds of samples for speed calculation
const cutoff = now - 5000
this.samples = this.samples.filter((s) => s.time > cutoff)
// Calculate speed from samples
let speed = 0
if (this.samples.length >= 2) {
const oldest = this.samples[0]
const elapsed = (now - oldest.time) / 1000
const bytesTransferred = loaded - oldest.loaded
speed = elapsed > 0 ? bytesTransferred / elapsed : 0
}
const remaining = speed > 0 ? (this.total - loaded) / speed : Infinity
return {
loaded,
total: this.total,
percent: (loaded / this.total) * 100,
speed,
remaining,
}
}
}
upload-queue.ts
type UploadStatus = "pending" | "uploading" | "completed" | "failed"
interface QueuedUpload {
id: string
file: File
status: UploadStatus
progress: number
error?: Error
}
class UploadQueue {
private queue: QueuedUpload[] = []
private concurrency: number
private activeCount = 0
private onUpdate?: (queue: QueuedUpload[]) => void
constructor(options: { concurrency?: number; onUpdate?: (queue: QueuedUpload[]) => void }) {
this.concurrency = options.concurrency ?? 3
this.onUpdate = options.onUpdate
}
add(files: File[]): void {
const newItems = files.map((file) => ({
id: crypto.randomUUID(),
file,
status: "pending" as UploadStatus,
progress: 0,
}))
this.queue.push(...newItems)
this.notify()
// Fill every free concurrency slot, not just one
for (let i = 0; i < this.concurrency; i++) {
this.processNext()
}
}
private async processNext(): Promise<void> {
if (this.activeCount >= this.concurrency) return
const next = this.queue.find((item) => item.status === "pending")
if (!next) return
this.activeCount++
next.status = "uploading"
this.notify()
try {
await this.uploadFile(next)
next.status = "completed"
next.progress = 100
} catch (error) {
next.status = "failed"
next.error = error as Error
}
this.activeCount--
this.notify()
this.processNext()
}
private async uploadFile(item: QueuedUpload): Promise<void> {
// Implementation calls actual upload function
// Updates item.progress during upload
}
private notify(): void {
this.onUpdate?.(this.queue)
}
cancel(id: string): void {
const item = this.queue.find((q) => q.id === id)
if (item && item.status === "pending") {
this.queue = this.queue.filter((q) => q.id !== id)
this.notify()
}
}
retry(id: string): void {
const item = this.queue.find((q) => q.id === id)
if (item && item.status === "failed") {
item.status = "pending"
item.progress = 0
item.error = undefined
this.notify()
this.processNext()
}
}
}

Retry with exponential backoff:

retry-logic.ts
async function uploadWithRetry<T>(
uploadFn: () => Promise<T>,
options: { maxRetries?: number; baseDelay?: number } = {},
): Promise<T> {
const { maxRetries = 3, baseDelay = 1000 } = options
let lastError: Error
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await uploadFn()
} catch (error) {
lastError = error as Error
// Don't retry on client errors (4xx); this assumes uploadFn throws the failed Response
if (error instanceof Response && error.status >= 400 && error.status < 500) {
throw error
}
if (attempt < maxRetries) {
const delay = baseDelay * Math.pow(2, attempt)
const jitter = delay * 0.2 * Math.random()
await new Promise((r) => setTimeout(r, delay + jitter))
}
}
}
throw lastError!
}

Error categorization:

| Error Type | Retry? | User Message |
| --- | --- | --- |
| Network error | Yes | "Connection lost. Retrying…" |
| 408, 429, 500, 502, 503, 504 | Yes | "Server busy. Retrying…" |
| 400 Bad Request | No | "Invalid file" |
| 401, 403 | No | "Permission denied" |
| 413 Payload Too Large | No | "File too large" |
| 415 Unsupported Media Type | No | "File type not allowed" |
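A small helper sketch that encodes the table above (a null status stands for a network-level failure; the messages are the ones listed):

function classifyUploadError(status: number | null): { retry: boolean; message: string } {
  if (status === null) return { retry: true, message: "Connection lost. Retrying…" }
  if ([408, 429, 500, 502, 503, 504].includes(status)) return { retry: true, message: "Server busy. Retrying…" }
  if (status === 413) return { retry: false, message: "File too large" }
  if (status === 415) return { retry: false, message: "File type not allowed" }
  if (status === 401 || status === 403) return { retry: false, message: "Permission denied" }
  if (status === 400) return { retry: false, message: "Invalid file" }
  return { retry: status >= 500, message: `Upload failed (${status})` }
}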

For cross-session resume, store upload state in IndexedDB:

upload-state-store.ts
interface StoredUploadState {
id: string
fileName: string
fileSize: number
uploadUrl: string
offset: number
createdAt: number
// Store file reference if same session
file?: File
}
class UploadStateStore {
private db: IDBDatabase | null = null
async init(): Promise<void> {
return new Promise((resolve, reject) => {
const request = indexedDB.open("upload-state", 1)
request.onupgradeneeded = (e) => {
const db = (e.target as IDBOpenDBRequest).result
db.createObjectStore("uploads", { keyPath: "id" })
}
request.onsuccess = (e) => {
this.db = (e.target as IDBOpenDBRequest).result
resolve()
}
request.onerror = () => reject(request.error)
})
}
async save(state: StoredUploadState): Promise<void> {
if (!this.db) throw new Error("DB not initialized")
return new Promise((resolve, reject) => {
const tx = this.db!.transaction("uploads", "readwrite")
const store = tx.objectStore("uploads")
const request = store.put(state)
request.onsuccess = () => resolve()
request.onerror = () => reject(request.error)
})
}
async getAll(): Promise<StoredUploadState[]> {
if (!this.db) throw new Error("DB not initialized")
return new Promise((resolve, reject) => {
const tx = this.db!.transaction("uploads", "readonly")
const store = tx.objectStore("uploads")
const request = store.getAll()
request.onsuccess = () => resolve(request.result)
request.onerror = () => reject(request.error)
})
}
async delete(id: string): Promise<void> {
if (!this.db) throw new Error("DB not initialized")
return new Promise((resolve, reject) => {
const tx = this.db!.transaction("uploads", "readwrite")
const store = tx.objectStore("uploads")
const request = store.delete(id)
request.onsuccess = () => resolve()
request.onerror = () => reject(request.error)
})
}
}

Storage limits:

| Browser | IndexedDB Limit |
| --- | --- |
| Chrome | 60% of disk space |
| Firefox | 10% of disk or 10 GiB |
| Safari | ~60% of disk (browser), ~15% (WebView) |
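Before persisting upload state, the Storage API can report how much quota remains; a minimal sketch:

async function hasStorageHeadroom(requiredBytes: number): Promise<boolean> {
  if (!("storage" in navigator) || !navigator.storage.estimate) return true // API unavailable; assume OK
  const { usage = 0, quota = 0 } = await navigator.storage.estimate()
  return quota - usage > requiredBytes
}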

Avoid:

// ❌ Loads entire file into memory
const data = await file.arrayBuffer()
upload(data)

Prefer:

// ✅ Constant memory with chunking
for (let offset = 0; offset < file.size; offset += chunkSize) {
const chunk = file.slice(offset, offset + chunkSize)
await uploadChunk(chunk)
}

Blob URL cleanup pattern:

// Track all created URLs
const activeUrls = new Set<string>()
function createTrackedUrl(blob: Blob): string {
const url = URL.createObjectURL(blob)
activeUrls.add(url)
return url
}
function revokeUrl(url: string): void {
URL.revokeObjectURL(url)
activeUrls.delete(url)
}
// Cleanup all on component unmount
function revokeAll(): void {
activeUrls.forEach((url) => URL.revokeObjectURL(url))
activeUrls.clear()
}

Challenge: Billions of files, many partially identical (document versions, moved files).

Approach:

  • Files split into 4 MB blocks
  • Each block hashed client-side
  • Server checks which blocks already exist
  • Only upload missing blocks
  • Server assembles file from block references

Key insight: A 100 MB file with one changed paragraph uploads only ~4 MB.

Stack:

  • Client: Content-addressable chunking
  • Backend: Block storage with reference counting
  • Protocol: Custom HTTP API with block-level operations
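A sketch of the client-side hashing step using the Web Crypto API (the 4 MB block size follows the description above; the endpoint that reports which blocks already exist is assumed, not Dropbox's actual API):

const BLOCK_SIZE = 4 * 1024 * 1024

async function hashBlocks(file: File): Promise<string[]> {
  const hashes: string[] = []
  for (let offset = 0; offset < file.size; offset += BLOCK_SIZE) {
    const buffer = await file.slice(offset, offset + BLOCK_SIZE).arrayBuffer()
    const digest = await crypto.subtle.digest("SHA-256", buffer)
    hashes.push(
      Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join(""),
    )
  }
  return hashes
}
// The client sends `hashes` to the server, receives the indexes of missing blocks,
// and uploads only those blocks.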

Approach:

  • Small files (<5 MB): Single request
  • Large files: Resumable upload session

Protocol details:

  • Session URI valid for 1 week
  • 256 KB minimum chunk size
  • Content-Range header specifies byte range
  • 308 Resume Incomplete status for continuation

Error recovery:

  1. On failure, query session with empty PUT
  2. Server responds with Range header showing received bytes
  3. Client resumes from that offset
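A sketch of a single chunk PUT against such a resumable session URI, following the Content-Range and 308 flow described above (illustrative, not Google's client library):

async function putResumableChunk(
  sessionUri: string,
  file: File,
  offset: number,
  chunkSize: number,
): Promise<number> {
  const end = Math.min(offset + chunkSize, file.size)
  const res = await fetch(sessionUri, {
    method: "PUT",
    headers: { "Content-Range": `bytes ${offset}-${end - 1}/${file.size}` },
    body: file.slice(offset, end),
  })
  if (res.status === 308) {
    // 308 Resume Incomplete: the Range header reports received bytes, e.g. "bytes=0-5242879"
    const range = res.headers.get("Range")
    return range ? parseInt(range.split("-")[1], 10) + 1 : end
  }
  if (res.ok) return file.size // final chunk accepted; upload complete
  throw new Error(`Chunk upload failed: ${res.status}`)
}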

Approach:

  • Phase 1: files.getUploadURLExternal - Get temporary upload URL
  • Phase 2: Upload to URL, then files.completeUploadExternal

Why two phases:

  • Upload URL keeps API tokens server-side
  • Client uploads directly to storage, bypassing app servers
  • Reduces load on API servers

Limits: 1 GB max (1 MB for code snippets)

Approach:

  • Files upload to CDN
  • Images/videos processed server-side (thumbnails, transcoding)
  • Proxied through CDN with size variants

Client-side:

  • Progress tracking per file
  • Concurrent upload limit (typically 3)
  • Retry with exponential backoff
accessible-dropzone.tsx
import { useRef, type KeyboardEvent } from "react";
function AccessibleDropzone({ onFiles }: { onFiles: (files: File[]) => void }) {
const inputRef = useRef<HTMLInputElement>(null);
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
inputRef.current?.click();
}
};
return (
<div
role="button"
tabIndex={0}
onClick={(e) => { if (e.target !== inputRef.current) inputRef.current?.click() }}
onKeyDown={handleKeyDown}
aria-label="Upload files. Press Enter or Space to open file picker, or drag and drop files here."
>
<input
ref={inputRef}
type="file"
multiple
className="visually-hidden"
onChange={(e) => onFiles(Array.from(e.target.files || []))}
/>
<span aria-hidden="true">Drag files here or click to upload</span>
</div>
);
}
progress-announcements.tsx
import { useCallback, useState } from "react";
function useProgressAnnouncement() {
const [announcement, setAnnouncement] = useState('');
const announce = useCallback((progress: number, fileName: string) => {
// Announce at key milestones to avoid noise
if (progress === 0) {
setAnnouncement(`Starting upload of ${fileName}`);
} else if (progress === 100) {
setAnnouncement(`${fileName} upload complete`);
} else if (progress % 25 === 0) {
setAnnouncement(`${fileName}: ${progress}% uploaded`);
}
}, []);
return {
announce,
AriaLive: () => (
<div aria-live="polite" aria-atomic="true" className="visually-hidden">
{announcement}
</div>
)
};
}
error-state.tsx
function UploadError({ error, onRetry }: { error: Error; onRetry: () => void }) {
return (
<div role="alert" aria-live="assertive">
<span>Upload failed: {error.message}</span>
<button onClick={onRetry} aria-label="Retry failed upload">
Retry
</button>
</div>
);
}

File upload design requires choosing between simplicity and robustness based on file size expectations and network reliability:

  • Under 5-10 MB: Single XHR request with FormData provides native progress events with minimal complexity.
  • Over 10 MB: Chunked uploads with Blob.slice() keep memory constant and enable resume.
  • Unreliable networks: tus protocol or similar resumable protocol enables cross-session resume.

The key constraints driving these decisions:

  • Fetch has no upload progress events—XHR remains necessary for progress tracking
  • Blob.slice() is O(1) and safe for any file size
  • URL.createObjectURL() leaks memory without explicit cleanup
  • Client-side file.type is unreliable—validate with magic bytes for security

Production implementations (Dropbox, Google Drive, Slack) universally use chunked uploads for large files, with chunk sizes between 5-256 MB depending on their network assumptions.

  • Browser File API (File, Blob, FileReader)
  • XMLHttpRequest or Fetch API
  • Basic understanding of HTTP multipart encoding

| Term | Definition |
| --- | --- |
| Chunk | A slice of the file uploaded independently |
| Offset | Current byte position in the file |
| Resume | Continuing upload from last successful offset |
| tus | Open protocol for resumable uploads (tus.io) |
| Magic bytes | File signature bytes that identify true file type |
| Blob URL | Browser-internal URL referencing in-memory data |

  • Single-request XHR upload works for files under 5-10 MB with native progress events
  • Chunked uploads with Blob.slice() enable constant memory usage and resume capability
  • tus protocol provides standardized cross-session resume with multiple server implementations
  • Always validate file types server-side—client-side file.type is based on extension only
  • Clean up URL.createObjectURL() references to prevent memory leaks
  • Use OffscreenCanvas in Web Workers for thumbnail generation to avoid main thread blocking
  • Implement exponential backoff with jitter for retry logic
