Frontend System Design

Bundle Splitting Strategies

Modern JavaScript applications ship megabytes of code by default. Without bundle splitting, users download, parse, and execute the entire application before seeing anything interactive—regardless of which features they’ll actually use. Bundle splitting transforms monolithic builds into targeted delivery: load the code for the current route immediately, defer everything else until needed. The payoff is substantial—30-60% reduction in initial bundle size translates directly to faster Time to Interactive (TTI) and improved Core Web Vitals.

With splitting: user requests /home → downloads the 150KB home chunk → parses and executes only the home code → page interactive in 1-2s → other routes prefetch in the background.

Without splitting: user requests /home → downloads the full 2MB bundle → parses all code → executes all code → page interactive in 4-6s.

Bundle splitting reduces initial payload by loading only what's needed for the current route, deferring the rest.

Bundle splitting is a build-time optimization that divides application code into separate chunks loaded on demand. The core insight: users access one route at a time, so they should only pay the download cost for that route’s code.

Three fundamental strategies exist, each addressing different optimization goals:

  1. Entry splitting: Create separate bundles per entry point (multi-page apps, micro-frontends). Each entry gets independent dependency resolution.
  2. Route/component splitting: Use dynamic import() to load code when users navigate or interact. The dominant pattern for SPAs.
  3. Vendor splitting: Isolate node_modules into stable chunks. Dependencies change less frequently than application code—separate chunks improve cache hit rates across deployments.

Critical constraints shape implementation:

  • Static analysis requirement: Bundlers must determine chunk boundaries at build time. Fully dynamic import paths (e.g., import(userInput)) cannot be split.
  • HTTP/2 sweet spot: 10-30 chunks balance caching benefits against connection overhead. Too few chunks = poor cache utilization; too many = HTTP/2 stream saturation.
  • Compression effectiveness: Smaller chunks compress less efficiently than larger ones. A 10KB chunk may only compress 50%; a 100KB chunk may compress 70%.
  • Prefetch/preload timing: Splitting without intelligent loading creates waterfalls. Prefetching predicted routes and preloading critical resources recovers the latency cost.

Build tools handle these trade-offs differently: Webpack’s SplitChunksPlugin offers granular control with complex configuration; Vite/Rollup favor convention over configuration with sensible defaults; esbuild prioritizes speed with simpler splitting semantics.

JavaScript execution blocks the main thread. Large bundles create measurable user experience degradation:

Parse time: V8 parses JavaScript at roughly 1MB/second on mobile devices (benchmarked on mid-range Android). A 2MB bundle = 2 seconds of parse time before any code executes.

Execution time: Module initialization (top-level code, class definitions, side effects) runs synchronously. Heavy initialization chains delay interactivity.

Memory pressure: Each loaded module consumes memory for its AST, compiled bytecode, and runtime objects. On memory-constrained mobile devices, loading unused code competes with application state for limited heap.

| Metric | Target | Bundle Impact |
| --- | --- | --- |
| LCP (Largest Contentful Paint) | < 2.5s | Initial JS blocks render; smaller bundles = faster LCP |
| INP (Interaction to Next Paint) | < 200ms | Large bundles increase main thread blocking time |
| TTI (Time to Interactive) | < 3.8s | Direct correlation with JavaScript payload size |

Rule of thumb: Keep initial JavaScript under 100KB gzipped for mobile-first applications. Each additional 100KB adds roughly 1 second to TTI on 3G connections.
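The rule of thumb lends itself to a quick back-of-envelope check. A sketch (the per-100KB cost constant is the article's 3G estimate, not a measured value, and the helper names are illustrative):

```typescript
// Rough cost model: each 100KB of gzipped JavaScript adds about 1 second
// of TTI on a 3G connection, per the rule of thumb above. Illustrative
// only: real budgets should come from field measurement.
const SECONDS_PER_100KB_ON_3G = 1;

function estimatedTtiCostSeconds(gzippedKb: number): number {
  return (gzippedKb / 100) * SECONDS_PER_100KB_ON_3G;
}

function withinMobileBudget(gzippedKb: number, budgetKb = 100): boolean {
  return gzippedKb <= budgetKb;
}

console.log(estimatedTtiCostSeconds(250)); // 2.5 (seconds of added TTI)
console.log(withinMobileBudget(250)); // false: exceeds the 100KB guideline
```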

| Application Type | Without Splitting | With Splitting | Reduction |
| --- | --- | --- | --- |
| Marketing site | 50-100KB | 20-40KB | 40-60% |
| Dashboard SPA | 500KB-1MB | 100-200KB | 60-80% |
| E-commerce | 1-2MB | 200-400KB | 70-80% |
| Complex SaaS | 2-5MB | 300-600KB | 80-90% |

Path 1: Route-Based Splitting

How it works:

Each route in the application becomes a separate chunk. When users navigate, the router triggers a dynamic import that loads the route’s code on demand.

route-based-splitting.tsx

```tsx
// …
import { lazy, Suspense } from 'react';
import { createBrowserRouter, RouterProvider } from 'react-router-dom';

// Dynamic imports create separate chunks at build time.
// Webpack names chunks based on the file path or magic comments.
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
const Analytics = lazy(() =>
  import(/* webpackChunkName: "analytics" */ './pages/Analytics')
);

const router = createBrowserRouter([
  {
    path: '/',
    element: (
      <Suspense fallback={<PageSkeleton />}>
        <Home />
      </Suspense>
    ),
  },
  {
    path: '/dashboard',
    element: (
      <Suspense fallback={<PageSkeleton />}>
        <Dashboard />
      </Suspense>
    ),
  },
  // Additional routes...
]);

function App() {
  return <RouterProvider router={router} />;
}
// …
```

Why this works:

  • import() returns a Promise that resolves to the module
  • Bundlers perform static analysis to identify import boundaries
  • Each import() call becomes a separate chunk in the build output
  • React’s lazy() integrates Promises with Suspense boundaries

Framework implementations:

| Framework | Mechanism | Configuration |
| --- | --- | --- |
| Next.js | Automatic per-page splitting | Zero config; each pages/*.tsx is a chunk |
| React Router | lazy() + Suspense | Manual setup per route |
| Vue Router | () => import('./Page.vue') | Built into router config |
| Angular | loadChildren in route config | Built into router module |
| SvelteKit | +page.svelte convention | Automatic per-route |

Performance characteristics:

| Metric | Value |
| --- | --- |
| Typical reduction | 30-60% of initial bundle |
| Chunk count | 1 per route + shared chunks |
| Navigation latency | 50-200ms for chunk fetch |
| Caching | Unchanged routes stay cached |

Best for:

  • SPAs with distinct route boundaries
  • Applications where users typically access a subset of features
  • Progressive disclosure patterns (landing → dashboard → settings)

Trade-offs:

  • Navigation delay for first visit to each route
  • Requires Suspense boundaries for loading states
  • Shared dependencies across routes need vendor splitting

Real-world example:

Airbnb’s web application uses route-based splitting extensively. Their listing search page, booking flow, and host dashboard are separate chunks. Engineers measured a 67% reduction in initial JavaScript and 20% improvement in TTI after implementing route splitting with prefetching.

Path 2: Component-Based Splitting

How it works:

Heavy components within a route are split and loaded on demand—triggered by user interaction or visibility rather than navigation.

component-splitting.tsx

```tsx
// …
import { lazy, Suspense, useState } from 'react';

// Heavy visualization library only loads when the chart is rendered
const AnalyticsChart = lazy(() =>
  import(/* webpackChunkName: "chart" */ './AnalyticsChart')
);
// Rich text editor loads when the user clicks "Edit"
const RichTextEditor = lazy(() =>
  import(/* webpackChunkName: "editor" */ './RichTextEditor')
);

function Dashboard() {
  const [showChart, setShowChart] = useState(false);
  const [editing, setEditing] = useState(false);
  return (
    <div>
      <button onClick={() => setShowChart(true)}>Show Analytics</button>
      {showChart && (
        <Suspense fallback={<ChartSkeleton />}>
          <AnalyticsChart data={analyticsData} />
        </Suspense>
      )}
      <button onClick={() => setEditing(true)}>Edit Content</button>
      {/* … */}
      {editing && (
        <Suspense fallback={<EditorSkeleton />}>
          <RichTextEditor content={content} />
        </Suspense>
      )}
    </div>
  );
}
```

Typical candidates for component splitting:

| Component Type | Example Libraries | Typical Size |
| --- | --- | --- |
| Rich text editors | Slate, TipTap, Quill | 100-300KB |
| Charting | D3, Chart.js, Recharts | 50-200KB |
| Code editors | Monaco, CodeMirror | 500KB-2MB |
| PDF viewers | pdf.js | 400-600KB |
| Maps | Mapbox, Google Maps | 100-500KB |
| Markdown | marked + highlight.js | 50-150KB |

Trigger strategies:

  1. User interaction: Load when user clicks a button or tab
  2. Intersection Observer: Load when component scrolls into viewport
  3. Hover intent: Prefetch on hover, load on click
  4. Idle callback: Load during browser idle time

intersection-observer-loading.tsx

```tsx
// …
import { lazy, Suspense, useEffect, useRef, useState } from 'react';

const HeavyComponent = lazy(() => import('./HeavyComponent'));

function LazyOnVisible({ children }: { children: React.ReactNode }) {
  const ref = useRef<HTMLDivElement>(null);
  const [isVisible, setIsVisible] = useState(false);
  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) {
          setIsVisible(true);
          observer.disconnect();
        }
      },
      { rootMargin: '200px' } // Start loading 200px before visible
    );
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, []);
  // …
  return (
    <div ref={ref}>
      {isVisible ? children : <Placeholder />}
    </div>
  );
}
```
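Trigger strategy 4 (idle callback) can be sketched framework-free. `prefetchWhenIdle` is a hypothetical helper; `importFn` stands in for a real dynamic import such as `() => import('./pages/Reports')`, and the setTimeout fallback covers engines without requestIdleCallback (notably Safari):

```typescript
// Trigger strategy 4: start a chunk's dynamic import during idle time so
// it is cached by the time the user needs it. Fire-and-forget: a prefetch
// failure is harmless because the real import on demand will retry.
type ImportFn = () => Promise<unknown>;

function prefetchWhenIdle(importFn: ImportFn): void {
  const ric = (globalThis as { requestIdleCallback?: (cb: () => void) => void })
    .requestIdleCallback;
  // Fall back to a short timeout where requestIdleCallback is unavailable.
  const schedule = ric ?? ((cb: () => void) => setTimeout(cb, 1));
  schedule(() => {
    importFn().catch(() => {}); // Ignore prefetch errors
  });
}

// Usage with a hypothetical route module:
// prefetchWhenIdle(() => import('./pages/Reports'));
```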

Trade-offs:

  • Additional network requests on user interaction
  • Requires careful loading state design
  • Can feel slow without prefetching

Path 3: Vendor Splitting

How it works:

Third-party dependencies (node_modules) are extracted into separate chunks. Application code changes frequently; dependencies don’t. Separate chunks maximize cache efficiency.

webpack.config.js - vendor splitting

```js
// webpack.config.js
// …
module.exports = {
  optimization: {
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        // Framework chunks (React, Vue, etc.) - very stable
        framework: {
          test: /[\\/]node_modules[\\/](react|react-dom|scheduler)[\\/]/,
          name: "framework",
          priority: 40,
          enforce: true,
        },
        // Large libraries get their own chunks
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            // Get the library name for chunk naming
            const match = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)
            const packageName = match ? match[1] : "vendors"
            return `vendor-${packageName.replace("@", "")}`
          },
          priority: 20,
          minSize: 50000, // Only split if > 50KB
        },
        // Shared application code
        commons: {
          name: "commons",
          minChunks: 2, // Used by at least 2 chunks
          priority: 10,
          // …
        },
      },
    },
  },
}
```

Webpack SplitChunksPlugin defaults (v5):

| Option | Default | Effect |
| --- | --- | --- |
| chunks | 'async' | Only split async imports (change to 'all' for sync too) |
| minSize | 20000 | Only create chunks > 20KB |
| maxSize | 0 | No upper limit (set to enable chunk splitting) |
| minChunks | 1 | Module must be shared by N chunks to split |
| maxAsyncRequests | 30 | Max parallel requests for on-demand loads |
| maxInitialRequests | 30 | Max parallel requests for entry points |

Vite/Rollup approach:

vite.config.js - manual chunks

```js
// vite.config.js
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          // Group related dependencies
          "vendor-react": ["react", "react-dom", "react-router-dom"],
          "vendor-ui": ["@radix-ui/react-dialog", "@radix-ui/react-dropdown-menu"],
          "vendor-utils": ["lodash-es", "date-fns"],
        },
      },
    },
  },
}
```

Cache efficiency math:

Assume weekly deployments with application code changes each time:

| Strategy | Cache Hit Rate | Bandwidth Saved |
| --- | --- | --- |
| Single bundle | 0% (any change invalidates) | 0% |
| App + vendors | ~70% (vendors rarely change) | 40-60% |
| Granular chunks | ~85% (only changed chunks reload) | 60-80% |
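The savings follow from simple arithmetic: on each deployment, a returning user re-downloads only the chunks whose content hash changed. A sketch with made-up chunk sizes:

```typescript
// Model a deployment as a set of chunks, each flagged if its content
// (and therefore its content hash) changed. Returning users re-download
// only the changed chunks; sizes here are illustrative.
interface Chunk { name: string; sizeKb: number; changed: boolean }

function redownloadKb(chunks: Chunk[]): number {
  return chunks.filter((c) => c.changed).reduce((sum, c) => sum + c.sizeKb, 0);
}

// Single bundle: any app change invalidates everything.
const singleBundle: Chunk[] = [{ name: "bundle", sizeKb: 800, changed: true }];

// App + vendor split: a weekly deploy usually touches app code only.
const splitBundles: Chunk[] = [
  { name: "app", sizeKb: 200, changed: true },
  { name: "framework", sizeKb: 150, changed: false },
  { name: "vendors", sizeKb: 450, changed: false },
];

console.log(redownloadKb(singleBundle)); // 800
console.log(redownloadKb(splitBundles)); // 200, a 75% bandwidth saving
```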

Trade-offs:

  • More HTTP requests (mitigated by HTTP/2)
  • Configuration complexity
  • Chunk naming affects cache keys

Path 4: Dynamic Entry Points (Module Federation)

How it works:

Multiple applications share code at runtime. Each application builds independently but can consume modules from other builds dynamically.

webpack.config.js - module federation

```js
// Host application webpack.config.js
// …
const { ModuleFederationPlugin } = require("webpack").container

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: "host",
      remotes: {
        // Load the dashboard app's exposed modules at runtime
        dashboard: "dashboard@https://dashboard.example.com/remoteEntry.js",
        analytics: "analytics@https://analytics.example.com/remoteEntry.js",
      },
      shared: {
        react: { singleton: true, requiredVersion: "^18.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
}

// Remote application webpack.config.js
module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      // …
      name: "dashboard",
      filename: "remoteEntry.js",
      exposes: {
        "./DashboardWidget": "./src/DashboardWidget",
      },
      shared: {
        react: { singleton: true },
        "react-dom": { singleton: true },
      },
    }),
  ],
}
```

Use cases:

  • Micro-frontends with independent deployment
  • Platform teams providing shared components
  • A/B testing different component versions
  • White-label applications with customizable modules

Trade-offs:

  • Runtime dependency resolution adds latency
  • Version conflicts require careful management
  • Build configuration complexity
  • Debugging across federated boundaries is harder

Real-world example:

Spotify’s web player uses a micro-frontend architecture where different squads own different features (playlists, search, player controls). Module Federation enables independent deployments while sharing React and design system components.

| Factor | Route Splitting | Component Splitting | Vendor Splitting | Module Federation |
| --- | --- | --- | --- | --- |
| Setup complexity | Low | Low | Medium | High |
| Build config changes | Minimal | Minimal | Moderate | Significant |
| Runtime overhead | Low | Low | None | Medium |
| Cache benefits | Medium | Medium | High | High |
| Team scalability | Good | Good | Good | Excellent |
| Best for | SPAs | Heavy components | All apps | Micro-frontends |

A decision flow for choosing a strategy:

  1. Need bundle splitting → multiple teams/deployments? Yes: use Module Federation. No: continue.
  2. Application type? SPA with routes: start with route-based splitting. MPA: start with entry point splitting.
  3. For SPAs, heavy components on some routes? Yes: add component splitting. No: route splitting is sufficient.
  4. For MPAs, large shared dependencies? Yes: add vendor splitting. No: entry splitting is sufficient.

Splitting code creates smaller initial bundles but introduces latency when loading subsequent chunks. Resource hints recover this latency by loading chunks before they’re needed.

| Hint | Priority | Timing | Use Case |
| --- | --- | --- | --- |
| prefetch | Lowest | Browser idle time | Next navigation, predicted routes |
| preload | High | Immediate, parallel | Current page critical resources |
| modulepreload | High | Immediate, with parsing | ES modules needed soon |

resource-hints.html

```html
<!-- Prefetch: Load during idle for predicted next navigation -->
<link rel="prefetch" href="/chunks/dashboard-abc123.js" />

<!-- Preload: Critical for current page, load immediately -->
<link rel="preload" href="/chunks/hero-image.js" as="script" />

<!-- Modulepreload: ES module with dependency resolution -->
<link rel="modulepreload" href="/chunks/analytics-module.js" />
```

Key difference: modulepreload also parses and compiles the module, making execution instant when needed. preload only fetches.

Webpack uses special comments to control chunk behavior:

webpack-magic-comments.ts

```ts
// Named chunk for better cache keys and debugging
import(/* webpackChunkName: "dashboard" */ "./Dashboard")

// Prefetch: load during idle (for predicted navigation)
import(/* webpackPrefetch: true */ "./Settings")

// Preload: load in parallel with the current chunk (rare)
import(/* webpackPreload: true */ "./CriticalModal")

// Combined
import(
  /* webpackChunkName: "analytics" */
  /* webpackPrefetch: true */
  "./Analytics"
)

// Specify loading mode
import(/* webpackMode: "lazy" */ "./LazyModule") // Default
import(/* webpackMode: "eager" */ "./EagerModule") // No separate chunk
```

Prefetch behavior: Webpack injects <link rel="prefetch"> tags after the parent chunk loads. The browser fetches during idle time with lowest priority.

Preload behavior: Webpack injects <link rel="preload"> tags alongside the parent chunk request. Use sparingly—preloading non-critical resources competes with critical ones.

Next.js (automatic):

next-prefetch.tsx

```tsx
import Link from 'next/link';

// Prefetch is automatic on viewport intersection (production only)
<Link href="/dashboard">Dashboard</Link>

// Disable prefetch for rarely-used links
<Link href="/admin" prefetch={false}>Admin</Link>

// Programmatic prefetch
import { useRouter } from 'next/navigation';
const router = useRouter();
router.prefetch('/dashboard');
```

React Router (manual):

react-router-prefetch.tsx

```tsx
// …
import { useEffect } from 'react';
import { Link, useLocation } from 'react-router-dom';

// Prefetch on hover
function PrefetchLink({ to, children }: { to: string; children: React.ReactNode }) {
  const prefetch = () => {
    // Trigger the dynamic import without rendering
    if (to === '/dashboard') {
      import('./pages/Dashboard');
    } else if (to === '/settings') {
      import('./pages/Settings');
    }
  };
  return (
    <Link to={to} onMouseEnter={prefetch} onFocus={prefetch}>
      {children}
    </Link>
  );
}

// Prefetch likely next routes based on the current route
function usePrefetchPredicted() {
  const location = useLocation();
  // …
  useEffect(() => {
    if (location.pathname === '/') {
      // Users on home often go to dashboard
      import('./pages/Dashboard');
    } else if (location.pathname === '/dashboard') {
      // Dashboard users often check settings
      import('./pages/Settings');
    }
  }, [location.pathname]);
}
```

Prefetch chunks when their trigger elements become visible:

intersection-prefetch.ts

```ts
// …
function prefetchOnVisible(selector: string, importFn: () => Promise<unknown>) {
  const elements = document.querySelectorAll(selector)
  const observer = new IntersectionObserver(
    (entries) => {
      entries.forEach((entry) => {
        if (entry.isIntersecting) {
          importFn().catch(() => {}) // Prefetch, ignore errors
          observer.unobserve(entry.target)
        }
      })
    },
    { rootMargin: "200px" },
  )
  elements.forEach((el) => observer.observe(el))
  return () => observer.disconnect()
}

// Usage: prefetch the analytics chunk when its link is visible
prefetchOnVisible('[href="/analytics"]', () => import("./pages/Analytics"))
```

| Strategy | Latency Improvement | Bandwidth Cost | Implementation |
| --- | --- | --- | --- |
| No prefetch | 0ms | 0 | - |
| Hover prefetch | 100-300ms | Low (on interaction) | Manual |
| Viewport prefetch | 200-500ms | Medium (visible links) | IntersectionObserver |
| Predictive prefetch | 300-1000ms | Higher (predicted routes) | Analytics-driven |
| Aggressive prefetch | Full (no latency) | Highest (all routes) | Simple but wasteful |

Before optimizing, measure. Bundle analyzers visualize chunk contents, revealing:

  • Unexpectedly large dependencies
  • Duplicate modules across chunks
  • Unused exports (tree-shaking failures)
  • Missing code splitting opportunities

webpack.config.js - bundle analyzer

```js
// …
const BundleAnalyzerPlugin = require("webpack-bundle-analyzer").BundleAnalyzerPlugin

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: "static", // Generate an HTML file
      reportFilename: "bundle-report.html",
      openAnalyzer: false,
      generateStatsFile: true,
      statsFilename: "bundle-stats.json",
    }),
  ],
}
```

What to look for:

| Pattern | Problem | Solution |
| --- | --- | --- |
| Single huge chunk | No splitting | Add route/component splitting |
| Moment.js (500KB) | Full library with locales | Replace with date-fns or dayjs |
| Lodash (70KB) | Full library | Use lodash-es with tree shaking |
| Duplicated React | Multiple versions | Resolve alias in config |
| Large polyfills | Targeting old browsers | Adjust browserslist, use core-js 3 |

vite.config.js - bundle visualization

```js
import { visualizer } from "rollup-plugin-visualizer"

export default {
  plugins: [
    visualizer({
      filename: "bundle-analysis.html",
      gzipSize: true,
      brotliSize: true,
    }),
  ],
}
```

How to use Chrome DevTools Coverage:

  1. Open DevTools → More Tools → Coverage
  2. Click reload button to start instrumenting
  3. Interact with the page
  4. Review unused JavaScript percentage

Interpreting results:

  • Red = unused code (candidate for splitting or removal)
  • Green = executed code

Caveat: Coverage shows runtime usage for one session. Code that handles edge cases or error states may appear “unused” but is still necessary.

Set bundle size budgets that fail builds when exceeded:

webpack.config.js - performance budgets

```js
module.exports = {
  performance: {
    maxAssetSize: 250000, // 250KB per chunk
    maxEntrypointSize: 400000, // 400KB for entry points
    hints: "error", // Fail the build on violation
  },
}
```

Recommended budgets:

| Application Type | Entry Point | Per Chunk |
| --- | --- | --- |
| Marketing site | 100KB gzip | 50KB gzip |
| SPA | 150KB gzip | 75KB gzip |
| Complex app | 250KB gzip | 100KB gzip |
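Budgets can also be enforced outside the bundler, in a small CI script that gzips each built file and compares it against the table above. A sketch using only Node's built-in zlib (the file contents below are synthetic; a real script would read dist/*.js):

```typescript
// Gzip a chunk's contents and fail if it exceeds the budget.
import { gzipSync } from "node:zlib";

function gzippedKb(contents: string): number {
  return gzipSync(Buffer.from(contents)).length / 1024;
}

function checkBudget(name: string, contents: string, budgetKb: number): boolean {
  const sizeKb = gzippedKb(contents);
  if (sizeKb > budgetKb) {
    console.error(`${name}: ${sizeKb.toFixed(1)}KB gzipped exceeds ${budgetKb}KB budget`);
    return false;
  }
  return true;
}

// Synthetic stand-in for a built chunk (~100KB raw, highly compressible):
const fakeChunk = "export const x = 1;\n".repeat(5000);
console.log(checkBudget("main", fakeChunk, 75));
```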

webpack.config.js - complete splitting setup

```js
// …
const path = require("path")

module.exports = {
  mode: "production",
  entry: {
    main: "./src/index.tsx",
  },
  output: {
    filename: "[name].[contenthash].js",
    chunkFilename: "[name].[contenthash].chunk.js",
    path: path.resolve(__dirname, "dist"),
    clean: true,
  },
  optimization: {
    splitChunks: {
      chunks: "all",
      maxInitialRequests: 25,
      minSize: 20000,
      cacheGroups: {
        // React ecosystem - very stable
        framework: {
          test: /[\\/]node_modules[\\/](react|react-dom|react-router|react-router-dom|scheduler)[\\/]/,
          name: "framework",
          chunks: "all",
          priority: 40,
        },
        // UI libraries
        ui: {
          test: /[\\/]node_modules[\\/](@radix-ui|@headlessui|framer-motion)[\\/]/,
          name: "vendor-ui",
          chunks: "all",
          priority: 30,
        },
        // Remaining node_modules
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendors",
          chunks: "all",
          priority: 20,
        },
        // Shared app code
        common: {
          name: "common",
          minChunks: 2,
          priority: 10,
          reuseExistingChunk: true,
        },
      },
    },
    // Extract the webpack runtime for better caching
    runtimeChunk: "single",
  },
}
```
vite.config.ts - complete splitting setup

```ts
// …
import { defineConfig } from "vite"
import react from "@vitejs/plugin-react"

export default defineConfig({
  plugins: [react()],
  build: {
    target: "es2020",
    rollupOptions: {
      output: {
        manualChunks: (id) => {
          // Framework chunk
          if (id.includes("node_modules/react")) {
            return "framework"
          }
          // UI library chunk
          if (id.includes("node_modules/@radix-ui") || id.includes("node_modules/@headlessui")) {
            return "vendor-ui"
          }
          // Utility libraries
          if (id.includes("node_modules/lodash") || id.includes("node_modules/date-fns")) {
            return "vendor-utils"
          }
          // Let Rollup handle the remaining splitting
        },
        // Consistent chunk naming
        chunkFileNames: "assets/[name]-[hash].js",
        entryFileNames: "assets/[name]-[hash].js",
      },
    },
    // Enable CSS code splitting
    cssCodeSplit: true,
  },
})
```
esbuild.config.ts

```ts
// …
import * as esbuild from "esbuild"

await esbuild.build({
  entryPoints: ["src/index.tsx"],
  bundle: true,
  // Required for code splitting
  splitting: true,
  format: "esm", // Required when splitting is enabled
  outdir: "dist",
  // Chunk naming
  chunkNames: "chunks/[name]-[hash]",
  entryNames: "[name]-[hash]",
  // Minimize output
  minify: true,
  // Target modern browsers
  target: ["es2020", "chrome90", "firefox90", "safari14"],
})
```

esbuild limitations:

  • No equivalent to Webpack’s splitChunks cacheGroups
  • Manual chunks require separate entry points
  • Less granular control over chunk boundaries

Turbopack handles splitting automatically in Next.js:

next.config.ts

```ts
import type { NextConfig } from "next"

const nextConfig: NextConfig = {
  // Turbopack is default in Next.js 15+
  // No explicit code splitting config needed
  experimental: {
    // Fine-tune package imports for tree shaking
    optimizePackageImports: ["lodash-es", "@icons-pack/react-simple-icons"],
  },
}

export default nextConfig
```

Automatic optimizations:

  • Page-level code splitting (each route = separate chunk)
  • Shared chunks for common dependencies
  • Automatic prefetching for <Link> components
  • React Server Components reduce client bundle size

Bundlers analyze import paths at build time. Fully dynamic paths prevent splitting:

dynamic-import-analysis.ts

```ts
// WORKS: Bundler creates a chunk for each page
const pages = {
  home: () => import("./pages/Home"),
  about: () => import("./pages/About"),
  contact: () => import("./pages/Contact"),
}

// WORKS: Template literal with a static prefix
// Creates chunks for all files matching the pattern
const loadPage = (name: string) => import(`./pages/${name}.tsx`)

// DOES NOT WORK: Fully dynamic path
const userPath = getUserInput()
import(userPath) // Bundler cannot analyze this at build time
```

Webpack behavior with template literals:

When using import(`./pages/${name}.tsx`), Webpack creates a chunk for every file matching ./pages/*.tsx. This is called a “context module.”

Circular dependencies can prevent proper chunk splitting:

circular-dependency-problem.ts

```ts
// moduleA.ts
import { helperB } from "./moduleB"
export const helperA = () => helperB()

// moduleB.ts
import { helperA } from "./moduleA"
export const helperB = () => helperA()

// The bundler may merge these into a single chunk to resolve the cycle
```

Detection: Use madge or dpdm to identify circular dependencies:

```sh
npx madge --circular src/
```
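What tools like madge report can be illustrated with a small depth-first search over a module graph. This is a toy sketch: the graph is a hand-written adjacency map, not real file parsing.

```typescript
// Depth-first search for an import cycle. Returns the cycle as a list of
// module names (first and last entries match), or null if acyclic.
type Graph = Record<string, string[]>;

function findCycle(graph: Graph): string[] | null {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const stack: string[] = [];

  function dfs(mod: string): string[] | null {
    if (done.has(mod)) return null;
    if (visiting.has(mod)) {
      // Slice the current path from the first occurrence to close the loop.
      return stack.slice(stack.indexOf(mod)).concat(mod);
    }
    visiting.add(mod);
    stack.push(mod);
    for (const dep of graph[mod] ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    stack.pop();
    visiting.delete(mod);
    done.add(mod);
    return null;
  }

  for (const mod of Object.keys(graph)) {
    const cycle = dfs(mod);
    if (cycle) return cycle;
  }
  return null;
}

// The moduleA/moduleB example above, as a graph:
console.log(findCycle({ moduleA: ["moduleB"], moduleB: ["moduleA"] }));
// ["moduleA", "moduleB", "moduleA"]
```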

Code splitting and tree shaking work together, but order matters:

  1. Tree shaking removes unused exports
  2. Code splitting divides remaining code into chunks

Gotcha: Side effects can prevent tree shaking:

side-effects-problem.ts

```ts
// This module has side effects (modifies global state on import)
import "./analytics-init" // Always included even if unused

// Mark the package as side-effect-free in package.json:
//   "sideEffects": false
// Or list the specific files with side effects:
//   "sideEffects": ["./src/analytics-init.ts"]
```

CSS-in-JS libraries (styled-components, Emotion) may extract CSS that doesn’t split with the component:

css-in-js-splitting.tsx

```tsx
// …
import styled from "styled-components"

// These styles may not code-split with the component
const StyledButton = styled.button`
  background: blue;
  color: white;
`

// Lazy load the component - but its CSS may be in the main bundle
const LazyComponent = lazy(() => import("./StyledComponent"))
```

Solution: Configure SSR extraction or use CSS Modules for components that need reliable code splitting.

Server-rendered HTML must match client-rendered output. Code-split components that render different content based on browser APIs cause mismatches:

ssr-mismatch.tsx

```tsx
// …
import { lazy, Suspense, useEffect, useState } from 'react';

// Problem: window is undefined on the server
const ClientOnlyChart = lazy(() => import('./Chart'));

function Dashboard() {
  // This causes a hydration mismatch
  return (
    <Suspense fallback={<div>Loading...</div>}>
      {typeof window !== 'undefined' && <ClientOnlyChart />}
    </Suspense>
  );
}

// Solution: Use useEffect for client-only rendering
function SafeDashboard() {
  const [mounted, setMounted] = useState(false);
  useEffect(() => {
    setMounted(true);
  }, []);
  return (
    <Suspense fallback={<div>Loading...</div>}>
      {mounted && <ClientOnlyChart />}
    </Suspense>
  );
}
```

HTTP/2 multiplexes requests over a single connection, but has limits:

| Browser | Max Concurrent Streams |
| --- | --- |
| Chrome | 100 |
| Firefox | 100 |
| Safari | 100 |

Recommendation: Keep chunk count under 50 for initial load. The sweet spot is 10-30 chunks that balance:

  • Cache granularity (more chunks = better cache utilization)
  • Connection overhead (fewer chunks = less overhead)
  • Compression efficiency (larger chunks = better compression)

Chunk content hashes should only change when content changes. Common pitfalls:

hash-stability.js

```js
// Problem: Import order affects the hash
import { a } from "./a" // Hash changes if this moves
import { b } from "./b"

// Problem: Absolute paths in source
__dirname // Different on different machines

// Solution (webpack.config.js): Use deterministic module IDs
module.exports = {
  optimization: {
    moduleIds: "deterministic",
    chunkIds: "deterministic",
  },
}
```

Challenge: Make code splitting accessible without configuration.

Architecture:

  • Each file in pages/ or app/ becomes a route chunk automatically
  • Shared dependencies extracted to _app chunk
  • Framework code (React, Next.js runtime) in separate chunks
  • Automatic prefetching via <Link> component

Key decisions:

  • Convention over configuration: file system = routes = chunks
  • Aggressive prefetching by default (can disable per-link)
  • Server Components reduce client bundle by rendering on server

Outcome: Applications using Next.js automatic splitting see 30-50% smaller initial bundles compared to manual Webpack configuration.

Challenge: E-commerce pages have unique splitting needs—product data, cart, checkout.

Architecture:

  • Route-based splitting for pages
  • Component splitting for cart drawer, product variants
  • Aggressive prefetching for product navigation
  • Streaming SSR with Suspense boundaries

Key insight: Cart functionality is loaded on interaction, not initial page load. Most users browse products before adding to cart—deferring cart code saves 50KB on initial load.

Challenge: Support old browsers without penalizing modern ones.

Architecture:

  • Two builds: ES2020 for modern browsers, ES5 for legacy
  • <script type="module"> loads modern bundle
  • <script nomodule> loads legacy bundle
  • Modern bundle is 20-40% smaller (no polyfills, better compression)

differential-loading.html

```html
<!-- Modern browsers load this (smaller) -->
<script type="module" src="/main-modern.js"></script>
<!-- Only IE11 and old browsers load this -->
<script nomodule src="/main-legacy.js"></script>
```

Trade-off: Doubles build time and artifact storage, but modern users (95%+) get smaller bundles.

Challenge: Users navigate unpredictably; guessing wrong wastes bandwidth.

Architecture:

  • Collect navigation patterns via analytics
  • Build ML model predicting next route from current route
  • Prefetch predicted routes during idle time
  • Confidence threshold prevents over-prefetching

Key insight: 80% of navigations follow predictable patterns. Prefetching the top 3 predicted routes covers most users while limiting bandwidth waste.

Outcome: 20% reduction in perceived navigation latency without significant bandwidth increase.
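The core of the predictive approach fits in a few lines: count observed route transitions, then prefetch the top predicted next routes that clear a confidence threshold. A sketch with made-up analytics counts (the function and route names are illustrative):

```typescript
// Predict likely next routes from observed transition counts. The caller
// would prefetch the returned routes during idle time.
type Transitions = Record<string, Record<string, number>>;

function predictNextRoutes(
  transitions: Transitions,
  current: string,
  topN = 3,
  minConfidence = 0.1 // Threshold prevents over-prefetching rare routes
): string[] {
  const next = transitions[current] ?? {};
  const total = Object.values(next).reduce((a, b) => a + b, 0);
  if (total === 0) return [];
  return Object.entries(next)
    .map(([route, count]) => ({ route, confidence: count / total }))
    .filter((p) => p.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, topN)
    .map((p) => p.route);
}

// Hypothetical analytics: 100 observed navigations away from "/"
const observed: Transitions = {
  "/": { "/dashboard": 70, "/pricing": 25, "/about": 5 },
};
console.log(predictNextRoutes(observed, "/"));
// ["/dashboard", "/pricing"] — "/about" falls below the 10% threshold
```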

Smaller chunks compress less efficiently:

| Chunk Size | Gzip Reduction | Brotli Reduction |
| --- | --- | --- |
| 5KB | 30-40% | 35-45% |
| 20KB | 50-60% | 55-65% |
| 100KB | 65-70% | 70-75% |
| 500KB | 70-75% | 75-80% |

Implication: Don’t over-split. A 5KB chunk that could be 2KB gzipped might not be worth the additional HTTP request.
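The effect can be observed directly by gzipping the same kind of content at different sizes with Node's zlib. A sketch (the synthetic payload will not reproduce the table's exact percentages, but shows the trend: gzip's fixed overhead and dictionary warm-up penalize small chunks):

```typescript
// Compare gzip ratios for synthetic JavaScript-ish payloads of
// increasing size. Ratio = fraction of bytes saved by compression.
import { gzipSync } from "node:zlib";

function gzipRatio(raw: string): number {
  return 1 - gzipSync(Buffer.from(raw)).length / Buffer.byteLength(raw);
}

function syntheticJs(kb: number): string {
  let out = "";
  let i = 0;
  while (Buffer.byteLength(out) < kb * 1024) {
    out += `export function helper${i}(x) { return x + ${i}; }\n`;
    i++;
  }
  return out;
}

for (const kb of [5, 20, 100]) {
  const ratio = gzipRatio(syntheticJs(kb));
  console.log(`${kb}KB chunk: ${(ratio * 100).toFixed(0)}% smaller after gzip`);
}
```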

| Aspect | Gzip | Brotli |
| --- | --- | --- |
| Compression ratio | Good (65-70%) | Better (70-75%) |
| Compression speed | Fast | Slower (3-10x) |
| Browser support | 99%+ | 96%+ |
| Recommendation | Fallback | Primary |

Best practice: Pre-compress assets at build time with Brotli (level 11). Serve Brotli to supporting browsers, gzip to others via content negotiation.

compression-plugin.js

```js
// webpack-compression-plugin or vite-plugin-compression options
{
  algorithm: 'brotliCompress',
  compressionOptions: { level: 11 },
  filename: '[path][base].br',
}
```

Bundle splitting transforms application delivery from “all or nothing” to “what you need, when you need it.” The strategies layer: start with route-based splitting for the largest wins, add vendor splitting for cache efficiency, then apply component splitting for heavy UI elements.

Key principles:

  1. Measure first: Use bundle analyzers to identify actual opportunities, not perceived ones
  2. Route splitting is table stakes: Every SPA should split by route
  3. Vendor chunks maximize caching: Dependencies change less than application code
  4. Prefetching recovers latency: Splitting without prefetching creates navigation delays
  5. HTTP/2 changes the math: More chunks are viable, but don’t over-split (10-30 is optimal)
  6. Compression efficiency decreases with chunk size: Balance splitting benefits against compression loss

The configuration complexity varies by tool—Next.js and Vite provide sensible defaults; Webpack offers granular control at the cost of configuration burden. Choose based on your team’s needs and application constraints.

Prerequisites:

  • JavaScript module systems (ESM, CommonJS)
  • Build tool fundamentals (bundling, minification)
  • Browser network fundamentals (HTTP/2, caching)

| Term | Definition |
| --- | --- |
| Chunk | A bundle file produced by the build process |
| Entry point | Initial file that starts application execution |
| Dynamic import | import() syntax that loads modules on demand |
| Code splitting | Dividing code into chunks loaded separately |
| Tree shaking | Removing unused exports from bundles |
| Vendor chunk | Chunk containing third-party dependencies |
| Magic comment | Build-tool-specific comment affecting bundling behavior |
| Prefetch | Low-priority fetch for predicted future needs |
| Preload | High-priority fetch for current page needs |
Key takeaways:

  • Bundle splitting reduces initial JavaScript by loading code on demand
  • Route-based splitting provides 30-60% reduction with minimal configuration
  • Vendor splitting maximizes cache hit rates across deployments
  • Prefetch/preload hints recover navigation latency from splitting
  • Bundle analyzers identify optimization opportunities before and after changes
  • HTTP/2 enables more chunks, but 10-30 is the practical sweet spot
  • Compression efficiency decreases with chunk size—don’t over-split
