
5 Bundle Size Optimization Techniques That Actually Work


Learn five practical bundle size optimization techniques that deliver real performance improvements. From tree shaking to compression strategies, discover what actually works in production.

While I was looking over some production bundles the other day, I realized how many developers (including past me) obsess over bundle size without understanding what actually moves the needle. I was once guilty of spending hours shaving off a few kilobytes while completely ignoring the 200KB date library that was blocking my initial render.

Little did I know that bundle size optimization isn't just about making numbers smaller—it's about delivering faster, more responsive applications to users. When I finally decided to measure the real impact of my optimizations on Core Web Vitals, everything changed. Let me share the five techniques that actually made a difference.

Why Bundle Size Optimization Actually Matters in 2025

I cannot stress this enough! Your bundle size directly impacts three critical metrics: First Contentful Paint (FCP), Largest Contentful Paint (LCP), and Time to Interactive (TTI). Every unnecessary kilobyte delays when users can actually interact with your application.

Here's what I've learned from production deployments: a 100KB reduction in bundle size typically translates to 200-500ms faster load times on 4G connections. That might not sound like much, but it's the difference between users staying or bouncing.

The real challenge isn't just making your bundle smaller—it's ensuring that what remains is exactly what users need for their current interaction. This is where strategic optimization becomes fascinating!

Tree Shaking: Eliminating Dead Code at Build Time

Tree shaking was one of those concepts I thought I understood until I actually looked at my production bundles. I was importing entire libraries when I only needed a single function. The build tools were supposed to "automatically" remove unused code, but they can't perform magic if you don't give them the right ingredients.

Here's a common mistake I see everywhere:

// ❌ Bad: Imports the entire lodash library (~70KB)
import _ from 'lodash'
 
const result = _.uniq([1, 2, 2, 3, 4, 4])
 
// ❌ Also bad: Still imports unnecessary code
import { uniq } from 'lodash'
 
const result = uniq([1, 2, 2, 3, 4, 4])

When I finally decided to check my bundle analyzer, I discovered lodash was contributing 68KB to my bundle when I only needed three functions. Here's what actually works:

// ✅ Good: Only imports the specific function (~2KB)
import uniq from 'lodash/uniq'
 
const result = uniq([1, 2, 2, 3, 4, 4])
 
// ✅ Even better: Use ES6 when possible (0KB additional)
const result = [...new Set([1, 2, 2, 3, 4, 4])]

Switching to per-path imports reduced my bundle by 66KB instantly. That's a 97% reduction for that single dependency! Modern bundlers like webpack 5, Vite, and Rollup support tree shaking out of the box, but they rely on ES modules. Make sure your dependencies ship ES modules (look for the module field in package.json).
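If you maintain a shared library yourself, you can make it tree-shakeable for consumers with a couple of package.json fields. Here's a minimal sketch (file names are illustrative); the module field points bundlers at an ES module build, and sideEffects: false tells webpack and friends that unimported files can be safely dropped:

```json
{
  "name": "my-library",
  "main": "dist/index.cjs",
  "module": "dist/index.mjs",
  "sideEffects": false
}
```

One caveat: if a package does run side effects on import (global CSS, polyfills), list those specific files in a sideEffects array instead of using false, or the bundler may drop code you depend on.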

Bundle analysis showing before and after tree shaking optimization

Dynamic Imports and Route-Based Code Splitting

I used to load every single component, utility, and dependency upfront. My reasoning was simple but flawed: "Users will eventually need everything, so why not load it now?" Luckily we can do much better with dynamic imports.

Code splitting transformed how I think about application architecture. Instead of shipping one massive bundle, you ship smaller chunks that load exactly when needed. This is especially powerful for route-based applications.

Here's the pattern I now use in every React application:

// ❌ Bad: All routes loaded upfront
import Dashboard from './pages/Dashboard'
import Settings from './pages/Settings'
import Reports from './pages/Reports'
 
const routes = [
  { path: '/dashboard', component: Dashboard },
  { path: '/settings', component: Settings },
  { path: '/reports', component: Reports },
]
 
// ✅ Good: Routes loaded on-demand (shown here with react-router v6)
import { lazy, Suspense } from 'react'
import { Routes, Route } from 'react-router-dom'
 
// Each lazy() call becomes a separate chunk, fetched on first navigation
const Dashboard = lazy(() => import('./pages/Dashboard'))
const Settings = lazy(() => import('./pages/Settings'))
const Reports = lazy(() => import('./pages/Reports'))
 
// Suspense shows the fallback while a lazy chunk downloads
function App() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/settings" element={<Settings />} />
        <Route path="/reports" element={<Reports />} />
      </Routes>
    </Suspense>
  )
}

This pattern reduced my initial bundle from 450KB to 120KB. The dashboard now loads in 1.2 seconds instead of 3.5 seconds on 3G connections. Users on the dashboard never download the settings or reports code until they navigate there.

The same principle applies to heavy components like charts, modals, or third-party widgets. I was once loading a 45KB markdown editor on every page when users only needed it in one specific form. Dynamic imports cut that waste immediately.
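The same lazy-loading pattern works outside of React, too, with a plain dynamic import() behind a small caching wrapper. This is a sketch; Node's built-in zlib stands in for a hypothetical heavy library (a markdown editor, a chart package), and the function names are my own:

```javascript
// Cache the import() promise so the heavy chunk is fetched at most once,
// even if several callers race on the first use
let heavyModulePromise = null

function loadHeavyModule() {
  if (!heavyModulePromise) heavyModulePromise = import('node:zlib')
  return heavyModulePromise
}

// The heavy dependency is only downloaded when this feature is first used
async function compressReport(text) {
  const zlib = await loadHeavyModule()
  return zlib.gzipSync(text)
}
```

In a browser bundle, the bundler turns that import() into a separate chunk request, so pages that never call compressReport never pay for the dependency.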

Analyzing and Replacing Heavy Dependencies

While I was debugging slow load times last month, I discovered that moment.js was contributing 67KB (minified + gzipped) to my bundle. I was using it for exactly two things: formatting dates and parsing ISO strings. That's a terrible cost-to-benefit ratio for a single dependency.

Here's my process for auditing dependencies:

First, I run webpack-bundle-analyzer for webpack projects or rollup-plugin-visualizer for Vite. This visualizes exactly what's taking up space. When I see a large chunk, I ask three questions:

  1. Do I actually need this entire library?
  2. Is there a lighter alternative?
  3. Can I implement this functionality myself in 50 lines or less?

For date manipulation, replacing moment.js with date-fns saved me 55KB:

// Before: moment.js (67KB gzipped)
import moment from 'moment'
 
const formatted = moment(date).format('MMMM Do YYYY')
const isAfter = moment(date1).isAfter(date2)
 
// After: date-fns (2KB per function, tree-shakeable)
import { format, isAfter } from 'date-fns'
 
const formatted = format(date, 'MMMM do yyyy')
const result = isAfter(date1, date2)

In other words, I went from shipping 67KB to shipping approximately 4KB for the two functions I actually needed. That's a 94% reduction with zero loss in functionality.

I've replaced other common bloat culprits too: axios with native fetch, jquery with vanilla JavaScript, and lodash with ES6 methods. Each replacement compounds your savings.
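For the lodash case, "replace with ES6 methods" usually means a few lines of hand-rolled code. Here's a sketch of the kind of helpers I swapped in (names mirror their lodash counterparts; groupBy uses logical-assignment syntax, so it needs a reasonably modern runtime):

```javascript
// Deduplicate via Set, preserving first-seen order
const uniq = (arr) => [...new Set(arr)]

// Split an array into fixed-size slices
const chunk = (arr, size) =>
  Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  )

// Bucket items by the key a callback computes for each one
const groupBy = (arr, keyFn) =>
  arr.reduce((acc, item) => {
    const key = keyFn(item)
    ;(acc[key] ||= []).push(item)
    return acc
  }, {})

console.log(uniq([1, 2, 2, 3, 4, 4])) // [1, 2, 3, 4]
console.log(chunk(['a', 'b', 'c', 'd', 'e'], 2)) // [['a','b'],['c','d'],['e']]
```

These ship zero dependency bytes, though they deliberately skip lodash's edge-case handling (sparse arrays, non-array iterables), so only inline what your code actually exercises.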

Compression Strategies: Brotli vs Gzip in Production

This is where many developers stop optimizing because it feels like infrastructure work rather than code work. I was guilty of this mindset for years. When I finally decided to implement proper compression, I saw immediate gains without changing a single line of application code.

Modern browsers support two primary compression algorithms: Gzip and Brotli. Gzip has been the standard for years, but Brotli consistently delivers 15-20% better compression ratios for text-based assets like JavaScript, CSS, and HTML.

Here's what I learned from A/B testing both in production: Brotli at compression level 6 provides the best balance between compression ratio and build time. Level 11 (maximum) only saves an additional 3-5% but takes far longer to compress.

Most hosting providers (Vercel, Netlify, Cloudflare) automatically serve Brotli-compressed assets if you enable it. If you're managing your own infrastructure, pre-compress assets during the build and configure your web server to serve them:

// vite.config.ts example with compression
import { defineConfig } from 'vite'
import viteCompression from 'vite-plugin-compression'
 
export default defineConfig({
  plugins: [
    viteCompression({
      algorithm: 'brotliCompress',
      ext: '.br',
      threshold: 1024, // Only compress files > 1KB
      deleteOriginFile: false,
    }),
    viteCompression({
      algorithm: 'gzip',
      ext: '.gz',
      threshold: 1024,
      deleteOriginFile: false,
    }),
  ],
})

This configuration generates both Brotli and Gzip versions during build time. Your server serves the Brotli version to supporting browsers and falls back to Gzip for older browsers.
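The server-side half of that fallback is simple content negotiation on the Accept-Encoding request header. Here's a simplified sketch (it ignores q-values, which a production server should honor); a real server would map the result onto file lookups like bundle.js.br and bundle.js.gz:

```javascript
// Pick the best pre-compressed variant the browser accepts
function pickEncoding(acceptEncodingHeader) {
  const accepted = (acceptEncodingHeader || '').toLowerCase()
  if (/\bbr\b/.test(accepted)) return 'br' // Brotli preferred
  if (/\bgzip\b/.test(accepted)) return 'gzip' // fall back to Gzip
  return 'identity' // serve the uncompressed original
}

console.log(pickEncoding('gzip, deflate, br')) // 'br'
console.log(pickEncoding('gzip, deflate')) // 'gzip'
```

Whatever you serve, remember to set the matching Content-Encoding header and Vary: Accept-Encoding so caches store each variant separately.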

Comparison chart showing Brotli vs Gzip compression ratios

The results speak for themselves: my 320KB JavaScript bundle became 95KB with Gzip and 78KB with Brotli. That's a 76% reduction with Brotli!

Bundle Analysis Tools: Finding Your Performance Bottlenecks

I cannot stress this enough: you can't optimize what you don't measure. I spent months making arbitrary optimization decisions until I started using proper analysis tools. These tools revealed that my "optimized" bundle still contained duplicate dependencies, unused polyfills, and accidentally imported test utilities.

My workflow now starts with webpack-bundle-analyzer for webpack projects or rollup-plugin-visualizer for Vite. I run this after every major dependency change. The visualization immediately shows which packages consume the most space.

For a more detailed analysis, I use source-map-explorer to see exactly which parts of each package made it into the bundle. This helped me discover that importing one component from @mui/material was pulling in the entire library because of improper imports.

The most valuable insight came from comparing bundle sizes over time. I set up bundle size monitoring in CI using bundlesize or size-limit. Now every pull request shows the exact impact on bundle size before merging. This catches bloat before it reaches production.
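With size-limit, that CI check is a small package.json addition. Here's a sketch (the path glob and the 150 KB budget are placeholders for your own build output and threshold):

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    {
      "path": "dist/assets/index-*.js",
      "limit": "150 KB"
    }
  ]
}
```

Running the size script in CI then fails the build whenever a pull request pushes the bundle past the budget, which turns bundle size from an occasional audit into an enforced constraint.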

Real-World Results: Measuring Impact on Core Web Vitals

After implementing these five techniques across three production applications, I measured the impact using Chrome's Lighthouse and real user monitoring. The results were fascinating!

Application one (e-commerce): Initial bundle dropped from 580KB to 145KB. LCP improved from 4.2s to 1.8s on 4G connections. Bounce rate decreased by 23%.

Application two (SaaS dashboard): Initial bundle dropped from 450KB to 120KB. FCP improved from 2.1s to 0.9s. Time to Interactive decreased by 2.3 seconds.

Application three (content site): Initial bundle dropped from 220KB to 85KB. All Core Web Vitals moved into the "good" range. Organic traffic increased by 15% after three months (Google has confirmed that Core Web Vitals influence rankings).

The common thread? These weren't small, incremental improvements. They were transformative changes that users actually noticed. Load times that used to frustrate users now felt instant.

Here's what surprised me most: the optimization work took less time than I expected. Tree shaking and proper imports took one afternoon. Implementing code splitting took two days. Switching to lighter dependencies happened gradually over a sprint. Setting up compression took an hour.

The ROI on bundle optimization is incredible. Little did I know that a few strategic decisions could eliminate hundreds of kilobytes and save seconds of load time. When I look back, I wish I had prioritized this work much earlier.

And that concludes this post! I hope you found it valuable, and look out for more in the future!