Structured Clone in Node.js: Deep Copying Done Right
Discover how structuredClone() finally solves JavaScript's deep copying problem in Node.js - no more JSON hacks or heavy libraries required.
While I was looking over some legacy code the other day, I stumbled upon something that made me cringe. It was a function I wrote years ago that used JSON.parse(JSON.stringify(obj)) to deep copy objects. I was once guilty of reaching for this hack whenever I needed to clone an object without references. Little did I know that this approach was quietly breaking things in production.
The problem? It strips out functions, loses Date objects, chokes on circular references, and butchers anything that isn't plain JSON. I spent countless hours debugging issues caused by this "quick fix." When I finally decided to research proper deep copying solutions, I discovered that Node.js had introduced something wonderful: structuredClone().
Why Deep Copying in JavaScript Has Always Been a Problem
Let me show you what I mean. Here's code I used to write all the time:
```javascript
const original = {
  name: 'John',
  metadata: {
    created: new Date(),
    tags: ['user', 'active']
  }
};

const shallow = { ...original };
shallow.metadata.tags.push('premium');

console.log(original.metadata.tags); // ['user', 'active', 'premium'] 😱
```

This is the classic shallow copy gotcha. The spread operator only copies the first level. Nested objects and arrays? They're still references to the same memory locations. I cannot stress this enough! This has caused more bugs in production than I care to admit.
The traditional solutions were all problematic:
- JSON.parse(JSON.stringify()) - fast but loses types and breaks on circular references
- Lodash's cloneDeep() - reliable but adds 17.4KB (5.3KB gzipped) to your bundle just for one function
- Manual recursive cloning - error-prone and you'll inevitably miss edge cases
In other words, we were stuck choosing between bundle size, reliability, and performance. That's a terrible trade-off.
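To make those failure modes concrete, here's a small sketch (the object shape is made up for illustration) of what the JSON round-trip silently loses:

```javascript
const user = {
  name: 'Ada',
  joined: new Date('2024-01-15'), // survives only as an ISO string
  nickname: undefined,            // undefined properties are dropped entirely
  pattern: /admin/i               // RegExp collapses to an empty object {}
};

const copy = JSON.parse(JSON.stringify(user));

console.log(copy.joined instanceof Date); // false: it's now a plain string
console.log('nickname' in copy);          // false: the key is gone
console.log(copy.pattern);                // {}

// Circular references don't degrade gracefully; they throw:
const node = { value: 1 };
node.self = node;
try {
  JSON.stringify(node);
} catch (err) {
  console.log(err instanceof TypeError);  // true: "Converting circular structure to JSON"
}
```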

Understanding structuredClone: The Native Deep Copy Solution
When Node.js 17 introduced structuredClone(), I was skeptical. I had been burned before by "solutions" that looked good on paper. But after using it in several production applications, I realized this was different.
The structuredClone() function is built on the structured clone algorithm, which was originally designed for web APIs like postMessage(). It's a native, browser-standard way of creating deep copies that handles way more than JSON can dream of.
Here's the same example from before, done right:
```javascript
const original = {
  name: 'John',
  metadata: {
    created: new Date(),
    tags: ['user', 'active'],
    config: new Map([['theme', 'dark'], ['lang', 'en']])
  }
};

const deepCopy = structuredClone(original);
deepCopy.metadata.tags.push('premium');
deepCopy.metadata.config.set('theme', 'light');

console.log(original.metadata.tags); // ['user', 'active'] ✅
console.log(original.metadata.config.get('theme')); // 'dark' ✅
console.log(deepCopy.metadata.created instanceof Date); // true ✅
```

Notice how it preserves the Date object and even clones the Map correctly? This is what I mean by "done right." No more weird type coercion or lost data.
How structuredClone Works: The Structured Clone Algorithm
The structured clone algorithm walks through your entire object graph, creating new instances of everything it encounters. When I came across the spec, I was fascinated by how thorough it is.
Let me show you something that would break with JSON but works perfectly with structuredClone():
```typescript
interface UserData {
  id: string;
  profile: {
    avatar: ArrayBuffer;
    settings: Map<string, any>;
    metadata: {
      joined: Date;
      lastActive: Date;
    };
  };
  friends: Set<string>;
}

const userData: UserData = {
  id: 'user-123',
  profile: {
    avatar: new ArrayBuffer(8),
    settings: new Map([
      ['notifications', true],
      ['privacy', 'friends']
    ]),
    metadata: {
      joined: new Date('2024-01-15'),
      lastActive: new Date()
    }
  },
  friends: new Set(['user-456', 'user-789'])
};

const backup = structuredClone(userData);

// Modify the clone
backup.friends.add('user-999');
backup.profile.settings.set('theme', 'dark');

console.log(userData.friends.size); // 2 ✅
console.log(backup.friends.size); // 3 ✅
console.log(userData.profile.settings.has('theme')); // false ✅
```

This code handles ArrayBuffer, Map, Set, and Date objects without breaking a sweat. Try that with JSON.parse(JSON.stringify()) and watch it explode.
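One thing that example doesn't show: circular references, which make JSON.stringify throw outright, clone cleanly, and internal references are preserved within the copy. A quick sketch:

```javascript
// A doubly-linked pair: each node references the other
const a = { name: 'a' };
const b = { name: 'b', prev: a };
a.next = b;

const clone = structuredClone(a);

// The cycle is preserved inside the copy...
console.log(clone.next.prev === clone); // true

// ...and the copy is fully detached from the original
console.log(clone.next === b); // false
clone.next.name = 'b2';
console.log(b.name); // 'b'
```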

structuredClone vs JSON.parse/stringify vs Lodash cloneDeep
Luckily we can now compare these approaches with real data. I ran benchmarks in my Node.js applications and the results surprised me.
For simple objects without special types, JSON.parse(JSON.stringify()) is still the fastest - about 2-3x faster than structuredClone(). But here's the thing: the moment you need to preserve types, it's worthless.
Lodash's cloneDeep() is incredibly reliable and handles edge cases beautifully, but you're paying a cost. That 17.4KB might not sound like much, but when you're optimizing a Node.js microservice or trying to keep cold start times down in serverless environments, every kilobyte matters.
structuredClone() hits the sweet spot. It's built into the runtime (zero bundle cost), handles almost everything you throw at it, and the performance is good enough for most use cases. In my testing, it was only about 20-30% slower than Lodash for complex objects.
The ROI is clear: remove a dependency, get better type preservation, and keep your bundle lean.
What structuredClone Can and Cannot Clone
Before you go replacing all your cloning code, you need to know the limitations. I learned this the hard way when I tried to clone an object with functions attached.
structuredClone() can handle:
- Objects and arrays (obviously)
- Primitive values (strings, numbers, booleans, null, undefined)
- Date, RegExp, Map, Set
- ArrayBuffer, TypedArray, DataView
- Blob, File (in browsers)
- Circular references (huge win!)
But it cannot clone:
- Functions
- DOM nodes
- Property descriptors, getters, setters
- Object prototypes (only own properties)
- Symbols as property keys
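You can see the two limitations that bite most often in a few lines; note in particular that a class instance comes back as a plain object:

```javascript
class User {
  constructor(name) { this.name = name; }
  greet() { return `Hi, ${this.name}`; }
}

const user = new User('Ada');
const clone = structuredClone(user);

// Own data properties survive, but the prototype does not
console.log(clone.name);            // 'Ada'
console.log(clone instanceof User); // false
console.log(typeof clone.greet);    // 'undefined'

// A function anywhere in the graph throws a DataCloneError
try {
  structuredClone({ onSave: () => {} });
} catch (err) {
  console.log(err.name);            // 'DataCloneError'
}
```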
When I finally decided to migrate a legacy codebase to use structuredClone(), I hit errors because some objects had methods. The solution was to separate data from behavior - which, honestly, made the code better anyway.
Real-World Use Cases: When to Use structuredClone in Node.js
Let me share a practical example from a project I worked on. We had an API that processed user submissions, applied transformations, and needed to maintain the original for audit logging:
```typescript
async function processSubmission(submission: FormSubmission) {
  // Keep original for audit trail
  const auditRecord = structuredClone(submission);

  // Apply transformations without worrying about mutations
  const processed = structuredClone(submission);
  processed.data = normalizeData(processed.data);
  processed.metadata.processedAt = new Date();
  processed.attachments = await processAttachments(processed.attachments);

  // Both records are completely independent
  await db.audit.create(auditRecord);
  await db.submissions.create(processed);
}
```

This pattern is wonderful for:
- Implementing undo/redo functionality
- Creating snapshots before risky operations
- Caching complex objects without reference issues
- Testing (cloning test fixtures)
- State management in Node.js applications
I've also used it extensively when working with configuration objects that get passed through multiple middleware layers. Instead of worrying about whether some middleware mutated the config, I just clone it at the boundary.
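A minimal sketch of that boundary pattern (the config shape and middleware here are hypothetical):

```javascript
const baseConfig = {
  timeouts: { connect: 5000, read: 10000 },
  features: new Set(['retries', 'tracing'])
};

// Each layer gets its own copy; mutations stay local to that layer
function withMiddleware(config, middleware) {
  return middleware(structuredClone(config));
}

const tuned = withMiddleware(baseConfig, (cfg) => {
  cfg.timeouts.read = 30000;       // safe: only this copy changes
  cfg.features.add('compression');
  return cfg;
});

console.log(baseConfig.timeouts.read);               // 10000
console.log(baseConfig.features.has('compression')); // false
```

Cloning once at the boundary is usually cheap compared to the debugging time a shared, mutated config can cost you.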
Performance Considerations and Best Practices
While structuredClone() is fantastic, I learned not to use it everywhere blindly. For simple, flat objects, the overhead isn't worth it. Use the spread operator or Object.assign() instead.
Here's my decision tree:
- Flat object with no nested references? Use the spread operator
- Need to preserve special types (Date, Map, Set)? Use structuredClone()
- Have functions or methods in your objects? Rethink your data structure
- Performance-critical path with millions of operations? Benchmark first
One gotcha I encountered: if you're cloning the same structure repeatedly, consider if you actually need a deep copy each time. Sometimes a shallow copy at strategic points is more appropriate.
Also, structuredClone() is synchronous. For massive objects (megabytes of data), it will block the event loop. In those cases, you might want to chunk the operation or use worker threads.
Deep Copying Done Right: Making the Switch
The transition from my old JSON-based cloning to structuredClone() was one of the best refactoring decisions I've made. I removed Lodash from several projects, eliminated subtle bugs around Date objects, and the code became more readable.
If you're still using JSON.parse(JSON.stringify()) or importing heavy libraries just for deep cloning, it's time to make the switch. Node.js 17+ has had this feature since 2021, and it's now available in all modern browsers too.
Start small. Pick one module where you're doing deep copying and replace it with structuredClone(). Test thoroughly (especially if you had functions hiding in your objects), and gradually roll it out. The performance characteristics are good enough for most applications, and the type safety alone makes it worthwhile.
And that concludes this post! I hope you found it valuable, and look out for more in the future!