Prisma in Production: 5 Lessons Learned
Real production lessons with Prisma ORM that no tutorial warned me about. Migration strategies, connection pooling, N+1 queries, and the checklist I wish I had before launch.
While I was deploying my first production application with Prisma the other day, I realized something crucial: development and production are two completely different worlds when it comes to database ORMs. The queries that ran beautifully on my laptop suddenly brought my API to its knees under real user load.
I was once guilty of thinking Prisma was just "plug and play" for production. Little did I know that the same features that made Prisma wonderful in development—its type safety, intuitive API, automatic migrations—could become significant pain points if I didn't understand how they worked under production constraints.
Here are five lessons that saved my application (and my reputation).
Lesson 1: Migration Strategy Makes or Breaks Deployments
When I finally decided to deploy my first feature update to production, I learned the hard way that running prisma migrate deploy wasn't enough. I had a migration that added a non-nullable column to a table with millions of rows. The deployment hung for 40 minutes while users were getting timeout errors.
The problem? I didn't have a migration strategy. In development, you can get away with breaking schema changes because you're the only user. In production, every migration must be backwards compatible and handle data transformations gracefully.
Here's what I should have done instead:
```prisma
// Bad: Adding a required field in one step
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  username  String   // This will fail on existing rows!
  createdAt DateTime @default(now())
}
```

```prisma
// Good: Multi-step migration approach
// Step 1: Add the column as optional
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  username  String?  // Nullable first
  createdAt DateTime @default(now())
}
```

```typescript
// Step 2: Backfill data with a data migration script
// migration-backfill-usernames.ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function backfillUsernames() {
  const usersWithoutUsername = await prisma.user.findMany({
    where: { username: null }
  })

  for (const user of usersWithoutUsername) {
    await prisma.user.update({
      where: { id: user.id },
      data: { username: user.email.split('@')[0] }
    })
  }
}

backfillUsernames()
  .catch(console.error)
  .finally(() => prisma.$disconnect())
```

```prisma
// Step 3: Make the column required in the next deployment
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  username  String   // Now safe to make required
  createdAt DateTime @default(now())
}
```

I cannot stress this enough: always test migrations against production-sized datasets in a staging environment first. The migration that takes 2 seconds with 50 test records might take 20 minutes with 2 million real records.
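For large tables, a row-by-row backfill like the script above is itself a slow deployment step. One option is to batch the writes. The sketch below uses pure helpers (`usernameFromEmail` and `chunk` are my illustrative names, not Prisma APIs), with the Prisma calls outlined in comments since they assume a live client:

```typescript
// Batched backfill sketch: pure helpers plus a hedged outline of the Prisma calls.

function usernameFromEmail(email: string): string {
  // Same derivation as the backfill script: everything before the '@'.
  return email.split('@')[0]
}

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// Sketch of the batched update loop (assumes a `prisma` client is in scope):
//
// for (const batch of chunk(usersWithoutUsername, 1000)) {
//   await prisma.$transaction(
//     batch.map((user) =>
//       prisma.user.update({
//         where: { id: user.id },
//         data: { username: usernameFromEmail(user.email) },
//       })
//     )
//   )
// }
```

Each batch commits as one short transaction, so a failure partway through loses at most one batch of work instead of holding a long-running transaction open.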

Lesson 2: Connection Pooling Is Non-Negotiable at Scale
My second production incident happened at 3 AM when our user base tripled overnight. The application started throwing Can't reach database server errors despite the database being perfectly healthy. The culprit? Connection exhaustion.
Prisma creates a connection pool by default, but I never configured it properly. Each serverless function was spinning up its own Prisma instance with its own pool, and we quickly hit our database's connection limit.
In other words, I was treating database connections like they were infinite. They're not. Luckily we can configure Prisma's connection pool to match our deployment environment:
```bash
# .env configuration for different environments

# Development (single instance)
DATABASE_URL="postgresql://user:pass@localhost:5432/mydb?connection_limit=5"

# Production (serverless with high concurrency)
DATABASE_URL="postgresql://user:pass@prod.db:5432/mydb?connection_limit=1&pool_timeout=10"

# Production (traditional server)
DATABASE_URL="postgresql://user:pass@prod.db:5432/mydb?connection_limit=10&pool_timeout=20"
```

For serverless deployments, I learned to use connection poolers like PgBouncer or AWS RDS Proxy. These sit between your application and database, managing connections efficiently across thousands of serverless invocations. The payoff was immediate: my cold start times dropped by 60% and connection errors disappeared entirely.
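Connection limits only help if you also avoid constructing multiple clients in the first place. A common companion fix is caching a single client on `globalThis` so hot reloads and repeated imports reuse one pool. Here is a minimal sketch of that singleton pattern, with the client type stubbed (`DbClient` stands in for `PrismaClient`; in real code the factory would be `() => new PrismaClient()`):

```typescript
// Singleton sketch: cache one client on globalThis so repeated module
// evaluations reuse it instead of opening new connection pools.
type DbClient = { id: number }

const globalForPrisma = globalThis as unknown as { prismaClient?: DbClient }

function getDbClient(create: () => DbClient): DbClient {
  if (!globalForPrisma.prismaClient) {
    globalForPrisma.prismaClient = create()
  }
  return globalForPrisma.prismaClient
}
```

Every caller gets the same instance, so the total connection count stays bounded by one pool per process rather than one pool per import.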
Lesson 3: N+1 Queries Will Destroy Your Response Times
When I came across my first N+1 query problem in production, I felt embarrassed. My beautifully typed Prisma code was hammering the database with hundreds of queries per request. What looked elegant in my code editor was a performance disaster.
Here's the exact scenario that bit me:
```typescript
// ❌ Bad: N+1 query disaster
async function getPostsWithAuthors() {
  const posts = await prisma.post.findMany()

  // This creates a separate query for EACH post!
  const postsWithAuthors = await Promise.all(
    posts.map(async (post) => {
      const author = await prisma.user.findUnique({
        where: { id: post.authorId }
      })
      return { ...post, author }
    })
  )

  return postsWithAuthors
}

// ✅ Good: Single query with include
async function getPostsWithAuthors() {
  return await prisma.post.findMany({
    include: {
      author: true
    }
  })
}

// ✅ Even better: Select only what you need
async function getPostsWithAuthors() {
  return await prisma.post.findMany({
    select: {
      id: true,
      title: true,
      content: true,
      author: {
        select: {
          id: true,
          name: true,
          avatarUrl: true
        }
      }
    }
  })
}
```

The difference was staggering. My endpoint went from 2.3 seconds and 101 database queries to 180ms and a single query. Prisma's include and select options are wonderful tools, but you have to use them consciously: every relation you include is a potential join or subquery.
I now have a rule: if I'm mapping over results and making database calls inside that loop, I stop and refactor immediately.
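When `include` doesn't fit, for example when the related rows live behind another service or the relation isn't modeled in the schema, the same rule can be honored by collecting the foreign keys first and making one batched query. The helper below is my sketch, not Prisma API: `fetchByIds` is an injected stand-in for a single `prisma.user.findMany({ where: { id: { in: ids } } })` call, and `Post`/`Author` are illustrative shapes.

```typescript
// Batch-fetch sketch: dedupe the foreign keys, fetch once, join in memory.
type Post = { id: string; authorId: string }
type Author = { id: string; name: string }

async function attachAuthors(
  posts: Post[],
  fetchByIds: (ids: string[]) => Promise<Author[]>
): Promise<Array<Post & { author?: Author }>> {
  // One query for all authors instead of one per post.
  const ids = [...new Set(posts.map((p) => p.authorId))]
  const authors = await fetchByIds(ids)
  const byId = new Map(authors.map((a) => [a.id, a]))
  return posts.map((post) => ({ ...post, author: byId.get(post.authorId) }))
}
```

The query count is now constant regardless of how many posts come back, which is the whole point of the refactor rule.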
Lesson 4: Shadow Databases and Transaction Isolation Levels Matter
Here's something no Prisma tutorial warned me about: shadow databases. When Prisma generates migrations, it creates a temporary "shadow database" to validate the migration steps. In development, this happens automatically. In production CI/CD pipelines? That's where things get interesting.
I was once guilty of not giving my CI/CD user permission to create databases. My deployment pipeline failed mysteriously with "cannot create shadow database" errors. The fix was simple but not obvious: either grant CREATE DATABASE permissions to the migration user, point Prisma at a dedicated shadow database via the shadowDatabaseUrl datasource field, or run prisma migrate deploy in CI, which applies pre-generated migrations and doesn't need a shadow database at all.
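For managed databases where granting CREATE DATABASE isn't an option, Prisma's datasource block accepts a `shadowDatabaseUrl` pointing at a pre-created database that migrate tooling can use. A minimal sketch (the environment variable name here is my choice, not a Prisma requirement):

```prisma
datasource db {
  provider          = "postgresql"
  url               = env("DATABASE_URL")
  shadowDatabaseUrl = env("SHADOW_DATABASE_URL")
}
```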
Transaction isolation levels became critical when I had race conditions in my payment processing. Two requests would read the same account balance, both see $100, and both try to deduct $80. One transaction would succeed, the other should fail, but with Prisma's default settings, both were committing successfully.
The solution was explicit transaction management with proper isolation:
```typescript
// ❌ Race condition waiting to happen
async function processPayment(userId: string, amount: number) {
  const user = await prisma.user.findUnique({
    where: { id: userId }
  })

  if (user.balance >= amount) {
    await prisma.user.update({
      where: { id: userId },
      data: { balance: user.balance - amount }
    })
    return { success: true }
  }

  return { success: false, error: 'Insufficient funds' }
}

// ✅ Proper transaction with isolation
async function processPayment(userId: string, amount: number) {
  return await prisma.$transaction(async (tx) => {
    const user = await tx.user.findUnique({
      where: { id: userId }
    })

    if (!user || user.balance < amount) {
      throw new Error('Insufficient funds')
    }

    return await tx.user.update({
      where: { id: userId },
      data: { balance: user.balance - amount }
    })
  }, {
    isolationLevel: 'Serializable',
    timeout: 5000
  })
}
```
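For simple balance checks there is also a lighter-weight option than Serializable transactions: a single conditional update, which pushes the read-check-write race into one database statement. In Prisma terms that's `updateMany` with a `balance: { gte: amount }` filter and a `decrement`. The sketch below codes the pattern against a minimal interface (my names, not Prisma's) so the control flow is testable without a live database:

```typescript
// Atomic conditional-update sketch. `AccountStore.decrementIfSufficient`
// stands in for:
//   prisma.user.updateMany({
//     where: { id: userId, balance: { gte: amount } },
//     data: { balance: { decrement: amount } },
//   })
// whose result count is 1 if the guarded update applied and 0 otherwise.
interface AccountStore {
  decrementIfSufficient(userId: string, amount: number): Promise<number>
}

async function processPaymentAtomic(
  store: AccountStore,
  userId: string,
  amount: number
): Promise<{ success: boolean; error?: string }> {
  // The check and the write happen in one statement, so two concurrent
  // requests can't both act on the same stale balance.
  const updated = await store.decrementIfSufficient(userId, amount)
  return updated === 1
    ? { success: true }
    : { success: false, error: 'Insufficient funds' }
}
```

The trade-off is that you lose the ability to run further reads or writes in the same transaction, so the interactive `$transaction` approach above remains the right tool when the payment touches multiple tables.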
Lesson 5: Query Performance Monitoring and Error Handling
My biggest production lesson was realizing that I couldn't debug what I couldn't measure. Prisma's built-in logging helped in development, but production needed real observability.
I implemented query logging middleware that captured slow queries and sent them to our monitoring system:
```typescript
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient({
  log: [
    { emit: 'event', level: 'query' },
    { emit: 'event', level: 'error' },
  ],
})

// Track slow queries
prisma.$on('query', (e) => {
  if (e.duration > 1000) {
    console.warn('Slow query detected:', {
      query: e.query,
      duration: e.duration,
      params: e.params,
      timestamp: e.timestamp
    })
    // Send to monitoring service
    monitoringService.trackSlowQuery(e)
  }
})

// Handle database errors gracefully
prisma.$on('error', (e) => {
  console.error('Database error:', e)
  // Alert the team for critical errors
  if (e.message.includes('connection')) {
    alertService.notify('Database connection error', e)
  }
})

// Wrapper function with retry logic and exponential backoff
async function executeWithRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: Error | undefined

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation()
    } catch (error) {
      lastError = error as Error

      // Don't retry unique constraint violations (P2002): they'll fail again
      if ((error as { code?: string }).code === 'P2002') {
        throw error
      }

      // Exponential backoff before the next attempt
      if (attempt < maxRetries) {
        const delay = Math.pow(2, attempt) * 100
        await new Promise(resolve => setTimeout(resolve, delay))
      }
    }
  }

  throw lastError!
}

// Usage
const user = await executeWithRetry(() =>
  prisma.user.findUnique({ where: { id: userId } })
)
```

Production Checklist: What to Verify Before Launch
After burning myself multiple times, I created this checklist that I run through before every production deployment:
Database Configuration:
- Connection pool limits configured for environment
- Connection pooler (PgBouncer/RDS Proxy) in place for serverless
- Database backup and recovery tested
- Migration rollback strategy documented
Schema and Migrations:
- All migrations tested against production-sized data
- Shadow database permissions configured
- Backwards-compatible migration strategy for multi-step changes
- Data migration scripts reviewed and tested
Query Performance:
- N+1 queries eliminated (checked with query logging)
- Proper indexes on frequently queried columns
- `select` statements using only needed fields
- Relations properly optimized with `include`
Error Handling:
- Query timeouts configured
- Retry logic for transient errors
- Graceful degradation for database unavailability
- Monitoring and alerting in place
Security:
- Parameterized queries only (no `$queryRawUnsafe`)
- Database credentials in secure environment variables
- Row-level security or middleware authorization
- Connection strings without credentials in logs
Moving Forward: Sustainable Prisma Development
The fascinating thing about production is that it reveals assumptions you didn't know you were making. Every lesson I learned with Prisma made me a better developer overall. The connection pooling knowledge applies to any database layer. The migration strategy thinking helps with any schema evolution. The performance monitoring mindset improves every system I build.
Prisma is a wonderful tool when you understand its production characteristics. The type safety and developer experience remain unmatched, but those benefits compound when you pair them with production-ready practices. I came across teams still choosing raw SQL over ORMs because of "performance concerns," but the real issue isn't the ORM—it's understanding how to use it at scale.
The ROI on learning these lessons early is enormous. Every hour spent understanding connection pooling saves days of debugging production incidents. Every N+1 query caught in development saves your users from slow page loads.
Start with the checklist. Implement monitoring early. Test migrations against real data sizes. Treat production as a different environment from the start, not as an afterthought.
And that concludes this post! I hope you found it valuable, and look out for more in the future!