Optimize PostgreSQL for Next.js applications — indexing strategies, query analysis, connection pooling, and performance monitoring for production databases.
The first step to optimization is measurement:
```sql
-- Enable query timing
\timing

-- Analyze a query's execution plan
EXPLAIN ANALYZE
SELECT p.*, c.name AS category_name
FROM "Product" p
JOIN "Category" c ON p."categoryId" = c.id
WHERE p."deletedAt" IS NULL
  AND p.published = true;
```
Key things to look for in the output:

- A **Seq Scan** on a large table — usually a sign of a missing index.
- A big gap between estimated and actual row counts — statistics may be stale (run `ANALYZE` on the table).
- High **actual time** concentrated in one plan node — that node is the bottleneck.
```sql
-- 1. Soft delete filter — every query uses this
CREATE INDEX idx_product_deleted_at ON "Product" ("deletedAt")
  WHERE "deletedAt" IS NULL;

-- 2. Composite index for common query patterns
CREATE INDEX idx_product_published_created
  ON "Product" (published, "createdAt" DESC)
  WHERE "deletedAt" IS NULL;

-- 3. Full-text search index
CREATE INDEX idx_product_search
  ON "Product" USING gin (
    to_tsvector('english', name || ' ' || COALESCE(description, ''))
  );

-- 4. Foreign key indexes (Prisma creates some automatically)
CREATE INDEX idx_order_item_order_id ON "OrderItem" ("orderId");
CREATE INDEX idx_order_item_product_id ON "OrderItem" ("productId");

-- 5. Unique constraint indexes
CREATE UNIQUE INDEX idx_user_email ON "User" (email)
  WHERE "deletedAt" IS NULL;
```
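A partial index is only considered when the query's filter matches its `WHERE` predicate and, ideally, its sort order. As a hedged sketch, a hypothetical helper that builds `findMany` arguments shaped to hit the composite partial index above (the helper and `PAGE_SIZE` are illustrative, not Prisma APIs):

```typescript
const PAGE_SIZE = 20;

// Hypothetical helper: builds findMany args whose filter and sort match
// idx_product_published_created (published, "createdAt" DESC) WHERE "deletedAt" IS NULL
function productListQuery(page: number) {
  return {
    where: {
      published: true,
      deletedAt: null, // must match the partial index predicate
    },
    orderBy: { createdAt: "desc" as const }, // matches the index sort direction
    skip: (page - 1) * PAGE_SIZE,
    take: PAGE_SIZE,
  };
}

// Usage sketch: prisma.product.findMany(productListQuery(1))
```

Dropping the `deletedAt: null` filter here would not just change semantics; it would also push the planner off the partial index.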
```typescript
// ❌ N+1 query problem
const orders = await prisma.order.findMany();
for (const order of orders) {
  const items = await prisma.orderItem.findMany({
    where: { orderId: order.id }, // N additional queries!
  });
}

// ✅ Eager loading with include
const orders = await prisma.order.findMany({
  include: {
    items: {
      include: { product: true },
    },
    user: {
      select: { name: true, email: true },
    },
  },
});

// ✅ Even better — select only needed fields
const orders = await prisma.order.findMany({
  select: {
    id: true,
    total: true,
    status: true,
    createdAt: true,
    items: {
      select: {
        quantity: true,
        price: true,
        product: {
          select: { name: true, slug: true },
        },
      },
    },
  },
});
```
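When `include` is not an option (for example, the related rows come from a different model or service), the N+1 can also be collapsed by hand: fetch all related rows in one `where: { orderId: { in: [...] } }` query, then group them in memory. A minimal sketch — the `groupByOrderId` helper and the `OrderItem` type are illustrative, not part of Prisma:

```typescript
type OrderItem = { orderId: string; quantity: number; price: number };

// Group a flat list of items by orderId in a single pass
function groupByOrderId(items: OrderItem[]): Map<string, OrderItem[]> {
  const groups = new Map<string, OrderItem[]>();
  for (const item of items) {
    const bucket = groups.get(item.orderId);
    if (bucket) bucket.push(item);
    else groups.set(item.orderId, [item]);
  }
  return groups;
}

// Usage sketch:
// const items = await prisma.orderItem.findMany({
//   where: { orderId: { in: orders.map((o) => o.id) } },
// });
// const itemsByOrder = groupByOrderId(items); // 2 queries total, not N+1
```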
In serverless environments like Vercel, each function invocation can create a new database connection:
```typescript
import { PrismaClient } from "@prisma/client";

// Without pooling: 100 concurrent requests = 100 connections
// PostgreSQL default max_connections: 100
// Result: "too many connections" errors

// Solution 1: Prisma singleton (prevents hot-reload connection leaks in development)
const globalForPrisma = globalThis as unknown as {
  prisma: PrismaClient | undefined;
};

export const prisma = globalForPrisma.prisma ?? new PrismaClient();

if (process.env.NODE_ENV !== "production") {
  globalForPrisma.prisma = prisma;
}

// Solution 2: PgBouncer (production)
// DATABASE_URL="postgresql://user:pass@pgbouncer:6432/mydb?pgbouncer=true"

// Solution 3: Prisma Accelerate — managed connection pooling service
```
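In a serverless deployment, Prisma's own pool size matters too, because every function instance keeps its own pool. A commonly used configuration sketch, assuming PgBouncer sits in front of the database — `connection_limit` and `directUrl` are documented Prisma options, while the hostnames and credentials are placeholders:

```shell
# Route queries through PgBouncer and cap each function instance at one connection
DATABASE_URL="postgresql://user:pass@pgbouncer:6432/mydb?pgbouncer=true&connection_limit=1"

# Migrations need a direct connection that bypasses the pooler
# (referenced via `directUrl` in the schema.prisma datasource block)
DIRECT_URL="postgresql://user:pass@db:5432/mydb"
```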
```typescript
// Next.js data cache with revalidation
import { unstable_cache } from "next/cache";

const getCachedProducts = unstable_cache(
  async (categorySlug: string) => {
    return prisma.product.findMany({
      where: {
        category: { slug: categorySlug },
        published: true,
        deletedAt: null,
      },
      orderBy: { createdAt: "desc" },
      take: 20,
    });
  },
  ["products-by-category"],
  { revalidate: 300 } // 5 minutes
);
```
```sql
-- Find slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time / mean_exec_time)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

-- Index usage statistics
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC; -- Low scans = potentially unused index

-- Vacuum and analyze
VACUUM ANALYZE "Product";
VACUUM ANALYZE "Order";
```
```ini
# postgresql.conf optimizations for web applications (example: 1 GB RAM server)

# Memory
shared_buffers = 256MB        # ~25% of RAM
effective_cache_size = 768MB  # ~75% of RAM
work_mem = 4MB                # Per-operation sort/hash memory

# Connections
max_connections = 200
```
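`work_mem` is allocated per sort or hash operation, not per server, so the worst case multiplies quickly with connection count. A rough back-of-the-envelope check (the helper is illustrative, assuming one such operation per connection):

```typescript
// Worst-case sort/hash memory if every connection runs one work_mem-sized operation
function worstCaseWorkMemMB(maxConnections: number, workMemMB: number): number {
  return maxConnections * workMemMB;
}

// With the settings above: 200 connections x 4 MB = 800 MB of potential sort
// memory on a 1 GB box, which is why work_mem is kept small per operation.
```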
After applying these optimizations to our Developer Portfolio & SaaS Platform:
| Metric | Before | After |
|---|---|---|
| Average query time | 45ms | 8ms |
| P95 query time | 200ms | 35ms |
| Connection errors | ~5/day | 0 |
| Database CPU | 60% | 25% |
All these database patterns are implemented in our platform, so you can start from a production-ready PostgreSQL setup.
Production-grade databases, production-grade results. Our Developer Portfolio SaaS comes with optimized PostgreSQL schemas — starting at $299.