PostgreSQL Performance Tuning: Handle 10,000+ Queries Per Second

Your database is slow. Queries that took milliseconds last month now take seconds. Users are complaining. Your monitoring dashboard shows query times creeping up, connection pools maxing out, and CPU spiking during peak hours. You've thrown more RAM at it, upgraded the server—nothing helps.
PostgreSQL can handle massive loads, but only when configured correctly. Out-of-the-box settings are designed for compatibility, not performance. This guide shows you how to tune PostgreSQL to handle 10,000+ queries per second in production.
Why PostgreSQL Performance Matters
Most PostgreSQL performance problems stem from three issues: poor configuration, missing indexes, and inefficient queries. Default PostgreSQL settings assume you're running on a machine with 1GB of RAM from 2005. Modern servers have 16GB+, multiple cores, and fast SSDs—but PostgreSQL doesn't know that unless you tell it.
What you'll gain:
- Queries that run 10x faster with proper indexing
- Sustained throughput of 10,000+ queries/second
- Lower CPU and memory usage under load
- Reduced connection overhead
- Better cache hit rates
Infrastructure cost: ~$30/month on Elestio (4 CPU / 8GB RAM) handles most production workloads. Far cheaper than paying database specialists to firefight performance issues.
Quick Wins: Configuration Tuning
PostgreSQL's performance is controlled by postgresql.conf. These settings have the biggest impact:
Memory Configuration
# Total memory for caching data (25% of system RAM)
shared_buffers = 2GB
# Memory per sort/hash operation (a single query may use several; start at 2-4MB)
work_mem = 4MB
# Memory for maintenance operations
maintenance_work_mem = 512MB
# Memory for WAL buffers
wal_buffers = 16MB
# Planner's estimate of total cache available (shared_buffers + OS cache); not an allocation
effective_cache_size = 6GB
Why these matter:
- shared_buffers: PostgreSQL's main cache. Too low = disk reads. Too high = OS cache competition.
- work_mem: Sorting and hashing memory. Too low = disk temp files. Too high = OOM with many connections.
- effective_cache_size: Tells the planner how much OS cache is available. Affects query plan choices.
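You can verify what the running server is actually using by querying pg_settings; a minimal check that just mirrors the parameters above:
-- Show current values of the key memory settings
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'effective_cache_size');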
Connection and Worker Configuration
# Maximum client connections
max_connections = 100
# Background workers for parallel queries
max_worker_processes = 4
max_parallel_workers = 4
max_parallel_workers_per_gather = 2
Connection pooling is essential: each PostgreSQL connection is its own backend process, and every sort or hash operation in its queries can use up to work_mem on top of that base overhead. Use PgBouncer to pool connections:
# docker-compose.yml for PgBouncer
services:
  pgbouncer:
    image: edoburu/pgbouncer:latest
    environment:
      DB_HOST: postgres
      DB_NAME: myapp
      DB_USER: myapp
      DB_PASSWORD: secret
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000
      DEFAULT_POOL_SIZE: 25
    ports:
      - "6432:5432"
Write-Ahead Log (WAL) Tuning
# WAL performance vs durability trade-off
wal_level = replica
synchronous_commit = off # Fast writes, slight crash risk
checkpoint_timeout = 15min
max_wal_size = 2GB
min_wal_size = 512MB
Warning: synchronous_commit = off means a crash can lose the most recently committed transactions (up to roughly three times wal_writer_delay, a few hundred milliseconds with defaults), though the database itself stays consistent. For most applications this is acceptable. For financial systems, keep it on.
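If you want durable commits globally but can tolerate losing a few non-critical writes, synchronous_commit can also be relaxed per transaction instead of server-wide. A sketch, assuming a hypothetical audit_log table:
-- Keep synchronous_commit = on in postgresql.conf; relax it only for this transaction
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO audit_log (event, created_at) VALUES ('page_view', now());
COMMIT;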
Index Strategy: Make Queries Fast
Indexes are the difference between 10ms and 10-second queries. Every WHERE, JOIN, and ORDER BY clause should hit an index.
Identify Missing Indexes
-- Find slow queries that may be missing indexes
-- (column names for PostgreSQL 13+; older versions use total_time / mean_time)
SELECT query, calls, mean_exec_time AS avg_time_ms
FROM pg_stat_statements
WHERE mean_exec_time > 100 -- Queries averaging >100ms
ORDER BY total_exec_time DESC
LIMIT 10;
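pg_stat_statements is an extension: it must be listed in shared_preload_libraries (which requires a restart) and created in the database before the query above returns anything:
-- Requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Optionally reset counters so new measurements start clean
SELECT pg_stat_statements_reset();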
Index Types and When to Use Them
B-tree (default): Equality and range queries
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_created ON orders(created_at);
Hash: Equality only, faster than B-tree for exact matches
CREATE INDEX idx_sessions_token ON sessions USING HASH(token);
GiST: Full-text search, geometric data
CREATE INDEX idx_posts_search ON posts USING GiST(to_tsvector('english', content));
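For full-text search specifically, a GIN index is usually the better fit (faster lookups at the cost of slower writes); a sketch on the same posts table:
CREATE INDEX idx_posts_search_gin ON posts USING GIN(to_tsvector('english', content));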
Partial indexes: When you filter by common values
-- Only index active users (saves space)
CREATE INDEX idx_active_users ON users(last_seen) WHERE status = 'active';
Composite Indexes for Complex Queries
-- Query: WHERE user_id = ? AND created_at > ?
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);
-- Order matters! Left-most column must match WHERE clause
-- This index works for:
-- WHERE user_id = ?
-- WHERE user_id = ? AND created_at > ?
-- But NOT for:
-- WHERE created_at > ? (user_id not specified)
Query Optimization Patterns
Use EXPLAIN ANALYZE
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 123
AND status = 'completed'
ORDER BY created_at DESC
LIMIT 10;
Look for:
- Seq Scan = Table scan = Bad. Add index.
- Index Scan = Good.
- Nested Loop with high cost = Join problem. Check join indexes.
- Sort operations = The query's ORDER BY isn't covered by an index. Include the sort column(s) in the index (see the sketch below).
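For the example query above, one composite index can satisfy both the WHERE clause and the ORDER BY, letting the planner skip the sort entirely. A sketch using the same column names:
-- Covers WHERE user_id = ? AND status = ? ORDER BY created_at DESC
CREATE INDEX idx_orders_user_status_created
ON orders(user_id, status, created_at DESC);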
Avoid SELECT *
-- Bad: Fetches all columns (more data = slower)
SELECT * FROM users WHERE email = 'user@example.com';
-- Good: Fetch only needed columns
SELECT id, email, name FROM users WHERE email = 'user@example.com';
Use EXISTS Instead of COUNT
-- Bad: Counts every matching row just to check existence (slow)
SELECT COUNT(*) FROM orders WHERE user_id = 123;
-- ...then the application checks count > 0
-- Good: Stops at the first match (fast)
SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = 123);
Batch Operations
-- Bad: 1000 separate queries
FOR EACH user IN users:
UPDATE users SET last_seen = NOW() WHERE id = user.id;
-- Good: Single query
UPDATE users SET last_seen = NOW()
WHERE id = ANY(ARRAY[1,2,3,...1000]);
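When each row needs a different value, the same idea works by joining the UPDATE against a VALUES list; a sketch with made-up ids and timestamps:
-- One statement, per-row values
UPDATE users AS u
SET last_seen = v.last_seen
FROM (VALUES
  (1, TIMESTAMP '2024-01-01 10:00:00'),
  (2, TIMESTAMP '2024-01-01 10:05:00')
) AS v(id, last_seen)
WHERE u.id = v.id;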
Connection Pooling with PgBouncer
PostgreSQL connections are expensive. Each connection is a forked process consuming ~10MB. With 1000 concurrent users, you'd need 10GB just for connections.
Solution: PgBouncer pools connections. 1000 clients → 25 PostgreSQL connections.
# pgbouncer.ini
[databases]
myapp = host=postgres dbname=myapp
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
reserve_pool_size = 5
Pool modes:
- session: Client keeps a dedicated connection for the whole session (slowest)
- transaction: Connection released after each transaction (recommended)
- statement: Connection released after each statement (fastest, least compatible)
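PgBouncer also exposes an admin console for checking pool health. Connect with psql to the virtual pgbouncer database (port 6432 in the setup above, assuming your user is configured as an admin) and run:
SHOW POOLS; -- active and waiting clients, server connections per pool
SHOW STATS; -- query and transaction throughput per database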
Monitoring Performance
Track these metrics to catch problems early:
Cache Hit Ratio
SELECT
sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_ratio
FROM pg_statio_user_tables;
Target: >99%. Below 95% means shared_buffers is too small or queries aren't using indexes.
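Index reads deserve the same check; a low index hit ratio points at the same causes. A sketch using pg_statio_user_indexes:
SELECT
sum(idx_blks_hit) / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) AS index_hit_ratio
FROM pg_statio_user_indexes;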
Active Connections
SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
Alert if: Approaching max_connections. Add connection pooling.
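A quick breakdown by state shows whether the load is real work or idle sessions holding connections open:
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;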
Long-Running Queries
SELECT pid, age(clock_timestamp(), query_start), query
FROM pg_stat_activity
WHERE state = 'active'
AND query_start < now() - interval '5 minutes'
ORDER BY query_start;
Kill a long-running query by passing its pid from the query above:
SELECT pg_terminate_backend(pid);
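If you would rather cancel the statement but keep the session alive, pg_cancel_backend is the gentler option. A sketch that cancels everything active for more than five minutes (adjust the interval to taste):
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
AND query_start < now() - interval '5 minutes'
AND pid <> pg_backend_pid();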
Troubleshooting Common Issues
Problem: Slow queries after adding data
Solution: Run ANALYZE so the query planner's statistics reflect the new data
ANALYZE users;
-- Or for all tables:
VACUUM ANALYZE;
Problem: Queries fast in testing, slow in production
Solution: Development has 100 rows, production has 10 million. Query planner chooses wrong strategy. Check indexes and EXPLAIN ANALYZE on production data size.
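To see whether planner statistics have kept up with the data, check when each table was last analyzed; a sketch:
SELECT relname, n_live_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;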
Problem: High CPU during queries
Solution: Sequential scans or missing indexes. Check pg_stat_statements for slow queries.
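pg_stat_user_tables also shows which tables are being read by sequential scans instead of indexes; a sketch:
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;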
Problem: Connection pool exhausted
Solution: Increase PgBouncer's default_pool_size, or switch pool_mode to transaction so connections are released sooner.
Deploying PostgreSQL on Elestio
Ready to deploy a production-tuned PostgreSQL instance?
- Select PostgreSQL from Elestio marketplace
- Choose provider (Hetzner, DigitalOcean, Vultr) - 4 CPU / 8GB RAM minimum for high-performance workloads
- Click "Deploy Service"
Elestio deploys PostgreSQL with this stack in /opt/app:
version: '3'
services:
  postgres:
    image: elestio/postgres:latest
    shm_size: 1g
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${SOFTWARE_PASSWORD}
      PGDATA: /var/lib/postgresql/data
      TS_TUNE_MAX_CONNS: 100
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - '172.17.0.1:5432:5432'
Applying Performance Tuning
Edit postgresql.conf directly in the mounted data volume:
# Navigate to PostgreSQL data directory
cd /opt/app/data
# Edit config file
nano postgresql.conf
# Add performance tuning from this guide:
# - shared_buffers = 2GB
# - work_mem = 4MB
# - maintenance_work_mem = 512MB
# - effective_cache_size = 6GB
# - max_connections = 100
# Save and restart PostgreSQL
cd /opt/app
docker-compose restart postgres
Configuration persists in /opt/app/data across container restarts.
For custom domain setup with automated SSL, follow the official Elestio documentation.
Summary
PostgreSQL performance comes down to three things: proper configuration, smart indexes, and efficient queries. Default settings won't handle production load. Tune memory settings for your hardware, add indexes for common queries, use connection pooling, and monitor cache hit ratios.
Start with configuration tuning (biggest impact for minimal effort), add indexes where EXPLAIN ANALYZE shows sequential scans, and use PgBouncer to handle connection pooling. Your database will handle 10,000+ queries per second without breaking a sweat.
Ready to deploy production-grade PostgreSQL? Deploy PostgreSQL on Elestio and start handling massive query loads today.
Thanks for reading ❤️
Published by Michael Soto, Senior Content Strategist @ Elest.io