
How to Fix Slow PostgreSQL Queries

Key idea:

Slow Postgres queries are among the most common bottlenecks in web apps. Diagnostic tools: pg_stat_statements (top queries by total time), EXPLAIN ANALYZE (actual query plan), auto_explain (log plans automatically). Fix path: identify the slow query → add an index → rewrite (avoid N+1 and SELECT *) → partition (for tables over ~100M rows). Always measure p50 vs p99: p99 is what determines user experience.

Below: step-by-step, working examples, common pitfalls, FAQ.

Step-by-Step Setup

  1. Enable pg_stat_statements: add shared_preload_libraries = 'pg_stat_statements' to postgresql.conf, restart Postgres, then run CREATE EXTENSION pg_stat_statements;
  2. Find slow queries: SELECT query, mean_exec_time, calls FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10;
  3. For each query: EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
  4. Seq Scan on a large table → add index: CREATE INDEX CONCURRENTLY idx_name ON table(col);
  5. N+1 problem: ORM loads parent + loops fetch child. Fix: eager loading (JOIN, WITH)
  6. SELECT * → SELECT only needed columns (network + memory savings)
  7. Connection pooling: put PgBouncer in front of Postgres (cuts idle connections 10-100×)
  8. Partition large tables (>100M rows): PARTITION BY RANGE(date)
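Step 8's range partitioning can be sketched with declarative partitioning (Postgres 10+); the table and column names here are illustrative:

```sql
-- Parent table is partitioned by a date/time column.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- One partition per month; queries filtering on created_at
-- touch only the matching partitions (partition pruning).
CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
```

Old partitions can then be dropped with DROP TABLE instead of a slow DELETE.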

Working Examples

EXPLAIN ANALYZE:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM users WHERE email = $1;
    -- Seq Scan → bad, Index Scan → good

Top slow queries:

    SELECT substring(query, 1, 50) AS query_short,
           ROUND(mean_exec_time::numeric, 2) AS avg_ms,
           calls,
           ROUND(total_exec_time::numeric / 1000, 2) AS total_sec
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;

Add concurrent index:

    CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
    -- CONCURRENTLY: no table lock, safe for prod

Composite index:

    CREATE INDEX idx_orders_user_created ON orders(user_id, created_at DESC);
    -- For query: WHERE user_id = ? ORDER BY created_at DESC

auto_explain (logs slow queries):

    # postgresql.conf
    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = 1000   # ms
    auto_explain.log_analyze = true
    auto_explain.log_buffers = true
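The N+1 fix from step 5, shown as raw SQL (users/orders are illustrative names): instead of one query per parent row, fetch everything in a single round trip.

```sql
-- N+1 anti-pattern: 1 query for the parents, then 1 per parent in an app loop:
--   SELECT id, name FROM users WHERE active;
--   SELECT * FROM orders WHERE user_id = $1;   -- executed N times
-- Eager-loading fix: one JOIN, one round trip.
SELECT u.id, u.name, o.id AS order_id, o.total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.active;
```

ORMs express the same thing as eager loading (e.g. includes/joins options); the generated SQL should look like the JOIN above, not a loop of single-row lookups.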

Common Pitfalls

  • CREATE INDEX without CONCURRENTLY — table-level lock in production, blocks writes
  • Too many indexes — writes slow down (every INSERT updates all indexes)
  • Index not used → check: is column in WHERE/JOIN? Cost vs Seq Scan? Stats up-to-date (ANALYZE)?
  • EXPLAIN vs EXPLAIN ANALYZE: first — estimate, second — actual. Always ANALYZE for final decision
  • auto_explain misses queries faster than its log_min_duration threshold — lower the threshold when digging deeper (pg_stat_statements, by contrast, records all queries regardless)
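For the "index not used" and "too many indexes" pitfalls, the statistics views show how often each index is actually scanned:

```sql
-- Indexes with zero scans since the last stats reset are candidates
-- for removal: they add write overhead but serve no reads.
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;
```

If a recently added index shows idx_scan = 0, run ANALYZE on the table first: stale statistics can make the planner prefer a Seq Scan.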


Frequently Asked Questions

How to see real-time queries?

SELECT * FROM pg_stat_activity WHERE state = 'active'; shows live queries. Cancel one by PID: SELECT pg_cancel_backend(pid); (or pg_terminate_backend(pid) to drop the whole connection).
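pg_stat_activity also exposes timing, which helps spot long-running queries rather than just currently active ones:

```sql
-- Active queries running longer than 30 seconds, longest-running first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 60) AS query_start_text
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '30 seconds'
ORDER BY runtime DESC;
```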

When to VACUUM ANALYZE?

Autovacuum handles it automatically. Run it manually after a bulk insert/update to refresh the statistics the query planner relies on.
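After a bulk load, a minimal manual refresh looks like this (the orders table name is illustrative):

```sql
-- Reclaim dead rows and refresh planner statistics in one pass.
VACUUM (ANALYZE) orders;

-- If the load was pure INSERTs (no dead rows), statistics alone are
-- cheaper and sufficient:
ANALYZE orders;
```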

Connection pooling — PgBouncer or application-level?

An application pool (e.g. node-pg-pool) is per-app. PgBouncer is cluster-wide and shared across apps. For production, use PgBouncer (e.g. 1000 app connections multiplexed onto 25 Postgres backends).

Slow on 10 GB DB — indexes or scale?

Indexes first (often a 100× improvement). Scaling out (replicas, shards) comes only after query optimization is exhausted.