Slow Postgres queries are the most common bottleneck in web applications. Diagnostic tools: pg_stat_statements (top queries by total time), EXPLAIN ANALYZE (query plan), auto_explain (logs slow plans automatically). Fix path: identify the slow query → add an index → rewrite (avoid N+1 patterns and SELECT *) → partition (for tables over ~100M rows). Always measure: p50 vs p99; p99 is what matters for user experience.
Below: a step-by-step walkthrough, working examples, common pitfalls, and an FAQ.
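The "avoid N+1" step is language-agnostic. A minimal sketch using Python's stdlib sqlite3 as a stand-in for Postgres (table and column names are illustrative) to show the pattern and the fix:

```python
# N+1 vs JOIN: same result set, one round trip instead of N+1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# N+1: one query for users, then one query per user (N extra round trips).
n_plus_1 = []
for uid, name in conn.execute("SELECT id, name FROM users"):
    for (total,) in conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (uid,)
    ):
        n_plus_1.append((name, total))

# Fix: a single JOIN fetches the same rows in one query.
joined = conn.execute("""
    SELECT u.name, o.total
    FROM users u
    JOIN orders o ON o.user_id = u.id
    ORDER BY u.id, o.id
""").fetchall()

assert sorted(n_plus_1) == sorted(joined)  # identical data, far fewer queries
```

The same rewrite applies to ORMs: eager-load the association (one JOIN or one `WHERE user_id IN (...)` batch) instead of querying inside a loop.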
pg_stat_statements is the starting point: run CREATE EXTENSION pg_stat_statements; and restart Postgres with it in shared_preload_libraries, query it for the top offenders, run EXPLAIN (ANALYZE, BUFFERS) on each, then add indexes with CREATE INDEX CONCURRENTLY. Cheat sheet:

| Scenario | Command / config |
|---|---|
| EXPLAIN ANALYZE | EXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM users WHERE email = $1;\n# Seq Scan → bad, Index Scan → good |
| Top slow queries | SELECT\n  substring(query, 1, 50) AS query_short,\n  ROUND(mean_exec_time::numeric, 2) AS avg_ms,\n  calls,\n  ROUND(total_exec_time::numeric / 1000, 2) AS total_sec\nFROM pg_stat_statements\nORDER BY total_exec_time DESC LIMIT 10; |
| Add concurrent index | CREATE INDEX CONCURRENTLY idx_users_email ON users(email);\n# CONCURRENTLY — no table lock, safe for prod |
| Composite index | CREATE INDEX idx_orders_user_created ON orders(user_id, created_at DESC);\n# For queries like: WHERE user_id = ? ORDER BY created_at DESC |
| auto_explain (logs slow queries) | # postgresql.conf:\nshared_preload_libraries = 'auto_explain'\nauto_explain.log_min_duration = 1000 # ms\nauto_explain.log_analyze = true\nauto_explain.log_buffers = true |
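What to look for in EXPLAIN (ANALYZE, BUFFERS) output — an illustrative before/after for the email lookup above (costs, row counts, and timings are made up for the sketch):

```text
-- Before the index: the planner scans and filters the whole table
Seq Scan on users  (cost=0.00..25840.00 rows=1 width=72)
                   (actual time=312.4..312.4 rows=1 loops=1)
  Filter: (email = 'a@example.com'::text)
  Rows Removed by Filter: 999999

-- After CREATE INDEX CONCURRENTLY idx_users_email ON users(email):
Index Scan using idx_users_email on users  (cost=0.42..8.44 rows=1 width=72)
                                           (actual time=0.05..0.05 rows=1 loops=1)
  Index Cond: (email = 'a@example.com'::text)
```

The red flags are a Seq Scan on a large table combined with a high "Rows Removed by Filter" count; after the index, the node becomes an Index Scan with an Index Cond.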
<code>SELECT * FROM pg_stat_activity WHERE state = 'active';</code> shows queries running right now. With the PID you can cancel one: <code>SELECT pg_cancel_backend(pid);</code>
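Building on pg_cancel_backend, a sketch for finding and cancelling long-running statements in bulk (the 5-minute threshold is an assumption, tune it for your workload):

```sql
-- List statements that have been running longer than 5 minutes.
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes';

-- Cancel them. pg_cancel_backend stops only the current query;
-- pg_terminate_backend would drop the whole connection.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
  AND pid <> pg_backend_pid();  -- never cancel our own session
```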
VACUUM/ANALYZE: autovacuum handles this automatically. A manual run is needed after bulk inserts/updates to refresh the statistics the query planner relies on.
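A minimal sketch of the manual step and a sanity check on autovacuum activity (the `orders` table name is illustrative):

```sql
-- Refresh planner statistics after a bulk load.
ANALYZE orders;

-- Check when autovacuum/autoanalyze last touched each table;
-- NULLs first surfaces tables that were never analyzed.
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autoanalyze NULLS FIRST;
```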
Connection pooling: an application-level pool (node-pg-pool) is per-app; PgBouncer is cluster-wide and shared across apps. For production, use PgBouncer (e.g. 1000 app connections multiplexed onto 25 Postgres backends).
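A pgbouncer.ini sketch matching that 1000 → 25 example (database name, host, and port are assumptions):

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; multiplex many clients onto few backends
max_client_conn = 1000    ; app-side connections accepted
default_pool_size = 25    ; actual Postgres connections per db/user pair
```

Note that `pool_mode = transaction` breaks session-level features (prepared statements, advisory locks, SET without LOCAL); use `session` mode if your app depends on them.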
Indexes first: they often yield a 100× improvement on their own. Scaling out (read replicas, shards) comes only after exhausting query optimization.
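After adding an index, verify it is actually being used before moving on to scaling; a sketch against the statistics views (the `users` table name is illustrative):

```sql
-- idx_scan = 0 after a representative workload means the planner
-- never chose the index: wrong columns, wrong order, or stale stats.
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'users'
ORDER BY idx_scan DESC;
```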