Database Connection Pooling: How It Works and Best Practices
Why Connection Pooling Matters
Every database query requires a connection. Creating a new connection involves a TCP handshake, authentication, memory allocation on the database server, and session initialization. For MySQL, this process takes 5–50ms. For PostgreSQL, it can take 50–100ms due to process forking. Multiply this by thousands of requests per second, and connection overhead becomes a major bottleneck.
Connection pooling solves this by maintaining a set of pre-established connections that are shared across requests. Instead of creating and destroying connections, your application borrows a connection from the pool, uses it, and returns it. The result: dramatically lower latency, higher throughput, and reduced load on the database server.
How Connection Pools Work
A connection pool is a cache of database connections. The core lifecycle:
- Initialization — the pool creates a minimum number of connections at startup (the "minimum pool size")
- Borrow — when application code needs a database connection, it requests one from the pool. If an idle connection is available, it is returned immediately. If not, a new connection is created (up to the maximum pool size)
- Use — the application executes queries on the borrowed connection
- Return — after use, the connection is returned to the pool and becomes available for the next request
- Eviction — idle connections that exceed the idle timeout are closed to free resources
- Validation — before lending a connection, the pool may verify it is still alive (Ping or test query)
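The lifecycle above can be sketched in a few dozen lines of code. This is an illustrative, single-threaded Python sketch, not production code: a real pool also needs locking, connection validation, and max-lifetime rotation. The class name `SimplePool` and its API are our own invention:

```python
import queue
import time

class SimplePool:
    """Minimal connection pool illustrating borrow/return/eviction."""

    def __init__(self, create_conn, min_size=2, max_size=5, idle_timeout=300):
        self._create = create_conn          # factory for new connections
        self._max_size = max_size
        self._idle_timeout = idle_timeout
        self._idle = queue.Queue()          # (connection, returned_at) pairs
        self._total = 0                     # connections currently in existence
        for _ in range(min_size):           # initialization: pre-create minimum
            self._idle.put((self._create(), time.monotonic()))
            self._total += 1

    def borrow(self, timeout=5):
        while True:
            try:
                conn, returned_at = self._idle.get_nowait()
            except queue.Empty:
                if self._total < self._max_size:    # grow up to max_pool_size
                    self._total += 1
                    return self._create()
                # pool exhausted: wait for a return, up to connection_timeout
                conn, returned_at = self._idle.get(timeout=timeout)
            # eviction: drop connections idle longer than idle_timeout
            if time.monotonic() - returned_at > self._idle_timeout:
                self._total -= 1
                continue
            return conn

    def give_back(self, conn):
        self._idle.put((conn, time.monotonic()))
```

A real implementation would also validate the connection (ping or test query) before handing it out, as described in the validation step above.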
Pool Size Configuration
The most critical setting. Too small: requests queue waiting for connections, increasing latency. Too large: the database is overwhelmed by concurrent connections, each consuming memory and CPU.
| Parameter | Description | Typical Default |
|---|---|---|
| min_pool_size | Connections created at startup | 5–10 |
| max_pool_size | Maximum concurrent connections | 20–50 |
| idle_timeout | Close idle connections after this duration | 300s |
| connection_timeout | Max wait time to borrow a connection | 5–30s |
| max_lifetime | Close connections after this total age | 1800s |
The Optimal Pool Size Formula
A widely cited formula from the PostgreSQL wiki:
pool_size = (core_count * 2) + effective_spindle_count
Example for a 4-core server with SSD:
pool_size = (4 * 2) + 1 = 9
This is a starting point, not a final answer. The right pool size depends on query duration, concurrency patterns, and whether the workload is CPU-bound or I/O-bound. Profile under realistic load to find the optimum.
A common mistake is setting the pool too large. PostgreSQL performance degrades significantly beyond ~100 connections due to lock contention and context switching. MySQL handles more connections but still suffers beyond 200–300.
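As a quick sanity check, the wiki formula is trivial to encode (values match the 4-core SSD example above; an SSD is conventionally counted as one "spindle"):

```python
def suggested_pool_size(core_count: int, effective_spindle_count: int) -> int:
    """Starting-point formula cited from the PostgreSQL wiki:
    (cores * 2) + effective spindle count."""
    return core_count * 2 + effective_spindle_count

print(suggested_pool_size(4, 1))  # 4-core server with one SSD → 9
```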
Connection Pooling in PHP
PHP's request lifecycle creates a unique challenge: each request typically creates a new database connection and destroys it at the end. Native PHP does not have built-in connection pooling. Solutions include:
- Persistent connections — PDO::ATTR_PERSISTENT => true reuses connections across requests within the same worker process. Simple but limited: no pool size control, and connection state can leak between requests
- ProxySQL — a middleware proxy that pools connections between PHP and MySQL. Highly configurable and supports connection multiplexing
- MySQL Router — official MySQL middleware for connection management and load balancing
- Swoole/RoadRunner — PHP application servers with long-lived worker processes that can maintain connection pools across requests
// PHP PDO persistent connection
$pdo = new PDO(
'mysql:host=localhost;dbname=myapp',
'user',
'password',
[PDO::ATTR_PERSISTENT => true]
);
// With ProxySQL, connect to proxy instead of MySQL directly
$pdo = new PDO(
'mysql:host=127.0.0.1;port=6033;dbname=myapp',
'user',
'password'
);
Connection Pooling with PgBouncer
PgBouncer is the most popular connection pooler for PostgreSQL. It sits between the application and PostgreSQL, managing a pool of server connections:
# /etc/pgbouncer/pgbouncer.ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 300
server_lifetime = 3600
PgBouncer supports three pool modes:
- Session mode — one server connection per client session. Safest but least efficient
- Transaction mode — connection is assigned per transaction. Best for most web applications, often yielding an order of magnitude better connection utilization than session mode. Caveat: session-level features (session-scoped prepared statements, SET, advisory locks) do not work reliably
- Statement mode — connection is assigned per statement. Most aggressive but breaks multi-statement transactions
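The idea behind transaction mode — hold a connection only for the duration of one transaction — can also be applied at the application level with a context manager. A hedged Python sketch; `pool` is assumed to be any object exposing `borrow()`/`give_back()`, and `conn.execute()` stands in for your driver's API:

```python
from contextlib import contextmanager

@contextmanager
def transaction(pool):
    """Borrow a connection only for one transaction, releasing it
    immediately afterward — the same idea as PgBouncer's transaction mode."""
    conn = pool.borrow()
    try:
        conn.execute("BEGIN")
        yield conn
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    finally:
        pool.give_back(conn)   # connection is free between transactions
```

Between transactions the connection is back in the pool, so a handful of server connections can serve many concurrent clients.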
Monitoring Connection Pools
Metrics to watch:
- Active connections — currently in use. If consistently equal to max_pool_size, the pool is saturated
- Idle connections — waiting for work. Too many wastes database memory
- Wait queue length — requests waiting for a connection. Non-zero means the pool is too small or queries are too slow
- Connection creation rate — frequent new connections indicate churn, which defeats the purpose of pooling
- Average wait time — time requests spend waiting for a connection. Should be near zero
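These checks lend themselves to automation. A hedged sketch with illustrative thresholds — the parameter names mirror the metric list above, not any particular pool's stats API:

```python
def pool_health(active, idle, max_size, wait_queue, avg_wait_ms):
    """Flag the warning conditions described above. Thresholds are
    illustrative and should be tuned to your workload."""
    warnings = []
    if active >= max_size:
        warnings.append("saturated: all connections in use")
    if wait_queue > 0:
        warnings.append("requests waiting: pool too small or queries too slow")
    if idle > max_size // 2:
        warnings.append("many idle connections: consider shrinking the pool")
    if avg_wait_ms > 1:
        warnings.append("non-trivial borrow wait time")
    return warnings
```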
-- MySQL: check active connections
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Threads_running';
SHOW STATUS LIKE 'Max_used_connections';
-- PostgreSQL: check connections
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;
-- PgBouncer: check pool stats
SHOW POOLS;
SHOW STATS;
Common Pitfalls
- Connection leaks — borrowing a connection without returning it. Eventually the pool is exhausted and all requests hang. Always use try/finally or language-specific resource management
- Pool per microservice instance — 20 instances with pool size 50 = 1000 connections to the database. Coordinate pool sizes across all services
- No connection validation — stale connections (after database restart, network blip) cause errors. Enable validation with a test query or TCP check
- Ignoring max_lifetime — very long-lived connections can hold stale server state, leak memory, or miss configuration changes. Rotate connections periodically
- Long-running transactions — a transaction that holds a connection for seconds blocks other requests from using it, effectively reducing pool size
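The leak pitfall in particular is worth spelling out in code: every borrow must be paired with a return on every path, including errors. A minimal Python sketch, assuming the same hypothetical `borrow()`/`give_back()` pool API as before:

```python
def run_query(pool, sql):
    """Guarantee the connection is returned even if the query raises."""
    conn = pool.borrow()
    try:
        return conn.execute(sql)
    finally:
        pool.give_back(conn)   # runs on success and on exception alike
```

Most languages offer an idiom for this — try/finally, context managers, RAII, defer — and using it consistently is the single most effective defense against pool exhaustion.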
Best Practices
- Start with a small pool size (10–20) and increase based on monitoring
- Set connection timeout to prevent indefinite waits (5–10 seconds)
- Use transaction-mode pooling for web workloads (PgBouncer) or ProxySQL for MySQL
- Monitor pool saturation and wait queue metrics actively
- Ensure all code paths return connections (handle errors and exceptions)
- Coordinate pool sizes when running multiple application instances
- Test failover: what happens when the database restarts? Does the pool recover gracefully?
- Keep transactions short to maximize connection availability
Conclusion
Connection pooling is one of the highest-impact, lowest-effort performance optimizations for database-backed applications. It reduces latency, increases throughput, and protects the database from connection storms. Choose the right pooling strategy for your stack, configure pool sizes carefully, monitor for saturation, and avoid connection leaks. The payoff is immediate and measurable.