
Postgres Connection Pooling with PgBouncer

A Postgres connection is not a lightweight socket. It spawns a backend OS process, allocates private memory, and requires a full authentication round-trip. Running 500 application threads each with a direct connection degrades performance even when most are idle. PgBouncer solves this by multiplexing many application connections onto a small pool of real server connections.

Why connections are expensive

Each Postgres connection corresponds to a postgres backend process. That process allocates:

  • A private memory context: shared buffers are shared, but each backend has its own work_mem, local sort buffers, hash tables, and plan cache — typically 5–10 MB per connection at rest.
  • OS resources: file descriptors, a stack, and kernel scheduling overhead. At 500 connections, context-switch overhead is measurable.
  • Lock manager entries: even idle connections hold slots in the lock manager hash table.

The practical limit varies by server RAM, but most production databases become unstable well before max_connections = 500. The inflection point is usually around 100–200 connections on a 16 GB instance. Beyond that, Postgres spends more time managing connections than executing queries.

PgBouncer pooling modes

PgBouncer sits between your application and Postgres, maintaining a small pool of real server connections and multiplexing many application connections onto them. It supports three pooling modes:

Transaction mode (recommended)

A server connection is held for the duration of a single transaction, then returned to the pool. This is the most efficient mode — a server connection is only occupied while work is actually happening.

Trade-off: session-level state does not survive across transactions. SET variables, advisory locks, and LISTEN/NOTIFY subscriptions are silently dropped when the connection is returned to the pool. Prepared statements are also lost unless you run PgBouncer 1.21+ with max_prepared_statements set, which lets PgBouncer track protocol-level prepared statements across server connections. Applications must be written with this in mind.
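When a per-query setting is genuinely needed under transaction pooling, SET LOCAL is the safe pattern: it scopes the value to the current transaction and reverts at COMMIT or ROLLBACK, so nothing leaks onto the next client that borrows the server connection. A minimal sketch (the accounts table and the 5s timeout are illustrative):

```sql
BEGIN;
-- SET LOCAL is transaction-scoped: it reverts automatically at COMMIT/ROLLBACK,
-- so no state escapes onto the pooled server connection
SET LOCAL statement_timeout = '5s';
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```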

Session mode

A server connection is held for the entire application session. This preserves all session state but provides no multiplexing benefit — it's essentially a proxy with connection limiting. Use session mode only when you need session-level features such as session-scoped SET values, advisory locks held across transactions, or LISTEN/NOTIFY.

Statement mode

A server connection is held for a single statement. Auto-commit only — no multi-statement transactions. Rarely appropriate for production applications; mostly useful for read-only analytics workloads.

Pool sizing formula

The classic formula for server connection pool size, popularized by HikariCP's pool-sizing analysis (and originally from the PostgreSQL wiki):

pool_size = (core_count * 2) + effective_spindles

For a 4-core server with SSD (1 effective spindle): pool size = 9. For an 8-core server with SSD: 17. These numbers seem low but they reflect CPU saturation, not connection count. The counterintuitive insight: adding more connections beyond this point adds context-switch overhead without improving throughput.
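The arithmetic can be sketched as a small helper (the function name and the SSD-as-one-spindle convention are illustrative, not part of PgBouncer):

```python
def pool_size(core_count: int, effective_spindles: int = 1) -> int:
    # HikariCP-style sizing: an SSD is commonly counted as one effective spindle
    return (core_count * 2) + effective_spindles

print(pool_size(4))  # 4-core SSD server -> 9
print(pool_size(8))  # 8-core SSD server -> 17
```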

In practice, add headroom for bursts and monitoring connections:

# PgBouncer pgbouncer.ini
[databases]
mydb = host=localhost port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
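To confirm the pool behaves as configured, PgBouncer exposes an admin console as a virtual pgbouncer database. Connecting with psql on PgBouncer's port (6432 here, the usual listen_port; the admin user comes from your admin_users setting) lets you inspect it:

```sql
-- psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;   -- client/server connection counts and waiting clients per database
SHOW STATS;   -- transaction and query throughput per database
```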

What to watch in pg_stat_activity

With a pool in front of Postgres, most server connections will show idle or idle in transaction in pg_stat_activity. Watch for:

SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = 'mydb'
GROUP BY state;

A large count of idle in transaction connections indicates application code that opens a transaction, does slow work (HTTP calls, file I/O), and hasn't committed. In transaction mode, PgBouncer can't return these connections to the pool until the transaction closes. Set a tight idle_in_transaction_session_timeout to terminate them automatically. A session-level SET won't reliably stick through a transaction-mode pooler, so apply it at the database level:

ALTER DATABASE mydb SET idle_in_transaction_session_timeout = '30s';
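To see which sessions are stuck before (or instead of) letting the timeout kill them, sort open transactions by how long they have been idle:

```sql
SELECT pid, usename, now() - state_change AS idle_for, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY idle_for DESC;
```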

Application-level pooling vs PgBouncer

ORMs and database drivers ship client-side connection pools (SQLAlchemy's pool, HikariCP, pg's Pool, asyncpg). These are useful within a single process but don't help when you scale horizontally: 10 app replicas each with a 10-connection pool still means 100 server connections.

PgBouncer sits outside the application process and is shared across all replicas. It remains the right tool for horizontal scaling. Use both: a small application-level pool (3–5 connections per replica) combined with PgBouncer keeps round-trip latency low while capping total server connections.
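A quick back-of-the-envelope check of the layered setup (the replica count, per-replica pool size, and the default_pool_size = 20 cap are illustrative numbers, not recommendations):

```python
def total_client_connections(replicas: int, per_replica_pool: int) -> int:
    # connections arriving at PgBouncer from all app replicas combined
    return replicas * per_replica_pool

clients = total_client_connections(10, 5)   # 10 replicas x 5-connection app pools
server_conns = min(clients, 20)             # PgBouncer caps at default_pool_size
print(clients, server_conns)  # 50 client connections, only 20 server connections
```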

Quick checklist

  • Check pg_stat_activity — if active connections exceed 100, add a pooler.
  • Use transaction mode for stateless applications; session mode only when session state is required.
  • Target server pool size at (cores × 2) + spindles, not connection count.
  • Set idle_in_transaction_session_timeout to catch runaway transactions.
  • Deploy PgBouncer outside the app process so it's shared across replicas.
