PostgreSQL Does Not Handle Connections Like Other Databases
Most databases keep a connection pool internally. PostgreSQL does not.
Every connection spawns a backend process. At 200 connections you have 200 processes. At 500 you have a memory and scheduling problem. PostgreSQL is designed to run behind a connection pooler — PgBouncer, pgpool-II, or similar — not to accept thousands of direct connections from application servers.
When connections pile up, new clients are refused, existing sessions slow down, and the server can become unstable.
This script shows exactly where you stand — active vs idle connections, which users and applications are consuming the most, and how close you are to the configured limit.
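The full script is not reproduced here, but a minimal sketch of the kind of summary query it runs could look like this (the column aliases are chosen to match the table below; the real script may differ):

```sql
-- Connection ceiling and current usage in a single pass over pg_stat_activity.
-- Aliases match the summary columns described below.
SELECT
    current_setting('max_connections')::int                AS max_connections,
    current_setting('superuser_reserved_connections')::int AS superuser_reserved,
    count(*)                                               AS used_connections,
    current_setting('max_connections')::int - count(*)     AS connections_remaining,
    count(*) FILTER (WHERE state = 'active')               AS active_connections,
    count(*) FILTER (WHERE state = 'idle')                 AS idle_connections,
    count(*) FILTER (WHERE state = 'idle in transaction')  AS idle_in_transaction
FROM pg_stat_activity;
```

`FILTER` clauses on aggregates require PostgreSQL 9.4 or later; on older versions the same counts can be built with `sum(CASE WHEN … THEN 1 ELSE 0 END)`.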
Reading the Results
Connection Summary
| Column | What It Tells You |
|---|---|
| `max_connections` | The configured connection ceiling for this server |
| `used_connections` | Current total of open connections |
| `connections_remaining` | Headroom left before the ceiling |
| `superuser_reserved` | Slots reserved for superuser access (`superuser_reserved_connections`) |
| `active_connections` | Sessions actively running a query |
| `idle_connections` | Sessions connected but doing nothing |
| `idle_in_transaction` | Sessions inside a transaction but not currently executing — dangerous |
Per User / Application Breakdown
| Column | What It Tells You |
|---|---|
| `usename` | Database user |
| `application_name` | Application name reported in the connection string |
| `state` | `active`, `idle`, or `idle in transaction` |
| `count` | Number of connections in that state for that user/app |
| `max_conn_for_user` | Per-user connection limit (`pg_authid.rolconnlimit`; -1 = no limit) |
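A sketch of the breakdown query might look like the following. It reads the per-role limit from `pg_roles` rather than `pg_authid`, since `pg_roles` exposes the same `rolconnlimit` column without requiring superuser access (background workers with a NULL `usename` are dropped by the join):

```sql
-- Connections grouped by user, application, and state,
-- joined against each role's own connection limit.
SELECT
    a.usename,
    a.application_name,
    a.state,
    count(*)        AS count,
    r.rolconnlimit  AS max_conn_for_user  -- -1 means no per-role limit
FROM pg_stat_activity a
JOIN pg_roles r ON r.rolname = a.usename
GROUP BY a.usename, a.application_name, a.state, r.rolconnlimit
ORDER BY count(*) DESC;
```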
What to Watch For
| Signal | What It Means |
|---|---|
| `connections_remaining` < 10 | Critically close to the ceiling — new connections will be refused |
| High `idle_in_transaction` count | Transactions left open without commit or rollback — they hold locks and waste slots |
| `idle_connections` >> `active_connections` | Connection pool too large or leaking — connections not being returned |
| One application user consuming most slots | Connection pool misconfigured or not limiting per-service connections |
| `used_connections` near `max_connections` | Immediate risk — scale the pooler or raise `max_connections` (requires a restart) |
| `application_name` showing direct app connections | Applications bypassing the pooler — route them through PgBouncer |
Connection Pooling Checklist
If you are seeing connection pressure, the fix is rarely raising `max_connections` (every connection costs memory, used or not). Instead:
- Route all application connections through PgBouncer in transaction pooling mode
- Set a `CONNECTION LIMIT` on each application role: `ALTER ROLE app_user CONNECTION LIMIT 50;`
- Reduce idle timeouts in your application connection pool
- Monitor `idle in transaction` sessions and kill long-running ones: `SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle in transaction' AND query_start < NOW() - INTERVAL '5 minutes';`
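For the first item, a minimal PgBouncer configuration in transaction pooling mode might look like the fragment below. The database name, host, paths, and pool sizes are placeholders to adapt, not recommendations:

```ini
; pgbouncer.ini — illustrative values only
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction     ; server connection is shared per transaction
default_pool_size = 20      ; server connections per user/database pair
max_client_conn = 1000      ; clients can far exceed server connections
```

The point of `pool_mode = transaction` is exactly the mismatch described above: a thousand client connections on the pooler side map onto a few dozen backend processes on the PostgreSQL side.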
Gareth Winterman