Part 2 - PgBouncer to ProxySQL: A Brief Feature Comparison
This is Part 2 of a series comparing PgBouncer and ProxySQL as PostgreSQL proxies. Part 1 argued for replacing the typical scattered middle tier (PgBouncer for pooling, with routing logic in application code, an HAProxy in front for failover-aware load balancing, and separate exporters for metrics) with a ProxySQL fleet that absorbs most of that work into the proxy itself. That covered the case for the change. This post is about what each tool actually does once you put them next to each other.
Connection pooling is the only feature both tools really share. Almost everything else in this comparison covers responsibilities PgBouncer deliberately leaves alone, and that ProxySQL pulls into the proxy itself.
At a glance
A capability summary first; the rest of the post walks through the rows that matter most. Each row is a production responsibility, and the columns describe what each tool actually does today.
| Capability | PgBouncer 1.25.x | ProxySQL 3.0.x |
|---|---|---|
| Pooling behaviors | Session / transaction / statement modes | Statement-level multiplexing, sticky on state, fast_forward |
| Pooling scope | Global, with per-database overrides | Per-user, per-session decision, per-query override |
| Extended Query / prepared statements with multiplexing | Per-session cache, opt-in via max_prepared_statements (since 1.21) | Global prepared-statement cache; query rules re-evaluated at every Bind/Execute, so a single client’s Parse/Bind/Execute can each land on different backends — see extended-query lifecycle, extended-query protocol, prepared-statement cache |
| Read/write splitting in the proxy | No — lives in app or HAProxy | Yes, via query rules |
| Topology awareness | No | Yes — hostgroups + active monitoring |
| Automatic writer/reader move on promotion | No | Yes — pg_is_in_recovery() driven |
| Backend health checks | One probe (server_check_query) | Four probes — connect, ping, read-only, replication lag |
| Replication-lag-aware shunning | No | Yes |
| Retry on connection-error query failure | No | Yes — configurable retries |
| Live query rules (block, route, rewrite, cache, throttle) | No | Yes |
| Per-query digest stats with attribution | No | Yes — stats_pgsql_query_digest |
| Per-rule and per-user counters | No | Yes |
| Built-in Prometheus | Sidecar pgbouncer_exporter | Endpoint built in |
| Per-hostgroup tuning (multiplex toggle, throttling, pool-fill) | No | Yes — pgsql_hostgroup_attributes |
| Audit / event / slow-query log at the proxy | No | Yes |
| Native cluster config sync | External (deploy tool) | Yes — proxysql_servers, checksum/pull sync |
| Staged config with runtime apply | INI + reload | MEMORY → RUNTIME → DISK |
| Hot TLS certificate rotation | Reload required | PROXYSQL RELOAD TLS |
| SSL-to-backend configuration layers | Global only (server_tls_* for all backends) | Global default (pgsql-ssl_p2s_*) plus per-backend overrides (pgsql_servers_ssl_params) |
| Authentication methods | Trust, MD5, SCRAM-SHA-256, password passthrough, LDAP, HBA file | Cleartext, MD5, SCRAM-SHA-256 — credentials live in pgsql_users (no passthrough) |
| LISTEN / NOTIFY | Yes | No on multiplex (planned) — yes on fast_forward sessions |
A note on fast_forward: ProxySQL also offers a per-user pass-through mode for traffic that doesn’t need rule evaluation — common cases are bulk-loader users and synthetic monitoring agents where the per-query overhead isn’t welcome. Traffic on those sessions still flows through ProxySQL, but query rules, rewrite, cache, and most proxy-side query visibility do not apply to it. FFTO (Fast Forward Traffic Observer) puts the per-query visibility piece back — opt-in in the 3.1.x line and on by default from 4.0.x.
The sections that follow unpack each row that matters, in priority order.
Routing and topology awareness
PgBouncer doesn’t look at the queries it forwards. It pools connections to whatever backends you’ve listed in the INI, and that’s the whole contract. Anything resembling a read/write split — knowing the difference between a SELECT and an UPDATE, or knowing which host is the writer right now — has to live somewhere outside the pooler. Most production stacks put that knowledge in application code, in an ORM router, in a separate HAProxy with a pg_is_in_recovery() health-check script, or some combination of all three. PgBouncer’s own connection-level distribution (host=h1,h2,h3 in a database entry) is round-robin at the TCP level; it has no way to tell SELECT apart from UPDATE because it isn’t a protocol-aware proxy.
ProxySQL parses the Postgres wire protocol. Each statement is matched against pgsql_query_rules and dispatched to a hostgroup. The Monitor module polls pg_is_in_recovery() against each backend on its own schedule, and pgsql_replication_hostgroups keeps the writer-and-reader mapping aligned with backend role automatically. When the HA orchestrator promotes standby-2, ProxySQL picks up the role change within seconds and moves the new primary into the writer hostgroup. Applications keep talking to the same proxy endpoint; in-flight work on the old primary can still fail the way it always would during failover, but the topology change itself doesn’t ripple out into application configuration.
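As a sketch of what that dispatch looks like in rule form, assuming hostgroup 0 holds the writer and hostgroup 1 the readers (the hostgroup IDs, rule_ids, and digest patterns here are illustrative, not prescribed):

```sql
-- Illustrative read/write split. Anything unmatched falls through to the
-- default hostgroup (the writer, in this assumed layout).
INSERT INTO pgsql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT.*FOR UPDATE', 0, 1),   -- locking reads stay on the writer
       (20, 1, '^SELECT',             1, 1);   -- plain reads go to the readers
LOAD PGSQL QUERY RULES TO RUNTIME;
SAVE PGSQL QUERY RULES TO DISK;
```

Rule order matters: the FOR UPDATE pattern sits at a lower rule_id so locking reads are matched before the general SELECT rule can claim them.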
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PB_ROUTE["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer stack <br/></span>"]
direction TB
PB_APP["App / ORM <br/>routing logic lives here "]
PB_LB["HAProxy or custom checks <br/>decides writer vs reader "]
PB_W["PgBouncer writer pool "]
PB_R["PgBouncer reader pool "]
PB_PRI[("Primary ")]
PB_R1[("Replica 1 ")]
PB_R2[("Replica 2 ")]
PB_PAT["HA orchestrator "]
PB_EXP["External exporters <br/>pool + backend metrics "]
PB_APP --> PB_LB
PB_LB --> PB_W --> PB_PRI
PB_LB --> PB_R
PB_R --> PB_R1
PB_R --> PB_R2
PB_PAT -.->|observes| PB_PRI
PB_PAT -.->|observes| PB_R1
PB_PAT -.->|observes| PB_R2
PB_EXP -.->|scrapes| PB_W
PB_EXP -.->|scrapes| PB_R
PB_EXP -.->|scrapes| PB_PRI
PB_EXP -.->|scrapes| PB_R1
PB_EXP -.->|scrapes| PB_R2
end
classDef app fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef proxy fill:#dbeafe,stroke:#2563eb,stroke-width:2px,color:#1e3a8a;
classDef data fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
classDef external fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#4c1d95;
class PB_APP app;
class PB_LB,PB_W,PB_R proxy;
class PB_PRI,PB_R1,PB_R2 data;
class PB_PAT,PB_EXP external;
style PB_ROUTE fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PS_ROUTE["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL stack <br/></span>"]
direction TB
PS_APP["App <br/>single proxy endpoint "]
subgraph PS_BOX["ProxySQL "]
direction TB
PS_RULES["Query rules "]
PS_HG["Writer / reader <br/>hostgroups "]
PS_POOL["Pooling and <br/>multiplexing "]
PS_MON["Monitor <br/>role + lag checks "]
PS_RULES --> PS_HG --> PS_POOL
PS_MON -.->|updates| PS_HG
end
PS_PRI[("Primary ")]
PS_R1[("Replica 1 ")]
PS_R2[("Replica 2 ")]
PS_PAT["HA orchestrator "]
PS_APP --> PS_RULES
PS_POOL --> PS_PRI
PS_POOL --> PS_R1
PS_POOL --> PS_R2
PS_PAT -.->|promotes| PS_PRI
PS_PAT -.->|promotes| PS_R1
PS_PAT -.->|promotes| PS_R2
end
classDef app fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef module fill:#d5f5ff,stroke:#0891b2,stroke-width:2px,color:#164e63;
classDef data fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
classDef external fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#4c1d95;
class PS_APP app;
class PS_RULES,PS_HG,PS_POOL,PS_MON module;
class PS_PRI,PS_R1,PS_R2 data;
class PS_PAT external;
style PS_ROUTE fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
style PS_BOX fill:#e0f2fe,stroke:#0891b2,stroke-width:2px,color:#164e63
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The operational shift is that applications stop needing to carry topology knowledge. One endpoint and one driver are enough; routing classes, ORM annotations, and middleware libraries become ProxySQL’s job rather than application code’s. We’ve watched the same pattern play out on the MySQL side for years — once routing moves into the proxy, the database team gets the freedom to add replicas, drain a node, or roll a primary failover without coordinating those changes with every application that holds a connection string.
Live traffic policy
PgBouncer has no concept of a query rule. Once a connection is authenticated, the queries on it are forwarded straight through to the backend. When something is hammering production from the wrong side of a deploy boundary, the available options reduce to shipping an application change, terminating sessions and hoping the client reconnects without immediately re-issuing the same statement, or blocking the user at the database layer through pg_hba.conf and a PostgreSQL reload. Each of those works, eventually. None of them are fast, and none of them really belong in the proxy layer.
ProxySQL’s pgsql_query_rules table turns traffic policy into rows you can edit at runtime, rather than something you have to ship and deploy:
| Action | Column | Use case |
|---|---|---|
| Block with custom error | error_msg | Incident-time containment, deprecation enforcement |
| Route to a hostgroup | destination_hostgroup | Read/write split, per-tenant isolation |
| Rewrite the query | match_pattern + replace_pattern | Force a LIMIT, rename a deprecated table, inject a hint |
| Cache the result | cache_ttl | Absorb load on hot, cacheable reads |
| Apply forwarding delay | delay | Soft backpressure on a misbehaving pattern |
| Per-query timeout | timeout | Enforce SLO without changing PG-side statement_timeout |
| Per-rule retry budget | retries | Override the default retry-on-failure for specific patterns |
| Override multiplexing per query | multiplex | Make one query pattern sticky (or force back to multiplexable) without changing the rest of the session |
A rule change is an INSERT INTO pgsql_query_rules followed by LOAD PGSQL QUERY RULES TO RUNTIME. The new rule is live as soon as the LOAD finishes; nothing restarts.
The difference at incident time is real. With a PgBouncer stack, the “this query is killing us” recovery loop usually involves finding the owning team, shipping the fix, watching it propagate, and reloading config — well under an hour if the team is responsive, but rarely under 15 minutes. On a ProxySQL stack, the same loop is a few minutes’ work: identify the digest, insert a rule with error_msg, run LOAD, and breathe again. Once the underlying fix lands later, DELETE the rule and run LOAD a second time.
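That containment loop can be sketched as admin SQL; the rule_id, digest pattern, and incident tag below are hypothetical:

```sql
-- Hypothetical containment rule: refuse one hot digest with a custom error.
INSERT INTO pgsql_query_rules (rule_id, active, match_digest, error_msg, apply)
VALUES (5, 1, '^DELETE FROM orders WHERE',
        'blocked by DBA during incident INC-1234', 1);
LOAD PGSQL QUERY RULES TO RUNTIME;   -- live immediately, nothing restarts

-- Later, once the real fix has shipped:
DELETE FROM pgsql_query_rules WHERE rule_id = 5;
LOAD PGSQL QUERY RULES TO RUNTIME;
```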
The query cache is TTL-based and lives in the same process as the rest of the routing decision, so there’s no extra service to operate alongside it. Two patterns fit it well: incident absorption (a query suddenly gets hot, you give it a 30-second TTL while the real fix ships) and reference-data lookups whose refresh cadence is already predictable.
Backend monitoring
PgBouncer’s backend monitoring is a single optional probe: server_check_query, defaulting to SELECT 1;, running at the interval set by server_check_delay. It’s enough to notice a dead backend and stop sending it traffic, which is the whole job PgBouncer is trying to do here.
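For reference, that entire contract fits in two lines of pgbouncer.ini (the probe shown is the default; 30 seconds is the documented default interval):

```ini
; PgBouncer's single backend probe, in full.
[pgbouncer]
server_check_query = SELECT 1;
server_check_delay = 30        ; seconds between liveness checks
```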
ProxySQL runs four independent probes, each on its own thread, each with its own interval and timeout:
| Probe | What it checks | What it drives |
|---|---|---|
| Connect | TCP + auth still works | Mark backend SHUNNED on failure |
| Ping | Backend responds | Mark backend SHUNNED on failure |
| Read-only | pg_is_in_recovery() value | Move backend between writer and reader hostgroups |
| Replication lag | LSN behind primary | Mark SHUNNED_REPLICATION_LAG over max_replication_lag |
Each probe writes to its own history table: pgsql_server_connect_log, pgsql_server_ping_log, pgsql_server_read_only_log, and pgsql_server_replication_lag_log. When a backend gets shunned at 3 a.m., the question of why it happened is answerable with a SELECT against the matching log table, with timestamps, latency, and whatever value the probe was looking at when the decision was made.
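A hedged sketch of that investigation against the ping history (column names assumed to mirror ProxySQL's MySQL-side monitor schema; the hostname is hypothetical):

```sql
-- Why did this backend get shunned? Ask the matching history table.
SELECT hostname, port, time_start_us, ping_success_time_us, ping_error
FROM monitor.pgsql_server_ping_log
WHERE hostname = 'replica-1.internal'   -- hypothetical backend name
ORDER BY time_start_us DESC
LIMIT 10;
```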
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PB_MON["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer monitoring <br/></span>"]
direction TB
PB_POOL["PgBouncer "]
PB_CHECK["server_check_query <br/>one liveness probe "]
PB_EXT["External observability <br/>exporters + Prometheus + alerts "]
PB_DB[("Backend ")]
PB_POOL --> PB_CHECK --> PB_DB
PB_EXT -.->|scrapes| PB_POOL
PB_EXT -.->|scrapes| PB_DB
end
classDef proxy fill:#dbeafe,stroke:#2563eb,stroke-width:2px,color:#1e3a8a;
classDef external fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef db fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
class PB_POOL proxy;
class PB_CHECK,PB_EXT external;
class PB_DB db;
style PB_MON fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PS_MON["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL monitoring <br/></span>"]
direction TB
PS_DB[("Backend set ")]
subgraph PS_MONITOR["Monitor module "]
direction LR
M_CONN["Connect "]
M_PING["Ping "]
M_ROLE["Read-only role "]
M_LAG["Replication lag "]
end
subgraph PS_LOGS["monitor schema "]
direction LR
L_CONN["pgsql_server_connect_log "]
L_PING["pgsql_server_ping_log "]
L_ROLE["pgsql_server_read_only_log "]
L_LAG["pgsql_server_replication_lag_log "]
end
PS_ACTION["Automatic reactions <br/>shun, hostgroup move, lag shun "]
PS_METRICS["Built-in metrics <br/>REST API / Prometheus "]
M_CONN --> PS_DB
M_PING --> PS_DB
M_ROLE --> PS_DB
M_LAG --> PS_DB
M_CONN --> L_CONN --> PS_ACTION
M_PING --> L_PING --> PS_ACTION
M_ROLE --> L_ROLE --> PS_ACTION
M_LAG --> L_LAG --> PS_ACTION
PS_ACTION -.-> PS_METRICS
end
classDef probe fill:#d5f5ff,stroke:#0891b2,stroke-width:2px,color:#164e63;
classDef log fill:#eff6ff,stroke:#60a5fa,stroke-width:2px,color:#1e40af;
classDef action fill:#dcfce7,stroke:#10b981,stroke-width:2px,color:#065f46;
classDef db fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
class M_CONN,M_PING,M_ROLE,M_LAG probe;
class L_CONN,L_PING,L_ROLE,L_LAG log;
class PS_ACTION,PS_METRICS action;
class PS_DB db;
style PS_MON fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
style PS_MONITOR fill:#fffaf0,stroke:#f59e0b,stroke-width:2px,color:#7c2d12
style PS_LOGS fill:#eef6ff,stroke:#60a5fa,stroke-width:2px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The replication-lag probe is the one with no equivalent on the PgBouncer side. ProxySQL uses it to automatically stop sending reads to any replica that’s fallen behind a configurable threshold. A PgBouncer + HAProxy stack handles the same scenario by tolerating stale reads, by adding a “force-to-primary” annotation in the application’s routing layer, or by writing a separate health-check script that calls pg_last_wal_replay_lsn() itself. None of those are free, and stale reads keep showing up as a recurring P2 source on stacks that haven’t built one of them.
Observability — per-query, per-user, per-rule
PgBouncer’s observability is pool-level. Commands like SHOW POOLS, SHOW STATS, SHOW SERVERS, SHOW CLIENTS, and SHOW DATABASES (with SHOW LISTS, SHOW MEM, and a few others rounding out the admin surface) give you the operational picture: how many clients are connected, how many backend slots are in use, the total bytes in and out, and the configured pool topology. None of that answers which queries are running, who’s running them, or which queries are slow. The production answer to those questions usually comes from enabling pg_stat_statements on every backend and running an aggregator across the fleet.
ProxySQL records per-distinct-query statistics automatically in stats_pgsql_query_digest. A single SQL query against that table answers what’s slow, who’s running it, and against which hostgroup, fleet-wide:
SELECT username, database, hostgroup, count_star,
sum_time / count_star AS avg_us,
substr(digest_text, 1, 60) AS query_sample
FROM stats_pgsql_query_digest
ORDER BY sum_time DESC LIMIT 20;
The digest is the most-used surface, but it isn’t the only one. Other useful tables and endpoints:
- stats_pgsql_query_rules — per-rule hit counters, so you can see which rules are firing
- stats_pgsql_users — per-user connection counters
- stats_pgsql_errors — per-backend, per-user, per-database error counters with the last error text
- stats_pgsql_free_connections — full state of every backend connection in the pool
- stats_pgsql_processlist — proxy-side equivalent of pg_stat_activity
- A built-in Prometheus endpoint on port 6070 (enable via admin-restapi_enabled)
pg_stat_statements and the ProxySQL digest cover overlapping but different ground; one is the database’s view of what ran, the other is the proxy’s. For the day-to-day question of who’s putting load on the cluster right now, the proxy-side view tends to be quicker to query and easier to attribute, and it doesn’t need a separate exporter to expose the data to Prometheus.
A PgBouncer-based stack that wants comparable observability ends up running three things: pgbouncer_exporter for pool metrics, pg_exporter for database metrics, and per-instance pg_stat_statements aggregation across the fleet. Each one has its own version skew against the thing it’s monitoring. ProxySQL collapses those three into one process and one schema.
Hostgroup-level tuning
PgBouncer exposes its tuning controls at the database entry: pool_mode, default_pool_size, max_db_connections, and reserve_pool_size are all set per database in the INI, and every client connecting to that database gets the same treatment.
ProxySQL puts those controls at the hostgroup level, which is a more useful unit once a single backend is fronting multiple workloads:
| Per-hostgroup attribute | What it controls |
|---|---|
| multiplex | Per-hostgroup multiplexing toggle |
| connection_warming | Pre-warm the pool instead of lazy-create |
| throttle_connections_per_sec | Hard cap on backend connection-creation rate |
| ignore_session_variables | Exceptions for session variables that should not force stickiness when set |
| free_connections_pct | Pool-fill behavior |
| max_num_online_servers | Cap on ONLINE servers in the hostgroup |
throttle_connections_per_sec is the rate-limiting primitive. The “Live traffic policy” section above covered per-rule blocking and per-rule delay, which are query-level controls. This one is connection-level: a hard cap that earns its keep when a primary is mid-recovery or a backend is touchy about connection storms right after a deploy.
ignore_session_variables is the multiplexing escape hatch, although Postgres-side support for it is still evolving. The mental model worth carrying is that ProxySQL is not promising “every SET pins the connection forever.” It’s promising “ProxySQL keeps reusing backends as long as it can either preserve or safely recreate any session state on a different backend.” Anything ProxySQL doesn’t know how to abstract away can still pin the session to a hostgroup or a specific backend.
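A minimal sketch of setting one of these attributes, assuming hostgroup 1 is the reader hostgroup and that the table loads with the servers lifecycle, as its counterpart does on ProxySQL's MySQL side:

```sql
-- Hypothetical example: cap backend connection creation toward hostgroup 1
-- at 25 per second while a recovering replica warms up.
INSERT INTO pgsql_hostgroup_attributes (hostgroup_id, throttle_connections_per_sec)
VALUES (1, 25);
LOAD PGSQL SERVERS TO RUNTIME;
SAVE PGSQL SERVERS TO DISK;
```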
Per-backend SSL — when one TLS config doesn’t fit
PgBouncer’s TLS-to-backend settings (server_tls_ca_file, server_tls_cert_file, server_tls_key_file, server_tls_sslmode, and the rest of that family) live under [pgbouncer] and apply globally. Every backend the bouncer connects to ends up using the same CA chain, the same client certificate, and the same TLS mode.
That’s a fine model when the proxy fronts one cluster with one CA. It stops working as soon as a deployment crosses CA boundaries, which happens whenever a primary on AWS RDS sits in front of a replica on Google Cloud SQL, or whenever an in-cluster Postgres lives alongside a managed-service backend. In those cases PgBouncer needs more than one configuration. The common production pattern is to run one PgBouncer fleet per TLS domain (separate processes, separate INI files, separate listening ports) and let the application or an HAProxy layer choose the right fleet per backend. There are escape hatches: concatenating CA bundles for verify-ca, dropping to sslmode = require, or skipping TLS entirely and trusting the private network. Each one trades off something — hostname pinning, certificate verification, or end-to-end encryption.
ProxySQL splits SSL into two layers: a global default and a per-backend override. The right shape depends on the deployment.
Global default, the same shape PgBouncer offers — proxy-to-server settings under the pgsql-ssl_p2s_* family:
| Variable | What it sets |
|---|---|
| pgsql-ssl_p2s_ca / pgsql-ssl_p2s_capath | CA file or directory used by default for backend verification |
| pgsql-ssl_p2s_cert / pgsql-ssl_p2s_key | Default client cert and key presented to backends |
| pgsql-ssl_p2s_cipher | Default cipher list |
| pgsql-ssl_p2s_crl / pgsql-ssl_p2s_crlpath | Default revocation list / directory |
A pgsql_servers.use_ssl flag (0/1) decides per backend whether SSL is even attempted. With nothing else configured, every backend that has use_ssl = 1 uses the global pgsql-ssl_p2s_* settings, which is operationally equivalent to PgBouncer’s model.
Per-backend override, keyed on the (hostname, port, username) triple, lives in pgsql_servers_ssl_params. One row per backend with its own ssl_ca, ssl_cert, ssl_key, ssl_crl, ssl_crlpath, ssl_protocol_version_range. Where a row matches, it overrides the global; where there’s no row, the global applies. The same proxy fleet can talk to AWS RDS with the AWS CA chain and to Cloud SQL with the GCP CA chain, doing verify-full against the right CA in both directions, with no fleet split and no application-side routing decision:
-- Cloud A backend, AWS CA chain, TLS 1.2 through 1.3
INSERT INTO pgsql_servers_ssl_params
(hostname, port, username, ssl_ca, ssl_cert, ssl_key, ssl_protocol_version_range)
VALUES
('aws-primary.internal', 5432, '',
'/etc/ssl/aws-rds-ca.pem', '/etc/ssl/aws-client.crt', '/etc/ssl/aws-client.key',
'TLSv1.2-TLSv1.3');
-- Cloud B backend, GCP CA chain, TLS 1.3 only
INSERT INTO pgsql_servers_ssl_params
(hostname, port, username, ssl_ca, ssl_cert, ssl_key, ssl_protocol_version_range)
VALUES
('gcp-replica.internal', 5432, '',
'/etc/ssl/gcp-cloudsql-ca.pem', '/etc/ssl/gcp-client.crt', '/etc/ssl/gcp-client.key',
'TLSv1.3');
LOAD PGSQL SERVERS TO RUNTIME;
SAVE PGSQL SERVERS TO DISK;
The two-layer model gives ProxySQL a TLS configuration that’s effectively a superset of PgBouncer’s. If you only ever set the globals, the deployment looks the same as a PgBouncer-style stack. Add pgsql_servers_ssl_params rows once you find a backend that doesn’t fit the global template. For a single-cluster deployment, the override layer simply isn’t in the way; nothing forces you to use it. For a multi-cloud or mixed managed-service deployment, or any topology that crosses CA boundaries, the operational gap is concrete: multiple PgBouncer fleets to deploy, monitor, and upgrade versus one ProxySQL fleet with a global default and a few override rows. It’s the difference between operating one proxy tier and operating several.
Audit log, event log, slow-query log
PgBouncer doesn’t ship audit logging, event logging, or a slow-query log of its own. The PgBouncer-stack answer for “log slow queries” or “audit who connected when” is usually to do it at PostgreSQL: log_min_duration_statement, log_connections, log_disconnections, and an external audit extension if more is needed. That works, but everything funnels into the database’s own logging system, which loses the distinction between what the proxy saw and what actually reached the backend.
ProxySQL has all three at the proxy layer:
- Audit log — pgsql-auditlog_filename writes proxy-side audit records to a rotating file. For compliance regimes that want an evidence trail independent of PostgreSQL’s own logs, this is the relevant surface.
- Event log — pgsql-eventslog_filename captures proxy-side events to a rotating file and complements the digest and processlist views during incident review.
- Slow-query threshold — pgsql-long_query_time (in milliseconds) sets the cutoff for what ProxySQL counts as slow in its statistics. Use query timeouts separately if you want a long query terminated.
SAVE PGSQL QUERY DIGEST TO DISK persists the in-memory digest table so it survives restarts, complementing the file-based event log for forensics across restarts.
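Wiring those up is ordinary admin SQL against global_variables; a hedged sketch (the file path and the 2-second threshold are illustrative, the variable names are the ones listed above):

```sql
-- Point the audit log at a rotating file and set the slow-query cutoff.
UPDATE global_variables SET variable_value = '/var/lib/proxysql/audit.log'
WHERE variable_name = 'pgsql-auditlog_filename';
UPDATE global_variables SET variable_value = '2000'   -- milliseconds
WHERE variable_name = 'pgsql-long_query_time';
LOAD PGSQL VARIABLES TO RUNTIME;
SAVE PGSQL VARIABLES TO DISK;
```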
For SOC 2, PCI-DSS, or any audit regime that wants proxy-level evidence independent of the database’s own logs, those three surfaces are what you’d otherwise be building by hand on top of PgBouncer.
Cluster sync — config consistency as a primitive
Both tools end up running as fleets in production. Whether the fleet stays in sync is a feature inside one tool and a property of the surrounding deployment pipeline in the other.
PgBouncer has no concept of cluster awareness. Multiple PgBouncer processes are independent of each other regardless of whether they’re on the same node behind SO_REUSEPORT or spread across nodes behind a load balancer. Keeping their configurations consistent is a job for the deployment tooling: Ansible, a Helm chart, a baked image, take your pick. Drift between processes is a real failure mode. Somebody edits one host’s INI during an incident, forgets to roll the change to the others, and the cluster is quietly inconsistent until someone notices.
ProxySQL has a proxysql_servers table for peer registration through admin SQL. Once the peers are registered, ProxySQL Cluster compares checksums between them and syncs any changed configuration across the cluster, instead of leaving the entire consistency story to external tooling.
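The registration step, run on each node, is a few rows of admin SQL (hostnames are illustrative; 6032 is ProxySQL's usual admin port):

```sql
-- Register the peers; each node carries the same peer list.
INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('proxy-a.internal', 6032, 0, 'node A'),
       ('proxy-b.internal', 6032, 0, 'node B'),
       ('proxy-c.internal', 6032, 0, 'node C');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
```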
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PB_CLUSTER["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer fleet <br/></span>"]
direction TB
PB_DEPLOY["Ansible / Helm / image rollout <br/>keeps nodes aligned "]
PB1["Node 1 <br/>local INI + pool "]
PB2["Node 2 <br/>local INI + pool "]
PB3["Node 3 <br/>local INI + pool "]
PB_DRIFT["Risk during incidents <br/>one-off hotfix can drift "]
PB_DEPLOY --> PB1
PB_DEPLOY --> PB2
PB_DEPLOY --> PB3
PB1 -.-> PB_DRIFT
PB2 -.-> PB_DRIFT
PB3 -.-> PB_DRIFT
end
classDef deploy fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef pgb fill:#dbeafe,stroke:#2563eb,stroke-width:2px,color:#1e3a8a;
classDef risk fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
class PB_DEPLOY deploy;
class PB1,PB2,PB3 pgb;
class PB_DRIFT risk;
style PB_CLUSTER fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PS_CLUSTER["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL fleet <br/></span>"]
direction TB
PS_CHANGE["Change one node <br/>LOAD to runtime, then save "]
PS1["Node A <br/>runtime + disk state "]
PS2["Node B <br/>runtime + disk state "]
PS3["Node C <br/>runtime + disk state "]
PS_SYNC["ProxySQL Cluster <br/>proxysql_servers + checksum sync "]
PS_CHANGE --> PS1
PS1 -.-> PS_SYNC
PS2 -.-> PS_SYNC
PS3 -.-> PS_SYNC
end
classDef deploy fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef proxy fill:#d5f5ff,stroke:#0891b2,stroke-width:2px,color:#164e63;
classDef sync fill:#dcfce7,stroke:#10b981,stroke-width:2px,color:#065f46;
class PS_CHANGE deploy;
class PS1,PS2,PS3 proxy;
class PS_SYNC sync;
style PS_CLUSTER fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The model isn’t a fleet-wide atomic commit; each node still applies the change independently. But the rollout has built-in consistency checking, instead of relying on hope that every host received the rsync. Combined with the runtime configuration lifecycle described in the next section, it adds up to an actual platform for fleet-wide configuration changes that doesn’t go through a deploy pipeline.
Configuration model — staged, atomic, runtime-tunable
PgBouncer’s configuration model is the INI file plus RELOAD. The INI is the source of truth, most settings are picked up by SIGHUP, and a handful of them (listen_addr, listen_port, unix_socket_dir, user, pidfile) need a full restart. There’s an admin console for stats and a few control commands (PAUSE, RESUME, KILL), but every configuration change ultimately routes through the INI.
ProxySQL keeps configuration in admin SQL across three explicit layers:
- MEMORY — the working copy. Edit through INSERT / UPDATE / DELETE.
- RUNTIME — the live state actually serving traffic.
- DISK — the persisted state that survives restart.
Promotion between layers is explicit. LOAD ... TO RUNTIME activates a change atomically; SAVE ... TO DISK persists it. You can stage edits in MEMORY, diff them against the runtime_* views, apply with LOAD, observe under live traffic, and decide later whether to persist.
Every routing change is a SQL statement, which means it’s reviewable as text, applied with one LOAD, and rolled back with another. PgBouncer’s INI files are also text, but the pipeline around them is edit, redeploy, reload across N hosts. That’s versionable, just not stage-able, and not atomic across the fleet.
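The lifecycle reads roughly like this in admin SQL (the server name and weight are hypothetical; the EXCEPT diff is a convenience that works because the admin interface is SQLite-backed):

```sql
-- Stage in MEMORY, diff against RUNTIME, then promote and persist.
UPDATE pgsql_servers SET weight = 10 WHERE hostname = 'replica-1.internal';

SELECT hostname, port, weight FROM pgsql_servers
EXCEPT
SELECT hostname, port, weight FROM runtime_pgsql_servers;  -- what will change

LOAD PGSQL SERVERS TO RUNTIME;   -- apply atomically
SAVE PGSQL SERVERS TO DISK;      -- persist once it looks right under traffic
```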
Pooling — both do this, with different tradeoffs
Both tools share the basic premise of pooling: let many application-side connections share a much smaller set of PostgreSQL backend connections. That’s the whole reason a pooler is in the path, and both tools cover it. Where they differ is in how the pooling decision actually gets made.
- PgBouncer picks one pool mode and applies it everywhere. The operator chooses session, transaction, or statement in the INI, and that’s the mode every client connecting to that database gets. Changing it requires editing the INI and reloading PgBouncer; it isn’t a runtime, per-command decision. Most production deployments run pool_mode = transaction. Statement mode exists, but the set of session features that survive it is narrow.
- ProxySQL decides per command, multiplexing by default and pinning only for as long as session state demands. Most queries release the backend the instant they finish. When a statement creates session state, ProxySQL holds onto the connection only as long as that state actually lives:
  - BEGIN pins one backend until COMMIT or ROLLBACK.
  - CREATE TEMP TABLE keeps the session sticky until DISCARD TEMP, DISCARD ALL, or session end. ProxySQL doesn’t track individual temp tables, so dropping them one by one with DROP TABLE doesn’t release the stickiness.
  - pg_advisory_lock(...) holds stickiness while the lock is held; the matching unlock releases it.
  - A SET against a variable ProxySQL doesn’t track (a custom user variable, for instance) pins the session to that backend for the rest of its life. Without knowing the value, ProxySQL has no safe way to move the session to a different backend.
  - An Extended-Query frame (Parse → Bind → Execute) pins one backend until Sync; the next frame can land on a different backend, because query rules are re-evaluated at every Bind and Execute (lifecycle, PS cache).

Once the relevant state ends and the session is safe to reuse, the backend rejoins the pool. There’s no operator-chosen pool mode; the proxy adapts on a per-command basis.
The thing that makes a session resist multiplexing is backend-local state: a temp table exists only on the backend that ran CREATE TEMP TABLE, an advisory lock is held by exactly one connection, an open transaction is bound to its backend, and a SET of an untracked variable lives only on the backend that received it. ProxySQL keeps multiplexing as long as the state is either absent or something the proxy can replay on a fresh backend. Tracked session variables like search_path and time_zone get re-applied for free. As soon as the session produces state ProxySQL can’t reproduce, the bullets above take over and the session pins.
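A short session sketch makes the split visible — the first group is state ProxySQL can replay, the second is backend-local state that forces a pin (the custom variable name is made up for illustration):

```sql
-- Replayable state: tracked variables are re-applied on a fresh
-- backend, so multiplexing continues.
SET search_path TO app, public;
SET timezone = 'UTC';

-- Backend-local state: lives on exactly one backend, so the session pins.
BEGIN;                                -- pinned until COMMIT / ROLLBACK
CREATE TEMP TABLE scratch (id int);   -- sticky until DISCARD TEMP / DISCARD ALL
SELECT pg_advisory_lock(42);          -- sticky until pg_advisory_unlock(42)
SET my.custom_var = 'x';              -- untracked: pinned for the session's life
```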
PgBouncer’s pool mode is set per database. ProxySQL’s pool behavior is set per user (fast_forward), per session (the multiplex-or-pin decision), and per query rule (pgsql_query_rules.multiplex). The same database can serve three users with three different pool behaviors at the same time.
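As a sketch of those three levers — the rule, pattern, and username below are hypothetical, and the tables follow ProxySQL’s `pgsql_*` admin schema:

```sql
-- Per query rule: disable multiplexing for statements matching a pattern,
-- e.g. anything calling nextval() on a sequence.
INSERT INTO pgsql_query_rules (rule_id, active, match_pattern, multiplex, apply)
VALUES (10, 1, 'nextval\(', 0, 1);
LOAD PGSQL QUERY RULES TO RUNTIME;

-- Per user: fast_forward hands this user a dedicated backend connection,
-- bypassing query processing entirely.
UPDATE pgsql_users SET fast_forward = 1 WHERE username = 'etl_user';
LOAD PGSQL USERS TO RUNTIME;
```

The per-session lever needs no configuration at all: it is the multiplex-or-pin decision the proxy makes on its own as session state appears and disappears.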
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PB_POOL["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer pooling <br/></span>"]
direction TB
PB_PICK["Operator chooses <br/>one pool_mode "]
PB_SESS["Session "]
PB_TXN["Transaction <br/>common production default "]
PB_STMT["Statement <br/>narrow safe envelope "]
PB_PICK --> PB_SESS
PB_PICK --> PB_TXN
PB_PICK --> PB_STMT
end
classDef pgb fill:#dbeafe,stroke:#2563eb,stroke-width:2px,color:#1e3a8a;
class PB_PICK,PB_SESS,PB_TXN,PB_STMT pgb;
style PB_POOL fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph PS_POOL["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL pooling <br/></span>"]
direction TB
PS_MUX["Default path <br/>backend released after each <br/>statement or extended-query frame "]
PS_DECIDE{"Pin backend needed? "}
PS_REUSE["No <br/>multiplex "]
PS_PIN["Yes <br/>sticky "]
PS_RETURN[("Connection pool <br/>available to any session ")]
PS_HELD(["Held for this session <br/>next query lands here "])
PS_MUX --> PS_DECIDE
PS_DECIDE -->|No| PS_REUSE --> PS_RETURN
PS_DECIDE -->|Yes| PS_PIN --> PS_HELD
end
classDef proxy fill:#d5f5ff,stroke:#0891b2,stroke-width:2px,color:#164e63;
classDef decision fill:#fff4cc,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef pool fill:#dcfce7,stroke:#16a34a,stroke-width:2px,color:#14532d;
classDef held fill:#fed7aa,stroke:#ea580c,stroke-width:2px,color:#7c2d12;
class PS_MUX,PS_REUSE,PS_PIN proxy;
class PS_DECIDE decision;
class PS_RETURN pool;
class PS_HELD held;
style PS_POOL fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The practical effect is workload-dependent. For high-concurrency workloads dominated by stateless queries, ProxySQL’s reuse rate runs higher because the default is statement-level. For long-open transactions, both tools pin and the gap shrinks. SET-heavy workloads land somewhere in between: ProxySQL replays tracked variables on a fresh backend and keeps multiplexing, while untracked variables (custom user variables and anything else ProxySQL doesn’t model) pin the session for its lifetime.
Closing thoughts
The argument for ProxySQL is consolidation: one flexible process that handles pooling, routing, traffic policy, monitoring, and the observability layer in one place. You’re not committed to using all of it. Running ProxySQL as a pooling-only proxy is a perfectly reasonable starting point, and the rest of the capabilities can turn on as the operational need shows up. The case for migrating gets strong when read/write splitting, traffic policy, or proxy-side query visibility is already work being handled in application code, sidecar scripts, or operator runbooks. The case is strongest of all for teams already running ProxySQL on the MySQL side — adding PostgreSQL to an existing fleet is the cheapest version of this migration. The ProxySQL PostgreSQL documentation covers the full configuration surface for teams ready to dig in.
What’s next
Coming up in Part 3: the benchmarks — stay tuned.