Part 1 - PgBouncer to ProxySQL: Rethinking the PostgreSQL Middle Tier
The common shape today is PgBouncer for pooling, with routing, failover awareness, and most of the observability story scattered across adjacent tools and application code. The alternative this post argues for is a ProxySQL fleet that pulls more of that logic into the proxy itself. If pooling is all you need, PgBouncer is still the cleaner default — that point doesn’t go away. But once you need routing, traffic policy, and shared query visibility at the proxy layer, ProxySQL is worth a hard look.
PostgreSQL has had pooling and proxy options for years. What’s newer is a single middle-tier component that combines pooling, routing, monitoring, and staged runtime control in one place.
The 3 AM Page
Your pager goes off. Primary database is pinned near 100% CPU. Latency is climbing across half the fleet. On-call drops into the runbook.
pg_stat_activity tells the story fast enough: one Grafana dashboard query is scanning a hot table every second, across too much data, from too many users at once. No lock-manager mystery — just a bad query hitting production hard enough to become everybody else’s outage.
If you’re running a PgBouncer-based stack, the next twenty minutes usually look like this:
- Find the exact statement and the client source.
- Find the owning team or dashboard.
- Kill sessions, only to watch the application or dashboard reconnect and issue the same query again.
- Rate-limit the caller somewhere outside PostgreSQL, if you can.
- If you can’t, block the user, host, or connection path more bluntly than you wanted.
Identifying the query is the easy part. The pain is containment. PostgreSQL itself isn’t where most teams keep statement-level traffic policy, and PgBouncer deliberately doesn’t try to be that layer either.
With ProxySQL in the path, the same incident can become a runtime policy change instead of an application release:
INSERT INTO pgsql_query_rules
(rule_id, active, username, match_pattern, error_msg, apply, comment)
VALUES
(10, 1, 'grafana', '^SELECT .* FROM large_events .* ORDER BY ts DESC$',
'temporarily blocked during incident', 1, 'incident containment');
LOAD PGSQL QUERY RULES TO RUNTIME;
That assumes you’ve already identified the offending statement from stats_pgsql_query_digest, logs, or the backend itself. The exact regex isn’t the interesting part — what matters is that the proxy gives you a place to express live traffic policy without editing application code or hitting the database with broader controls than the incident calls for. We’ve watched MySQL teams use this exact escape hatch for years: a query rule applied at 3 AM to stop the bleeding while the real fix lands in the next deploy. The PostgreSQL story is newer, but the operating pattern carries over.
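Finding the digest in the first place can itself be one query against the admin interface. A sketch, with the caveat that the column names here are assumed by analogy with the MySQL-side stats_mysql_query_digest table (digest_text, count_star, sum_time) — verify against your version's schema:

```sql
-- Run against the ProxySQL admin interface.
-- Column names assumed by analogy with stats_mysql_query_digest;
-- they may differ in your ProxySQL version.
SELECT hostgroup, username, digest_text, count_star,
       sum_time / 1000 AS total_ms
FROM stats_pgsql_query_digest
ORDER BY sum_time DESC
LIMIT 10;
```

The top row is usually your offender, and its digest_text is what you turn into the match_pattern for the containment rule.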
That’s the case for rethinking the PostgreSQL middle tier with ProxySQL. Not that it makes PostgreSQL magically easy — nothing does. The case is that the proxy layer can absorb work a PgBouncer-based stack tends to leave scattered across scripts, services, and operator habit.
Where proxy-layer responsibility moves
Keep the rest of the stack steady. PostgreSQL stays as the database, and Patroni, repmgr, pg_auto_failover, or a managed-service control plane still decides promotions. We’re only evaluating the middle tier in front of PostgreSQL.
The before state is the common composed setup: PgBouncer for pooling, plus a stable front door, an HA orchestrator, exporters, and routing or retry logic outside the proxy. The after state keeps the same cluster and the same HA authority but moves more of the routing, policy, and observation into a ProxySQL fleet.
Before and after architecture
Before getting into features, count the operational surfaces. This is a comparison of two middle-tier shapes, not two databases. In the diagram, BEFORE is the common PgBouncer composition. AFTER keeps the same PostgreSQL cluster and the same external HA orchestrator, but swaps in a ProxySQL fleet that owns more pooling, routing, policy, and stats.
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph BEFORE["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer-centered middle tier <br/></span>"]
direction TB
AppB["Apps and services"]
LogicB["Read/write logic<br/>outside proxy"]
FrontB["Stable endpoint<br/>(optional)"]
PoolB["PgBouncer fleet<br/>pooling only"]
ObsB["Observability<br/>exporter"]
DbB["PostgreSQL <br/>cluster <br/>"]
HAB["HA orchestrator<br/>(off data path)"]
AppB --> LogicB
AppB --> FrontB
FrontB --> PoolB
PoolB --> DbB
LogicB -. chooses path .-> FrontB
ObsB -. proxy metrics .-> PoolB
HAB -. promotes / observes .-> DbB
end
classDef app fill:#fff1c2,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef policy fill:#ffedd5,stroke:#ea580c,stroke-width:2px,color:#7c2d12;
classDef front fill:#f8fafc,stroke:#94a3b8,stroke-width:2px,color:#334155;
classDef pgb fill:#dbeafe,stroke:#2563eb,stroke-width:2.5px,color:#1e3a8a;
classDef monitor fill:#ccfbf1,stroke:#0f766e,stroke-width:2px,color:#134e4a;
classDef db fill:#ffe4e6,stroke:#e11d48,stroke-width:2.5px,color:#881337;
classDef ha fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#4c1d95;
class AppB app;
class LogicB policy;
class FrontB front;
class PoolB pgb;
class ObsB monitor;
class DbB db;
class HAB ha;
style BEFORE fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 12}}}}%%
flowchart TB
subgraph AFTER["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL-centered middle tier <br/></span>"]
direction TB
AppA["Apps and services<br/>one DB endpoint"]
FrontA["Stable endpoint<br/>(optional)"]
subgraph ProxyA["ProxySQL fleet"]
direction TB
POOL_A["<span style='font-size:14px'> Pooling </span>"]
ROUTE_A["<span style='font-size:14px'> Routing </span>"]
RULES_A["<span style='font-size:14px'> Query rules </span>"]
MON_A["<span style='font-size:14px'> Monitor </span>"]
STATS_A["<span style='font-size:14px'> Stats </span>"]
CACHE_A["<span style='font-size:14px'> Cache </span>"]
POOL_A ~~~ MON_A
ROUTE_A ~~~ STATS_A
RULES_A ~~~ CACHE_A
end
DbA["PostgreSQL <br/>cluster <br/>"]
HAA["HA orchestrator<br/>(off data path)"]
AppA --> FrontA
FrontA --> ProxyA
ProxyA --> DbA
HAA -. promotes / observes .-> DbA
end
classDef app fill:#fff1c2,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef front fill:#f8fafc,stroke:#94a3b8,stroke-width:2px,color:#334155;
classDef proxy fill:#d5f5ff,stroke:#0891b2,stroke-width:2.5px,color:#164e63;
classDef db fill:#ffe4e6,stroke:#e11d48,stroke-width:2.5px,color:#881337;
classDef ha fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#4c1d95;
class AppA app;
class FrontA front;
class POOL_A,ROUTE_A,RULES_A,MON_A,STATS_A,CACHE_A proxy;
class DbA db;
class HAA ha;
style ProxyA fill:#e0f2fe,stroke:#0891b2,stroke-width:2.5px,color:#164e63
linkStyle 0,1,2 stroke-width:0px,stroke-opacity:0,fill:none
style AFTER fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
This isn’t really “five boxes versus one box,” which would be a misleading framing. A real ProxySQL deployment still wants multiple proxy nodes, a stable front door, and an HA orchestrator off the data path.
The shift is about where proxy-layer responsibility lives. In the PgBouncer setup, pooling stays in the proxy but routing, proxy observability, and some backend-state handling tend to live elsewhere. With ProxySQL, more of that moves into one proxy layer.
Concretely: hostgroups can stay aligned with role state, lagging replicas can be shunned by policy, and live query rules can be applied without waiting on an application release. PgBouncer is excellent at pooling — that’s what it’s built for, and it deliberately doesn’t try to be a broader PostgreSQL control plane.
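As a sketch of what "hostgroups aligned with role state" can look like in practice — the pgsql_servers table and LOAD/SAVE commands follow the document's admin conventions, while pgsql_replication_hostgroups is assumed here by analogy with the MySQL-side table, and the hostnames are illustrative:

```sql
-- Hostgroup 0 = writer, hostgroup 1 = readers (IDs are arbitrary choices).
INSERT INTO pgsql_servers (hostgroup_id, hostname, port, weight)
VALUES (0, 'pg-primary.internal',   5432, 1000),
       (1, 'pg-replica-1.internal', 5432, 1000),
       (1, 'pg-replica-2.internal', 5432, 1000);

-- Assumed by analogy with mysql_replication_hostgroups: the monitor moves
-- servers between the two hostgroups as their read-only state changes.
INSERT INTO pgsql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
VALUES (0, 1, 'primary/replica split');

LOAD PGSQL SERVERS TO RUNTIME;
SAVE PGSQL SERVERS TO DISK;
```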
Per node: process fleet vs. thread pool
Now zoom in from deployment shape to a single proxy node. The database topology hasn’t changed; only the design of the proxy tier has.
The PgBouncer fleet pattern isn’t accidental — it falls directly out of its per-process design.
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 30}}}}%%
flowchart TB
subgraph PGB["<span style='font-size:24px;font-weight:700;color:#1e3a8a'>PgBouncer: process fleet <br/></span>"]
direction TB
PH["One proxy node<br/>multiple PgBouncer processes"]
P1["Process #1<br/>1 thread<br/>1 CPU core"]
P2["Process #2<br/>1 thread<br/>1 CPU core"]
P3["Process #3<br/>1 thread<br/>1 CPU core"]
PN["Process #N<br/>1 thread<br/>1 CPU core"]
PH --> P1
PH --> P2
PH --> P3
PH --> PN
end
classDef pgbShell fill:#dbeafe,stroke:#2563eb,stroke-width:2.5px,color:#1e3a8a;
classDef pgbProc fill:#eff6ff,stroke:#60a5fa,stroke-width:2px,color:#1e40af;
class PH pgbShell;
class P1,P2,P3,PN pgbProc;
style PGB fill:#eef6ff,stroke:#60a5fa,stroke-width:3px,color:#1e3a8a
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'18px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 30}}}}%%
flowchart TB
subgraph PSQL["<span style='font-size:24px;font-weight:700;color:#065f46'>ProxySQL: thread pool <br/></span>"]
direction TB
PX["One ProxySQL instance<br/>multi-threaded"]
ADM["Admin thread"]
MON["Monitor thread"]
W1["Worker #1 "]
W2["Worker #2 "]
W3["Worker #3 "]
WN["Worker #N "]
ADM --> PX
MON --> PX
PX --> W1
PX --> W2
PX --> W3
PX --> WN
end
classDef proxyShell fill:#d5f5ff,stroke:#0891b2,stroke-width:2.5px,color:#164e63;
classDef proxyProc fill:#ccfbf1,stroke:#0f766e,stroke-width:2px,color:#134e4a;
class PX proxyShell;
class ADM,MON,W1,W2,W3,WN proxyProc;
style PSQL fill:#ecfdf5,stroke:#34d399,stroke-width:3px,color:#065f46
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
Deployment note: the diagram is only about one proxy node. Both designs can still run multiple proxy nodes behind an LB or VIP. The per-node difference is internal: PgBouncer usually means multiple processes on a node, while ProxySQL means one multi-threaded instance.
PgBouncer is single-threaded by design — one process can use at most one CPU core. On a multi-core server, a single PgBouncer process can’t fully utilize the hardware, and at high throughput it hits a ceiling no matter how much CPU the box has. The documented scaling path is to run multiple PgBouncer processes per node, typically sharing a port via SO_REUSEPORT or sitting behind an external load balancer.
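The multi-process pattern is mostly a configuration exercise. A minimal sketch of the shared-port variant — values are illustrative, and the exact per-process requirements (separate pidfiles, socket directories) depend on your setup:

```ini
; pgbouncer.ini — one shared listening port for N identical processes.
; so_reuseport (PgBouncer 1.12+, Linux) lets the kernel distribute
; incoming connections across every process bound to the port.
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
so_reuseport = 1
```

You then start one process per core and operate the set as a unit — which is exactly the config-sync surface the trade-off list below is about.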
ProxySQL takes the opposite per-node approach: one multi-threaded process with worker threads, plus separate admin and monitoring paths. One instance can use all the cores on the node, scaling vertically as the hardware grows. That doesn’t mean one proxy for the whole deployment — you still run multiple ProxySQL nodes behind a load balancer or VIP for HA and aggregate capacity.
Neither approach is inherently better. The operational trade-off:
- PgBouncer: each process is small and easy to reason about. The cost: at scale you run multiple processes per node, and keeping their configs in sync (and watching for drift between processes) becomes part of the operations job.
- ProxySQL: one process per node means one config surface and a broader feature set. The cost: that single process now matters more — uptime and correctness become more critical to operate.
Three jobs that usually leak out of the pooler
This is the clearest day-to-day difference between the two designs. With PgBouncer, even though there’s a pooler in the path, topology knowledge, routing decisions, and much of the reconnect behavior still tend to live outside it — in application code or adjacent infrastructure. With ProxySQL, more of that moves down into the proxy layer, while promotion stays with an external HA orchestrator.
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'28px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 36}}}}%%
flowchart TB
subgraph APP_OWNS["<span style='font-size:36px;font-weight:700;color:#9f1239'>PgBouncer: scattered jobs <br/></span>"]
direction TB
AppB["Application<br/>and adjacent tooling"]
PGB0["PgBouncer <br/>pooling only "]
T1["Topology knowledge <br/>which host is writer? <br/>which are readers? "]
R1["Routing policy<br/>which traffic can read from replicas?<br/>which must hit primary?"]
F1["Backend-state reaction<br/>reconnects, retries,<br/>endpoint churn"]
AppB --> PGB0
AppB --> T1
AppB --> R1
AppB --> F1
end
classDef app fill:#fff1c2,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef pgb fill:#dbeafe,stroke:#2563eb,stroke-width:2.5px,color:#1e3a8a;
classDef burden fill:#ffe4e6,stroke:#e11d48,stroke-width:2px,color:#881337;
class AppB app;
class PGB0 pgb;
class T1,R1,F1 burden;
style APP_OWNS fill:#fff5f7,stroke:#fb7185,stroke-width:2.5px,color:#9f1239
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'22px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 90, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 36}}}}%%
flowchart TB
subgraph PROXY_OWNS["<span style='font-size:30px;font-weight:700;color:#065f46'>ProxySQL: consolidated jobs <br/></span>"]
direction TB
AppA["Application<br/>one endpoint"]
subgraph ProxyBox["ProxySQL"]
direction TB
POOL["Connection<br/>pooling"]
T2["Topology knowledge "]
R2["Routing rules"]
F2["Backend-state reaction"]
end
HA["External HA controller<br/>still promotes the new primary"]
AppA --> ProxyBox
HA -. role change .-> F2
end
classDef app fill:#fff1c2,stroke:#d97706,stroke-width:2px,color:#7c2d12;
classDef gain fill:#dcfce7,stroke:#10b981,stroke-width:2px,color:#065f46;
classDef ha fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#4c1d95;
class AppA app;
class POOL,T2,R2,F2 gain;
class HA ha;
style PROXY_OWNS fill:#ecfdf5,stroke:#34d399,stroke-width:2.5px,color:#065f46
style ProxyBox fill:#d5f5ff,stroke:#0891b2,stroke-width:2.5px,color:#164e63
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The three responsibilities worth pulling down into the proxy layer:
Topology knowledge. Which host is the writer? Which ones are readers? Which replica disappeared last week? Every service, proxy, or failover script that hardcodes or indirectly depends on those answers becomes part of the database team’s change surface.
Routing policy. Which traffic is allowed to hit replicas? This is the subtle one. Not every SELECT is replica-safe. Read-after-write paths, consistency-sensitive reads, and session-dependent logic still need policy. A proxy helps by centralizing where that policy lives — it doesn’t infer business semantics out of thin air.
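A sketch of what centralized routing policy might look like — this assumes the pgsql_query_rules table shown earlier plus a destination_hostgroup column analogous to the MySQL-side one; the hostgroup IDs (0 = writer, 1 = readers) are illustrative:

```sql
-- Rules are evaluated in rule_id order, so the specific rule comes first.
-- Keep locking reads on the writer...
INSERT INTO pgsql_query_rules
  (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT .* FOR UPDATE', 0, 1);

-- ...and send remaining plain SELECTs to the readers.
INSERT INTO pgsql_query_rules
  (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT ', 1, 1);

LOAD PGSQL QUERY RULES TO RUNTIME;
```

Read-after-write and other consistency-sensitive paths still need their own explicit rules (or app-side pinning) — the proxy enforces the policy you give it, nothing more.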
Backend-state reaction. When a replica dies, a backend is shunned, or a primary changes role, somebody has to notice and adapt. PgBouncer pools through some of this, but it isn’t trying to be the place where routing and traffic policy live. A fuller proxy can absorb more of that churn than leaving each service or surrounding control layer to rediscover it independently.
This is where ProxySQL earns its keep. It gives infrastructure a place to hold the routing and backend-state logic that PostgreSQL teams otherwise spread across service code, middleware, health-check scripts, and operator habits.
What it doesn’t do, and shouldn’t be expected to: decide who becomes primary. Promotion stays with Patroni, repmgr, pg_auto_failover, an RDS or Aurora control plane, or an operator script. ProxySQL reacts to role state and rewires traffic; it isn’t the authority making the HA decision.
Database routing becomes runtime policy
In a PgBouncer stack, pooling lives in the proxy, but routing changes and traffic policy usually live somewhere else — application code, middleware, HA scripts, or operator runbooks. Operationally, proxy changes still tend to follow a traditional file-based workflow: edit a config file, reload the process, verify behavior.
ProxySQL changes follow a different workflow: modify servers, users, hostgroups, and query rules through SQL tables in memory, inspect the pending changes, activate them in runtime, verify the live state through runtime tables and stats, and only then persist them to disk.
The difference here isn’t just speed — it’s control. ProxySQL lets you stage a routing or policy change in memory, activate it at runtime, observe the effect under live traffic, and decide later whether it belongs in the permanent configuration. That matters during incidents, but also during backend maintenance, replica-policy changes, and temporary traffic shaping. If that kind of live control belongs in your middle tier, ProxySQL has a real architectural advantage. If your goal is to keep the proxy layer narrow, static, and easy to reason about, PgBouncer still has the cleaner operating model.
%%{init: {'theme':'base', 'themeVariables': {'fontFamily':'Trebuchet MS, Segoe UI, sans-serif', 'fontSize':'22px', 'lineColor':'#475569'}, 'flowchart': {'padding': 40, 'nodeSpacing': 30, 'rankSpacing': 50, 'curve': 'basis', 'htmlLabels': true, 'wrap': true, 'subGraphTitleMargin': {'top': 8, 'bottom': 36}}}}%%
flowchart LR
subgraph LIFECYCLE["<span style='font-size:30px;font-weight:700;color:#7c2d12'>Three-layer configuration lifecycle <br/></span>"]
direction LR
M["MEMORY<br/>working copy"]
R["RUNTIME<br/>active state"]
D["DISK<br/>persisted state"]
M ==>|"LOAD ... TO RUNTIME"| R
R ==>|"SAVE ... TO DISK"| D
D -. "LOAD ... FROM DISK" .-> M
M -. "edit / compare" .-> R
end
classDef memory fill:#fff1c2,stroke:#d97706,stroke-width:2.5px,color:#7c2d12;
classDef runtime fill:#dbeafe,stroke:#2563eb,stroke-width:2.5px,color:#1e3a8a;
classDef disk fill:#ede9fe,stroke:#7c3aed,stroke-width:2.5px,color:#4c1d95;
class M memory;
class R runtime;
class D disk;
style LIFECYCLE fill:#fffaf0,stroke:#f59e0b,stroke-width:2.5px,color:#7c2d12
linkStyle default stroke:#475569,stroke-width:2.4px,color:#334155
The practical outcome is a staged change model:
- edit in pgsql_servers, pgsql_users, or pgsql_query_rules
- compare MEMORY with runtime tables before activation
- LOAD ... TO RUNTIME to apply
- SAVE ... TO DISK only when you’re ready to persist
On a single proxy, runtime activation is atomic. Across multiple proxy nodes, ProxySQL Cluster converges them through checksum-based synchronization. That isn’t a fleet-wide atomic commit, but it’s a more controlled rollout model than editing and reloading independent poolers one host at a time.
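Strung together, a routine change follows the same four steps every time. A sketch — the hostname is illustrative, and runtime_pgsql_servers is assumed to mirror its MEMORY counterpart the way runtime tables do on the MySQL side:

```sql
-- 1. Edit the working copy in MEMORY.
UPDATE pgsql_servers SET weight = 0
WHERE hostname = 'pg-replica-2.internal';

-- 2. Compare MEMORY against what's actually live.
SELECT hostname, weight FROM pgsql_servers;
SELECT hostname, weight FROM runtime_pgsql_servers;

-- 3. Activate and observe under live traffic.
LOAD PGSQL SERVERS TO RUNTIME;

-- 4. Persist only once you're satisfied with the effect.
SAVE PGSQL SERVERS TO DISK;
```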
What you actually get from consolidation
Consolidation here doesn’t mean fewer PostgreSQL servers, and it doesn’t mean ditching Patroni or another HA orchestrator. It means fewer proxy-layer responsibilities split between PgBouncer, exporters, sidecar scripts, and service code.
For teams already running ProxySQL on the MySQL side, PostgreSQL support also brings one familiar proxy operating model across both environments — that’s been the most common feedback we hear from existing users.
The practical gains:
Built-in backend monitoring and traffic steering. ProxySQL continuously watches backend state through checks like ping, read-only status, and replication lag, then uses that data to keep writer and reader hostgroups aligned with backend role.
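Those check results are themselves queryable. A sketch, assuming the monitor schema mirrors ProxySQL's MySQL-side tables — pgsql_server_ping_log and its columns are assumed names, not confirmed by the source:

```sql
-- Recent ping results per backend; a non-NULL ping_error flags trouble.
SELECT hostname, port, ping_success_time_us, ping_error
FROM monitor.pgsql_server_ping_log
ORDER BY time_start_us DESC
LIMIT 20;
```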
A SQL-native configuration surface. PgBouncer exposes an admin console for stats and control commands, but ProxySQL goes further: servers, users, hostgroups, and query rules are managed through admin tables, with staged activation and persistence rather than a file-edit-and-reload workflow.
Live traffic policy. Query rules let you block, reroute, rewrite, or cache eligible traffic without waiting on a code release.
Shared query visibility. stats_pgsql_query_digest gives you one place to ask what’s expensive, who’s issuing it, and which hostgroup it’s hitting.
Operational changes through commands, not releases. Drain a backend, reduce weight, shun a lagging replica, or adjust routing rules — all without turning database topology into an application rollout problem.
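A drain, for example, can be two statements — assuming status values follow the MySQL-side convention, where OFFLINE_SOFT stops new connections while letting in-flight work finish:

```sql
UPDATE pgsql_servers SET status = 'OFFLINE_SOFT'
WHERE hostname = 'pg-replica-1.internal';  -- illustrative hostname
LOAD PGSQL SERVERS TO RUNTIME;
-- Deliberately not saved to disk: the drain is meant to be temporary.
```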
When pooling alone stops being enough
This is the real decision point. PgBouncer doesn’t become the wrong tool just because the environment gets larger. The case for ProxySQL gets strong only when the work you kept out of the proxy starts reappearing elsewhere as routing logic, failover glue, traffic controls, and visibility gaps.
If pooling is still the only shared responsibility, PgBouncer is the cleaner choice. The inflection point is when the pooler’s narrowness starts pushing too much operational logic into application code, middleware, scripts, and runbooks.
That usually shows up as one or more of these becoming normal:
- read/write routing lives in application code, middleware, or sidecar scripts
- failover adaptation depends on health-check scripts, reconnect behavior, or operator muscle memory
- adding, draining, or shunning a backend means touching multiple tools or coordinating multiple teams
- incident-time traffic policy means an application release or a blunt database-level block
- query visibility across services is fragmented enough that simple questions take too long to answer
If none of that is true, PgBouncer’s narrowness is still a virtue. If several of those are already true, the simplicity hasn’t disappeared — it’s just been pushed outward, into your own systems. At that point the comparison isn’t really pooler versus proxy. It’s whether you want that operational logic scattered around the stack or concentrated in the middle tier.
Tradeoffs and caveats
ProxySQL’s PostgreSQL path is real and increasingly capable, but the tradeoffs are concrete. It can centralize routing policy, backend-state reaction, and query visibility, but it still can’t infer your application’s consistency requirements for you.
It’s newer on PostgreSQL than PgBouncer. PgBouncer has the longer PostgreSQL track record. ProxySQL’s PostgreSQL path is younger but builds on years of MySQL-side production experience — the operational model and control plane aren’t new, just the database underneath.
You still need real HA machinery. ProxySQL observes role state and moves traffic based on it. It doesn’t promote a new primary, fence an old one, or solve split-brain.
You still deploy a proxy fleet. A production setup still means multiple proxy nodes and a stable front door. The simplification is control-surface consolidation, not replacing the whole middle tier with one box.
Consolidation concentrates responsibility. If pooling, routing, monitoring, and incident controls all live in one proxy layer, that layer becomes more operationally important. The benefit is a simpler surrounding stack; the cost is that the proxy has to be treated as a more critical control point.
Proxy-layer query caching is a real differentiator, but a narrow one. A PgBouncer stack doesn’t offer query caching in the middle tier. ProxySQL does, which can help with incident absorption and a limited class of repetitive reads. It’s still TTL-based with no schema-aware invalidation, so it isn’t a replacement for Redis or application-aware caching.
The staged configuration model demands discipline. Separating working, active, and persisted state is powerful, but it also creates room for operator error. LOAD ... TO RUNTIME without SAVE ... TO DISK leaves a live change that disappears on restart. SAVE ... TO DISK without the intended LOAD can leave persisted state out of sync with what’s actually active.
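One habit that helps: make drift visible before it bites. The admin interface is SQLite-backed, so set operations work; this sketch assumes runtime_pgsql_query_rules shares columns with its MEMORY counterpart:

```sql
-- Rows staged in MEMORY but not active in RUNTIME (or edited since loading).
-- A non-empty result means someone forgot a LOAD ... TO RUNTIME,
-- or changed MEMORY after the last activation.
SELECT * FROM pgsql_query_rules
EXCEPT
SELECT * FROM runtime_pgsql_query_rules;
```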
When this trade is worth making
Part 1 isn’t an argument that every PgBouncer deployment should migrate. It’s an argument that once routing, failover handling, and traffic policy are already spread across code, scripts, and operator runbooks, the PgBouncer-based design isn’t necessarily the simpler operational model anymore — even though it looks simpler on a diagram.
What’s next
Part 2 leaves architecture behind and gets concrete: configuration tables, hostgroups, query rules, and what runtime policy actually looks like in practice.