jemalloc Memory Allocator
ProxySQL uses jemalloc as its memory allocator. jemalloc is built into all official ProxySQL packages and requires no additional installation. It provides better performance and fragmentation characteristics than the system allocator, and it exposes detailed per-arena statistics that ProxySQL surfaces through its stats interface.
Querying jemalloc Statistics
Memory statistics are available in the stats.stats_memory_metrics table. To view only jemalloc metrics:
SELECT * FROM stats.stats_memory_metrics WHERE Variable_Name LIKE 'jemalloc%';
For the full set of memory metrics (including ProxySQL internal allocations), omit the WHERE clause. See the stats_memory_metrics reference for a complete description of all variables in that table.
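The query returns one row per metric, with the value as a string in Variable_Value. A minimal sketch of turning those rows into usable numbers, assuming rows shaped as (Variable_Name, Variable_Value) tuples as any MySQL client would return them (the sample values below are illustrative, not real measurements):

```python
def metrics_to_dict(rows):
    """Map (Variable_Name, Variable_Value) tuples to {name: bytes}."""
    return {name: int(value) for name, value in rows}

# Example rows as a MySQL client might return them (illustrative values):
sample_rows = [
    ("jemalloc_allocated", "104857600"),
    ("jemalloc_active",    "113246208"),
    ("jemalloc_resident",  "125829120"),
]

metrics = metrics_to_dict(sample_rows)
# Bytes of page-level overhead between active and allocated:
print(metrics["jemalloc_active"] - metrics["jemalloc_allocated"])
```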
Metrics Reference
jemalloc_resident
Total bytes in physically resident data pages mapped by the allocator. This is the most direct measure of the physical RAM consumed by ProxySQL’s heap. It includes all pages that are currently mapped and have been written to — whether in use by the application, cached for future allocations, or holding jemalloc metadata.
Watch for: jemalloc_resident growing without bound over time, with no corresponding growth in active connections or query load, is a sign of memory fragmentation or a memory leak.
jemalloc_active
Bytes in pages that are currently allocated to the application (rounded up to the nearest page boundary). This is always a multiple of the system page size and is always greater than or equal to jemalloc_allocated.
Watch for: A large and persistent gap between jemalloc_active and jemalloc_allocated indicates internal fragmentation — memory has been handed to the application in chunks larger than actually needed.
jemalloc_allocated
Bytes actually requested by the application and currently in use. This is the “true” working-set size: what ProxySQL itself believes it is holding. The difference between jemalloc_active and jemalloc_allocated represents per-object internal fragmentation.
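The gap between these two metrics can be expressed as a ratio for monitoring. A minimal sketch (the sample values are illustrative, and any alerting threshold is left to the operator):

```python
def internal_fragmentation(active, allocated):
    """Fraction of jemalloc_active not actually requested by the application."""
    return (active - allocated) / active

# Illustrative values in bytes, not real measurements.
ratio = internal_fragmentation(active=113246208, allocated=104857600)
print(f"internal fragmentation: {ratio:.1%}")  # prints "internal fragmentation: 7.4%"
```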
jemalloc_mapped
Total bytes in extents mapped by the allocator via mmap (or the equivalent). This is always greater than or equal to jemalloc_resident because it includes virtual address space that is not currently backed by physical pages (for example, pages jemalloc has purged with madvise() so the OS can reclaim them, while the virtual mapping stays in place).
jemalloc_metadata
Bytes dedicated to jemalloc’s own bookkeeping structures: chunk headers, run descriptors, bin information, and so on. On a well-functioning system this value should be small relative to jemalloc_allocated (typically 1–3%). A disproportionately large metadata footprint can occur when there is extreme allocation-size diversity creating many small, sparsely populated arenas.
jemalloc_retained
Bytes in virtual memory mappings that were retained by jemalloc rather than returned to the operating system after being freed by the application. jemalloc keeps these mappings for future reuse to avoid the overhead of repeated mmap/munmap calls. Retained memory is excluded from both jemalloc_resident and jemalloc_mapped, and it consumes no physical RAM until it is reused.
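Assuming jemalloc 5 semantics, where retained mappings are excluded from the mapped statistic, the allocator's total reserved virtual address space can be estimated by adding the two metrics. A minimal sketch with illustrative values:

```python
# Illustrative values (bytes), not real measurements.
mapped = 150 * 2**20    # jemalloc_mapped: mapped extents
retained = 40 * 2**20   # jemalloc_retained: kept for reuse, excluded from mapped

# Total virtual address space jemalloc has reserved from the OS.
total_virtual = mapped + retained
print(f"{total_virtual / 2**20:.0f} MiB virtual, of which {retained / 2**20:.0f} MiB retained")
```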
Interpreting the Metrics Together
A healthy ProxySQL instance typically shows:
- jemalloc_allocated ≈ jemalloc_active (low internal fragmentation)
- jemalloc_resident modestly above jemalloc_active (small allocator overhead)
- jemalloc_mapped ≥ jemalloc_resident (some retained virtual space is normal)
- Stable values over time under steady load
Situations that warrant investigation:
| Symptom | Likely cause |
|---|---|
| jemalloc_resident grows without bound | Memory leak or severe external fragmentation |
| Large jemalloc_active − jemalloc_allocated gap | Internal fragmentation from mixed allocation sizes |
| jemalloc_mapped >> jemalloc_resident | Large retained pool; usually benign but can mask true RSS |
| jemalloc_metadata > 5% of jemalloc_allocated | Many tiny, sparse arenas; consider tuning arena count |
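These symptoms can be folded into a simple programmatic check. This is a sketch only: the check_jemalloc_health helper and its thresholds are illustrative assumptions, not ProxySQL features, and the unbounded-growth symptom is omitted because detecting it requires comparing snapshots over time:

```python
def check_jemalloc_health(m, frag_threshold=0.10, metadata_threshold=0.05):
    """Flag single-snapshot symptoms from the table above.

    `m` maps metric name -> bytes. Thresholds are illustrative assumptions,
    not ProxySQL recommendations.
    """
    warnings = []
    gap = (m["jemalloc_active"] - m["jemalloc_allocated"]) / m["jemalloc_active"]
    if gap > frag_threshold:
        warnings.append("large jemalloc_active - jemalloc_allocated gap (internal fragmentation)")
    if m["jemalloc_mapped"] > 2 * m["jemalloc_resident"]:
        warnings.append("jemalloc_mapped >> jemalloc_resident (large retained pool; often benign)")
    if m["jemalloc_metadata"] > metadata_threshold * m["jemalloc_allocated"]:
        warnings.append("jemalloc_metadata disproportionately large (many sparse arenas?)")
    return warnings

# Illustrative snapshot (bytes), not real measurements:
snapshot = {
    "jemalloc_allocated": 100 * 2**20,
    "jemalloc_active":    120 * 2**20,
    "jemalloc_resident":  130 * 2**20,
    "jemalloc_mapped":    300 * 2**20,
    "jemalloc_metadata":    2 * 2**20,
}
for w in check_jemalloc_health(snapshot):
    print(w)
```

With the values above, the check flags the fragmentation gap (about 17% of active) and the mapped-to-resident ratio, but not metadata (2% of allocated).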
For deeper analysis, consult the official jemalloc documentation, in particular the epoch mallctl and the stats.* namespace for programmatic introspection.