Richard Huxton <dev@xxxxxxxxxxxx> writes:
> James Im wrote:
>> What am I missing to limit the memory taken by session to 1MB?

> You can't. In particular, work_mem is memory *per sort* so can be
> several times that. If you're trying to get PG to run in 64MB or
> something like that, I think you're going to be disappointed.

Yeah.  I think the working RAM per backend is approaching a megabyte
these days just for behind-the-scenes overhead (catalog caches and so
forth), before you expend even one byte on per-query structures that
work_mem would affect.

Something else to consider: I dunno what tool you were using on Windows
to look at memory usage or how it counts shared memory, but on Unix a
lot of process-monitoring tools tend to count shared memory against
each process touching that shared memory.  Which leads to artificially
bloated numbers.  The default PG shared memory block size these days is
order-of-10-megabytes I think; if a backend has touched any significant
fraction of that since it started, that could dwarf the backend's true
private workspace size.

If you're concerned about total memory footprint for a pile of
backends, usually the right answer is to put some connection-pooling
software in front of them, not try to hobble each backend to work in a
tiny amount of space.

			regards, tom lane
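
P.S. To put a rough number on the "per sort" point, here's a minimal
sketch (the table and column names are made up, and the exact plan will
depend on your data): every Sort or Hash node in a plan gets its own
work_mem allowance, so a single query can easily claim several times
the setting.

    -- hypothetical example: with work_mem = '1MB', each Sort/Hash node
    -- below may use up to 1MB on its own, so the query as a whole can
    -- take roughly 2-3MB of per-query workspace.
    SET work_mem = '1MB';

    EXPLAIN
    SELECT o.customer_id, sum(o.amount) AS total
    FROM orders o                           -- made-up table
    JOIN customers c USING (customer_id)    -- if the planner hashes: one work_mem
    GROUP BY o.customer_id                  -- hash aggregate: possibly another
    ORDER BY total DESC;                    -- sort: yet another work_mem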