$ free -h   # Before the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.2G       2.6G       212M        90M       4.9G
-/+ buffers/cache:        248M       7.6G
Swap:           0B         0B         0B
$ free -h   # After the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.3G       2.5G       212M        90M       4.9G
-/+ buffers/cache:        312M       7.5G
Swap:           0B         0B         0B
[I.K >> ] Are you talking about the buffers/cache increase? AFAIK this memory is used by the kernel as a cache in front of block devices (an HDD, for example).
Postgres does not use this memory directly; it simply reads data from the block device, and the kernel caches it. A process can’t be OOM-killed because of it.
You should configure your Postgres to NEVER exceed available RAM. You can use a tool like pgtune (http://pgtune.leopard.in.ua/) or calculate the settings manually.
I do not remember the exact formula, but it should be something like “work_mem * max_connections + shared_buffers”, and the result should be around 80% of your machine’s RAM (minus the RAM used by other processes and the kernel).
That will save you from OOM.
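The formula above can be sketched in a few lines; the settings below are purely hypothetical examples, not recommendations:

```python
MB = 1024 * 1024

# Hypothetical configuration values, for illustration only
work_mem = 8 * MB          # per sort/hash operation, not per connection
max_connections = 200
shared_buffers = 2048 * MB

# Rough upper bound per the formula above; note a single query can
# allocate work_mem several times (once per sort/hash node), so the
# true peak can be higher than this estimate.
estimate = work_mem * max_connections + shared_buffers
print(estimate // MB)      # estimate in MB
```

Compare the printed figure against roughly 80% of the machine's RAM when sizing the settings.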
If you hit a performance bottleneck after that, you can track it down with tools like “log_min_duration_statement”, “track_io_timing”, and system-provided tools.
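A minimal postgresql.conf fragment enabling the diagnostics mentioned above (the threshold value is a hypothetical example):

```
# Log every statement that runs longer than 250 ms (value in milliseconds)
log_min_duration_statement = 250
# Record time spent on block reads/writes (surfaces in pg_stat_database
# and in EXPLAIN (ANALYZE, BUFFERS) output)
track_io_timing = on
```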
Ilya Kazakevich
JetBrains
The Drive to Develop
Hi Ilya,
> Are you talking about the buffers/cache increase? AFAIK this memory is used by the kernel as a cache in front of block devices (an HDD, for example).
If I'm reading the output correctly, buffers and cached do not increase. I'm looking at the 248MB -> 312MB change under the "used" column in the "-/+ buffers/cache" row. That number already excludes buffers/cache, so kernel caching can't explain the ~60MB increase. "Shared" also stays the same (212MB), so shared buffers filling up can't explain the increase either.
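The "-/+ buffers/cache" row can be reproduced from the Mem row, which is how I'm reading it; a quick sketch using the rounded figures from the output above:

```python
def adjusted_used(used_mb, buffers_mb, cached_mb):
    """Memory actually held by processes, excluding kernel buffers/cache."""
    return used_mb - buffers_mb - cached_mb

# Rounded figures from the "after" snapshot: 5.3G used, 90M buffers,
# 4.9G cached. The rounding in `free -h` explains the small gap from
# the reported 312M.
print(round(adjusted_used(5.3 * 1024, 90, 4.9 * 1024)))  # ~320, near 312M
```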
> I do not remember the exact formula, but it should be something like “work_mem * max_connections + shared_buffers”, and the result should be around 80% of your machine’s RAM (minus the RAM used by other processes and the kernel). That will save you from OOM.
My Postgres is configured with *very* conservative values: work_mem (4MB) * max_connections (100) + shared_buffers (512MB) = ~1GB, yet Postgres managed to fill up a 4GB server. I'm seeing workers consuming hundreds of MB of memory each (and not releasing any of it until the connection closes), despite work_mem being 4MB.
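For reference, the conservative estimate above works out as:

```python
# The values stated above, in MB
work_mem_mb = 4
max_connections = 100
shared_buffers_mb = 512

estimate_mb = work_mem_mb * max_connections + shared_buffers_mb
print(estimate_mb)  # 912, i.e. roughly 1 GB
```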
— Theron
On Thu, Aug 25, 2016 at 8:57 AM, Ilya Kazakevich <Ilya.Kazakevich@xxxxxxxxxxxxx> wrote: