Richard Yen wrote:
> Hi All,
> I've recently run into problems with my kernel complaining that I ran
> out of memory, thus killing off postgres and bringing my app to a
> grinding halt.
> I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux.
> Naturally, I have to set my shmmax to 2GB because the kernel can't
> support more (well, I could set it to 3GB, but I use 2GB for safety).
> Shared_buffers is 200000 and max_connections is 600.
OK, that's ~1.6GB of shared memory.
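For what it's worth, a quick back-of-the-envelope check of that figure,
assuming the default 8kB block size:

  # 200000 shared_buffers * 8kB per buffer, expressed in MB
  $ echo $((200000 * 8 / 1024))
  1562

So roughly 1.5-1.6GB, before counting the per-connection bookkeeping
that also lives in shared memory.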
> Here is a snippet of my log output (I can give more if necessary):
> Sep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill
> process 11696 (postgres) score 1181671 and children.
OK, you've run out of memory at some point.
> My understanding is that if any one postgres process's memory usage,
> plus the shared memory, exceeds the kernel limit of 4GB, then the kernel
> will kill the process off. Is this true? If so, would postgres have
> some prevention mechanism that would keep a particular process from
> getting too big? (Maybe I'm being too idealistic, or I simply don't
> understand how postgres works under the hood.)
You've got max_connections of 600 and you think individual backends are
using more than 2.4GB of RAM each (the 4GB per-process limit minus your
~1.6GB of shared memory)? Long before that you'll run out of actual
RAM+swap. If you really did have 600 active backends, you'd only be able
to allocate ~24MB to each, and in practice you'd want much less than
that, to leave room for the OS disk cache.
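That ~24MB is the same sort of rough arithmetic: your 16GB of RAM minus
the ~1.6GB of shared memory, split across 600 backends:

  # (total RAM in MB - shared memory in MB) / max_connections
  $ echo $(( (16 * 1024 - 1600) / 600 ))
  24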
The important information missing is:
1. How much memory is in use, and by what (vmstat/top output)
2. What memory settings do you have in your postgresql.conf (work_mem,
maintenance_work_mem)
3. What was happening at the time (how many connections, etc.)
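If it helps, something along these lines would capture all three (the
postgresql.conf path is just an example, adjust it for wherever your
Gentoo install keeps it):

  $ vmstat 5 5                    # memory/swap usage, sampled a few times
  $ top -b -n 1 | head -n 30      # which processes are holding the memory
  $ grep -E 'work_mem|maintenance_work_mem|shared_buffers|max_connections' \
        /var/lib/postgresql/data/postgresql.conf
  $ psql -c "SELECT count(*) FROM pg_stat_activity;"   # connections in use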
--
Richard Huxton
Archonet Ltd