On 8/24/07, Jeff Amiel <becauseimjeff@xxxxxxxxx> wrote:
> Over the last 2 days, I have spotted 10 "Out of Memory"
> errors in the postgres logs (never saw them before with the same
> app/usage patterns on tuned hardware/postgres under
> FreeBSD).
>
> Aug 22 18:08:24 db-1 postgres[16452]: [ID 748848
> local0.warning] [6-1] 2007-08-22 18:08:24 CDT ERROR:
> out of memory.
> Aug 22 18:08:24 db-1 postgres[16452]: [ID 748848
> local0.warning] [6-2] 2007-08-22 18:08:24 CDT
> DETAIL: Failed on request of size 536870910.
>
> What I found interesting is that it's ALWAYS the same
> size: 536870910.
>
> I am running autovacuum and Slony, but I see
> nothing in the logs anywhere near the "out of memory"
> errors related to either. (Under 8.0.X, autovacuum used to
> log an INFO message every time it vacuumed, which
> came in handy... I assume it doesn't do this any more?)
>
> The events are fairly spread out, and by looking at the
> app logs and the rest of the DB logs I cannot correlate
> them to any specific query or activity.
>
> Any help would be appreciated.

I've experienced something similar. The cause turned out to be a
combination of overcommit=off, a large maintenance_work_mem, and
several parallel vacuums of fast-changing tables. VACUUM seems to
allocate the full maintenance_work_mem before it starts, whatever the
actual size of the table. The fix was to issue
"SET maintenance_work_mem = '32MB'" before the small vacuums and to
serialize some of them.

-- 
marko
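
The constant request size in the log fits this explanation: 512MB is
536870912 bytes, and the failed request (536870910) is exactly 2 bytes
shy of that, which is consistent with a maintenance_work_mem set near
512MB (an assumption; the original post does not show the setting). A
minimal sketch of the workaround, using a hypothetical table name:

```sql
-- 512MB = 536870912 bytes; the constant failed request (536870910) is
-- 2 bytes under that, consistent with maintenance_work_mem ~ 512MB.

SET maintenance_work_mem = '32MB';  -- pre-8.2 servers do not accept
                                    -- unit suffixes; use the value in
                                    -- kB instead: 32768
VACUUM small_table;                 -- hypothetical table name
RESET maintenance_work_mem;         -- restore the configured default
```

Because SET only affects the current session, this caps the allocation
for the small vacuums without touching the server-wide setting used by
the big ones.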