Jason Lustig wrote:
> I lowered the maintenance_work_mem to 50MB and am still getting the same
> errors:
>
> Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory
> Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request of size 52428798.
> Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory
> Oct 16 09:27:57 [16421]: [1-2] user=,db= DETAIL: Failed on request of size 52428798.
> Oct 16 09:29:44 [16500]: [1-1] user=,db= ERROR: out of memory
> Oct 16 09:29:44 [16500]: [1-2] user=,db= DETAIL: Failed on request of size 52428798.
Hmm - it's now failing on a request of 50MB (52428798 bytes, just under
50MB = 52428800 bytes), which shows it is in fact maintenance_work_mem
that's the issue.
> Looking at my free memory (from top) I find:
>
> Mem:  2062364k total, 1846696k used, 215668k free, 223324k buffers
> Swap: 2104496k total, 160k used, 2104336k free, 928216k cached
>
> So I don't think I'm running out of memory overall... it just seems to
> keep trying and failing. Is there a reason why Postgres would be doing
> something without a username or database? Or is that just how autovacuum
> works?
I've not seen an error logged quite like that before, but if the process
isn't connected as a client yet then the empty user=/db= fields would make
sense - autovacuum never has a client connection.
I'm guessing this is a per-user limit that the postgres user is hitting.
If you "su" to user postgres and run "ulimit -a", that should show you
whether you have any limits defined. See "man bash" for more details on
ulimit.
--
Richard Huxton
Archonet Ltd