greigwise <greigwise@xxxxxxxxxxx> writes:
> Hello, I'm running postgres 9.6.10 on Centos 7.  Seeing the occasional
> out of memory error trying to run a query.  In the logs I see something
> like this:

> Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
> 319665696 used
> 2018-09-20 18:08:01 UTC xxxx 5ba3e1a2.7a8a dbname ERROR:  out of memory
> 2018-09-20 18:08:01 UTC xxxx 5ba3e1a2.7a8a dbname DETAIL:  Failed on request
> of size 2016.

> If I have 142439136 free, then why am I failing on a request of size 2016?

The free space must be in contexts other than the one that last little
request wanted space in.  Overall, you've got about 460MB of space
consumed in that session, so it's not *that* surprising that you got OOM.
(At least, it's unsurprising on a 32-bit machine.  If the server is
64-bit I'd have thought the kernel would be a bit more liberal.)

But anyway, this looks like a mighty inefficient usage pattern at best,
and maybe a memory leak at worst.  Can you create a self-contained test
case that does this?

			regards, tom lane
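[Editor's illustration: the point about free space living in other memory contexts can be seen in a toy model. This is NOT PostgreSQL's actual aset.c allocator; the context names and capacities below are made up purely to show why a large "free" total does not help a small request made in a different, exhausted context.]

```python
# Toy model of per-context allocation: a request is served only from the
# context it was made in, so free space elsewhere cannot satisfy it.
# Names and sizes are hypothetical; this only illustrates the principle.

class ToyContext:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # bytes this context may hold in total
        self.used = 0

    def alloc(self, nbytes):
        # Allocation checks only this context's own headroom.
        if self.used + nbytes > self.capacity:
            raise MemoryError("out of memory in %s: failed on request "
                              "of size %d" % (self.name, nbytes))
        self.used += nbytes

big = ToyContext("CacheMemoryContext", capacity=500_000)
small = ToyContext("per-tuple context", capacity=8_192)

big.alloc(100_000)     # ~400 KB still free here...
small.alloc(8_000)     # ...but this context is nearly exhausted

try:
    small.alloc(2016)  # fails despite ample free space in the other context
except MemoryError as e:
    print(e)           # out of memory in per-tuple context: ...size 2016
```

The session-wide "Grand total" in the log aggregates all contexts, which is why a large free figure there is compatible with one context failing a 2016-byte request.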