Gunther <raj@xxxxxxxx> writes:
> For weeks now, I am banging my head at an "out of memory" situation.
> There is only one query I am running on an 8 GB system, whatever I try,
> I get knocked out on this out of memory. It is extremely impenetrable to
> understand and fix this error. I guess I could add a swap file, and then
> I would have to take the penalty of swapping. But how can I actually
> address an out of memory condition if the system doesn't tell me where
> it is happening?

> You might want to see the query, but it is a huge plan, and I can't
> really break this down. It shouldn't matter though. But just so you can
> get a glimpse here is the plan:

Is that the whole plan?  With just three sorts and two materializes, it
really shouldn't use more than roughly 5X work_mem.  What do you have
work_mem set to, anyway?  Is this a 64-bit build of PG?

Also, are the estimated rowcounts shown here anywhere close to what you
expect in reality?

If there are any AFTER INSERT triggers on the insertion target table,
you'd also be accumulating per-row trigger queue entries ... but if
there's only circa 35K rows to be inserted, it's hard to credit that
eating more than a couple hundred KB, either.

> Might this be a bug?

It's conceivable that you've hit some memory-leakage bug, but if so you
haven't provided nearly enough info for anyone else to reproduce it.
You haven't even shown us the actual error message :-(

https://wiki.postgresql.org/wiki/Guide_to_reporting_problems

			regards, tom lane
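
[For readers following along: the checks suggested above — the work_mem
setting, whether the build is 64-bit, and whether the insertion target
has AFTER triggers — can be run in psql roughly as follows. The table
name `mytable` is a placeholder for the actual insertion target, which
was not named in the thread.]

```sql
-- Per-node memory budget for sorts/hashes; each of the ~5
-- sort/materialize nodes in the plan may use up to this much.
SHOW work_mem;

-- version() reports the build's target architecture, so a 64-bit
-- build is visible here.
SELECT version();

-- Any user-level AFTER triggers on the insertion target table?
-- (NOT tgisinternal hides the system-generated FK triggers, which
-- also queue per-row events.)
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass
  AND NOT tgisinternal;
```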