Jeff Janes <jeff.janes@xxxxxxxxx> writes:
> On Mon, Apr 15, 2019 at 9:49 PM Gunther <raj@xxxxxxxx> wrote:
>> Isn't there some other way?

> I wonder if valgrind or something like that could be of use.  I don't
> know enough about those tools to know.  One problem is that this is not
> really a leak.  If the query completed successfully, it would have freed
> the memory.  And when the query completed with an error, it also freed
> the memory.  So it's just an inefficiency, not a true leak, and
> leak-detection tools might not work.  But as I said, I have not studied
> them.

valgrind is a useful idea, given that Gunther is building his own
postgres (so he could compile it with -DUSE_VALGRIND plus
--enable-cassert, both of which are needed to get valgrind to understand
palloc allocations).

I don't recall the details right now, but it is possible to trigger a
valgrind report intra-session, similar to the one you get by default at
process exit.  You could wait till the memory has bloated a good deal
and then ask for one of those reports that classify allocations by call
chain.  (I believe memcheck, which is valgrind's default tool, is the
one you want for this.)

However, at least for the case involving hash joins, I think we have a
decent fix on the problem location already: it seems to be a matter of
continually deciding to increase nbatch, and what we need to
investigate now is why that's happening.  If there's a leak that shows
up without any hash joins in the plan, that's a separate matter for
investigation.
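For reference, a rough sketch of the build-and-run steps described
above (the flags are real; the exact invocation and data-directory path
here are illustrative, and src/tools/valgrind.supp is the suppression
file shipped in the source tree):

    $ ./configure --enable-cassert CPPFLAGS="-DUSE_VALGRIND" CFLAGS="-O0 -g"
    $ make && make install
    $ valgrind --leak-check=full --show-leak-kinds=all --trace-children=yes \
          --suppressions=src/tools/valgrind.supp \
          postgres -D /path/to/datadir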
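And a sketch of asking for one of those intra-session reports: with a
reasonably recent valgrind, memcheck's gdbserver is enabled by default,
so once the backend has bloated you can send a monitor command from
another terminal:

    $ vgdb --pid=<backend pid> leak_check full reachable any

Alternatively (still just a sketch, not something in the tree), with
-DUSE_VALGRIND one could compile a memcheck client request into a
convenient spot and get the same report from inside the backend:

    #include <valgrind/memcheck.h>

    /* emit a leak/still-reachable report right now, mid-session */
    VALGRIND_DO_LEAK_CHECK;

			regards, tom lane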