Re: Memory exhaustion due to temporary tables?

[ please don't re-quote the entire thread when replying ]

Thomas Carroll <tomfecarroll@xxxxxxxxx> writes:
>     On Monday, December 10, 2018, 7:45:07 PM EST, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:  
>> You said you'd been able to reproduce this problem outside production.
>> Any chance you could boil that down to a sharable test case?

> The test case is simple - so simple that I am surprised no one has encountered
> this before; my real fear is that I am doing something stupid in my effort to
> make it shareable and simple.  In creating this example I have not run the
> code until it crashed the server; instead I used the measuring methodology you
> will see below (monitoring the smaps file).

Hmm.  So what this test case is doing is repeatedly creating a temp table
with ON COMMIT DROP, querying that table, and then closing the transaction
(allowing the temp table to go away).  It does leak memory in
CacheMemoryContext, and as far as I can tell, the leakage consists
entirely of negative catcache entries for pg_statistic rows.  That happens
because while querying the temp table, the planner tries to look up
statistics for the table; but there are none, since you haven't ANALYZEd
it.  So negative catcache entries get made in hopes of saving the cost of
probing pg_statistic again later.  But, when we drop the table, those
catcache entries don't get cleared because they do not match any catalog
rows that get deleted during the drop.
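
In SQL terms, the pattern being exercised is roughly the following (the table
and column names here are illustrative, not taken from the actual script):

-- run repeatedly by the client session; each iteration leaves behind
-- negative pg_statistic catcache entries in CacheMemoryContext
begin;
create temp table tt_work (id int) on commit drop;   -- illustrative name/column
insert into tt_work values (1);
select * from tt_work where id = 1;   -- planner probes pg_statistic, finds no stats
commit;                               -- table is dropped, negative entries are not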

In a lot of use-cases, this wouldn't matter too much, either because the
session isn't long-lived enough to accumulate huge numbers of negative
entries, or because other catalog activity causes the entries to get
flushed anyway.  But if you don't have much DDL going on other than
this temp table activity, then yeah it could build up.

Not sure about good ways to fix this.  I can think of various more-or-less
kludgy fixes that are specific to the pg_statistic case.  There's been some
ongoing discussion about trying to limit accumulation of negative catcache
entries more generally, but I don't think that's very close to being
committable.

In the meantime, you might think about switching over to some process
that doesn't create and drop the same table constantly.  Perhaps
along the lines of

create temp table if not exists tt_preTally (...) on commit delete rows;

if not exists (select 1 from tt_preTally) then
   insert into tt_preTally select ...
end if;

This'd have the advantage of reducing catalog churn in other catalogs
besides pg_statistic, too.
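
Spelled out as PL/pgSQL, that might look about like this (the column list and
the source query are placeholders for whatever your real process fills the
table with):

do $$
begin
  -- created once per session; on commit delete rows clears it at each
  -- commit instead of dropping and recreating it
  create temp table if not exists tt_preTally (k text, n bigint)
    on commit delete rows;

  if not exists (select 1 from tt_preTally) then
    insert into tt_preTally
    select k, count(*) from source_data group by k;   -- placeholder source query
  end if;

  -- ... work with tt_preTally here, within the same transaction ...
end
$$;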

			regards, tom lane




