After running some further standalone tests using temp tables, I'm not convinced the problem is specific to temp table usage; it looks as though generic SQL activity degrades over time.
That said, what kinds of things should I be looking for that could deteriorate or bloat over time? Ordinarily the culprit would be infrequent vacuuming or analyzing, but that wouldn't be corrected by a restart of Postgres. In our case, restarting Postgres gives us a huge performance improvement, for a short while anyway.
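
For reference, here's roughly how I plan to check whether the system catalogs themselves are bloating (just a sketch; it assumes superuser, and relpages is only as fresh as the last vacuum/analyze):

    VACUUM ANALYZE pg_class;
    VACUUM ANALYZE pg_attribute;

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type')
    ORDER BY relpages DESC;

If relpages keeps climbing across cycles that would point at catalog bloat, but since a restart helps us, I suspect it won't.
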
By the way, we are using PG 7.4.6 on FreeBSD 5.3. The temp table has 15 columns: a timestamp, a double, and the rest integers. It has no indexes.
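
To make that concrete, the table is shaped like this (column names made up for this example; ours differ), and each cycle we insert a batch and truncate, all in the same long-lived session:

    CREATE TEMP TABLE sample_tmp (
        ts  timestamp,
        val double precision,
        i01 integer, i02 integer, i03 integer, i04 integer, i05 integer,
        i06 integer, i07 integer, i08 integer, i09 integer, i10 integer,
        i11 integer, i12 integer, i13 integer
    );

    -- repeated every cycle in the same backend:
    INSERT INTO sample_tmp VALUES (now(), 0.0, 0,0,0,0,0,0,0,0,0,0,0,0,0);
    TRUNCATE TABLE sample_tmp;
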
Thanks,
Steve
On 12/13/06, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
"Steven Flatt" <steven.flatt@xxxxxxxxx> writes:
> Any idea where the bloat is happening? I believe that if we were dropping
> and re-creating the temp table over and over, that could result in pg_class
> bloat (among other catalog tables), but what is going wrong if we use the
> same table over and over and truncate it?
That seems very strange --- I too would have expected a TRUNCATE to
bring you back to ground zero performance-wise. I wonder whether the
issue is not directly related to the temp table but is just some generic
resource leakage problem in a very long-running backend. Have you
checked to see if the backend process bloats memory-wise, or perhaps has
a huge number of files open (I wonder if it could be leaking open file
handles to the successive generations of the temp table)? Are you sure
that the slowdown is specific to inserts into the temp table, as opposed
to generic SQL activity?
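
For example, something along these lines would identify the backend to watch (a sketch; current_query only shows up if stats_command_string is on):

    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity;

    -- then at the OS level (FreeBSD), for the pid in question:
    --   ps -o pid,vsz,rss,command -p <pid>    (memory growth)
    --   fstat -p <pid> | wc -l                (open file handles)
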
Also, which PG version is this exactly ("7.4" is not specific enough)?
On what platform? Can you show us the full schema definition for the
temp table and any indexes on it?
regards, tom lane