My transaction calls the same stored procedure many times over. Over
the lifetime of the transaction, that stored procedure slows down by
roughly 2 orders of magnitude. The procedure itself tries to look up
several strings in dictionary tables and, if a string isn't there
(though most of them will be), inserts it. All those dictionary tables
have indexes. After it has converted most of the strings into ids, it
does another lookup on a table and if it finds a matching row (should
be the common case) it updates a timestamp column of that row;
otherwise, it inserts a new row.
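For what it's worth, the lookup-or-insert part looks roughly like this. This is only a sketch of the shape of the procedure; the table and column names (dict, name, id) are made up for illustration:

```sql
-- Hypothetical sketch; real table/column names differ.
CREATE OR REPLACE FUNCTION lookup_or_insert(p_name text)
RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    -- Common case: the string is already in the dictionary.
    SELECT id INTO v_id FROM dict WHERE name = p_name;
    IF NOT FOUND THEN
        -- Rare case: add it and capture the new id.
        INSERT INTO dict (name) VALUES (p_name)
            RETURNING id INTO v_id;
    END IF;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;
```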
So.... the table sizes aren't changing much, but there are a lot of
updates. Based on pg_stat_user_tables, I suspect the procedure is
using indexes rather than table scans. Is there a better way to know?
On Nov 1, 2006, at 1:31 AM, Richard Huxton wrote:
Ben wrote:
I've got a long-running, update-heavy transaction that
increasingly slows down the longer it runs. I would expect that
behavior, if there was some temp file creation going on. But
monitoring vmstat over the life of the transaction shows virtually
zero disk activity. Instead, the system has its CPU pegged the
whole time.
So.... why the slowdown? Is it an MVCC thing? A side effect of
calling stored procedures a couple hundred thousand times in a
single transaction? Or am I just doing something wrong?
You'll need to provide some more information before anyone can come
up with something conclusive. What queries slow down, by how much
and after what updates (for example). It could be an update/vacuum-related
problem, or it could be that your stored procedures aren't
coping with changes in table size (if table(s) are changing size).
--
Richard Huxton
Archonet Ltd