On Wed, Feb 8, 2012 at 2:59 PM, Ofer Israeli <oferi@xxxxxxxxxxxxxx> wrote:
> The settings we used were not in the postgresql.conf file, but rather an
> update of the pg_autovacuum table where we set the vac_cost_limit to 2000.
> The reason for this being that we wanted this definition only for the big
> (TOASTed) table I was referring to.
>
> The logged settings in the ~400 second case were:
> autovac_balance_cost(pid=6224 db=16385, rel=17881, cost_limit=10, cost_delay=1)
>
> Which comes as quite a surprise as it seems that the cost_limit is not
> set, or am I missing something?

That doesn't look right, but without step-by-step directions it will be
hard for anyone to reproduce this.

Also, what version are you testing on?  pg_autovacuum was removed in
PostgreSQL 8.4, so you must be using PostgreSQL 8.3 or earlier.  You might
at least want to make sure you're running a late enough minor version to
have this fix:

Author: Tom Lane <tgl@xxxxxxxxxxxxx>
Branch: master Release: REL9_1_BR [b58c25055] 2010-11-19 22:29:44 -0500
Branch: REL9_0_STABLE Release: REL9_0_2 [b5efc0940] 2010-11-19 22:28:25 -0500
Branch: REL8_4_STABLE Release: REL8_4_6 [fab2af30d] 2010-11-19 22:28:30 -0500
Branch: REL8_3_STABLE Release: REL8_3_13 [6cb9d5113] 2010-11-19 22:28:35 -0500

    Fix leakage of cost_limit when multiple autovacuum workers are active.

    When using default autovacuum_vac_cost_limit, autovac_balance_cost relied
    on VacuumCostLimit to contain the correct global value ... but after the
    first time through in a particular worker process, it didn't, because
    we'd trashed it in previous iterations.  Depending on the state of other
    autovac workers, this could result in a steady reduction of the effective
    cost_limit setting as a particular worker processed more and more tables,
    causing it to go slower and slower.  Spotted by Simon Poole (bug #5759).
    Fix by saving and restoring the GUC variables in the loop in
    do_autovacuum.  In passing, improve a few comments.

    Back-patch to 8.3 ... the cost rebalancing code has been buggy since it
    was put in.

Also:

> And one more thing that seems a bit strange - after a 1-minute run, we
> would expect to see 1700 Tuples Updated (100*17), but instead we see 1700
> Tuples Inserted (and no deletes).

I don't think TOAST ever updates chunks in place.  It just inserts and
deletes; or at least I think that's what it does.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
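P.S. In case it's useful, a per-table override of the kind described above
would look something like this on 8.3 or earlier.  This is just a sketch:
the table name is a placeholder, and it assumes a pg_autovacuum row for
that table already exists (otherwise you'd INSERT one rather than UPDATE):

UPDATE pg_autovacuum
   SET vac_cost_limit = 2000
 WHERE vacrelid = 'my_big_toasted_table'::regclass;

-- and to check whether the running minor release already carries the
-- cost_limit leakage fix above (8.3.13, 8.4.6, or 9.0.2 and later):
SELECT version();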