It was theoretical; my current database already does what you suggest, but I might increase the number of workers, as about 10 tables see a heavy update rate and are quite large compared to the others.
Sébastien
On Fri, Sep 14, 2012 at 5:49 PM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
> I am pondering about this... My thinking is that since *_scale_factor need
> to be set manually for largish tables (>1M), why not
> set autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor, and
> increase the value of autovacuum_vacuum_threshold to, say, 10000, and
> autovacuum_analyze_threshold to 2500? What do you think?

I really doubt you want to be vacuuming a large table every 10,000 rows.
Or analyzing every 2500 rows, for that matter. These things aren't
free, or we'd just do them constantly.
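For reference, autovacuum normally vacuums a table once dead rows exceed
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
(analyze works the same way with the analyze settings). A rough query like
the one below, assuming the default 50 / 0.2 / 0.1 settings, shows where
those trigger points sit for your biggest tables; reltuples is only an
estimate, so treat the numbers as ballpark:

    SELECT relname,
           reltuples::bigint    AS approx_rows,
           50 + 0.2 * reltuples AS vacuum_trigger_dead_rows,
           50 + 0.1 * reltuples AS analyze_trigger_changed_rows
    FROM   pg_class
    WHERE  relkind = 'r'
    ORDER  BY reltuples DESC
    LIMIT  10;

On a 10M-row table the defaults work out to a vacuum only after roughly
2,000,050 dead rows; a flat 10,000-row threshold with the scale factor
zeroed out would vacuum that same table about 200 times as often.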
Manipulating the analyze thresholds for a large table makes sense; on
tables of over 10M rows, I often lower autovacuum_analyze_scale_factor
to 0.02 or 0.01 to get them analyzed a bit more often. But vacuuming
them more often makes no sense.
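Those per-table overrides are just storage parameters, so something along
these lines does it without touching postgresql.conf (the table name is
only an example):

    -- Analyze after ~2% of rows change instead of the default 10%;
    -- vacuum settings are left alone.
    ALTER TABLE big_history_table
        SET (autovacuum_analyze_scale_factor = 0.02);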
> Also, with systems handling 8k-10k tps and dedicated to a single database,
> would there be any cons to decreasing autovacuum_naptime to say 15s, so
> that the system perf is less spiky?

You might also want to consider more autovacuum workers. Although if
you've set the thresholds as above, that's the reason autovacuum is
always busy and not keeping up ...
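If you do bump the worker count, both it and the naptime live in
postgresql.conf; a sketch with purely illustrative values:

    # postgresql.conf
    autovacuum_max_workers = 6   # default 3; change needs a server restart
    autovacuum_naptime = 15s     # default 1min; a reload is enough

Keep in mind the workers share autovacuum_vacuum_cost_limit, so more
workers does not by itself mean more total vacuum throughput.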
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com