> I am pondering about this... My thinking is that since *_scale_factor need
> to be set manually for largish tables (>1M), why not set
> autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor, and
> increase the value of autovacuum_vacuum_threshold to, say, 10000, and
> autovacuum_analyze_threshold to 2500 ? What do you think ?

I really doubt you want to be vacuuming a large table every 10,000 rows,
or analyzing every 2,500 rows, for that matter. These things aren't free,
or we'd just do them constantly.

Manipulating the analyze thresholds for a large table makes sense; on
tables of over 10m rows, I often lower autovacuum_analyze_scale_factor to
0.02 or 0.01 to get them analyzed a bit more often (see the sketch at the
end of this mail). But vacuuming them more often makes no sense.

> Also, with systems handling 8k-10k tps and dedicated to a single database,
> would there be any cons to decreasing autovacuum_naptime to say 15s, so
> that the system perf is less spiky ?

You might also want to consider more autovacuum workers. Although if
you've set the thresholds as above, that's the reason autovacuum is always
busy and not keeping up ...
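For the record, here's roughly what that per-table override looks like,
assuming 8.4 or later, where the autovacuum settings are table storage
parameters (the table name is just an example):

    -- Analyze this one big table more aggressively, leaving the
    -- global defaults alone for everything else:
    ALTER TABLE big_events SET (autovacuum_analyze_scale_factor = 0.02);

Autovacuum analyzes a table once the rows changed since the last analyze
exceed autovacuum_analyze_threshold + autovacuum_analyze_scale_factor *
reltuples, so at 0.02 a 10m-row table gets re-analyzed after roughly
200,000 changed rows instead of the ~1,000,000 you'd see at the default
0.1.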
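And on the naptime question, the relevant postgresql.conf knobs look
something like this (the values are illustrations, not recommendations):

    autovacuum_naptime = 15s      # default 1min; how often the launcher
                                  # considers each database
    autovacuum_max_workers = 6    # default 3

Note that autovacuum_naptime takes effect on a reload, but
autovacuum_max_workers only changes with a full server restart.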
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com