Ok, I'm still working on the possibility of turning autovacuum on.
Question: I have tables that are multi-GB. When I run the threshold calculations using "normal" values, it seems that my large, important tables would almost never be vacuumed. Here is an example: in my largest database, my most important table has 1,855,970 tuples in it. However, this table holds 3 years of data. It has data loaded into it every day and is key to the majority of our queries. In the past 16 hours or so it has had the following activity (via pg_stat_user_tables): 2,960 tuples inserted, 40,389 tuples updated, and no tuples deleted.
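For reference, I'm pulling those counters with a query along these lines (the table name is just a placeholder):

    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     WHERE relname = 'my_big_table';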
If I understand the threshold calculations, then I need to run with autovacuum_vacuum_scale_factor set to between .001 and .005. Does this seem too small?
Here is the math:

vacuum threshold = vacuum base threshold + (vacuum scale factor * number of tuples)

With a scale factor of .001:
vacuum threshold = 200 + (.001 * 1,855,970) = 2,056

With a scale factor of .005:
vacuum threshold = 200 + (.005 * 1,855,970) = 9,480
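To sanity-check that across the whole database, a rough query against pg_class (using the .001 scale factor and the 200 base threshold from above) would be:

    SELECT relname,
           reltuples,
           200 + (0.001 * reltuples) AS vacuum_threshold
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY reltuples DESC;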
Given those numbers, it would seem I should set it somewhere in this range. Do you agree? If not, where would you recommend setting them?
This is my largest database, at over 110 GB. Its tables range from 305,702,000 reltuples down to about 500,000 reltuples (for the important tables).
I'm just not sure how to properly set the numbers.
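Just so it's concrete, at the cluster level I assume I'd be putting something like this in postgresql.conf (the scale factor here is the low end of the range I calculated, not a settled value):

    autovacuum = on
    autovacuum_vacuum_threshold = 200
    autovacuum_vacuum_scale_factor = 0.001    # or anywhere up to 0.005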
My servers have databases in the following ranges:
Server 1: 66 GB down to 100 MB
Server 2: 117 GB down to 1.70 GB
Server 3: 95 GB down to 4 GB
Server 4: 83 GB down to 1.2 GB
While the sizes differ, the ratios should be the same since they are identical databases.
Any suggestions or recommendations for the autovacuum values? Also, should I set autovacuum just at the cluster level, or should I also give some of the larger tables table-specific values?
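For the table-specific route, assuming a server version that supports per-table storage parameters (8.4 or later; the table name is again just a placeholder), I'd expect something like:

    ALTER TABLE my_big_table
      SET (autovacuum_vacuum_scale_factor = 0.001,
           autovacuum_vacuum_threshold = 200);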
Thanks,
Chris