On 08/13/2018 04:24 PM, Ashu Pachauri wrote:
+ pgsql-general
Thanks and Regards,
Ashu Pachauri
---------- Forwarded message ---------
From: *Ashu Pachauri* <ashu210890@xxxxxxxxx>
Date: Mon, Aug 13, 2018 at 7:53 PM
Subject: Re: is there any adverse effect on DB if I set autovacuum scale factor to zero?
To: <raghavendrajsv@xxxxxxxxx>
The way I see it, *autovacuum_vacuum_scale_factor* is not an absolute
number but the fraction of a table that may consist of updated/deleted
tuples before the table becomes eligible for vacuuming. A factor of 0.1
ensures that a table becomes eligible for vacuuming once more than 10%
of its tuples are deleted/updated.
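For reference, the trigger condition documented in the PostgreSQL
manual combines the scale factor with *autovacuum_vacuum_threshold*.
A sketch of the math for an illustrative table, plus a query against
pg_stat_user_tables to see where each table stands:

  -- autovacuum queues a table roughly when:
  --   n_dead_tup > autovacuum_vacuum_threshold
  --                + autovacuum_vacuum_scale_factor * reltuples
  -- e.g. threshold = 50, scale factor = 0.1, 1,000,000-row table:
  --   50 + 0.1 * 1000000 = 100,050 dead tuples before vacuuming
  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC;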
1. If you think that 10% is too high for you in terms of storage cost,
you can decrease the number or set it to zero. But I would advise
increasing *autovacuum_vacuum_threshold* to something reasonable if you
do that; otherwise you pay the CPU cost of frequent vacuuming across
all tables. (A per-table sketch follows this list.)
2. However, if your issue is not the fixed 10% overhead but a lack of
throughput, i.e. you see the number of deleted/updated tuples growing
in an unbounded fashion, the right way to deal with it is a) a higher
value of *autovacuum_max_workers* and b) a lower value for
*autovacuum_naptime*. (Both appear in the sketch below.)
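A minimal sketch of both suggestions; the table name and the specific
values are placeholders, not recommendations:

  -- 1. Per-table override: drop the scale factor to zero, but keep a
  --    sane absolute threshold so the table is not vacuumed constantly.
  ALTER TABLE my_hot_table SET (
      autovacuum_vacuum_scale_factor = 0.0,
      autovacuum_vacuum_threshold    = 10000
  );

  -- 2. Instance-wide knobs. autovacuum_naptime takes effect on reload;
  --    autovacuum_max_workers requires a server restart.
  ALTER SYSTEM SET autovacuum_max_workers = 6;   -- default is 3
  ALTER SYSTEM SET autovacuum_naptime = '15s';   -- default is 1min
  SELECT pg_reload_conf();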
Increasing autovacuum_max_workers is unlikely to solve the issue with
throughput, because all the workers are throttled together - there's a
limit on the amount of work that can be done per second. Increasing the
number of workers is akin to allowing more cars on a highway, but also
lowering the speed limit.
You need to increase the limit on the amount of work, and lowering the
naptime is one way to do that.
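For completeness, the per-second limit described above is the
cost-based vacuum delay, which all autovacuum workers share; a sketch
of relaxing it (the values are illustrative only):

  -- Raising the cost budget (or shortening the delay) increases total
  -- autovacuum throughput, which is shared across all workers.
  ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- -1 = use vacuum_cost_limit
  ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '10ms';
  SELECT pg_reload_conf();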
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services