From: Michael Lewis <mlewis@xxxxxxxxxxx>
On Wed, Jan 6, 2021 at 10:29 AM Rob Northcott <Rob.Northcott@xxxxxxxxxxxxxx> wrote:
> You may need more workers, and a higher cost limit before work is paused for cost_delay.
> Depending on how many tables per database are in the cluster, more workers would likely be
> ideal, or *maybe* a smaller naptime if there are tons of tables overall and all of them are
> relatively small/see few changes.
>
> It really depends on your workload and *why* the tables aren't getting analyzed as frequently
> as you need. If your cost limit/delay mean that autovacuum/analyze is rather throttled (and
> the default settings would be that situation given today's I/O throughput on any decent
> production machine), and you have some large tables with many large indexes that are
> constantly in need of vacuuming, and you don't have sufficient maintenance work memory
> configured to avoid re-scanning the indexes repeatedly to get the work done... you may never
> be getting around to the other tables.
>
> If you have a table that is (nearly) all inserts, then a periodic vacuum/analyze done manually
> is prudent before PG13.
>
> Are you logging all autovacuums/analyzes and able to run pgBadger or a similar analysis on it?
> It would be helpful to see some stats on what is going on currently.
Thanks for the tips. I don’t think it’s being logged unfortunately (but we could always turn it on if we need more info), but what you’ve said at least confirms that a manual analyze shouldn’t be necessary. I’ll have to go and read up on the autovacuum settings (it’s not something I’ve looked into in detail before; I’ve just left the default settings, which look like they’re not doing what we need).
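In the meantime, this is roughly what I plan to run to see what autovacuum is actually doing per table, and what the relevant settings currently are (an untested sketch; the ordering and LIMIT are just my guess at what's useful to look at first):

-- Per-table vacuum/analyze activity and dead-tuple counts
SELECT relname,
       n_live_tup,
       n_dead_tup,
       n_mod_since_analyze,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- Current values of the settings you mentioned
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum_max_workers',
               'autovacuum_naptime',
               'autovacuum_vacuum_cost_limit',
               'autovacuum_vacuum_cost_delay',
               'autovacuum_work_mem',
               'maintenance_work_mem',
               'log_autovacuum_min_duration');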
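If the cost limit/delay do turn out to be the bottleneck as you describe, I was thinking of something along these lines; the numbers are placeholders to show which knobs are involved, not recommendations:

-- Illustrative values only; the right numbers depend on the workload and hardware.
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;   -- default -1, i.e. falls back to vacuum_cost_limit (200)
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';  -- default was 20ms before PG12
ALTER SYSTEM SET autovacuum_max_workers = 6;            -- default 3; changing this requires a restart
ALTER SYSTEM SET maintenance_work_mem = '1GB';          -- fewer repeated index scans per vacuum pass
ALTER SYSTEM SET log_autovacuum_min_duration = 0;       -- log every autovacuum/analyze so pgBadger has data
SELECT pg_reload_conf();                                 -- picks up everything except the worker count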
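And if we do have any tables that are (nearly) all inserts, I take it the periodic manual pass you mean is just something like this (the table name is only a placeholder):

-- Scheduled manual vacuum/analyze for an insert-only table, e.g. from cron
VACUUM (ANALYZE, VERBOSE) orders_archive;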