On Wed, Jan 6, 2021 at 10:29 AM Rob Northcott <Rob.Northcott@xxxxxxxxxxxxxx> wrote:
We have an application that uses a Postgres database (currently 9.6). All the autovacuum/analyze settings are just left at the defaults.
We’ve had a few instances recently where users have complained of slow performance and running a manual Analyze (not vacuum) on their database has resolved it.
Should it be considered normal that this needs to be done manually periodically even when autovacuum is on?
It’s not a very heavy application by a lot of people’s standards (it’s just a point-of-sale/stock-control system) but there are a couple of hundred databases on the server, all in fairly constant use through the day.
You may need more autovacuum workers, and a higher cost limit before work pauses for the cost delay. Depending on how many tables there are per database in the cluster, more workers would likely be ideal, or *maybe* a shorter naptime if there are tons of tables overall and all of them are relatively small and see few changes.
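As a rough sketch only (the values below are illustrative assumptions, not a recommendation for your hardware), the relevant postgresql.conf knobs would look something like:

    autovacuum_max_workers = 6            # default 3; changing it requires a restart
    autovacuum_vacuum_cost_limit = 2000   # default -1, which falls back to vacuum_cost_limit (200)
    autovacuum_vacuum_cost_delay = 10ms   # default 20ms on 9.6 (lowered to 2ms in PG12)
    autovacuum_naptime = 30s              # default 1min; how often each database is considered
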
It really depends on your workload and *why* the tables aren't getting analyzed as frequently as you need. If your cost limit/delay leave autovacuum heavily throttled (the default settings do, given the I/O throughput of any decent production machine today), and you have some large tables with many large indexes that constantly need vacuuming, and you don't have enough maintenance work memory configured to avoid re-scanning those indexes repeatedly to finish the work... the workers may never get around to the other tables. If you have a table that is (nearly) all inserts, then a periodic manual vacuum/analyze is prudent before PG13 (which added insert-triggered autovacuum).
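For example (table name and numbers here are just placeholders, adjust to your own schema and sizes), on 9.6 that kind of targeted maintenance could look like:

    -- give the manual pass enough memory to process large indexes in one go;
    -- autovacuum workers instead use autovacuum_work_mem / maintenance_work_mem from postgresql.conf
    SET maintenance_work_mem = '1GB';

    -- make autovacuum/analyze kick in sooner on one particularly busy table
    ALTER TABLE stock_movements
      SET (autovacuum_vacuum_scale_factor = 0.02,
           autovacuum_analyze_scale_factor = 0.01);

    -- periodic manual pass (e.g. nightly from cron) for insert-mostly tables pre-PG13
    VACUUM (ANALYZE, VERBOSE) stock_movements;
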
Are you logging all autovacuums/analyzes and able to run pgBadger or a similar analysis on the logs? It would be helpful to see some stats on what is going on currently.
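If you can tolerate the extra log volume, logging every autovacuum action is as simple as:

    log_autovacuum_min_duration = 0   # log all autovacuum/autoanalyze runs with their timing

And even without pgBadger, a quick look at the statistics views will show which tables are being neglected:

    -- last (auto)vacuum/analyze per table, and how much has changed since the last analyze
    SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
           n_dead_tup, n_mod_since_analyze
    FROM pg_stat_user_tables
    ORDER BY n_mod_since_analyze DESC;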