Hi all,
The company I work for runs a large Postgres installation (50+ instances, 2-4 TB each). One of the key problems we face on vanilla Postgres is vacuum behavior under a high write rate (~20K writes/s) with random index access on UUID keys.
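For concreteness, the tables in question look roughly like this (names and columns invented for the list, the real schema is wider):

    CREATE TABLE events (
        id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),  -- random v4 keys
        account_id uuid NOT NULL,
        payload    jsonb,
        created_at timestamptz NOT NULL DEFAULT now()
    );
    CREATE INDEX ON events (account_id);
    CREATE INDEX ON events (created_at);

Because the ids are fully random, every insert lands on an effectively random leaf page of the primary key index, so the working set is the whole index.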
In one case the table is 50 GB and has 3 indexes of roughly 50 GB each. It takes 20 hours to vacuum the whole thing, with the bulk of the time spent in the 'vacuuming indexes' phase. As soon as that vacuum finishes, the table immediately qualifies for another one.
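For what it's worth, I've been watching the workers with a query along these lines against pg_stat_progress_vacuum:

    SELECT pid, relid::regclass AS table_name, phase,
           index_vacuum_count, heap_blks_scanned, heap_blks_total
    FROM pg_stat_progress_vacuum;

and they sit in the 'vacuuming indexes' phase almost the entire time.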
I increased maintenance_work_mem to 2 GB, decreased the cost delay (autovacuum_vacuum_cost_delay) to 2 ms, and raised the cost limit (autovacuum_vacuum_cost_limit) to 2000. I also changed the autovacuum thresholds for this table; roughly what that looks like is sketched below.
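The per-table overrides were along these lines (using the invented table name from above; the scale factor value here is illustrative, not our exact production number):

    ALTER TABLE events SET (
        autovacuum_vacuum_cost_delay   = 2,     -- ms slept per cost cycle
        autovacuum_vacuum_cost_limit   = 2000,  -- cost units before sleeping
        autovacuum_vacuum_scale_factor = 0.01   -- trigger vacuum sooner on a big table
    );

with maintenance_work_mem = 2GB set globally in postgresql.conf (autovacuum_work_mem falls back to maintenance_work_mem when unset).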
I understand that random index writes are not a good access pattern, but 20 hours to vacuum 200 GB still seems excessive.
My question is: what is the recommended strategy to deal with such cases in Postgres?
Thanks very much!!