I just do it like this per table. It might not solve your exact issue, but it's another option: you can scale the analyze scale factor down to something very small, like 0.00000001.
-- Find current setting (this is at database level)
select * from pg_settings where name in ('autovacuum','autovacuum_analyze_scale_factor','autovacuum_analyze_threshold','autovacuum_vacuum_scale_factor');
select current_setting('autovacuum_vacuum_scale_factor') as "vacuum_scale_factor", current_setting('autovacuum_vacuum_threshold') as "vacuum_threshold";
select current_setting('autovacuum_analyze_scale_factor') as "analyze_scale_factor", current_setting('autovacuum_analyze_threshold') as "analyze_threshold";
-- Note: a smaller number = more aggressive = vacuum/analyze more frequently
-- autovacuum_analyze_scale_factor = 0.05 ---> 0.002
-- autovacuum_vacuum_scale_factor = 0.1 ---> 0.001
-- Fine Tune at table level = ALTER TABLE mytable SET (autovacuum_analyze_scale_factor = 0.02);
ALTER TABLE your_schema.your_table SET (autovacuum_enabled = true, autovacuum_analyze_scale_factor = 0.002, autovacuum_vacuum_scale_factor = 0.001);
-- Put it back to the global settings
ALTER TABLE your_schema.your_table RESET (autovacuum_enabled, autovacuum_analyze_scale_factor, autovacuum_vacuum_scale_factor);
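To confirm the per-table overrides actually took effect, you can read them back from pg_class.reloptions (table name is a placeholder):

```sql
-- Per-table autovacuum overrides show up in reloptions;
-- NULL means the table uses the global settings.
SELECT n.nspname, c.relname, c.reloptions
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'your_table';
```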
From: richard coleman <rcoleman.ascentgl@xxxxxxxxx>
Sent: Tuesday, August 1, 2023 9:36 AM
To: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Cc: Pgsql-admin <pgsql-admin@xxxxxxxxxxxxxxxxxxxx>
Subject: [EXTERNAL] Re: analyze foreign tables
Thanks for taking the time to respond.
Right now I'm stuck with cronning a script to manually run analyze on every foreign table in every database, which in our case is most of them.
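For anyone stuck with the same workaround, a minimal sketch of such a script: generate the ANALYZE statements from the catalog (relkind 'f' marks foreign tables) and execute them, e.g. with psql's \gexec:

```sql
-- Emit one ANALYZE statement per foreign table in the current database.
-- In psql, run the generated statements by ending the query with \gexec.
SELECT format('ANALYZE %I.%I;', n.nspname, c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'f'  -- 'f' = foreign table
ORDER BY 1;
```

A cron job can then loop this over every database with psql.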
Would it be possible to transfer table statistics between the same version of PostgreSQL, ex: source is pg15, target is pg15?
Otherwise, anything that can be done to speed this up would be very helpful.
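For reference, use_remote_estimate can be toggled per foreign server or per foreign table (server and table names below are placeholders):

```sql
-- Per-server: affects every foreign table defined on this server.
ALTER SERVER my_fdw_server OPTIONS (ADD use_remote_estimate 'true');

-- Per-table override:
ALTER FOREIGN TABLE your_schema.your_table OPTIONS (ADD use_remote_estimate 'true');

-- Change an existing option back:
ALTER SERVER my_fdw_server OPTIONS (SET use_remote_estimate 'false');
```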
On Tue, 2023-08-01 at 09:47 -0400, richard coleman wrote:
> In PostgreSQL foreign tables are not automatically analyzed and analyze must be
> specifically called on each table. In the case of sharing tables between
> PostgreSQL clusters, there is the use_remote_estimate foreign server option.
> In some of our multi terabyte databases, manually running analyze on all of
> the foreign tables can take more than a day. This is per database containing
> the foreign schema. Since we have certain large schema that we have centrally
> located and share to all of our other database clusters, this really adds up.
> use_remote_estimate isn't really a solution as it adds way too much overhead
> and processing time to every query run.
> Since these tables are being continuously analyzed in the database that hosts
> the data, is there some way that the statistics could be easily passed through
> the foreign server mechanism to the remote database that's calling the query?
> Unless I'm missing something we can either:
> 1. manually run analyze on each foreign table in each database that points to
> the host table
> 2. set use_remote_estimate = true which will cause PostgreSQL to re-obtain
> statistics on a per query basis.
> What I am hoping for is either:
> 1. pass through the results of analyze from the source database to the one
> where the foreign query is being run
> 2. add the ability to automatically run analyze on foreign tables just as they
> are currently run on local tables.
> Of the two, #1 would seem to be the easiest and least wasteful of resources.
Unfortunately, neither of your wishes looks feasible:
- Transferring table statistics would mean that PostgreSQL understands statistics
from other server versions. This is complicated, and we have decided not to
do this for pg_upgrade, so I don't think we'll try to do it here.
- Autoanalyzing foreign tables would mean that we have some idea how much data
has changed on the remote server. How should we do that?
What I can imagine is that instead of reading the complete remote table during
ANALYZE, PostgreSQL applies TABLESAMPLE to fetch only part. That could be a