Re: Shouldn't we have a way to avoid "risky" plans?

On 4/19/11 7:29 AM, Robert Haas wrote:
> Another thought is that we might want to consider reducing
> autovacuum_analyze_scale_factor.  The root of the original problem
> seems to be that the table had some data churn but not enough to cause
> an ANALYZE.  Now, if the data churn is random, auto-analyzing after
> 10% churn might be reasonable, but a lot of data churn is non-random,
> and ANALYZE is fairly cheap.

I wouldn't reduce the defaults for PostgreSQL; this is something you do
on specific tables.

For example, on very large tables I've been known to set
autovacuum_analyze_scale_factor to 0 and autovacuum_analyze_threshold to
5000 as per-table storage parameters.
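Something along these lines, as a sketch (the table name is just a
placeholder):

    ALTER TABLE big_table SET (
        autovacuum_analyze_scale_factor = 0,    -- ignore the fraction-of-table trigger
        autovacuum_analyze_threshold    = 5000  -- analyze after roughly 5000 row changes
    );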

And don't assume that analyzing is always cheap.  If you have an 800GB
table, most of which is very cold data, and have the statistics target set
to 5000 for some columns, ANALYZE has to read a much larger sample, and
pulling many of those older blocks in from disk could take a while.
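For reference, a per-column statistics target like that is set with
something like the following (again, table and column names are just
placeholders):

    -- raise the sample size ANALYZE uses for this column (default target is 100)
    ALTER TABLE big_table ALTER COLUMN customer_id SET STATISTICS 5000;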

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


