Hi people,
The whole topic of messing with stats makes my head spin, but I am concerned
about some horridly performing queries that have had bad row estimates, and
others that always choose seq scans even when indexes are available. Reading
up on how to improve planner estimates, I have seen references to
default_statistics_target being changed from the default of 10 to 100.
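If it helps, the change I have in mind is roughly this (my understanding,
not something we have tried yet):

    # postgresql.conf -- raise the default number of histogram buckets /
    # most-common-value entries collected per column from 10 to 100
    default_statistics_target = 100

    -- then, after reloading the config, regenerate the statistics so the
    -- planner actually sees the larger samples
    ANALYZE;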
Our DB is large, with thousands of tables, but the core schema has about 100
tables, and typical row counts are in the millions of rows per table. We have
been playing endless games with tuning this server, but in all of the
suggestions I don't think changing default_statistics_target has ever come
up. I realize there is a performance hit associated with ANALYZE; are there
any other downsides to increasing this value to 100, and is this a common
setting for large DBs?
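Related to that: if a global bump makes ANALYZE too expensive across
thousands of tables, my understanding is the target can also be raised per
column on just the problem tables, something like the following (table and
column names are made up for illustration):

    -- raise the statistics target only for a column with bad estimates
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;

    -- re-analyze just that table so the new target takes effect
    ANALYZE orders;

Would that be the more usual approach for a schema this size, or do people
simply raise the global default?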
Thanks,
Carlo