On 08/28/2017 09:31 AM, Don Seiler wrote:
> I think what I'm failing to understand is how it got so big without an
> autoanalyze kicking in at some point to keep relatively in sync. To my
> knowledge, autovacuum was never disabled here (I'm new, so I can't say
> with any certainty that it never was). I would expect as the inserted
> rows grow past a certain threshold, it would kick in an occasional
> analyze so it wouldn't get over 2 orders of magnitude off from 170K to 71M.

I can't say without knowing the history of this table and your server. With default settings, autoanalyze kicks in once roughly 10% of the table has changed (autovacuum_analyze_scale_factor = 0.1). Let's say your table exists with 71 million rows. Then at some point your server loses power. The info in the stats collector gets reset on recovery. Now you modify 170k rows. That is the change count autoanalyze bases its decision on, but it is waiting for 0.1 x 71 million rows to change, which is 7.1 million, while only 170k have changed.

Bottom line is, as I said earlier: the default autovacuum and autoanalyze scale factors are ridiculously high for a table this large. Do a manual VACUUM ANALYZE on it, and modify the settings for this table.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
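For what it's worth, a sketch of what that advice looks like in SQL. The table name `big_table` and the specific scale-factor values are illustrative only; pick values that suit your change rate:

```sql
-- One-time catch-up so the planner stats and visibility map are current.
-- "big_table" is a placeholder for your actual table name.
VACUUM ANALYZE big_table;

-- Lower the per-table scale factors so routine change volume triggers
-- autovacuum/autoanalyze sooner. With ~71M rows, 0.005 means analyze
-- after roughly 355k modified rows instead of 7.1 million.
ALTER TABLE big_table SET (
    autovacuum_analyze_scale_factor = 0.005,
    autovacuum_vacuum_scale_factor  = 0.01
);
```

These are per-table storage parameters, so they override the global defaults for this table only; `\d+ big_table` in psql will show them under "Options".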