Rural Hunter <ruralhunter@xxxxxxxxx> wrote:

> I'm on Ubuntu 12.04.1 64bit with 32 cores and 377G memory. The
> data is stored on several raid10 SAS 15k disks.

With a machine that beefy I have found it necessary to make the
autovacuum settings more aggressive.  Otherwise the need for
vacuuming can outpace the ability of autovacuum to keep up.

> autovacuum_freeze_max_age | 2000000000 | configuration file
> vacuum_freeze_table_age   | 1000000000 | configuration file

There's your problem.  You left so little space between the point
where autovacuum kicks in for wraparound prevention (2 billion
transactions) and the point where the server stops accepting new
transactions in order to protect your data (2^31 - 1000000
transactions) that autovacuum didn't have enough time to finish
the freeze work before the hard limit was reached.

Changing a setting to ten times its default value is something
which should always be approached with caution.  In this case you
moved the threshold for starting the work which prevents data loss
from a little under 10% of the distance to the disastrous
condition to a little under 100% of that distance.  You could play
with non-standard settings for these, but if you go anywhere near
this extreme you risk downtime like what you have just
experienced.  Personally, I have never had a reason to change
these from the defaults.  (A query for keeping an eye on how close
you are getting is sketched in the P.S. below.)

To ensure that autovacuum can keep up with the activity on a
machine like this, I have generally gone to something like:

autovacuum_cost_limit = 800

If you have more than a couple large tables which take long enough
to scan to prevent small, frequently-updated tables from getting
attention soon enough, you might want to boost
autovacuum_max_workers, too.  (A fuller configuration sketch is in
the second P.S.)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
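
P.S.  A quick way to watch how far each database has advanced
toward wraparound is to check age(datfrozenxid) against
pg_database.  This is only a sketch; the 2146483648 constant is
2^31 - 1000000, the point at which the server stops accepting
commands:

-- How far along each database is toward the wraparound shutdown
-- point.  age(datfrozenxid) is the number of transactions since
-- the oldest unfrozen XID in that database.
SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(age(datfrozenxid) * 100.0 / 2146483648, 1)
           AS pct_of_shutdown_point
  FROM pg_database
 ORDER BY xid_age DESC;

With autovacuum_freeze_max_age at its default, the xid_age column
should generally stay near or below 200 million.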
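
P.P.S.  To make the earlier suggestion concrete, here is a rough
postgresql.conf sketch.  The numbers are illustrative starting
points, not recommendations for every workload; the value of 6 for
autovacuum_max_workers in particular is just an example:

# Let each autovacuum worker do more work per cost-based nap cycle.
# The default of -1 falls back to vacuum_cost_limit (200).
autovacuum_cost_limit = 800

# Default is 3.  Raise this if large tables tie up workers so long
# that small, frequently-updated tables wait too long for attention.
autovacuum_max_workers = 6

# Leave the freeze thresholds at their defaults:
#autovacuum_freeze_max_age = 200000000
#vacuum_freeze_table_age = 150000000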