Thanks for the help. The applied solution follows. We will be taking a
number of maintenance steps to manage these very-high-update-rate tables,
which I will summarize later, as I suspect we are not the only ones facing
this challenge.
http://www.postgresql.org/docs/current/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
http://www.postgresql.org/docs/current/interactive/catalog-pg-autovacuum.html
data_store=# SELECT relname, oid, age(relfrozenxid) FROM pg_class WHERE
relkind = 'r';
...
hour_summary | 16392 | 252934596
percentile_metadata | 20580 | 264210966
(51 rows)
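For anyone wanting to gauge urgency from those ages: the wraparound docs
linked above put the hard horizon at about 2^31 transactions, so the
remaining headroom is roughly 2^31 minus age(relfrozenxid). A small sketch
of that arithmetic (figures taken from the query output above):

```python
# Rough wraparound-headroom estimate; the 2**31 horizon comes from the
# routine-vacuuming docs linked above. This is a back-of-envelope check,
# not a substitute for the real freeze machinery.
XID_HORIZON = 2**31  # ~2.1 billion transaction IDs

def headroom(frozen_age):
    """Approximate transactions left before age(relfrozenxid) hits the horizon."""
    return XID_HORIZON - frozen_age

# Ages reported for the two hot tables in the pg_class query above.
for name, age in [("hour_summary", 252934596),
                  ("percentile_metadata", 264210966)]:
    print(f"{name}: ~{headroom(age):,} transactions of headroom")
```

At these ages both tables still have well over a billion transactions of
headroom, so the risk here is drifting toward the limit over time, not an
imminent shutdown.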
data_store=# insert into pg_autovacuum values
(16392,false,350000000,2,350000000,1,200,200,350000000,500000000);
INSERT 0 1
data_store=# insert into pg_autovacuum values
(20580,false,350000000,2,350000000,1,200,200,350000000,500000000);
INSERT 0 1
data_store=#
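For readers trying to decode the positional values: on 8.2/8.3 the
pg_autovacuum catalog (per the doc linked above) has ten columns, so the
inserts above map roughly as follows. Verify the column list against your
own server version before reusing this:

```sql
-- Same insert as above, with explicit column names (pg_autovacuum layout
-- assumed from the 8.2/8.3 catalog documentation; check your version).
INSERT INTO pg_autovacuum
       (vacrelid, enabled,                 -- target table OID; per-table on/off
        vac_base_thresh, vac_scale_factor, -- vacuum threshold and scale factor
        anl_base_thresh, anl_scale_factor, -- analyze threshold and scale factor
        vac_cost_delay, vac_cost_limit,    -- cost-based vacuum delay settings
        freeze_min_age, freeze_max_age)    -- freeze horizons for this table
VALUES (16392, false, 350000000, 2, 350000000, 1, 200, 200,
        350000000, 500000000);
```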
hubert depesz lubaczewski wrote:
> On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:
>> This makes sense. What queries can I run to see how close to the limit
>> we are? We need to determine if we should stop the process which
>> updates and inserts into this table until after the critical time this
>> afternoon when we can perform the required maintenance on this table.
>
> select datname, age(datfrozenxid) from pg_database;
>
> Best regards,
> depesz