I recently inherited a server with a PostgreSQL 8.1.8 database. The reason I inherited it was because "it wasn't working anymore". A quick look in the logfiles showed the following:

    LOG: transaction ID wrap limit is 2147484146, limited by database "vsa"
    WARNING: database "vsa" must be vacuumed within 998573 transactions
    HINT: To avoid a database shutdown, execute a full-database VACUUM in "vsa".
    FATAL: database is not accepting commands to avoid wraparound data loss in database "vsa"
    HINT: Stop the postmaster and use a standalone backend to vacuum database "vsa".

I found out quickly enough what that means... I did as instructed and fired up the standalone backend, then started VACUUM. About four days later, the standalone backend terminated with this:

    WARNING: database "vsa" must be vacuumed within 997403 transactions
    HINT: To avoid a database shutdown, execute a full-database VACUUM in "vsa".
    WARNING: database "vsa" must be vacuumed within 997402 transactions
    HINT: To avoid a database shutdown, execute a full-database VACUUM in "vsa".
    WARNING: database "vsa" must be vacuumed within 997401 transactions
    HINT: To avoid a database shutdown, execute a full-database VACUUM in "vsa".
    WARNING: terminating connection because of crash of another server process
    DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
    HINT: In a moment you should be able to reconnect to the database and repeat your command.
    CONTEXT: writing block 465 of relation 1663/16384/863912
    -bash-3.00$

I used lsof to monitor which files the backend was actually working on (the monitoring loop I've been running is sketched below). It took two of the four days to vacuum a single table with 43 one-gigabyte extents, and I have one table with over 300 extents. At that rate I'm looking at a vacuum which could ultimately take weeks (if not months) to complete. Is there an easier way of monitoring VACUUM's progress in a standalone backend?

As far as I can tell, absolutely no tuning was ever done on this installation, and autovacuum was left in its default "off" state, which explains why it wasn't working anymore. (For the record, the 8.1 settings I believe are needed to turn it on are also below.) The server itself has 8 GB of RAM but a very poor I/O channel, and the physical size of the database on disk is 856 GB.

Bottom line: is there *any* way of faking out the one-million-transaction safety limit that keeps the postmaster from accepting commands, long enough for me to rescue the data with pg_dump? I found one article which described xidStopLimit, but as far as I can tell from the sources the limit is compiled in (the snippet I found is below), and I have no idea whether shrinking it would have any beneficial effect.

Is there any way of rescuing the contents of this database, or do I need to accept inevitability? If I restart VACUUM in the standalone backend, does it have to repeat all the work it did before the crash above?

One other possibility I came up with is to use COPY from the standalone backend on each table, but we're talking about 100+ tables for each of 15 schemas. At best I could script the generation of the COPY commands (a sketch of that is below, too).
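In case it helps anyone suggest something better, here is roughly the loop I've been running to watch the standalone backend. The pid (12345) and the database OID in the path (16384) are placeholders from my system; the file names under base/<oid>/ are pg_class relfilenode values, which is how I've been matching them back to tables:

    # Crude VACUUM progress monitor: list the data files the standalone
    # backend currently has open.  12345 = pid of the postgres process,
    # 16384 = OID of the "vsa" database (both placeholders for my values).
    while true; do
        date
        lsof -p 12345 | grep 'base/16384/'
        sleep 300
    done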
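For the record, my understanding is that turning autovacuum on in 8.1 takes more than one switch, because it depends on the row-level statistics collector; this is what I plan to put in postgresql.conf once I'm out of this hole (please correct me if I have it wrong):

    # postgresql.conf (8.1)
    stats_start_collector = on   # on by default, but autovacuum requires it
    stats_row_level = on         # off by default; autovacuum needs it
    autovacuum = on              # off by default in 8.1

I gather a periodic check of age(datfrozenxid) would also have caught this long before the shutdown:

    SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;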
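This is the snippet I turned up in the 8.1 sources (src/backend/access/transam/varsup.c, in SetTransactionIdLimit()); if I'm reading it right, the margin is a hard-coded constant, so changing xidStopLimit would mean patching and recompiling:

    /* src/backend/access/transam/varsup.c, SetTransactionIdLimit() */
    /* The stock code stops accepting commands 1,000,000 XIDs short
     * of the wrap limit: */
    xidStopLimit = xidWrapLimit - 1000000;
    if (xidStopLimit < FirstNormalTransactionId)
        xidStopLimit -= FirstNormalTransactionId;

Presumably shrinking that constant (say, to 100000) and recompiling would buy roughly 900,000 transactions of headroom for pg_dump, but I'd be doing that entirely at my own risk, which is why I'm asking here first.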
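If COPY turns out to be the only option, I'd at least script the command generation from the catalogs rather than type 1500+ statements by hand. Something like this, run in the standalone backend, should emit one COPY per user table (/path/to/dump is a placeholder):

    -- Generate a "COPY ... TO file" command for every user table.
    SELECT 'COPY ' || quote_ident(schemaname) || '.' || quote_ident(tablename)
           || ' TO ''/path/to/dump/' || schemaname || '.' || tablename || ''';'
      FROM pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema');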
Thanks in advance for any suggestions, advice, or words of condolence you can offer,

--- Steve

___________________________________________________________________________________
Steven Rosenstein
IT Architect/Developer | IBM Virtual Server Administration
Voice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001
Text Messaging: 6463456978 @ mobile.mycingular.com
Email: srosenst @ us.ibm.com

"Learn from the mistakes of others because you can't live long enough to
make them all yourself."
-- Eleanor Roosevelt