Oh my god!....
The DB is PostgreSQL 7.4.6 on Linux.
2005-10-27 05:55:55 WARNING: some databases have not been vacuumed in
2129225822 transactions
HINT: Better vacuum them within 18257825 transactions, or you may have
a wraparound failure.
2005-10-28 05:56:58 WARNING: some databases have not been vacuumed in
over 2 billion transactions
DETAIL: You may have already suffered transaction-wraparound data loss.
We have cron scripts that perform full (database-wide) vacuums:
# vacuum template1 every sunday
35 2 * * 7 /usr/local/pgsql/bin/vacuumdb --analyze --verbose template1
# vacuum live DB every day
35 5 * * * /usr/local/bin/psql -c "vacuum verbose analyze" -d bp_live -U postgres --output /home/postgres/cronscripts/live/vacuumfull.log
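For what it's worth, the kind of crontab entry I suspect we should have been running instead is a cluster-wide one, so that every database in the cluster gets vacuumed, not just template1 and bp_live. Something like the sketch below, where the schedule and log path are only placeholders:

# hypothetical replacement: vacuum (and analyze) every database in the
# cluster daily, so no database's XID age can creep towards wraparound
35 5 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze --verbose > /home/postgres/cronscripts/live/vacuumall.log 2>&1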
Questions:
1) Why do we have data corruption? I thought we were doing everything we
needed to prevent wraparound... Are the pg docs inadequate, or did I
misunderstand what needed to be done?
2) What can I do to recover the data?
I have full daily backups from midnight each day using
/usr/local/pgsql/bin/pg_dump $DATABASE > $BACKUPFILE
plus I have this database replicated with Slony-I 1.1.0 to another 7.4.6
database.
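For reference, my understanding is that rebuilding from one of those plain-text dumps would just mean creating a fresh database and feeding the file back through psql, roughly like this (the restored database name here is made up):

createdb -U postgres bp_live_restored
psql -U postgres -d bp_live_restored -f $BACKUPFILE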
I can fail over to the slave server, but what do I need to do to rebuild
the original database?
Should I fail over now?! And then start rebuilding the old master
database (using slon, I presume)?
How do I stop this from EVER happening again?!
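I am guessing part of the answer is monitoring: the wraparound-age query from the maintenance docs, run regularly from cron so we get warned long before hitting the horizon. This is my reading of it, not something we currently run:

# report each database's transaction age; values approaching ~2 billion
# mean that database is overdue for a vacuum
/usr/local/pgsql/bin/psql -d template1 -U postgres -c "SELECT datname, age(datfrozenxid) FROM pg_database;"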
Thanks for any help,
John