Recovery from Transaction wraparound failure

Hi All,


One of our production databases is hitting a transaction ID wraparound issue. It is a large database: more than 2000 tables, 70 TB in total, with most tables between 14 and 20 GB. The workload is predominantly read-only, but the database receives millions of new rows each day in batches. There are about 20 tables that were never vacuumed manually or picked up by autovacuum, and I suspect those may be the ones causing the issue.


At first, I started seeing this error for all writes:

sqlerrm:database is not accepting commands to avoid wraparound data loss in database "db1" , sqlstate:54000
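For anyone checking the same condition, the database-wide XID age can be read from the system catalogs (this is a standard catalog query, not output from my logs):

```sql
-- Age of the oldest unfrozen XID per database; values approaching
-- ~2 billion trigger the wraparound protection that blocks writes.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```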


1) After reading the documentation, I thought I could vacuum those 20 never-vacuumed tables individually (the ones whose relfrozenxid age is over 2 billion) in single-user mode, but that is not working (error copied below). Is there a way to vacuum individual tables in single-user mode? I would rather not run a database-wide VACUUM (a bare "vacuum" command), as that would probably take several days.
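For reference, I identified those tables with a catalog query along these lines (ordinary tables only; adjust the LIMIT as needed):

```sql
-- Tables with the oldest relfrozenxid; these most urgently need
-- VACUUM (FREEZE). Sizes included to estimate vacuum duration.
SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;
```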


backend>  vacuum freeze schema.wd_p51_y2015;       

2017-02-07 23:25:15 EST [54045]: [8-1] user=,db=,app=,client= WARNING:  database "db1" must be vacuumed within 999999 transactions

2017-02-07 23:25:15 EST [54045]: [9-1] user=,db=,app=,client= HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.

You might also need to commit or roll back old prepared transactions.


2) If vacuuming the whole database is my only option, how can I run single-user mode in the background with the vacuum going? Could someone please share an example? I can only run "postgres --single DBNAME" interactively and could not figure out how to send it to the background without the server terminating on me. I only have remote access to this server, so I need a way to kick off the vacuum in single-user mode in the background.
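In case it helps frame the question, this is the kind of invocation I am imagining, since the single-user backend reads commands from stdin. It is a sketch I have not been able to verify, and /pgdata/db1 is a placeholder for the real data directory:

```shell
# Untested sketch: pipe the VACUUM command into the single-user
# backend, detach with nohup so it survives the SSH session ending.
nohup sh -c 'echo "VACUUM FREEZE" | postgres --single -D /pgdata/db1 db1' \
    > /tmp/single_user_vacuum.log 2>&1 &
```

I would appreciate confirmation (or a corrected version) from anyone who has done this.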


3) Restoring from backup is an option for us, but given the database size and the time needed to rebuild indexes, it is very much a last resort. I am hoping to hear opinions from an expert or two.


Thanks for your help in advance,

Ganesh Kannan 

