We are running PostgreSQL 9.3.8. We had something fairly surprising happen here the other day: we came dangerously close to transaction ID wraparound and a forced DB shutdown, with our logs filling up with hundreds of “WARNING:  database "mydb" must be vacuumed within 177009986 transactions” messages per second. *However*, only three things were writing to the database at the time: a database-wide VACUUM that we started manually, a per-table autovacuum that Postgres itself started, and a cron job that writes one row per minute into a separate table we use to track replication delay.
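For anyone watching for the same problem: the remaining headroom can be monitored with age() against the catalogs, rather than waiting for the warnings. Something like this (standard catalog queries; "mydb" is just our database name):

```sql
-- How close each database is to wraparound; when age(datfrozenxid)
-- passes autovacuum_freeze_max_age, anti-wraparound autovacuums start.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Inside the affected database, the tables driving that age:
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 20;
```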

Whenever we killed the autovacuum process, the very fast consumption of transaction IDs would stop. Then, a few minutes later, Postgres would restart the autovacuum process, and very soon after that the transaction IDs would start climbing quickly again.

Modifying postgresql.conf to turn autovacuum off didn't prevent auto-vacuuming from restarting, in line with the documentation, which says that autovacuum will be launched anyway whenever Postgres is in danger of running out of transaction IDs.
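For reference, this is roughly the change we made, and the setting that governs when the forced anti-wraparound vacuums kick in anyway (values here are the defaults, not necessarily what a given installation runs):

```
# postgresql.conf
autovacuum = off                        # did NOT stop anti-wraparound autovacuums
autovacuum_freeze_max_age = 200000000   # table age at which autovacuum is forced
                                        # regardless of the autovacuum setting
```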

We ended up writing a script that checked the running processes and continuously killed autovacuums as they started. That let the manual vacuum finish without running out of transaction IDs and without a shutdown, but the entire experience was more than slightly concerning.
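The script amounted to something like the following (a simplified sketch, not our exact script; pg_terminate_backend() and filtering pg_stat_activity on the "autovacuum:" query prefix are the standard server-side way to do this on 9.3):

```sh
#!/bin/sh
# Repeatedly terminate autovacuum workers so the manual VACUUM can finish.
# Simplified sketch; connection details and interval are placeholders.
while true; do
    psql -U postgres -d mydb -Atc \
        "SELECT pg_terminate_backend(pid)
           FROM pg_stat_activity
          WHERE query LIKE 'autovacuum:%';"
    sleep 5
done
```

Terminating the backend with pg_terminate_backend() is safer than kill -9 on the worker process, which would force a crash-recovery restart of the whole server.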

Did we hit a known edge case where two vacuum processes running in parallel consume transaction IDs very quickly?


