Re: pg_dump and XID limit

Elliot Chance <elliotchance@xxxxxxxxx> writes:
> This is a hypothetical problem but not an impossible situation. Just curious about what would happen.

> Let's say you have an OLTP server that stays very busy on a large database. In this large database you have one or more tables on super fast storage, like a Fusion-io card, which is handling (for the sake of argument) 1 million transactions per second.

> Even though only one or a few tables account for almost all of the I/O, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. And because the dataset is so large (or perhaps just because of network congestion), the daily backup takes 2 hours.

> Here's the question: during those 2 hours, more than 4 billion transactions could have occurred. So what's going to happen to your backup and/or database?

The DB will shut down to prevent wraparound once it gets 2 billion XIDs
in front of the oldest open snapshot.

			regards, tom lane
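
A quick check on the arithmetic: at 1 million transactions per second, the
~2 billion XIDs of headroom Tom describes would be consumed in about 2,000
seconds (roughly 33 minutes), well inside the 2-hour dump window. To see how
much headroom a database currently has, a standard monitoring query (a common
idiom, not something from this thread) is:

    -- XID age per database: how far each database's oldest unfrozen XID
    -- lags behind the current XID. Anti-wraparound shutdown protection
    -- kicks in as this approaches ~2 billion.
    SELECT datname,
           age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;

In normal operation autovacuum's anti-wraparound freezing (governed by
autovacuum_freeze_max_age) keeps this number down, but it cannot freeze rows
that are still visible to an old open snapshot, such as the one a long-running
pg_dump holds.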


