
Re: recovery from xid wraparound


 



Incidentally, how many passes of a table can vacuum make? It's currently
on its third trip through the 20 GB of indices, meaning another 7 hours
till completion [of this table]!

Can I assume it only does three passes? (Or does it choose based on the
table continuing to be updated while vacuum is running?)

S



-----Original Message-----
From: Martijn van Oosterhout [mailto:kleptog@xxxxxxxxx] 
Sent: 24 October 2006 10:24
To: Shane Wright
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: [GENERAL] recovery from xid wraparound


On Tue, Oct 24, 2006 at 07:43:15AM +0100, Shane Wright wrote:
> Anyway - not noticed any data loss yet and was hoping it would be such
> that if all tables had been vacuumed recently (including system
> catalog tables), that there would be no remaining rows that would
> appear to have a future xid and so the database should be ok?

Running vacuum is the right solution, but I think you have to let it
finish. In particular, in that version a database-wide vacuum has to
complete before it will update the datfrozenxid (it's not tracked per
table).
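To see how far each database's frozen xid has advanced, age() can be run against the system catalog (a read-only query, safe while vacuum is running):

```sql
-- Ages approaching 2 billion indicate wraparound danger; a completed
-- database-wide VACUUM resets datfrozenxid for that database.
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;
```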

> a) is my assumption about the database being ok correct - assuming all
> tables have been vacuumed recently, including catalog tables?

Should be ok, but apparently you missed one, or didn't do a
database-wide vacuum.

> b) is it possible to safely abort my whole table vacuum now so I can 
> run it at the weekend when there's less traffic?

Aborting vacuum is safe, but you have to do a database-wide vacuum at
some point.
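For the weekend run, note that a database-wide vacuum is just VACUUM with no table name:

```sql
-- With no table name, VACUUM processes every table in the current
-- database, including the system catalogs.
VACUUM VERBOSE;
```

The bundled vacuumdb wrapper (`vacuumdb --all`) does the same for every database in the cluster.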

> c) if I have experienced data loss, on the assumption all the table 
> structure remains (looks like it does), and I have a working backup 
> from before the xid wraparound (I do), can I just reinsert any 
> detected-missing data at the application level without needing a 
> dump/reload?

A VACUUM will recover any data that slipped beyond the horizon less than
1 billion transactions ago, which I think covers you completely. The
only issue is that unique indexes may be confused because new
conflicting data may have been inserted while the old data was
invisible. Only you can say if that's going to be an issue.
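If in doubt, conflicting keys can be spotted after the vacuum with a grouped query (table and column names here are placeholders):

```sql
-- Any row returned violates the intended uniqueness of keycol.
SELECT keycol, count(*)
FROM mytable
GROUP BY keycol
HAVING count(*) > 1;
```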

Hope this helps,
-- 
Martijn van Oosterhout   <kleptog@xxxxxxxxx>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability 
> to litigate.

