I am having the same problem here, and I am considering using SVN for DB backups: every snapshot (however you produce it; pg_dump seems fine) gets checked in on a regular basis, so if a table is accidentally deleted, an earlier revision can still be checked out and restored. In case you do not know, SVN (Subversion, http://subversion.tigris.org) is a version control system similar to CVS (Concurrent Versions System).
Of course the SVN repository itself needs to be backed up somewhere.
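As a rough illustration, a nightly job along these lines could do the dump and commit. This is only a sketch: the database name, working-copy path, and script name are hypothetical, not something from this thread.

```shell
#!/bin/sh
# Sketch of a nightly SVN-based database snapshot.
# The database name and working-copy path below are hypothetical.

DB=research                 # hypothetical database name
WC=/var/backups/db-wc       # hypothetical SVN working copy of the backup repo

snapshot_db() {
    dump="$WC/$DB.sql"
    # A plain-text dump diffs well, so SVN stores each night's changes compactly.
    pg_dump "$DB" > "$dump"
    # 'svn add' is only needed the first time; afterwards it fails harmlessly.
    svn add -q "$dump" 2>/dev/null || true
    svn commit -m "nightly snapshot of $DB ($(date +%F))" "$WC"
}
```

A crontab entry such as "0 2 * * * /usr/local/bin/svn_snapshot.sh" would then record one revision per night, and any dropped table could be recovered by checking out the last good revision and restoring from the dump.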
Best regards,
Dmitri
On Wed, 2006-01-18 at 20:11 +0000, Chris Jewell wrote:
Hi,

I'm trying to implement a backup strategy for a research database, to protect users against accidentally dropping their data. My preferred method would be to create regular snapshots of the data directory and then send these to the backup server using rsync, with hard-linked backup rotation. The backup data directories could then be examined by a postmaster running on the backup server to extract any accidentally deleted tables.

My problem is how to take these snapshots: is it enough to create a hard link to the directory, or is there still a risk that a currently running transaction might introduce inconsistencies? I guess I could use pg_ctl stop -m smart to stop the database after all clients have disconnected, but I sometimes have users leaving their clients connected all night. Is there any other way to suspend the postmaster so that it finishes its current transaction and queues any further ones while the snapshot is taking place? Any other ideas on how to create such snapshots?

Thanks,

Chris
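The hard-linked rotation described above can be sketched with rsync's --link-dest option. The hostname and paths below are assumptions for illustration only, and note that the source tree still has to be in a consistent state when rsync reads it (e.g. with the postmaster stopped), which is exactly the open question in the thread.

```shell
#!/bin/sh
# Sketch of hard-linked backup rotation with rsync --link-dest.
# Hostname and paths are hypothetical. Each dated copy hard-links
# unchanged files against the previous one, so only changed files
# consume new space on the backup server.

SRC=db-server:/var/lib/pgsql/data   # hypothetical source data directory
DEST=/backups/pgdata                # hypothetical rotation root on the backup host

rotate_backup() {
    today=$(date +%F)
    # --link-dest makes files identical to yesterday's copy into hard
    # links rather than fresh copies.
    rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$today/"
    # Move the 'latest' symlink so the next run links against today's tree.
    ln -sfn "$DEST/$today" "$DEST/latest"
}
```

Old dated directories can then be pruned on whatever schedule suits; deleting one only frees the blocks no newer copy still links to.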