On 02/15/2012 02:48 PM, Kevin Grittner wrote:
Khusro Jaleel <mailing-lists@xxxxxxxxxxxxxx> wrote:

The PITR-style backup you describe doesn't cause bloat or block DDL, and if you archive the WAL files you can restore to any point in time following the pg_stop_backup. pg_dump just gives you a snapshot as of the start of the dump, so if you use that you would need to start a complete dump every 30 minutes.
Sounds like my pg_start/rsync/pg_stop script solution every 30 minutes might be better, then, as long as the jobs don't overlap :-)
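The script I have in mind is roughly along these lines (the data directory, backup label, and rsync destination are just placeholders, and it uses the pg_start_backup()/pg_stop_backup() functions):

    #!/bin/sh
    # Hot base backup: mark the start, rsync the data directory while the
    # server keeps running, then mark the end so the server knows which WAL
    # is needed to make the copy consistent.
    PGDATA=/var/lib/pgsql/9.1/data        # assumed data directory
    DEST=backuphost:/backups/pgdata       # assumed rsync destination

    psql -U postgres -Atc "SELECT pg_start_backup('rsync_backup');" || exit 1

    # Files may change while rsync runs; that's expected, WAL replay fixes
    # the copy up on restore.  pg_xlog is excluded because archived WAL is
    # kept separately.
    rsync -a --delete --exclude=pg_xlog "$PGDATA/" "$DEST/"

    # Always end the backup, even if rsync complained about vanished files.
    psql -U postgres -Atc "SELECT pg_stop_backup();"

Running that from cron every 30 minutes with something like flock(1) around it should keep two runs from overlapping.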
With PITR backups and WAL archiving you could set your archive_timeout to force timely archiving (or use streaming replication if you are on 9.0 or later) and effectively dump incremental database *activity* to stay up-to-date.
Well, I am already using streaming replication to a slave, and I also have archive_timeout set to 30 minutes, but writes seem to occur more often than that, probably every minute or so. I'm not sure why that is: is it because of the replication, or is the Java app using the DB perhaps changing something in it every minute or so? Nobody is actually using this DB yet, I only just brought it up, so there is no load, just two front-end Java app servers connected and doing nothing (I hope, but maybe they are).
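I suppose I could check what is actually going on with something along these lines (using the 9.0/9.1 function and column names):

    # Is WAL actually advancing?  Run twice, a minute or so apart, and compare.
    psql -U postgres -c "SELECT pg_current_xlog_insert_location();"

    # What are the connected Java app-server sessions doing right now?
    # (current_query was renamed to just "query" in 9.2.)
    psql -U postgres -c "SELECT datname, client_addr, backend_start, current_query FROM pg_stat_activity;"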
Now, if 30 minutes of activity is more than the size of the database, pg_dump could, as Vladimir says, still be a good alternative.
I'm not sure I understand what you mean there. Are you saying that if the amount of activity in 30 minutes is larger than the database itself, then pg_dump is still a good alternative?
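If so, the dump itself would just be a cron one-liner, something like this (database name and paths are placeholders):

    # crontab entry: compressed logical dump of the whole database every 30 minutes.
    # -Fc writes pg_dump's custom (compressed) format, restorable with pg_restore.
    */30 * * * *  pg_dump -U postgres -Fc mydb > /backups/mydb_$(date +\%Y\%m\%d\%H\%M).dump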