Khusro Jaleel <mailing-lists@xxxxxxxxxxxxxx> wrote:

> Sounds like my pg_start/rsync/pg_stop script solution every 30
> mins might be better then, as long as the jobs don't overlap :-)

That sounds like it's probably overkill.  Once you have your base
backup, you can just accumulate WAL files.  We do a base backup once
per week and keep the last two base backups plus all WAL files from
the start of the first one.  We can restore to any particular point
in time after that earlier base backup.  I've heard of people
happily going months between base backups, and just counting on WAL
file replay, although I'm slightly too paranoid to want to go that
far.

>> Now, if 30 minutes of activity is more than the size of the
>> database, pg_dump could, as Vladimir says, still be a good
>> alternative.
>
> I'm not sure I understand what you said there. I think you are
> saying that if the DB doubles or more in size in 30 minutes due to
> the activity, then pg_dump is still a good alternative?

Not exactly.  I was saying that if you have a very unusual situation
where the database is very small but has very high volumes of
updates (or inserts and deletes) such that it stays very small while
generating a lot of WAL, it is within the realm of possibility that
a pg_dump every 30 minutes could be your best option.  I haven't
seen such a database yet, but I was conceding the possibility that
such could exist.

-Kevin
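
P.S.  In case a concrete sketch helps, here is roughly the cleanup
that retention policy implies: keep the two newest base backups plus
every archived WAL file from the start of the older one.  The paths,
the base-YYYYMMDD.tar.gz naming, and running it weekly right after
the base backup are all placeholders I made up for illustration; your
layout will differ.

#!/usr/bin/env python
# Sketch only: keep the two newest base backups and all WAL archived
# since the older of those two started.  Paths and file naming are
# assumptions, not anything standard.
import os
import re
import time

BASE_DIR = "/backup/base"   # weekly tarballs: base-20110116.tar.gz, ...
WAL_DIR = "/backup/wal"     # directory your archive_command copies into

NAME_RE = re.compile(r"^base-(\d{8})\.tar\.gz$")

def backup_start(fname):
    """Day the base backup started, taken from its date-stamped name."""
    m = NAME_RE.match(fname)
    return time.mktime(time.strptime(m.group(1), "%Y%m%d")) if m else None

def cleanup():
    backups = sorted(
        (backup_start(f), f) for f in os.listdir(BASE_DIR) if backup_start(f)
    )
    if len(backups) <= 2:
        return                  # not enough history to prune anything yet

    keep = backups[-2:]         # the two most recent base backups
    cutoff = keep[0][0]         # start of the older backup we are keeping

    # Drop base backups older than the two we keep.
    for _, fname in backups[:-2]:
        os.remove(os.path.join(BASE_DIR, fname))

    # Drop WAL archived before the older retained backup started;
    # everything from then on is still needed for point-in-time recovery.
    for fname in os.listdir(WAL_DIR):
        path = os.path.join(WAL_DIR, fname)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)

if __name__ == "__main__":
    cleanup()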