On Mon, Apr 8, 2013 at 6:14 AM, Johann Spies <johann.spies@xxxxxxxxx> wrote:
> I would appreciate some advice from the experts on this list about the best backup strategy for my database.
>
> The setup:
> Size: might be about 200Gb
> The server uses a Tivoli backup client with daily backups
>
> At the moment there are pg_dumps for each database on the server on a daily, weekly and monthly basis. All this gets backed up to the Tivoli server.
Hi Johann,
This backup pattern means that after a year of operation you will have about 23 pg_dump files (7 daily + 4 weekly + 12 monthly), 90+% of which have nothing to do with your requirement to restore the database to within the last 24 hours.
In other words, your storage needs are being driven by your historical retention policy, not your immediate restoration policy. So, can you elaborate on your historical retention policy? For example, if you need to restore your database to the state it was in 9 months ago, how fast do you need to be able to do that? If you had a 12-month-old pg_basebackup and 3 months of WAL files, how long would it take to replay them, and how much space would that many WAL files occupy? (Both questions depend on your specific usage, so it is hard to make general guesses--they can only be answered empirically.)
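If you want to measure that empirically, something along these lines would do it. This is an untested sketch, assuming a version recent enough to have pg_basebackup (9.1+); the paths, the archive location, and the restored-cluster directory are all placeholders you would adapt:

    # postgresql.conf: enable WAL archiving (server restart required)
    #   wal_level = archive
    #   archive_mode = on
    #   archive_command = 'cp %p /mnt/walarchive/%f'

    # take a compressed tar-format base backup, with a progress report
    pg_basebackup -D /mnt/backups/base_$(date +%Y%m%d) -Ft -z -P

    # later, to estimate replay time: restore the base backup, put
    #   restore_command = 'cp /mnt/walarchive/%f %p'
    # in a recovery.conf in the restored cluster directory, then time
    # how long the server takes to come up and finish recovery:
    time pg_ctl -D /path/to/restored/cluster -w start

The size question is easier: du -sh on the WAL archive directory after three months tells you directly how big that many log files are.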
pg_basebackup will almost surely be larger than pg_dumps. For one thing, it contains all the index data; for another, it contains any obsolete rows which have not yet been vacuumed away and their space reused. So switching to it will save you space only if you need to keep fewer of them than you do of the pg_dumps.
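That size difference is easy to check on your own data. A minimal comparison (the database name "mydb" is a placeholder; both outputs are compressed so the comparison is like-for-like):

    # compressed custom-format logical dump of one database
    pg_dump -Fc -f /tmp/mydb.dump mydb

    # compressed tar-format physical base backup of the whole cluster
    pg_basebackup -D /tmp/basebackup -Ft -z

    # compare on-disk sizes
    du -sh /tmp/mydb.dump /tmp/basebackup

Keep in mind pg_basebackup covers the whole cluster while pg_dump covers one database, so sum all the dumps before comparing if you have several databases.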
Cheers,
Jeff