Re: Backing up a replication set every 30 mins

Khusro Jaleel <mailing-lists@xxxxxxxxxxxxxx> wrote:
> On 02/15/2012 12:58 PM, Vladimir Rusinov wrote:
>>
>> pg_dump won't block writes, thanks to MVCC. It may increase bloat
>> and it will block DDL operations (ALTER TABLE/etc), but if your
>> database is relatively small but has a high load and you need
>> frequent backups, this may be a way to go.
 
> Thanks Vladimir. Would a simple script with 'pg_start_backup' and 
> 'pg_stop_backup' and an rsync job or tar job in between would
> work equally well? I thought that was the better way to do it,
> rather than pg_dump?
 
The PITR-style backup you describe doesn't cause bloat or block DDL,
and if you archive the WAL files you can restore to any point in
time following the pg_stop_backup.  pg_dump just gives you a
snapshot as of the start of the dump, so if you use that you would
need to start a complete dump every 30 minutes.  With PITR backups
and WAL archiving you could set archive_timeout to force timely
archiving (or use streaming replication if you are on 9.0 or later)
and effectively dump incremental database *activity* to stay
up-to-date.
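The base-backup script Khusro asked about might be sketched like
this (the paths, backup user, and the rsync options are assumptions
for illustration, not a tested procedure; pg_start_backup and
pg_stop_backup are called through psql as a superuser):

```shell
#!/bin/sh
# Hypothetical paths -- adjust for your installation.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
BACKUP_DIR=${BACKUP_DIR:-/backups/base}

# Run one PITR-style base backup.  Pass "echo" as $1 for a dry run
# that prints the commands instead of executing them.
base_backup() {
    run=${1:-}
    label="base_$(date +%Y%m%d_%H%M%S)"

    # Tell the server a file-level backup is starting; 'true' asks
    # for an immediate checkpoint so the call returns quickly.
    $run psql -U postgres -Atc "SELECT pg_start_backup('$label', true);"

    # Copy the cluster while it remains open for reads and writes;
    # pg_xlog is excluded because WAL is archived separately.
    $run rsync -a --delete --exclude=pg_xlog "$PGDATA/" "$BACKUP_DIR/"

    # Mark the backup finished; the WAL up to the returned location
    # must be archived before this base backup is restorable.
    $run psql -U postgres -Atc "SELECT pg_stop_backup();"
}
```

Calling base_backup with no argument executes the commands;
base_backup echo just prints them.  With WAL archiving enabled,
setting archive_timeout = 1800 in postgresql.conf forces a segment
switch at least every 30 minutes, which gives the timely archiving
described above.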
 
Now, if 30 minutes of activity generates more WAL than the size of
the database itself, pg_dump could, as Vladimir says, still be a
good alternative.
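In that small-database case, Vladimir's pg_dump approach is just a
scheduled job; a minimal sketch (the database name, paths, and dump
format are assumptions):

```shell
#!/bin/sh
# Hypothetical crontab entry driving this script every 30 minutes:
#   */30 * * * *  postgres  /usr/local/bin/dump_mydb.sh

# Dump one database.  Pass "echo" as $1 for a dry run that prints
# the command instead of executing it.
dump_db() {
    run=${1:-}
    stamp=$(date +%Y%m%d_%H%M%S)
    # -Fc: compressed custom format, restorable with pg_restore;
    # each run writes a new timestamped file.
    $run pg_dump -U postgres -Fc -f "/backups/dumps/mydb_$stamp.dump" mydb
}
```

Unlike the PITR backup, each of these files is a self-contained
snapshot as of the moment the dump started, so every run rereads
the whole database.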
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
