On 2008-04-23 17:22, Terry Lee Tucker wrote:
> On Wednesday 23 April 2008 11:14, Gabor Siklos wrote:
>> The advantage of the first method would be that I would not have to wait
>> for pg_dump (it takes quite long on our 60G+ database) and would just be
>> able to configure the backup agent to monitor the data directory and do
>> differential backups of the files there every hour or so.
>
> I would use pg_dump. It will ensure that you get a complete set of data and
> not something half written.

I do a "pg_dump -b -F t" and then compute the difference between the previous and the current backup using the "rdiff" program from the "librsync" package. The difference file is then compressed, encrypted, and shipped offsite nightly (a sketch of the whole job is at the end of this message). My database is much smaller than yours, but this works well for 6 GB backup files on entry-level server hardware.

Regards,
Tometzky
--
Best of prhn - the funniest posts of the Polish Usenet
http://prhn.dnsalias.org/
Chaos always defeats order, because it is better organized. [ Terry Pratchett ]
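P.S. A minimal sketch of the nightly job described above, under stated assumptions: the database name "mydb", the paths under /backups, the GPG recipient, and the host "backuphost" are placeholders for illustration, not from my actual setup.

#!/bin/sh
PREV=/backups/previous.tar        # last night's full dump, kept locally
CURR=/backups/current.tar

# Full dump in tar format, including large objects (-b)
pg_dump -b -F t -f "$CURR" mydb

# Compute a binary delta against the previous dump with rdiff (librsync)
# (the very first run has no previous dump; ship the full dump that night instead)
rdiff signature "$PREV" /backups/previous.sig
rdiff delta /backups/previous.sig "$CURR" /backups/nightly.delta

# Compress and encrypt the delta, then ship it offsite
gzip -f /backups/nightly.delta
gpg --yes -e -r backup@example.com /backups/nightly.delta.gz
scp /backups/nightly.delta.gz.gpg backuphost:/offsite/

# Rotate: the current dump becomes the basis for tomorrow's delta
mv "$CURR" "$PREV"

To restore, you apply the deltas in order with "rdiff patch" against the last full dump you have, then feed the resulting tar archive to pg_restore.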