
Stuck trying to back up a large database - best practice?

Hi,

We have a Postgres 9.3.x box with 1.3TB of free space and a database of around 1.8TB.  Unfortunately, we're struggling to back it up.

When we try a compressed backup with the following command:

pg_basebackup -D "$BACKUP_PATH/$TIMESTAMP" -Ft -Z9 -P -U "$DBUSER" -w

we get the following error:

pg_basebackup: could not get transaction log end position from server: ERROR: requested WAL segment 0000000400002B9F000000B4 has already been removed

This attempted backup reached 430GB before failing.

We were advised on IRC to try -Xs, but that only works with a plain (uncompressed) backup, and as you'll note from above, we don't have enough disk space for this.
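
For illustration, I believe the plain-format, WAL-streaming variant would look roughly like this (same placeholders as in our script above):

pg_basebackup -D "$BACKUP_PATH/$TIMESTamp" -Fp -Xs -P -U "$DBUSER" -w

but -Fp writes out an uncompressed copy of the whole cluster, which won't fit in the 1.3TB we have free.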

Is there anything else we can do apart from getting a bigger disk (not trivial at the moment)?  Any best practice?

I suspect that setting up WAL archiving and/or playing with the wal_keep_segments setting might help, but as you can probably gather, I'd like to be sure that I'm doing something sane before I dive in.
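
To make that concrete, I'm guessing at something along these lines in postgresql.conf (the values and archive path are placeholders I haven't tested):

# keep roughly 8GB of WAL (512 * 16MB segments) around for the backup to fetch
wal_keep_segments = 512

# or set up WAL archiving instead (archive_mode needs a restart to change)
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

My understanding is that wal_keep_segments just tells the server to hang on to that many segments, whereas archiving copies every segment off somewhere, so the latter shouldn't fall over if the backup takes longer than expected.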

Happy to give more detail if required.

Antony
