On Fri, Jul 19, 2024 at 10:19 PM Thomas Simpson <ts@xxxxxxxxxxxxxx> wrote:
Hi Doug
On 19-Jul-2024 17:21, Doug Reynolds wrote:
Thomas—
Why are you using logical backups for a database this large instead of a solution like pgBackRest? Obviously a logical dump is needed if you are going to upgrade, but for operational use it seems a slow choice.

In normal operation the server runs as a primary-replica pair and pgbackrest handles backups.
Expire the oldest pgbackrest backup to free up space for a multithreaded pg_dump.
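A sketch of what that could look like (the stanza name and backup label here are placeholders; list the actual sets with "pgbackrest info" first, and note that expiring a set also expires any backups that depend on it):

```shell
# List existing backup sets and their labels (stanza name is an assumption)
pgbackrest info --stanza=localhost

# Expire one specific (oldest) full backup set to reclaim repository space
pgbackrest expire --stanza=localhost --set=20240601-000001F
```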
Right when disk space ran out, pgbackrest also took a backup during the failed vacuum, so restoring from it (or anything earlier) would roll the WALs forward to the present and put me right back where I am now: out of space part way through.
Who says you have to restore to the failure point? That's what the "--target" option is for.
For example, if you took a full backup on 7/14 at midnight, and want to restore to 7/18 23:00, run:
declare LL=detail
declare PGData=/path/to/data
declare -i Threads=$(nproc)-2
declare BackupSet=20240714-000003F
declare RestoreUntil="2024-07-18 23:00"
pgbackrest restore \
--stanza=localhost \
--log-level-file=$LL \
--log-level-console=$LL \
--process-max=${Threads} \
--pg1-path=$PGData \
--set=$BackupSet \
--type=time --target="${RestoreUntil}"
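After the restore finishes, start the cluster and PostgreSQL replays WAL up to the target time. With the default target action (pause), recovery halts at that point so you can verify the data before ending recovery. A sketch of the follow-up steps, using the same $PGData as above:

```shell
# Start the restored cluster; WAL replay stops at the recovery target
pg_ctl start -D $PGData

# Confirm the server is paused in recovery, inspect the data, then
psql -c "SELECT pg_is_in_recovery();"

# End recovery and open the cluster for writes
pg_ctl promote -D $PGData
```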