Lawrence,

First off, I strongly recommend that you figure out how to send regular
plain-text emails, at least to this mailing list, as the whole
"winmail.dat" thing is going to throw people off and you're unlikely to
get many responses because of it.

Regarding your question:

* Lawrence Cohan (LCohan@xxxxxxx) wrote:
> What would be a recommended solution for backing up a very large Postgres
> (~13TeraBytes) database in order to prevent from data deletion/corruption.
> Current setup is only to backup/restore to a standby read-only Postgres server
> via AWS S3 using wal-e however this does not offer the comfort of keeping a
> full backup available in case we need to restore some deleted or corrupted
> data.

If the goal is to be able to do partial restores (such as just one
table), then your best bet is probably to use pg_dump. Given the size of
your database, you'll probably want to pg_dump in directory format and
then send each of those files to S3 (assuming you wish to continue using
S3 for backups). Note that pg_dump doesn't directly support S3
currently. Also, pg_dump will hold a transaction open for a long time,
which may be an issue depending on your environment.

If you're looking for file-based backups of the entire cluster and don't
mind using regular non-S3 storage, then you might consider pgBackRest or
barman. With file-based backups, you have to restore at least an entire
database to be able to pull data out of it. We are working to add S3
support to pgBackRest, but it's not there today.

Thanks!

Stephen
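For reference, a minimal sketch of the directory-format pg_dump workflow described above. The database name, paths, bucket name, job count, and table name are all placeholders, and the S3 upload assumes the AWS CLI is available (pg_dump itself has no S3 support):

```shell
# Dump in directory format (-Fd) with parallel jobs (-j);
# directory format writes one file per table, which suits
# uploading the pieces to S3 and doing selective restores later.
pg_dump -Fd -j 8 -f /backups/mydb.dump mydb

# Push the dump directory to S3 with a separate tool, since
# pg_dump cannot write to S3 directly. Bucket name is a placeholder.
aws s3 sync /backups/mydb.dump s3://my-backup-bucket/mydb.dump/

# Partial restore later: fetch the directory back from S3, then
# restore just one table (table name is a placeholder).
pg_restore -d mydb --table=orders /backups/mydb.dump
```

Note that the parallel dump (`-j`) uses multiple connections and synchronized snapshots, and the whole dump still holds a transaction open for its duration, as mentioned above.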