Right, without incremental or compressed backups, you'd need room for
7 full copies of your database. Have you looked at how big your backups
would be with file-level incrementals and compression?
Most of our DBs can't use partitioning over time-series fields, so we have a lot of datafiles in which only a few pages have been modified; file-level incrementals didn't really work for us. And we didn't use compression in barman before patching it, because single-threaded compression is just too slow.
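
To put rough numbers on that (purely illustrative, assuming PostgreSQL's defaults of 1 GB relation segment files and 8 kB pages, and a segment where only a handful of pages were touched):

    # Illustrative arithmetic only; 1 GB segments and 8 kB pages are the
    # PostgreSQL defaults, changed_pages is an assumed "few pages".
    segment_bytes = 1 * 1024**3      # whole file re-copied by a file-level incremental
    page_bytes = 8 * 1024            # one PostgreSQL page
    changed_pages = 10               # only a few pages actually modified
    print(segment_bytes / (changed_pages * page_bytes))  # ~13107x more data than the changes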
How are you testing your backups? Do you have page-level checksums
enabled on your database?
Yep, we use checksums. We restore the latest backup with
recovery_target = 'immediate' and run
COPY tablename TO '/dev/null' for each table in each database, checking the exit code (in several threads, of course).
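
Not quoting their tooling, but a minimal sketch of that kind of check could look like the following, assuming psql is on PATH, the restored instance is reachable through the usual PG* environment variables, and the connecting role may do a server-side COPY to a file; DATABASES and THREADS are placeholders:

    # Sketch of a post-restore read check: COPY every user table to /dev/null
    # and fail if any COPY exits non-zero. DATABASES/THREADS are placeholders;
    # connection details come from PG* environment variables or defaults.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DATABASES = ["appdb"]   # placeholder: databases in the restored cluster
    THREADS = 4             # placeholder: "several threads"

    def psql(dbname, sql):
        """Run one statement through psql and return the CompletedProcess."""
        return subprocess.run(
            ["psql", "-X", "-q", "-A", "-t", "-v", "ON_ERROR_STOP=1",
             "-d", dbname, "-c", sql],
            capture_output=True, text=True)

    def list_tables(dbname):
        """All ordinary tables outside the system schemas, already quoted."""
        out = psql(dbname, """
            SELECT format('%I.%I', schemaname, tablename)
            FROM pg_tables
            WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        """)
        out.check_returncode()
        return [line for line in out.stdout.splitlines() if line]

    def read_table(dbname, table):
        """COPY the whole table to /dev/null; a non-zero exit means a bad read."""
        proc = psql(dbname, f"COPY {table} TO '/dev/null'")
        return None if proc.returncode == 0 else f"{dbname} {table}: {proc.stderr.strip()}"

    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        jobs = [pool.submit(read_table, db, t)
                for db in DATABASES for t in list_tables(db)]
        failures = [j.result() for j in jobs if j.result()]

    if failures:
        raise SystemExit("backup verification failed:\n" + "\n".join(failures))
    print("all tables read back cleanly")

The server-side COPY ... TO '/dev/null' form matches what is described above but requires superuser-level rights on the restored instance; a client-side COPY ... TO STDOUT redirected to /dev/null would also work for unprivileged roles.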
pgbackrest recently added the ability to
check PG page-level checksums during a backup and report issues.
Sounds interesting, I should take a look.