Hi Stephen
No, I did not try that. Let me try that now and report the numbers here, both the resulting backup size and the time taken.
Thanks for the suggestion.
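For reference, this is roughly what I plan to change in pgbackrest.conf for the next run. The stanza name and paths below are placeholders, and compress-type=zst assumes pgBackRest was built with zstd support:

[global]
# placeholder repo path
repo1-path=/backup/pgbackrest
repo1-block=y
repo1-bundle=y
start-fast=y
# switch compression from the default gzip to zstd
compress-type=zst
# zstd default level, as a starting point
compress-level=3

# placeholder stanza name and data directory
[mydb]
pg1-path=/pgdata/data

Once it finishes, I will pull the repo size and timings (probably via pgbackrest info) and compare them against the gzip run.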
On Tue, Oct 3, 2023 at 10:39 PM Stephen Frost <sfrost@xxxxxxxxxxx> wrote:
Greetings,

On Mon, Oct 2, 2023 at 20:08 Abhishek Bhola <abhishek.bhola@xxxxxxxxxxxxxxx> wrote:

As said above, I tested pgBackRest on my bigger DB and here are the results.

The server on which this is running has the following config:

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Thread(s) per core: 1
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2

Data folder size: 52 TB (has some duplicate files since it is restored from tapes)

Backup is being written onto DELL Storage, mounted on the server.

pgbackrest.conf with the following options enabled:

repo1-block=y
repo1-bundle=y
start-fast=y

Thanks for sharing! Did you perhaps consider using zstd for the compression..? You might find that you get similar compression in less time.

Thanks.

Stephen