Re: backups

We have been doing zfs send piped to S3 uploads for backups. We use awscli for that, since it can take a stream from stdin. We have never considered using cephfs for that.

It ultimately ends up looking something like one of the following, depending on whether it's a full or an incremental:

zfs send -wv $dataset@$snap | zstd -c | aws $ENDPOINT_ARGS s3 cp - s3://$BUCKET/$file

zfs send -wv -i $dataset@$prev $dataset@$now | zstd -c | aws $ENDPOINT_ARGS s3 cp - s3://$BUCKET/$file
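
Restoring is just the same pipeline reversed; a sketch, with the target dataset name as a placeholder (aws s3 cp can stream to stdout with "-" as the destination):

aws $ENDPOINT_ARGS s3 cp s3://$BUCKET/$file - | zstd -d -c | zfs receive $dataset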

If you don't use native encryption you might want to add openssl to the chain, of course.
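
A minimal sketch of that, assuming a pre-shared key file at $KEYFILE (a placeholder):

zfs send -wv $dataset@$snap | zstd -c | openssl enc -aes-256-cbc -pbkdf2 -pass file:$KEYFILE | aws $ENDPOINT_ARGS s3 cp - s3://$BUCKET/$file

On restore you'd run the matching openssl enc -d with the same cipher, -pbkdf2, and key file between the s3 cp and zstd -d stages.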

On 2022-12-22 17:30, Charles Hedrick wrote:
We have a ZFS file system with a billion (smallish) files. We back up
using zfs send / receive to a separate system, and write tapes with
zfs send. It stores files on HDD, but metadata on SSD. It would be
totally impractical to back up to tape using something like tar from
HDD with that many files. (I calculate more than 100 days to do a
full.)
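
(That send / receive replication is typically something along these lines; a sketch, where the backup hostname and pool are placeholders:

zfs send -i $dataset@$prev $dataset@$now | ssh backuphost zfs receive $backuppool/$dataset )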

If we wanted to do this with cephfs, how would we do backup (or
something else that provides DR and protection against a software
failure that could corrupt the whole system)? Obviously it has to be
possible to restore in a practical amount of time.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


