Re: The best way of backup S3 buckets


 



It's great. I have moved millions of objects between two clusters, and it is a
piece of artwork by an awesome developer. Memory and CPU usage is excellent. It is
very fast, and it can use metadata, MD5 checksums, etc.

But you need to write your own script if you want to run it as a cron job.
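For example, a minimal cron wrapper might look like the sketch below. The remote names (`src`, `dst`) and bucket names are placeholders you would first define with `rclone config`; tune the flags for your own data volume:

```shell
#!/bin/sh
# Sketch of a nightly S3 backup via rclone sync.
# "src:" and "dst:" are hypothetical remotes set up via `rclone config`.
set -eu

LOG=/var/log/rclone-s3-backup.log

# --checksum compares MD5 checksums instead of mod-time/size;
# --fast-list reduces API calls on buckets with many objects;
# --transfers raises parallelism for large object counts.
rclone sync src:mybucket dst:mybucket-backup \
    --checksum \
    --fast-list \
    --transfers 32 \
    --log-file "$LOG" --log-level INFO
```

Then a crontab entry such as `0 2 * * * /usr/local/bin/s3-backup.sh` runs it nightly; check the log file and rclone's exit status to catch failed runs.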

On Fri, 10 Sep 2021 at 14:19, huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx>
wrote:

> Thanks a lot for quick response.
>
> Will rclone be able to handle PB-scale data backup? Does anyone have experience
> using rclone to back up a massive S3 object store, and what lessons were learned?
>
> best regards,
>
> Samuel
>
>
>
> ------------------------------
> huxiaoyu@xxxxxxxxxxxx
>
>
> *From:* mhnx <morphinwithyou@xxxxxxxxx>
> *Date:* 2021-09-10 13:07
> *To:* huxiaoyu <huxiaoyu@xxxxxxxxxxxx>
> *CC:* ceph-users <ceph-users@xxxxxxx>
> *Subject:* Re:  The best way of backup S3 buckets
> If you need instant backup and lifecycle rules then Multisite is the best
> choice.
>
> If you need daily backup and do not have different ceph cluster, then
> rclone will be your best mate.
>
> On Fri, 10 Sep 2021 at 13:56, huxiaoyu@xxxxxxxxxxxx <
> huxiaoyu@xxxxxxxxxxxx> wrote:
>
>> Dear Ceph folks,
>>
>> This is closely related to my previous questions on how to do RadosGW
>> remote replication safely and reliably.
>>
>> My major task is to back up S3 buckets. One obvious method is to use Ceph
>> RadosGW multisite replication. I am wondering whether this is the best way
>> to do S3 storage backup, or whether there are better methods or
>> alternatives. I am dealing with ca. 5-8 TB of new data per day.
>>
>> thanks a lot in advance,
>>
>> Samuel
>>
>>
>>
>> huxiaoyu@xxxxxxxxxxxx
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>





