Re: The best way to back up S3 buckets

Be aware of rclone's limitations regarding metadata (ACLs, versions, etc.),
e.g.,
https://github.com/rclone/rclone/issues/1776
https://github.com/rclone/rclone/issues/4683
It is designed for efficient data transfer across a wide variety of
backends, not for S3 or Ceph specifically.
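
For example, per-object ACLs from the source are not recreated on the
destination, and old object versions are not copied. After a sync you
can spot-check an object on both sides with the AWS CLI (endpoints,
bucket and key below are placeholders):

    aws s3api get-object-acl --bucket mybucket --key path/to/object \
        --endpoint-url https://src.example.com
    aws s3api get-object-acl --bucket mybucket --key path/to/object \
        --endpoint-url https://dst.example.com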

On Fri, 10 Sept 2021 at 12:36, mhnx <morphinwithyou@xxxxxxxxx> wrote:

> It's great. I have moved millions of objects between two clusters, and
> it's a piece of artwork by an awesome weirdo. Memory and CPU usage are
> epic. It is very fast, and it can use metadata, MD5 checksums, etc.
>
> But you need to write your own script if you want a cron job.
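>
> For example, a minimal nightly crontab entry, assuming both clusters
> are already configured as rclone remotes (remote names, flags and
> paths are placeholders to tune for your own setup; flock -n skips the
> run if the previous one is still going):
>
>   0 2 * * * flock -n /var/lock/rclone-s3.lock /usr/local/bin/backup-bucket.sh
>
> where backup-bucket.sh is something like:
>
>   #!/bin/sh
>   # One-way sync of a single bucket to the backup cluster.
>   rclone sync src-ceph:mybucket dst-ceph:mybucket \
>       --checksum --transfers 32 --checkers 64 --fast-list \
>       --log-file /var/log/rclone-mybucket.log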
>
> On Fri, 10 Sep 2021 at 14:19, huxiaoyu@xxxxxxxxxxxx <
> huxiaoyu@xxxxxxxxxxxx> wrote:
>
> > Thanks a lot for the quick response.
> >
> > Will rclone be able to handle PB-scale backups? Does anyone have
> > experience using rclone to back up a massive S3 object store, and what
> > lessons were learned?
> >
> > best regards,
> >
> > Samuel
> >
> >
> >
> > ------------------------------
> > huxiaoyu@xxxxxxxxxxxx
> >
> >
> > From: mhnx <morphinwithyou@xxxxxxxxx>
> > Date: 2021-09-10 13:07
> > To: huxiaoyu <huxiaoyu@xxxxxxxxxxxx>
> > CC: ceph-users <ceph-users@xxxxxxx>
> > Subject: Re: The best way to back up S3 buckets
> > If you need instant backup and lifecycle rules, then multisite is the
> > best choice.
> >
> > If you need a daily backup and do not have a second Ceph cluster, then
> > rclone will be your best mate.
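> >
> > Very roughly, the master-zone side of a multisite setup looks like
> > this (realm/zonegroup/zone names and endpoints are placeholders; see
> > the Ceph multisite docs for the full procedure, including the
> > secondary zone and the sync user):
> >
> >   radosgw-admin realm create --rgw-realm=backup --default
> >   radosgw-admin zonegroup create --rgw-zonegroup=zg1 \
> >       --endpoints=http://rgw1:8080 --master --default
> >   radosgw-admin zone create --rgw-zonegroup=zg1 --rgw-zone=primary \
> >       --endpoints=http://rgw1:8080 --master --default
> >   radosgw-admin period update --commit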
> >
> > On Fri, 10 Sep 2021 at 13:56, huxiaoyu@xxxxxxxxxxxx <
> > huxiaoyu@xxxxxxxxxxxx> wrote:
> >
> >> Dear Ceph folks,
> >>
> >> This is closely related to my previous questions on how to do RadosGW
> >> remote replication safely and reliably.
> >>
> >> My major task is to back up S3 buckets. One obvious method is to use
> >> Ceph RadosGW multisite replication. I am wondering whether this is the
> >> best way to do S3 storage backup, or whether there are better methods
> >> or alternatives. I am dealing with ca. 5-8 TB of new data per day.
> >>
> >> thanks a lot in advance,
> >>
> >> Samuel
> >>
> >>
> >>
> >> huxiaoyu@xxxxxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
