Re: RGW backup to tape

On Fri, Sep 20, 2019 at 11:10 AM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> Probably easiest if you get a tape library that supports S3. You might
> even have some luck with radosgw's cloud sync module (but I wouldn't
> count on it, Octopus should improve things, though)
>
> Just intercepting PUT requests isn't that easy because of multi-part
> stuff and load balancing. I.e., if you upload a large file you should
> be sending it in chunks and each chunk should go to a different
> server, that makes any "simple" solutions pretty messy.

I wasn't aware of any tape library being S3-aware; usually that's been
part of the backup software. Do you have any suggestions for multi-PB
libraries that have the S3 feature?

The idea with the PUT requests was not to intercept them in the data
path, but to have RGW log access to LogStash. A job would then find
all the objects that were PUT within a time frame, read those objects
off the cluster, and write them to tape. Maybe that's not as easy as
I'm thinking either.
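For illustration, the "find all objects PUT within a time frame" step
might look roughly like the sketch below. The log record fields
(`time`, `method`, `bucket`, `key`) are hypothetical placeholders, not
the actual RGW ops-log or LogStash schema:

```python
from datetime import datetime, timezone

def objects_put_between(log_records, start, end):
    """Return the set of (bucket, key) pairs PUT within [start, end).

    Deduplicates, so an object PUT several times in the window is
    backed up once. Field names are assumed, not RGW's real schema.
    """
    seen = set()
    for rec in log_records:
        ts = datetime.fromisoformat(rec["time"])
        if rec["method"] == "PUT" and start <= ts < end:
            seen.add((rec["bucket"], rec["key"]))
    return seen

# Toy records standing in for what a LogStash query might return.
records = [
    {"time": "2019-09-20T10:00:00+00:00", "method": "PUT",
     "bucket": "photos", "key": "a.jpg"},
    {"time": "2019-09-20T10:05:00+00:00", "method": "GET",
     "bucket": "photos", "key": "a.jpg"},
    {"time": "2019-09-20T23:59:00+00:00", "method": "PUT",
     "bucket": "docs", "key": "b.pdf"},
]

start = datetime(2019, 9, 20, tzinfo=timezone.utc)
end = datetime(2019, 9, 21, tzinfo=timezone.utc)
for bucket, key in sorted(objects_put_between(records, start, end)):
    print(bucket, key)
```

Each (bucket, key) pair would then be read back via an S3 GET and
streamed to the tape writer, which is where the multipart/load-balancing
concerns Paul raised come back in.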

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
