Re: Use case: one-way RADOS "replication" between two clusters by time period


 



In a normal setup, where radosgw-agent runs all the time, it will delete the objects and buckets fairly quickly after they're deleted in the primary zone.

If you shut down radosgw-agent, then nothing will update in the secondary cluster.  Once you re-enable radosgw-agent, it will eventually process the deletes (along with all the writes).

radosgw-agent is a relatively straightforward Python script.  It shouldn't be too difficult to modify it to ignore the deletes, or to write them to a database and process them 6 months later.
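To sketch that second idea (heavily hedged: the entry format and hook names below are invented for illustration, they are not radosgw-agent's actual internals), you could record each delete in a small SQLite table and only replay the ones that have aged past your retention window:

import sqlite3
import time

RETENTION_SECONDS = 6 * 30 * 24 * 3600  # roughly six months

db = sqlite3.connect("deferred_deletes.db")
db.execute("""CREATE TABLE IF NOT EXISTS deferred_deletes
              (bucket TEXT, obj TEXT, recorded_at REAL)""")

def handle_entry(entry, apply_fn):
    """Replicate writes immediately; queue deletes instead of applying them."""
    if entry["op"] == "delete":
        db.execute("INSERT INTO deferred_deletes VALUES (?, ?, ?)",
                   (entry["bucket"], entry["name"], time.time()))
        db.commit()
    else:
        apply_fn(entry)  # normal replication path

def process_expired_deletes(delete_fn):
    """Run periodically: apply only the deletes older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    rows = db.execute("SELECT bucket, obj FROM deferred_deletes"
                      " WHERE recorded_at < ?", (cutoff,)).fetchall()
    for bucket, obj in rows:
        delete_fn(bucket, obj)
    db.execute("DELETE FROM deferred_deletes WHERE recorded_at < ?", (cutoff,))
    db.commit()

A cron job (or a periodic thread in the agent) could then call process_expired_deletes() with whatever call you use to remove objects from the secondary zone.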


I'm working on some snapshot capabilities for RadosGW (https://wiki.ceph.com/Planning/Blueprints/Hammer/rgw%3A_Snapshots).  Even if I (or my code) does something really stupid, I'll be able to go back and read the deleted objects from the snapshots.  It isn't perfect and it won't protect against malicious actions, but it will give me a safety net.


On Mon, Oct 20, 2014 at 6:18 PM, Anthony Alba <ascanio.alba7@xxxxxxxxx> wrote:

Great information, thanks.

I would like to confirm that if I regularly delete older buckets from the LIVE primary system, the "extra" objects on the ARCHIVE secondaries are simply ignored during replication rather than deleted.

I.e. it does not behave like

rsync -avz --delete LIVE/ ARCHIVE/

Rather it behaves more like

rsync -avz LIVE/ ARCHIVE/


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
