You might want to have a look at this:
https://github.com/camptocamp/ceph-rbd-backup/blob/master/ceph-rbd-backup.py

I have a bash implementation of this, but it basically boils down to wrapping what Peter said: an export-diff to stdout piped to an import-diff on a different cluster. The "transfer" node is a client of both clusters and simply iterates over all RBD images, snapshotting them daily, exporting the diff between today's snap and yesterday's snap, and layering that diff onto a sister RBD on the remote side.

On Tue, Nov 1, 2016 at 5:23 AM, Peter Maloney
<peter.maloney@xxxxxxxxxxxxxxxxxxxx> wrote:
> On 11/01/16 10:22, Peter Maloney wrote:
>> On 11/01/16 06:57, xxhdx1985126 wrote:
>>> Hi, everyone.
>>>
>>> I'm trying to write a program based on the librbd API that transfers
>>> snapshot diffs between Ceph clusters without the temporary storage
>>> that is required if I use the "rbd export-diff" and "rbd import-diff"
>>> pair.
>>
>> You don't need a temp file for this... e.g.
>
> Oops, forgot the "-" in the commands... corrected:
>
> ssh node1 rbd export-diff rbd/blah@snap1 - | rbd import-diff - rbd/blah
> ssh node1 rbd export-diff --from-snap snap1 rbd/blah@snap2 - | \
>     rbd import-diff - rbd/blah

--
Respectfully,

Wes Dillingham
wes_dillingham@xxxxxxxxxxx
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
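The daily snapshot-and-diff loop described above can be sketched roughly as follows. This is a minimal illustration, not Wes's actual script: the pool name, the two cluster config paths, and the `backup-YYYY-MM-DD` snapshot naming are all assumptions for the example.

```shell
#!/usr/bin/env bash
# Sketch: daily RBD snapshot-diff transfer from one cluster to another.
# Assumes the node running this is a client of both clusters, and that a
# sister image with the same name already exists on the remote side with
# yesterday's snapshot applied.
set -euo pipefail

POOL="rbd"                        # assumed pool name
SRC_CONF="/etc/ceph/src.conf"     # hypothetical source-cluster config
DST_CONF="/etc/ceph/dst.conf"     # hypothetical destination-cluster config

# Snapshot names derived from the date, e.g. backup-2016-11-01.
today_snap()     { echo "backup-$(date +%F)"; }
yesterday_snap() { echo "backup-$(date -d yesterday +%F)"; }

transfer_image() {
    local img=$1 today yesterday
    today=$(today_snap)
    yesterday=$(yesterday_snap)

    # Snapshot today's state on the source cluster.
    rbd -c "$SRC_CONF" snap create "$POOL/$img@$today"

    # Stream the diff between yesterday's and today's snapshots straight
    # into the sister image on the remote cluster -- no temp file needed.
    rbd -c "$SRC_CONF" export-diff --from-snap "$yesterday" \
        "$POOL/$img@$today" - |
        rbd -c "$DST_CONF" import-diff - "$POOL/$img"
}

main() {
    # Iterate over every image in the pool.
    rbd -c "$SRC_CONF" ls "$POOL" | while read -r img; do
        transfer_image "$img"
    done
}

# Guarded so the sketch only touches clusters when invoked with "run".
if [ "${1:-}" = "run" ]; then
    main
fi
```

A real implementation would also want to prune old snapshots on both sides and handle the first run, where no `--from-snap` exists yet and a full `rbd export-diff` (without `--from-snap`) seeds the remote image.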