Well, I'm afraid the image didn't replay continuously, which means I lost
some data. "rbd mirror image status" shows the image is being replayed,
and its timestamp is just before the point at which I demoted the primary
image. I lost about 24 hours of data, and I'm not sure whether there is
an interval between synchronizations. I'm using version 14.2.9 and
deployed a one-way mirror.

Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote on Fri, Jun 5, 2020 at 10:22 AM:

> Thank you for the clarification. That's very clear.
>
> Jason Dillaman <jdillama@xxxxxxxxxx> wrote on Fri, Jun 5, 2020 at 12:46 AM:
>
>> On Thu, Jun 4, 2020 at 3:43 AM Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote:
>> >
>> > My situation is that the primary image is in use while rbd-mirror
>> > syncs. I want to know the period between two rbd-mirror transfers
>> > of the incremental data.
>> > I will look into those options you provided, thanks a lot :)
>>
>> When using the original (pre-Octopus) journal-based mirroring, once
>> the initial sync completes to transfer the bulk of the image data from
>> a point-in-time dynamic snapshot, any changes post-sync are replayed
>> continuously from the stream of events written to the journal on the
>> primary image. Running "rbd mirror image status" against the
>> non-primary image will provide more details about the current state of
>> the journal replay.
>>
>> With the Octopus release, we now also support snapshot-based mirroring,
>> where we transfer any image deltas between two mirroring snapshots.
>> These mirroring snapshots are different from user-created snapshots,
>> and their lifetime is managed by RBD mirroring (i.e. they are
>> automatically pruned when no longer needed). This version of mirroring
>> probably relates more closely to your line of questioning, since the
>> period of replication is whatever period at which you create new
>> mirroring snapshots (provided your two clusters can keep up).
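[Editor's note: the snapshot-based workflow Jason describes can be sketched
with the Octopus-era `rbd` CLI as below. The pool/image names are
hypothetical placeholders, and the 1h interval is illustrative only; this
is an operational fragment that requires a live Octopus cluster, not a
runnable script.]

```shell
# Enable snapshot-based (rather than journal-based) mirroring for one
# image. "mypool/myimage" is a placeholder name.
rbd mirror image enable mypool/myimage snapshot

# Schedule automatic mirror snapshots; here every hour, so the
# replication period (and worst-case data loss window) is ~1 hour.
rbd mirror snapshot schedule add --pool mypool --image myimage 1h

# Or create a mirror snapshot on demand (e.g. before a planned demotion):
rbd mirror image snapshot mypool/myimage

# On the secondary cluster, check how far replay has progressed:
rbd mirror image status mypool/myimage
```

The schedule interval directly bounds the data you can lose on failover,
which appears to be the crux of the 24-hour gap reported above.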
>>
>> > Eugen Block <eblock@xxxxxx> wrote on Thu, Jun 4, 2020 at 3:28 PM:
>> >
>> > > The initial sync is a full image sync; the rest is based on the
>> > > object sets created. There are several options to control the
>> > > mirroring, for example:
>> > >
>> > > rbd_journal_max_concurrent_object_sets
>> > > rbd_mirror_concurrent_image_syncs
>> > > rbd_mirror_leader_max_missed_heartbeats
>> > >
>> > > and many more. I'm not sure I fully understand what you're asking;
>> > > maybe you could rephrase your question?
>> > >
>> > >
>> > > Zitat von Zhenshi Zhou <deaderzzs@xxxxxxxxx>:
>> > >
>> > > > Hi Eugen,
>> > > >
>> > > > Thanks for the reply. If rbd-mirror constantly synchronizes
>> > > > changes, at what frequency does it replay? I can't find any
>> > > > options to configure this.
>> > > >
>> > > > Eugen Block <eblock@xxxxxx> wrote on Thu, Jun 4, 2020 at 2:54 PM:
>> > > >
>> > > >> Hi,
>> > > >>
>> > > >> that's the point of rbd-mirror: to constantly replay changes
>> > > >> from the primary image to the remote image (if the rbd journal
>> > > >> feature is enabled).
>> > > >>
>> > > >>
>> > > >> Zitat von Zhenshi Zhou <deaderzzs@xxxxxxxxx>:
>> > > >>
>> > > >> > Hi all,
>> > > >> >
>> > > >> > I'm going to deploy rbd-mirror in order to sync an image from
>> > > >> > clusterA to clusterB. The image will be in use while syncing.
>> > > >> > I'm not sure whether rbd-mirror will sync the image
>> > > >> > continuously or not. If not, I will inform clients not to
>> > > >> > write data to it.
>> > > >> >
>> > > >> > Thanks.
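[Editor's note: the tuning options Eugen lists can be applied with
`ceph config set` (available in Nautilus 14.x). This is a configuration
fragment, not a recommendation: the daemon name "client.rbd-mirror.a"
and the values shown are hypothetical examples.]

```shell
# Apply rbd-mirror tuning options to a hypothetical mirror daemon
# named "client.rbd-mirror.a"; values are illustrative only.
ceph config set client.rbd-mirror.a rbd_mirror_concurrent_image_syncs 5
ceph config set client.rbd-mirror.a rbd_mirror_leader_max_missed_heartbeats 2
ceph config set client.rbd-mirror.a rbd_journal_max_concurrent_object_sets 0
```

Note that these options tune throughput and failover behavior of the
continuous journal replay; none of them introduces a polling interval,
which is why no "replay frequency" option exists for journal-based mode.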
>> > > >> > Regards
>> > > >> > _______________________________________________
>> > > >> > ceph-users mailing list -- ceph-users@xxxxxxx
>> > > >> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>> --
>> Jason