Hi,
could you share more information about your setup? How much bandwidth
does the uplink between the two clusters have? Are there any custom
rbd_journal_* or rbd_mirror_* settings in place? If there were lots of
changes on those images, the sync would always be somewhat behind by
design. But if there is little activity, it should eventually catch up,
I assume.
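Looking at the status below, entries_behind_master appears to be simply
the master entry_tid minus the mirror entry_tid (e.g. for VMWARE_LUN0:
1967382595 - 456440697 = 1510941898), so watching that number over time
should tell you whether the replica is catching up or falling further
behind.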
You can review the effective settings with this command:
ceph config show-with-defaults mgr.<MGR> | grep -E "rbd_mirror|rbd_journal"
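If none of those are overridden, the defaults are in effect. As a rough
sketch of what is usually tuned for slow journal replay (the option
names and values below are from memory, not verified against 12.2.11,
so please double-check them for your release), you could raise the
journal fetch and payload sizes in ceph.conf, e.g. in the [client]
section on the node running the rbd-mirror daemon (fetch size matters
on the DR side, payload size on the primary side), and restart the
daemon afterwards:

[client]
# how much journal data rbd-mirror fetches per request (DR side)
rbd_mirror_journal_max_fetch_bytes = 33554432
# maximum size of a single journal entry payload (primary side)
rbd_journal_max_payload_bytes = 8388608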
I assume there aren't many journal entries in the pool?
rados -p <POOL> ls | grep journal
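Assuming the journal data objects follow the usual
journal_data.<pool_id>.<journal_id>.<object_number> naming, you can get
a rough count of the backlog with:

rados -p <POOL> ls | grep -c journal_data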
Although I'd expect a different status in that case, maybe the sync was
interrupted and a resync should be initiated? Or have you already tried
that?
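Just in case: the resync would be requested per image on the DR
cluster, for example:

rbd --cluster <DR_CLUSTER> mirror image resync <POOL>/<IMAGE>

Keep in mind that a resync re-transfers the entire image, so over a
slow link it can take quite a while.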
Regards,
Eugen
Quoting Vikas Rana <vrana@xxxxxxxxxxxx>:
Hi Friends,
We have 2 Ceph clusters on campus and we set up the second cluster as the DR
solution.
The images on the DR side are always behind the master.
Ceph Version : 12.2.11
VMWARE_LUN0:
  global_id:   23460954-6986-4961-9579-0f2a1e58e2b2
  state:       up+replaying
  description: replaying, master_position=[object_number=2632711,
    tag_tid=24, entry_tid=1967382595], mirror_position=[object_number=1452837,
    tag_tid=24, entry_tid=456440697], entries_behind_master=1510941898
  last_update: 2020-11-30 14:13:38
VMWARE_LUN1:
  global_id:   cb579579-13b0-4522-b65f-c64ec44cbfaf
  state:       up+replaying
  description: replaying, master_position=[object_number=1883943,
    tag_tid=28, entry_tid=1028822927], mirror_position=[object_number=1359161,
    tag_tid=28, entry_tid=358296085], entries_behind_master=670526842
  last_update: 2020-11-30 14:13:33
Any suggestions on tuning, or any parameters we can set on rbd-mirror to
speed up the replication? Both clusters have very little activity.
Appreciate your help.
Thanks,
-Vikas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx