On 5/12/22 13:25, ronny.lippold wrote:
> hi arthur and thanks for answering,
>
> On 2022-05-12 13:06, Arthur Outhenin-Chalandre wrote:
>> Hi Ronny
>>
>> Yes, according to my tests we were not able to get a good replication
>> speed on a single image (I think it was around 30 Mb/s per image). So
>> you probably have a few images that write a lot and are thus far
>> behind in terms of replication...
>
> ok, you mean that the growth comes from the replication being too slow?
> strange ... i thought our cluster was not that big ... but ok.
> so, we cannot use journal-based mirroring ...
> maybe someone else has the same result?

If you want a bit more detail on this you can check my slides here:
https://codimd.web.cern.ch/p/-qWD2Y0S9#/.

>> If you have no rbd-mirror running while snapshot mirroring is enabled,
>> to me that means the load comes from taking/deleting the snapshots...
>> What interval did you configure for the mirror snapshots?
>
> that was also my idea ... we use 20 min.
> we had this running for 50 days and everything was fine.
> we also tried a longer period, 2h. the resulting i/o load was much
> higher than before.
>
> after we set the schedule time to 1h, the load was higher.
> the time between the snapshots also had a slightly higher load than
> before ... strange.
> i mean:
> 9h00 load 45%
> 9h16 - 9h56 load 2,5%
>
> for the first 50 days, we had an overall load of 0,5-1%.

Hmmm, I think there are plans to have a way to spread the snapshots
across the provided interval in Reef (instead of taking every snapshot
at once), but unfortunately that is not here today...

The timing thing is a bit weird, but I am not an expert on the
implications of RBD snapshots in general... Maybe you can try to
reproduce it by taking a snapshot by hand with `rbd mirror image
snapshot` on some of your images; maybe it is something related to
really big images? Or a lot of writes since the last snapshot?

Cheers,

-- 
Arthur Outhenin-Chalandre
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
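
For reference, a minimal sketch of how one could inspect the configured
mirror snapshot schedule and take a mirror snapshot by hand, as suggested
above. It assumes snapshot-based mirroring is already enabled; `mypool`
and `vm-disk-1` are placeholder pool/image names, not ones from this
thread:

    # List the snapshot schedules currently configured (pool-level and
    # per-image), to confirm the effective interval:
    rbd mirror snapshot schedule ls --recursive

    # Show when the next scheduled mirror snapshots are due:
    rbd mirror snapshot schedule status

    # Take a mirror snapshot by hand on a single image and watch the
    # cluster load while it is created:
    rbd mirror image snapshot mypool/vm-disk-1

    # Check the mirroring state of that image (e.g. how far behind it is):
    rbd mirror image status mypool/vm-disk-1

    # Optionally, look for the most write-heavy images in the pool, which
    # are the likely candidates for lagging replication:
    rbd perf image iostat mypool

If the load spike only shows up when the manual snapshot is taken on
large or write-heavy images, that would point at the snapshot
creation/deletion itself rather than at the schedule.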