Re: rbd-mirror replay is very slow - but initial bootstrap is fast

On Tue, Mar 10, 2020 at 6:47 AM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:
>
> Hello List,
>
> when I initially enable journal-based mirroring on an image, it gets
> bootstrapped to my site-b cluster quickly, at about 250 MB/s, which is
> roughly the I/O write limit.
>
> Once it is up to date, the replay is very slow, about 15 KB/s, and
> entries_behind_master just keeps growing:
>
> root@ceph01:~# rbd --cluster backup mirror pool status rbd-cluster6 --verbose
> health: OK
> images: 3 total
>     3 replaying
>
> ...
>
> vm-112-disk-0:
>   global_id:   60a795c3-9f5d-4be3-b9bd-3df971e531fa
>   state:       up+replaying
>   description: replaying, master_position=[object_number=623,
> tag_tid=3, entry_tid=345567], mirror_position=[object_number=35,
> tag_tid=3, entry_tid=18371], entries_behind_master=327196
>   last_update: 2020-03-10 11:36:44
>
> ...
>
> Write traffic on the source is about 20-25 MB/s.
>
> The source runs 14.2.6 (Nautilus) and the destination 12.2.13 (Luminous).
>
> Any idea why the replay is so slow?

What is the latency between the two clusters?
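A quick check from the node running rbd-mirror toward a monitor host
on the source cluster will give you a ballpark (the hostname below is
just a placeholder):

  root@backup01:~# ping -c 10 ceph01.site-a.example

With the small default fetch size described below, each fetch roughly
costs a round trip, so link latency largely determines replay
throughput.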

I would recommend increasing the "rbd_mirror_journal_max_fetch_bytes"
config setting (it defaults to 32 KiB) on your destination cluster,
i.e. try adding "rbd_mirror_journal_max_fetch_bytes = 4194304" to the
"[client]" section of the Ceph configuration file on the node where
the "rbd-mirror" daemon is running, and then restart the daemon. The
default fetch size from the remote cluster is very small, a primitive
attempt to limit the rbd-mirror daemon's potential memory usage, but
it has the side effect of slowing mirroring down over higher-latency
links.
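
For example, on the destination node (the systemd unit instance name
and admin socket path below are guesses; adjust them to match your
deployment):

  [client]
      rbd_mirror_journal_max_fetch_bytes = 4194304

  root@backup01:~# systemctl restart ceph-rbd-mirror@admin
  root@backup01:~# ceph --admin-daemon \
      /var/run/ceph/ceph-client.rbd-mirror.admin.asok \
      config get rbd_mirror_journal_max_fetch_bytes

The last command confirms the running daemon actually picked up the
new value. As a sanity check on progress, entries_behind_master is
just the master's entry_tid minus the mirror's (345567 - 18371 =
327196 in your output above), so you should see that delta start to
shrink after the restart.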

>
> Thanks,
> Michael


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


