Re: rbd-mirror replay is very slow - but initial bootstrap is fast

>> 
>> FWIW when using rbd-mirror to migrate volumes between SATA SSD clusters, I found that
>> 
>> 
>>   rbd_mirror_journal_max_fetch_bytes:
>>     section: "client"
>>     value: "33554432"
>> 
>>   rbd_journal_max_payload_bytes:
>>     section: "client"
>>     value: "8388608"
> 
> Indeed, that's a good tweak. It applies to the primary-side librbd
> client for the mirrored image, for IO workloads that routinely issue
> large (larger than the 16 KiB default), sequential writes. The default
> was another compromise configuration setting, chosen to reduce the
> potential memory footprint of the rbd-mirror daemon.

Direct advice from you last year ;)

Extrapolating for those who haven’t done much with rbd-mirror, or who find this thread in the future: these settings worked well for me when migrating at most 2 active volumes at a time, volumes into whose client activity I had no insight.  YMMV.

Setting these specific values when mirroring an entire pool could well be doubleplusungood.
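For anyone who wants to try the same tweak, here is a sketch of applying those values with the generic `ceph config set` interface (available with centralized config, Mimic and later). The values are the ones quoted above from my migration, not universal recommendations; adjust for your workload and the number of images you mirror.

```shell
# Apply under the "client" section so both the primary-side librbd
# clients and the rbd-mirror daemon pick them up.
# 8 MiB max journal payload per entry (default is 16 KiB):
ceph config set client rbd_journal_max_payload_bytes 8388608
# 32 MiB per journal fetch on the rbd-mirror side:
ceph config set client rbd_mirror_journal_max_fetch_bytes 33554432

# Verify what the cluster now hands out:
ceph config get client rbd_journal_max_payload_bytes
ceph config get client rbd_mirror_journal_max_fetch_bytes
```

Remember that the fetch size is per replaying image, so the rbd-mirror daemon's memory footprint scales with the number of images being mirrored concurrently.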

— aad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
