Thanks a lot, Jason.

How much performance loss should I expect from enabling RBD mirroring? I need to minimize any performance impact while using this disaster-recovery feature. Would a dedicated journal on Intel Optane NVMe help? If so, how large should it be?
cheers,
Samuel
huxiaoyu@xxxxxxxxxxxx
From: Jason Dillaman
Date: 2019-04-03 23:03
CC: ceph-users
Subject: Re: How to tune Ceph RBD mirroring parameters to speed up replication

For better or worse, out of the box, librbd and rbd-mirror are configured to conserve memory at the expense of performance, to support the potential case of thousands of images being mirrored and only a single "rbd-mirror" daemon attempting to handle the load.

You can optimize writes by adding "rbd_journal_max_payload_bytes = 8388608" to the "[client]" section on the librbd client nodes. Normally, writes larger than 16KiB are broken into multiple journal entries to allow the remote "rbd-mirror" daemon to make forward progress w/o using too much memory, so this will ensure large IOs only require a single journal entry.

You can also add "rbd_mirror_journal_max_fetch_bytes = 33554432" to the "[client]" section on the "rbd-mirror" daemon nodes and restart the daemon for the change to take effect. Normally, the daemon tries to nibble the per-image journal events to prevent excessive memory use in the case where potentially thousands of images are being mirrored.

On Wed, Apr 3, 2019 at 4:34 PM huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx> wrote:
>
> Hello, folks,
>
> I am setting up two Ceph clusters to test async replication via RBD mirroring. The two clusters are very close, just in two buildings about 20m away, and the networking is very good as well, a 10Gb fiber connection. In this case, how should I tune the relevant RBD mirroring parameters to accelerate the replication?
>
> thanks in advance,
>
> Samuel
--
Jason
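Putting Jason's two suggestions together, the ceph.conf fragments might look like the sketch below. The option names and values are exactly the ones quoted above; which file each fragment belongs in (client nodes vs. rbd-mirror daemon nodes) follows his description, and the section layout is the standard ceph.conf INI format:

```ini
# --- ceph.conf on the librbd client nodes ---
[client]
# Journal entries up to 8 MiB, so large writes produce a single entry
# instead of being split at the default 16 KiB boundary.
rbd_journal_max_payload_bytes = 8388608

# --- ceph.conf on the rbd-mirror daemon nodes ---
[client]
# Fetch up to 32 MiB of journal data per request instead of nibbling;
# restart the rbd-mirror daemon for this to take effect.
rbd_mirror_journal_max_fetch_bytes = 33554432
```

Note the trade-off Jason describes: both defaults exist to cap memory use when one rbd-mirror daemon handles thousands of images, so raising them is most appropriate when mirroring a modest number of images.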
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com