Re: Multisite replication speed

Hello Paul,

Thank you very much for pointing us at BBR! We will definitely run some tests before and after applying the change to see whether it improves our transfer speed.

One additional question, if you don't mind. As of today, our zonegroup configuration consists of two zones: a master zone behind an HAProxy VIP, and a slave zone behind another HAProxy VIP. Those VIPs act as the endpoints users hit to access the clusters, but they also carry the replication traffic between the master and the slave zone, so on both sides it's a single frontend IP with the RadosGWs configured as backends in the HAProxy configuration.
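
For reference, each VIP is basically a plain HTTP frontend in front of the RadosGWs, roughly like the sketch below (names and addresses are illustrative placeholders, not our real configuration):

    frontend rgw_vip
        bind 203.0.113.10:80
        mode http
        default_backend rgw_nodes

    backend rgw_nodes
        mode http
        balance roundrobin
        server rgw1 10.0.0.11:8080 check
        server rgw2 10.0.0.12:8080 check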

Let's say we configure multiple VIPs as endpoints in the zonegroup configuration: will Ceph load-balance the replication traffic across those endpoints, or take advantage of the multiple endpoints to multithread the replication between them and thus increase the overall replication speed?
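
Concretely, what we have in mind is something along the lines of the commands below (the zonegroup name and VIP hostnames are placeholders):

    radosgw-admin zonegroup modify --rgw-zonegroup=default \
        --endpoints=http://vip-a.example.com:80,http://vip-b.example.com:80
    radosgw-admin period update --commit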

Our initial tests show that no matter how many endpoints we specify in the configuration, replication only uses one source IP and one destination IP at a time and funnels all the traffic through that single pair.

Is this the expected behavior, or are we missing something?

Thanks again!

Cheers,

Nicolas

________________________________
From: Paul Mezzanini <pfmeec@xxxxxxx>
Sent: Thursday, October 8, 2020 6:51:36 PM
To: Nicolas Moal <nicolas.moal@xxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re: Multisite replication speed

With a long-distance link, I would definitely look into switching to BBR for your congestion control as your first step.

Well, your _first_ step is to do an iperf and establish a baseline....
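
Something along these lines, with your real hosts substituted (-P runs parallel streams, which is worth comparing against a single stream on a long fat pipe):

    # on one side
    iperf3 -s
    # on the other side: one stream, then several in parallel
    iperf3 -c <remote-host> -t 30
    iperf3 -c <remote-host> -t 30 -P 8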

A quick search turned up this link, which seems to explain it reasonably well:
https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/
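
If I remember it right, the gist is the two sysctls below (requires a 4.9+ kernel with the tcp_bbr module available; double-check against the article before rolling it out):

    # /etc/sysctl.d/99-bbr.conf
    net.core.default_qdisc = fq
    net.ipv4.tcp_congestion_control = bbr

Then reload with "sysctl --system" and verify with "sysctl net.ipv4.tcp_congestion_control".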

We have used it before with great success for long-distance, high-throughput transfers.

-paul
--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfmeec@xxxxxxx


________________________________________
From: Nicolas Moal <nicolas.moal@xxxxxxxxxxx>
Sent: Thursday, October 8, 2020 10:36 AM
To: ceph-users
Subject:  Multisite replication speed

Hello everybody,

We have two Ceph object storage clusters replicating over a very long-distance WAN link. Our version of Ceph is 14.2.10.
Currently, replication speed seems to be capped at around 70 MiB/s even though there is a 10 Gb/s WAN link between the two clusters.
The clusters themselves don't seem to suffer from any performance issues.

The replication traffic leverages HAProxy VIPs, which means there's a single endpoint (the HAProxy VIP) in the multisite replication configuration.

So, my questions are:
- Is it possible to improve replication speed by adding more endpoints in the multisite replication configuration? The issue we are facing is that the secondary cluster is way behind the master cluster because of the relatively slow speed.
- Is there anything else I can do to optimize replication speed?

Thanks for your comments!

Nicolas

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx