On 10/1/20 1:05 AM, William Brown wrote:
On 1 Oct 2020, at 05:25, Paul Whitney <paul.whitney@xxxxxxx> wrote:
Hi Eugen,
I think that is what was tested by Red Hat, not necessarily a hard limit.
Correct; however, Red Hat also tested up to 60 for FreeIPA. I certainly know of at least one deployment in the world that has scaled past 1000 servers.
Generally, once you get past, say, 8 servers, you need to think about your replication topology and how the data will flow. The largest sites I know tend to cap out at 8 write-accepting servers and then replicate out to N read-only hubs and replicas beyond that, for the best performance and reliability.
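To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python (purely illustrative; the 8-supplier core and the host counts are assumptions, not product limits):

```python
# Rough, illustrative sketch: agreement counts for a fully connected
# supplier mesh vs. a topology capped at a small write-accepting core
# that feeds read-only hubs/replicas.

def mesh_agreements(n_suppliers: int) -> int:
    """One-way replication agreements in a fully connected supplier mesh."""
    return n_suppliers * (n_suppliers - 1)

def capped_core_agreements(n_read_only: int, core: int = 8) -> int:
    """Full mesh among `core` write-accepting suppliers, plus one agreement
    feeding each read-only hub/replica from the core."""
    return mesh_agreements(core) + n_read_only

if __name__ == "__main__":
    for n in (4, 8, 20, 60):
        print(f"{n:>2} suppliers, full mesh: {mesh_agreements(n):>4} agreements")
    # 20 suppliers -> 380 agreements, the figure in the Red Hat scenario quoted below
    print("8-supplier core + 52 read-only hosts:",
          capped_core_agreements(52), "agreements")
```

The point is simply that a full mesh grows quadratically in agreements, while a small write core with read-only hubs and replicas grows linearly.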
That is right, there is no hard limit on the number of masters. Even the
number of hosts (master, hub, consumer) in the topology is not limited.
FreeIPA was tested with up to 60 masters, which is why there are some
recommendations in the docs. The main considerations with a high number of
hosts are the number of replication agreements and the maximum number of
hops needed to replicate an update between hosts. The number of replication
agreements creates "pressure" on the changelog and can compete with
changelog updates (which can trigger retries), while each hop adds latency.
Common deployments are 4-8 masters, 4-8 hubs, and many consumers (see the
sketch below).
regards
thierry
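To put rough numbers on the agreement-count and hop considerations above, here is a small illustrative sketch; the fan-out assumptions (full supplier mesh, every supplier feeding every hub, each consumer fed by one hub) and the host counts are assumptions for illustration only:

```python
# Illustrative sketch of a tiered topology: suppliers in a full mesh, every
# supplier feeding every hub, and each consumer fed by a single hub.
# Counts one-way agreements and the worst-case hops an update travels.

def tiered_topology(suppliers: int, hubs: int, consumers: int):
    agreements = (suppliers * (suppliers - 1)  # supplier <-> supplier mesh
                  + suppliers * hubs           # every supplier -> every hub
                  + consumers)                 # one hub -> each consumer
    # Worst case under these assumptions: originating supplier -> hub -> consumer
    max_hops = 2
    return agreements, max_hops

if __name__ == "__main__":
    agreements, hops = tiered_topology(suppliers=8, hubs=8, consumers=100)
    print(f"8 suppliers + 8 hubs + 100 consumers: {agreements} agreements, "
          f"updates reach every host in <= {hops} hops")
    # Compare with fully meshing the same 116 hosts:
    print("116-host full mesh:", 116 * 115, "agreements")
```

With fewer supplier-to-hub agreements per hub, the hop count rises and the agreement count falls, which is the latency-versus-changelog-pressure trade-off described above.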
Regards,
Paul M. Whitney
paul.whitney@xxxxxxx
Sent from my Mac Book Pro
On Sep 30, 2020, at 9:56 AM, Eugen Lamers <eugen.lamers@xxxxxxxxxxxxxxxxx> wrote:
Hi,
We use the 389 Directory Server version 1.4.2.15.
In the documentation of Red Hat Directory Server it says that as many as 20 masters are supported in an MMR setup. It sounds like a hardcoded limit defined to avoid overloading servers and the network. Shouldn't it rather depend on the MMR topology, i.e. on the total number of replication agreements within the whole scenario? Or does "20 masters" indeed mean that the limit of the MMR topology is the 380 replication agreements shown in the "fully connected mesh" scenario (https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/deployment_guide/designing_the_replication_process-common_replication_scenarios) with 20 masters and 20x19 replication agreements?
Does anyone have experience with scenarios of more than 20 master servers?
Thanks,
Eugen
—
Sincerely,
William Brown
Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx