Re: how many monitors should be deployed in a 1000+ OSD cluster

On 9/26/19 5:05 AM, zhanrzh_xt@xxxxxxxxxxxxxx wrote:
> Thanks for your reply.
> We don't maintain it frequently. 
> My confusion is whether having more monitors gives clients (OSDs, RBD clients, ...) an advantage when fetching the cluster map.
> Do all clients communicate with one monitor of the cluster at the same time? If not, how does a client decide which monitor to communicate with?
> 

Each client talks to a single monitor, but different clients end up on
different Monitors: each client selects the Monitor it talks to at random.
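The selection Wido describes can be sketched roughly as below. This is an illustrative model, not Ceph's actual MonClient code; the names `pick_monitor` and `monmap` are made up for the example.

```python
import random

def pick_monitor(monmap, exclude=()):
    """Pick a random monitor address from the monmap, skipping any
    monitors we already failed to reach (hypothetical helper)."""
    candidates = [m for m in monmap if m not in exclude]
    if not candidates:
        raise RuntimeError("no reachable monitors")
    return random.choice(candidates)

# The client knows all monitor addresses from its configuration/monmap:
monmap = ["10.0.0.1:6789", "10.0.0.2:6789", "10.0.0.3:6789"]

mon = pick_monitor(monmap)
# If that monitor is down or unresponsive, the client retries another one:
fallback = pick_monitor(monmap, exclude=[mon])
```

Because each client rolls its own random choice, the client load spreads roughly evenly across the monitors in the quorum.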

Wido

> From: Nathan Fish
> Date: 2019-09-26 00:52
> To: 展荣臻(信泰)
> CC: ceph-users
> Subject: Re: how many monitors should be deployed in a 1000+ OSD cluster
> You don't need more mons to scale; but going to 5 mons would make the
> cluster more robust, if it is cheap for you to do so.
> If you assume that one mon rebooting for updates or maintenance is
> routine, then a 3-mon cluster running on 2/3 is vulnerable to a single
> additional failure. A 5-mon cluster running on 4/5 can survive an
> unexpected additional failure while one is down for maintenance.
> Considering your scale, this improvement in uptime might be worthwhile.
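The arithmetic behind this advice is the usual majority-quorum rule: a cluster of n monitors stays available while more than n/2 of them are up, so it tolerates floor((n-1)/2) simultaneous failures. A minimal sketch:

```python
def quorum_size(n_mons):
    # A strict majority of monitors must be up to form a quorum.
    return n_mons // 2 + 1

def failures_tolerated(n_mons):
    # How many monitors can be down while a quorum still exists.
    return (n_mons - 1) // 2

for n in (3, 5, 7):
    print(f"{n} mons: quorum = {quorum_size(n)}, "
          f"tolerates {failures_tolerated(n)} down")
# 3 mons tolerate 1 down: with 1 in maintenance, no spare failures.
# 5 mons tolerate 2 down: with 1 in maintenance, 1 unexpected failure is survivable.
```

This is why going from 3 to 5 monitors buys robustness rather than scale: the quorum grows, but so does the failure budget during routine maintenance.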
>  
> On Wed, Sep 25, 2019 at 10:26 AM 展荣臻(信泰) <zhanrzh_xt@xxxxxxxxxxxxxx> wrote:
>>
>>
>> hi all:
>>    I have a production cluster; it formerly had 24 hosts (528 OSDs, 3 mons).
>>    Now we want to add 36 hosts, so the OSD count increases to 1320.
>>    Does the number of monitors need to increase? How many monitor nodes are recommended?
>>    Another question: which monitor does the MonClient communicate with, and how does it decide?
>>    Any suggestions are welcome!
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx



