Re: Monitor as local VM on top of the server pool cluster?

On Tue, Jul 11, 2017 at 3:44 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
> Mons are a paxos quorum and as such want to be in odd numbers.  5 is
> generally what people go with.  I think I've heard of a few people using 7
> mons, but you do not want to have an even number of mons or an ever
> growing number of mons.

Unless your cluster is very large, three should be sufficient.
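
For what it's worth, a bare-bones three-mon layout in ceph.conf looks
roughly like this (the mon names and addresses below are placeholders I
made up, not anything from your setup):

    [global]
    fsid = <your cluster fsid>
    mon_initial_members = mon-a, mon-b, mon-c
    mon_host = 192.0.2.11, 192.0.2.12, 192.0.2.13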

> The reason you do not want mons running on the same hardware as OSDs is
> resource contention during recovery.  As long as the Xen servers you are
> putting the mons on are not going to be a source of resource
> limitation/contention, virtualizing them should be fine for you.  Make sure
> that you aren't configuring the mon to run using an RBD for its storage;
> that would be very bad.
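
An easy sanity check for that, assuming the default data dir layout and
that the mon id is the short hostname, is to look at what backs the mon's
data directory from inside the mon VM:

    # default mon data dir is /var/lib/ceph/mon/<cluster>-<mon id>
    df -h /var/lib/ceph/mon/ceph-$(hostname -s)
    # the device reported should be a local/virtual disk on the Xen host,
    # not /dev/rbd* or anything else served by the Ceph cluster itself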
>
> The mon quorum elects a leader and that leader will be in charge of the
> quorum.  Having local mons doesn't gain you anything, as the clients will
> still be talking to the mons as a quorum and won't necessarily talk to the
> mon running on their own node.  The vast majority of communication to the
> cluster that your Xen servers will be doing is to the OSDs anyway; very
> little of it goes to the mons.
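
If you want to see which mon is the leader at any point, you can ask the
cluster directly; from memory the interesting fields are quorum_names and
quorum_leader_name:

    ceph quorum_status --format json-pretty
    # shows the mons in quorum, the current leader and the election epoch
    ceph mon stat
    # one-line summary of the monmap and the current quorum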
>
> On Mon, Jul 10, 2017 at 1:21 PM Massimiliano Cuttini <max@xxxxxxxxxxxxx>
> wrote:
>>
>> Hi everybody,
>>
>> I would like to separate the MONs from the OSDs, as recommended.
>> In order to do so without new hardware, I'm planning to run all the
>> monitors as virtual machines on top of my hypervisors (Xen).
>> I'm testing a pool of 8 Xen nodes.
>>
>> I'm thinking about creating 8 monitors and pinning one monitor to each
>> Xen node.  So, I'm guessing, every Ceph monitor will be local to its
>> client node.  This should speed up the system by connecting to the
>> monitors locally, with a little overhead for the monitor sync between
>> nodes.
>>
>> Is it a good idea to have a local monitor virtualized on top of each
>> hypervisor node?
>> Do you see any underestimation or design flaw in this?
>>
>> Thanks for any helpful info.
>>
>>
>> Regards,
>> Max
>>



-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


