Re: [Openais] [RFC] quorum implementation in corosync

On 23/09/2011, at 3:10 AM, Steven Dake wrote:

> On 09/21/2011 10:23 PM, Andrew Beekhof wrote:
>> Stupid question, but if quorum is being calculated in quorumd,
>> shouldn't clients get it from there directly?
>> What's the advantage to corosync being involved (other than as a client itself)?
>> 
> 
> There are two problems with getting it directly from the quorum daemon:
> 1) The quorum state and cpg timing are not synced

There was a followup that got lost in the great LF outage...
What I was proposing was that quorumd provide not just a boolean quorum value, but also the membership that was used to calculate it.
Basically the CPG API with an extra boolean.

So people wanting /only/ membership would use cpg (and get notified of changes immediately), while people wanting quorum too would use /only/ quorumd's API (and be notified of membership changes after a new quorum value has also been calculated).
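
Concretely, something like the sketch below: the shape of cpg's confchg
callback, plus the quorum value computed from exactly that membership.
Every name here is invented for illustration; nothing like this exists
today.

  /* Hypothetical quorumd client callback, modeled on cpg's confchg
   * one.  Invented names, not a real interface. */
  #include <stdint.h>
  #include <stddef.h>

  struct quorumd_address {       /* same role as struct cpg_address */
          uint32_t nodeid;
          uint32_t pid;
          uint32_t reason;
  };

  typedef void (*quorumd_confchg_fn_t) (
          uint64_t handle,
          const struct quorumd_address *member_list,
          size_t member_list_entries,
          const struct quorumd_address *left_list,
          size_t left_list_entries,
          const struct quorumd_address *joined_list,
          size_t joined_list_entries,
          int quorate);          /* the extra boolean */

A client registering this callback never sees a membership without the
matching quorum verdict, which is the whole point.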

> 2) more generally, it would be helpful to get all the low level
> cluster info from one location

As one of the primary consumers, I don't see much value to be honest,
particularly at the cost of the interdependencies and round-tripping
that get introduced.

> 
> 1 is the deal killer though.  Note we have this problem today in
> corosync because quorum and cpg are not synchronized.  We plan to
> synchronize quorum delivery into the cpg callback stream so that cpg
> applications that need quorum will only obtain callbacks if they are
> part of the primary component.
> 
> In order to do this synchronization, the quorum information has to come
> from corosync (and more specifically the cpg interface).
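
To make the race in 1) concrete, a minimal sketch with simplified
stand-in signatures (not the real corosync prototypes):

  #include <stdint.h>
  #include <stddef.h>

  static int have_quorum;        /* updated from the quorum stream */

  static void on_quorum_change(uint32_t quorate)
  {
          have_quorum = quorate; /* arrives on its own schedule */
  }

  static void on_membership_change(const uint32_t *members, size_t n)
  {
          (void)members; (void)n;
          /* The race: have_quorum may describe an older or newer
           * membership than the one reported here.  Delivering quorum
           * inside the cpg callback stream removes it, because the
           * value always matches this membership. */
          if (have_quorum) {
                  /* safe to act as part of the primary component */
          }
  }

  int main(void)
  {
          on_quorum_change(1);
          on_membership_change(NULL, 0);
          return 0;
  }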
> 
> Regards
> -steve
> 
> 
>>> 
>>> On Wed, Aug 17, 2011 at 6:26 PM, Fabio M. Di Nitto <fdinitto@xxxxxxxxxx> wrote:
>>>> Hi all,
>>>> 
>>>> For a long time cman has been the quorum provider within RHCS. cman
>>>> is going to be obsoleted in the long term and a replacement needs
>>>> to be implemented.
>>>> 
>>>> In this proposal I left out API names; they are not important at this
>>>> stage and can be defined later on (also because some interfaces like
>>>> confdb/objdb might change in 2.0).
>>>> 
>>>> I am also assuming that we want the option to plug different quorum
>>>> providers into the system (network based, disk based, etc) and different
>>>> algorithms to calculate quorum (YKD, etc).
>>>> 
>>>> Attached to this email is a small pdf with the data flow diagram,
>>>> as one picture can explain things better than 1000 words (at least
>>>> given my level of itaglish ;))
>>>> 
>>>> Keep always in mind that:
>>>> 
>>>> 1) At any given time, only one "cluster view provider" feeds information
>>>> to quorumd. The provider must be the same across all nodes.
>>>> 
>>>> 2) At any given time, only one "quorum calculation algorithm" can be
>>>> used and it must be the same across all nodes.
>>>> 
>>>> 3) the disk based provider can either be a separate daemon or run
>>>> within quorumd. Due to the nature of the provider, the implementation
>>>> needs either threads or libaio (that's not very portable) and
>>>> therefore it cannot run within corosync directly, as it would block.
>>>> 
>>>> 4) a quorum state change has to trigger a cpg_1 notification
>>>> (assuming that we use cpg as the notification method, which would
>>>> avoid the issue of synchronizing cpg notifications with quorum ones).
>>>> 
>>>> 5) dispatch of notifications between cpg_0 and cpg_1 has to be
>>>> synchronized to allow quorumd to act on the network based cluster
>>>> view provider. In theory the only user of cpg_0 is quorumd.
>>>> 
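
The flow in points 1-5 boils down to something like this (invented
names throughout; this only mirrors the data flow in the proposal,
not any real interface):

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Point 2: one pluggable algorithm (majority, YKD, ...), the same
   * on every node. */
  typedef int (*quorum_algorithm_fn)(const uint32_t *view, size_t n);

  static int simple_majority(const uint32_t *view, size_t n)
  {
          (void)view;
          const size_t expected_votes = 3;   /* placeholder config */
          return n > expected_votes / 2;
  }

  /* Point 4: stub for the cpg_1 notification, which carries the
   * quorum verdict for the membership it was computed from. */
  static void publish_on_cpg_1(int quorate, const uint32_t *view,
                               size_t n)
  {
          (void)view;
          printf("cpg_1 notify: quorate=%d members=%zu\n", quorate, n);
  }

  /* Points 1 and 5: driven by the single cluster view provider via
   * the synchronized cpg_0 membership callback. */
  static void quorumd_on_view_change(const uint32_t *view, size_t n,
                                     quorum_algorithm_fn algo)
  {
          publish_on_cpg_1(algo(view, n), view, n);
  }

  int main(void)
  {
          uint32_t view[] = { 1, 2 };   /* two of three nodes present */
          quorumd_on_view_change(view, 2, simple_majority);
          return 0;
  }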
>> 
> 

_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss


