Re: RFC: Extending corosync to high node counts

> On 27 Mar 2015, at 6:31 pm, Jan Friesse <jfriesse@xxxxxxxxxx> wrote:
> 
> Christine Caulfield wrote:
>> On 26/03/15 10:36, Christine Caulfield wrote:
>>> On 25/03/15 01:24, Steven Dake wrote:
>>>> I think if you don't care about performance, you can have a daemon
>>>> process (a second process) connect as a CPG service and maintain an
>>>> overlay network on top of CPG.  Then many other external endpoints could
>>>> connect to this server over TCP.
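
A minimal sketch of that daemon's skeleton, assuming libcpg's standard
calls (cpg_initialize, cpg_join, cpg_dispatch); the group name is made
up, the TCP fan-out is left as a stub, and error handling is omitted:

#include <stdio.h>
#include <corosync/cpg.h>

/* Called for every message delivered to the group; a real daemon would
 * fan the payload out to each connected TCP satellite here. */
static void deliver_cb(cpg_handle_t handle, const struct cpg_name *group,
                       uint32_t nodeid, uint32_t pid,
                       void *msg, size_t msg_len)
{
        printf("relaying %zu bytes from node %u\n", msg_len, nodeid);
}

int main(void)
{
        cpg_handle_t handle;
        cpg_callbacks_t callbacks = { .cpg_deliver_fn = deliver_cb };
        struct cpg_name group = { .length = 7, .value = "overlay" }; /* illustrative name */

        cpg_initialize(&handle, &callbacks);
        cpg_join(handle, &group);

        /* Dispatch CPG events forever; a real daemon would poll its TCP
         * listener in the same loop (e.g. via cpg_fd_get() and poll()). */
        cpg_dispatch(handle, CS_DISPATCH_BLOCKING);
        return 0;
}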
>>> 
>>> That's an interesting idea that I quite like. And it might be nice and
>>> easy to get a proof-of-concept up and running.
>>> 
>>> It would probably require a different API to the normal corosync one
>>> (I'm not sure that emulating libcpg etc. for a different daemon would be
>>> sensible).
>>> 
>>> How does that sound to the Pacemaker team?
>>> 
>> 
>> 
>> I've been thinking about Steven Dake's idea most of today and I really
>> like it. It's clean, doesn't interfere with corosync internals and will
>> be easier to implement and maintain. Also it won't break the on-wire
>> protocol.
>> 
>> The one main drawback I see is that the CPG membership will not include
>> the satellite nodes (unless the parent joins the CPG once for each
>> satellite, which seems excessive). Looking at the Pacemaker code, this
>> doesn't seem to be a problem. We can still send node up/down
>> notifications if needed; even if a satellite joins the cluster, it would
>> just show the same list of central nodes each time.
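
To illustrate the membership point, the overlay daemon's
configuration-change callback (libcpg's cpg_confchg_fn_t signature)
would only ever report the central nodes; any satellite up/down state
would have to be tracked by the daemon itself, e.g. from TCP
connects and disconnects:

#include <stdio.h>
#include <corosync/cpg.h>

/* member_list contains central cluster nodes only; satellite state
 * would be merged in separately before notifying clients. */
static void confchg_cb(cpg_handle_t handle, const struct cpg_name *group,
                       const struct cpg_address *member_list, size_t n_members,
                       const struct cpg_address *left_list, size_t n_left,
                       const struct cpg_address *joined_list, size_t n_joined)
{
        size_t i;

        for (i = 0; i < n_members; i++)
                printf("central node %u is up\n", member_list[i].nodeid);
}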
>> 
>> I'm less worried about the performance hit for this sort of
>> implementation, though it does need to be borne in mind. I'll forward an
>> updated document early next week for perusal if David or Andrew chip in
>> about the Pacemaker requirements above.
>> 
>> thoughts?
>> 
> 
> A looong time ago there was also an idea about a remote CPG: the application uses a special CPG that just forwards requests to a master node, so the only difference between a normal and a remote CPG would be the IPC layer. In theory this may also be a way to go, because it's essentially the same idea as the additional daemon, but without the need for a daemon. I can also imagine enriching CPG so that not only the nodeid is sent but also the IP of the sending client, which would make it possible to find out which satellite "nodes" (clients) exist.
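
A sketch of what that enrichment might carry on the wire, with invented
field names (nothing like this exists in corosync today): the
forwarding master prepends the satellite client's address before
multicasting, so receivers can tell which satellite a message came from:

#include <stdint.h>
#include <sys/socket.h>

/* Hypothetical framing for a forwarded remote-CPG message. */
struct remote_cpg_header {
        uint32_t nodeid;                /* forwarding master's nodeid */
        struct sockaddr_storage client; /* sending satellite's address */
        uint32_t payload_len;           /* length of the original CPG message */
        /* original message payload follows */
};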
> 
> What would probably be very hard to achieve (with either the daemon or the remote CPG) is having only ONE configuration file across all nodes (master/satellite), and the ability to promote a satellite node to a master node when one of the master nodes dies.
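
On the single-configuration-file question, one conceivable shape is a
per-node role marker in corosync.conf; note that "role" is a
hypothetical key, not an existing option:

nodelist {
    node {
        ring0_addr: node1.example.com
        nodeid: 1
        role: master        # hypothetical key
    }
    node {
        ring0_addr: sat1.example.com
        nodeid: 101
        role: satellite     # hypothetical key
    }
}

Promotion would then mean flipping a satellite's role and having it
start a full corosync stack, which is exactly the hard part described
above.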

Shouldn't be too bad as long as we still have the control plane connect to the satellites (and not the other way around).
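
A minimal sketch of that direction, using an illustrative helper (the
name and addressing are made up): the control plane dials out to each
satellite address listed in its own configuration, so satellites only
ever accept connections:

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Hypothetical helper: the control plane connects out to a satellite. */
static int connect_to_satellite(const char *ip, uint16_t port)
{
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}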





