On 26/03/15 22:38, Andrew Beekhof wrote:
>
>> On 27 Mar 2015, at 3:19 am, Christine Caulfield <ccaulfie@xxxxxxxxxx> wrote:
>>
>> On 26/03/15 10:36, Christine Caulfield wrote:
>>> On 25/03/15 01:24, Steven Dake wrote:
>>>> I think if you don't care about performance,
>
> We care about performance in that we don't want the same characteristics that totem currently suffers from.
> Surely an essential part of being able to scale is caring about performance?
>

I don't actually think the performance hit will be that bad. And compared with traversing a ring of 200 nodes, the extra copy inside each machine to reach a satellite is probably not significant. It's something we need to be aware of, though.

Chrissie

>>>> you can have a daemon
>>>> process (second process) connect as a cpg service and maintain an
>>>> overlay network on top of CPG. Then many other external endpoints could
>>>> connect to this server over TCP.
>>>
>>> That's an interesting idea that I quite like. And it might be nice and
>>> easy to get a proof-of-concept up and running.
>>>
>>> It would probably require a different API to the normal corosync one
>>> (I'm not sure that emulating libcpg etc. for a different daemon would be
>>> sensible).
>>>
>>> How does that sound to the Pacemaker team?
>>>
>>
>> I've been thinking about Steven Dake's idea most of today and I really
>> like it. It's clean, doesn't interfere with corosync internals, and will
>> be easier to implement and maintain. It also won't break the on-wire
>> protocol.
>>
>> The one main drawback I see is that the CPG membership will not include
>> the satellite nodes (unless the parent joins the CPG once for each
>> satellite, which seems excessive). Looking at the Pacemaker code, this
>> doesn't seem to be a problem. We can still send node up/down
>> notifications if needed; even if a satellite joins the cluster, it would
>> just show the same list of central nodes each time.
>
> How would we find out who the list of satellite nodes is?
>
>>
>> I'm less worried about the performance hit for this sort of
>> implementation, though it does need to be borne in mind. I'll forward an
>> updated document early next week for perusal if David or Andrew chip in
>> about the Pacemaker requirements above.
>>
>> Thoughts?
>>
>> Chrissie
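
[Editor's note: to make the overlay-daemon idea above concrete, here is a
rough, hypothetical sketch of what such a relay process might look like,
using only the public libcpg calls. The group name "satellite-relay", the
TCP port 5000, and the single-satellite simplification are illustrative
assumptions, not anything proposed in the thread or defined by corosync.]

/*
 * Sketch: a standalone daemon on a full cluster member that joins a CPG
 * group and relays traffic between the group and one TCP-connected
 * satellite. This is the "second process" from Steven Dake's suggestion.
 */
#include <string.h>
#include <unistd.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <corosync/cpg.h>

static int satellite_fd = -1;           /* one satellite only, for brevity */

/* Forward every message delivered to the CPG group out to the satellite. */
static void deliver_cb(cpg_handle_t h, const struct cpg_name *group,
                       uint32_t nodeid, uint32_t pid,
                       void *msg, size_t msg_len)
{
    if (satellite_fd >= 0)
        (void)write(satellite_fd, msg, msg_len);
}

static cpg_callbacks_t callbacks = { .cpg_deliver_fn = deliver_cb };

int main(void)
{
    cpg_handle_t handle;
    struct cpg_name group;
    int cpg_fd, listen_fd;
    struct sockaddr_in sin = { .sin_family = AF_INET,
                               .sin_port = htons(5000),   /* assumed port */
                               .sin_addr.s_addr = htonl(INADDR_ANY) };

    strcpy(group.value, "satellite-relay");        /* assumed group name */
    group.length = strlen(group.value);

    if (cpg_initialize(&handle, &callbacks) != CS_OK ||
        cpg_join(handle, &group) != CS_OK)
        return 1;
    cpg_fd_get(handle, &cpg_fd);

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    bind(listen_fd, (struct sockaddr *)&sin, sizeof(sin));
    listen(listen_fd, 1);

    for (;;) {
        struct pollfd pfds[3] = {
            { .fd = cpg_fd,       .events = POLLIN },
            { .fd = listen_fd,    .events = POLLIN },
            { .fd = satellite_fd, .events = POLLIN },
        };
        poll(pfds, satellite_fd >= 0 ? 3 : 2, -1);

        if (pfds[0].revents & POLLIN)
            cpg_dispatch(handle, CS_DISPATCH_ALL);  /* runs deliver_cb */

        if (pfds[1].revents & POLLIN)
            satellite_fd = accept(listen_fd, NULL, NULL);

        if (satellite_fd >= 0 && (pfds[2].revents & POLLIN)) {
            char buf[8192];
            ssize_t n = read(satellite_fd, buf, sizeof(buf));
            if (n <= 0) {
                close(satellite_fd);
                satellite_fd = -1;
            } else {
                /* Relay the satellite's message into the cluster with
                 * agreed (totally ordered) delivery. */
                struct iovec iov = { .iov_base = buf, .iov_len = (size_t)n };
                cpg_mcast_joined(handle, CPG_TYPE_AGREED, &iov, 1);
            }
        }
    }
}

Note this also illustrates the drawback Chrissie raises: the cluster's CPG
membership sees only the relay daemon's single connection, so the
satellite itself never appears in confchg callbacks; any satellite
up/down reporting would have to be layered on top by the daemon itself.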