Re: Glusterd 'Management Volume' proposal

> 1) With the current scheme in glusterd, the O(N^2) behavior is because
> the configuration is replicated to every peer in the cluster, correct?

No, the O(n^2) behavior is for the probe/heartbeat traffic: every peer
periodically probes every other peer, so message volume grows with the
square of the cluster size.  Config replication is only O(n), but it's
problematic because it doesn't handle partitions and consistency very
well.
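
To make the scaling difference concrete, here's a rough
back-of-the-envelope sketch (plain Python, not glusterd code) of
per-round message counts under the two patterns:

    def heartbeat_messages(n):
        # All-to-all probing: every peer probes every other peer,
        # so n * (n - 1) messages per round, i.e. O(n^2).
        return n * (n - 1)

    def config_push_messages(n):
        # Star-topology push: the originating peer sends the new
        # config to the n - 1 others, i.e. O(n).
        return n - 1

    for n in (10, 100, 1000):
        print(n, heartbeat_messages(n), config_push_messages(n))

At n=1000 that's 999,000 heartbeat messages per round versus 999 for a
config push, which is why the heartbeat traffic is the scaling problem.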

> - We do have the limitation today that some clients _may_ not have
> received the latest graph (one of the configuration items here). With
> the new proposal, is there any plan to resolve this? Is it even
> required? I assume brick nodes have this strict enforcement today and
> will continue to in the future.

What I would expect is that the *servers* hear about updates through
some sort of "watch" mechanism, then each is responsible for notifying
its own clients.  Note that a client which is connected to multiple
servers might therefore get multiple notifications for the same event,
so we need to recognize that a "change" to the same graph as before is
a no-op and respond accordingly (which I believe we already do).
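
As a rough illustration of that dedup (hypothetical names, not the
actual client code), comparing a digest of the incoming graph against
the active one is enough to turn a duplicate notification into a no-op:

    import hashlib

    class Client:
        def __init__(self):
            self.active_graph_digest = None

        def on_graph_notify(self, graph_bytes):
            digest = hashlib.sha256(graph_bytes).hexdigest()
            if digest == self.active_graph_digest:
                return  # same graph as before: duplicate, no-op
            self.active_graph_digest = digest
            self.switch_to_graph(graph_bytes)  # stand-in for the real graph swap

        def switch_to_graph(self, graph_bytes):
            print("applying new graph", self.active_graph_digest[:8])

    c = Client()
    graph = b"volume config v2"
    c.on_graph_notify(graph)  # applied
    c.on_graph_notify(graph)  # second server sends the same graph: ignored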

> 2) With a >1000 node setup, is it intended that we have a cascade
> mechanism to handle configuration changes? I.e., there is a defined
> set of _watchers_ on the configuration cluster, and each in turn
> serves a set of peers with its _watch_ functionality?
> 
> This may be overkill (i.e., requiring cascading), but is it needed
> when we consider cases like geo-rep or tiers in different data
> centers that need configuration updates? Having all of them watch the
> configuration cluster may be a problem requiring attention.

Cascading seems like overkill as long as we're talking about simple
notification and not something more complex like two-phase commit
(2PC).  A single config server notifying 1000 other servers directly
shouldn't be all that big a deal.
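
For a sense of what direct fan-out looks like, here's a minimal sketch
(assumed names, not a real glusterd API) of one config server notifying
1000 watchers in parallel; because it's fire-and-forget notification
rather than 2PC, a slow or dead watcher only costs a timeout, not a
stalled transaction:

    from concurrent.futures import ThreadPoolExecutor

    def notify(watcher, event):
        # stand-in for an RPC with a short timeout; failures would
        # just be logged and retried, not block the others
        print(f"notify {watcher}: {event}")

    watchers = [f"server-{i}" for i in range(1000)]

    with ThreadPoolExecutor(max_workers=64) as pool:
        for w in watchers:
            pool.submit(notify, w, "config-changed")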