Vikas Gorur wrote:
Gordan Bobic wrote:
That is plausible, since I am using a single-process client and server.
Is there a way to tell, on a running GlusterFS cluster, which node is
the current lock master? The process creating the load was running on
the first listed subvolume, so I would have expected that to be the
primary lock server.
The rule for lock servers is:
Lock servers = {the first n subvolumes of replicate that are up}
where n is given by the options "data-lock-server-count",
"metadata-lock-server-count", and "entry-lock-server-count",
all of which default to 1.
All n lock servers in this set hold identical lock state.
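For illustration, a volume file that sets these options might look
something like this (the volume and subvolume names here are made up):

  volume replicate0
    type cluster/replicate
    # number of first-up subvolumes that act as lock servers,
    # for data, metadata, and entry locks respectively
    option data-lock-server-count 2
    option metadata-lock-server-count 2
    option entry-lock-server-count 2
    subvolumes brick1 brick2 brick3
  end-volume

With n = 2, brick1 and brick2 would be the lock servers while all
three are up; if brick1 went down, the first two subvolumes that are
up would be brick2 and brick3.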
I thought there were always at least 2 lock servers, no? And that they
rotate around depending on which servers are up (when the current lock
server dies, the role fails over to the next one). Is that not the
case? Also, if a server leaves and re-joins, does the lock server role
rotate back, or does it remain on the failed-over server?
Gordan