You must achieve quorum before the cluster services are enabled. That
means more than half of the votes of the machines listed in
cluster.conf must come up (one vote per node by default, so roughly
half your node count plus one). After that, if you do a 'cman_tool
leave remove' on the machines you don't need, the expected vote count
is recalculated without them, and you should be able to go down as low
as you want. (I think there is a way to make the cman initscript do
this automatically... I also think you can specify a node count or
quorum number in cluster.conf to control when it's allowed to start
the service - but this last one might be unlikely, given that it could
lead to split-brain.)
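Something along these lines is what I have in mind - just a rough
sketch, the vote count is made up for illustration, so check
'man cman_tool' before relying on it:

    # On each node you are about to shut down, leave and tell the
    # rest of the cluster to recalculate quorum without it:
    cman_tool leave remove

    # Alternatively, from a remaining member, lower the expected
    # vote count by hand (e.g. to 3 for a 3-node working set):
    cman_tool expected -e 3

    # Verify the quorum state afterwards:
    cman_tool status

The cluster.conf knob I was thinking of is, if memory serves, the
expected_votes attribute on the <cman> element (plus two_node="1" for
the special two-node case) - but again, check the man pages before
trusting me on that.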
Hopefully, someone with more knowledge will expand on this.
brassow
On Jan 24, 2007, at 1:34 PM, isplist@xxxxxxxxxxxx wrote:
Perhaps I've missed this somewhere but here I ask :).
I don't have a fixed number of nodes in my cluster. I sometimes use 4
machines, sometimes 8, sometimes more. I could split the clusters up,
but it would be simpler for me if I could only turn up the nodes I
need as I need them.
The problem:
When I start my cluster, it locks up at DLM waiting on other nodes to
come up. If I don't need them all, I only fire up the ones I need, but
then DLM stays locked up until I turn up yet more nodes. I believe
this is related to my cluster.conf file, but I've yet to find enough
information explaining how I can get around this problem.
Can someone shed some light?
Mike
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster