On Wed, 2005-06-08 at 13:51 -0500, Nate Carlson wrote:
> So this should be one journal for any node I may want to mount in the
> future? Is there any problem with having this set to a high number?
> (I've currently got 4 nodes, but will be running Xen domains on top of
> each of those nodes which will also be running GFS, so I could have
> 15-20 nodes easily).

You need one journal per node that will mount the file system, physical
or virtual. So, 20 is your magic number. (A sample mkfs invocation is
sketched at the end of this message.)

> Second, I see that the docs are using lock_dlm for everything; is this
> the recommended approach now? What are the major differences between
> lock_dlm and lock_gulm?

lock_dlm is symmetric; lock_gulm is client/server. Generally, your gulm
lock servers should not also be accessing the GFS file system, though
that is allowed; keeping them dedicated lowers the chance of a lock
server failure. (A cluster.conf sketch for dedicated gulm lock servers
is at the end of this message.)

> Third, just a question about how voting and such works. Again, with the
> possibility of 15-20 nodes, I'd still like to be able to use the GFS
> filesystem even if only one node is currently up. Is there anything I
> need to tweak to be able to do that?

You need one of the following:

(a) A totally asymmetric setup: one node having 21 votes, all others
    having one vote each (votes are set per node in cluster.conf; sketch
    below). If the node with 21 votes goes down, though, everyone loses
    quorum, and no one can access the file system.

(b) An asymmetric setup with a separate lock server cluster. GFS clients
    connecting to the lock server cluster have no "votes" at all, so as
    long as a majority of the lock server cluster is online, any one of
    the GFS cluster nodes can come online and access the file system.

With most lock_dlm setups, nodes have one vote each. In that case, you
must have

    floor(n/2) + 1

of the n nodes online for a quorum to form (except in the special
two-node case; see the note at the end of this message). With 20
one-vote nodes, for example, 11 must be online.
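For the record, here is roughly what the mkfs would look like for 20
journals. This is a sketch, not copy-and-paste material: the cluster
name (mycluster), file system name (gfs01), and device paths are all
made up, so substitute your own.

    # 20 journals, DLM locking; -t is <clustername>:<fsname>
    gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 20 /dev/vg0/gfs01

If you guess low, journals can be added to a mounted file system later
with gfs_jadd, e.g.:

    # add 5 more journals; the underlying volume must first be grown
    # (e.g. with lvextend) to make room for them
    gfs_jadd -j 5 /mnt/gfs01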
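If you do end up with dedicated gulm lock servers, they are simply
listed in the <gulm> section of cluster.conf. A rough sketch, with
invented node names (you want an odd number of lock servers, usually
3 or 5, so a majority can survive a failure):

    <gulm>
      <lockserver name="lock1"/>
      <lockserver name="lock2"/>
      <lockserver name="lock3"/>
    </gulm>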
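And for option (a), votes are just an attribute on each <clusternode>
entry in cluster.conf. Another sketch, again with invented node names:

    <clusternodes>
      <clusternode name="bignode" votes="21">
        ...
      </clusternode>
      <clusternode name="node01" votes="1">
        ...
      </clusternode>
      ...
    </clusternodes>

The two-node exception mentioned above is cman's special mode,

    <cman two_node="1" expected_votes="1"/>

which lets either of exactly two nodes stay quorate by itself (fencing
resolves the resulting race). It does not apply to larger clusters.

-- Lon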