> On Wed, 8 Jun 2005, Lon Hohberger wrote:
> > Need one journal per node, physical or virtual. So, 20 is your
> > magic number.
>
> Got'cha. Is there any way to expand this at a later date, or do I
> want to set it to the highest possible number right away?

You can add additional journals at a later time, provided that you have
free disk space on the filesystem. At least, that was the case with
GFS 6.

> > lock_dlm is symmetric, lock_gulm is client/server. Generally, your
> > lock servers should not be accessing the GFS file system, though it
> > is allowed. This gives a lower chance of failures for the lock
> > servers.
>
> <...>
>
> > (b) an asymmetric setup with a separate lock server cluster. GFS
> > clients connecting to the lock server cluster have no "votes" at
> > all, so as long as a majority of the lock server cluster is online,
> > any one of the GFS cluster nodes can come online and access the
> > file system.
>
> I think this sounds like a reasonable way to go - make the physical
> servers (or a couple of them) the lock servers, and set up the GFS
> clients as client-only. For this case, will I need to do lock_gulm?
> If so, I'll have to do some research on how to set that up.

With lock_gulm, you can run with a single lock manager or with
redundant lock managers. In a redundant lock manager (RLM) config, you
generally have 3-5 lock managers. One is elected the master lock
manager and the others are slaves. If the master loses connectivity to
the other nodes, the majority of the remaining nodes will elect a new
master.

The other consideration when using lock_gulm with RLM is that lock
server nodes must be fenced from both the network and the storage, so
simply fencing a port on the fibre switch is not sufficient. This means
that you need network power switches to fence the lock servers. If you
have Dell PowerEdge servers, you can also fence them with the Embedded
Remote Access controllers; I wrote a little Perl script that does so,
if you're interested. There's also a similar script in the fence CVS,
but I like mine better. 8)

> > With most lock_dlm setups, nodes have one vote. In that case, you
> > must have...
> >
> >   floor((n+1)/2)
> >
> > ...nodes online for a quorum to form (except in the 2-node case).
>
> Great - thanks!
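
A few concrete bits to go with the above, since you mentioned you'd
have to do some research on the setup.

For the journals: if memory serves, gfs_jadd is the tool for adding
journals to a mounted GFS filesystem (that's the GFS 6.1 name; check
the man page for your version). The mount point below is just an
example:

    # see how much free space the filesystem has first
    gfs_tool df /mnt/gfs

    # add 4 more journals (each one is 128 MB by default)
    gfs_jadd -j 4 /mnt/gfs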
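
For lock_gulm, the lock servers are just a list in the cluster config.
From memory, with the CCS-style config files (GFS 6.0), cluster.ccs
looked roughly like this -- the cluster and node names are
placeholders, so double-check the syntax against the docs for your
version:

    cluster {
        name = "mycluster"
        lock_gulm {
            # 3-5 dedicated lock servers; keep it an odd number so
            # the master election always has a clear majority
            servers = ["locksrv1", "locksrv2", "locksrv3"]
        }
    }

The GFS clients don't go in the servers list; they just use the same
cluster name and talk to whichever lock server is currently master.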
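
For the fencing, a network power switch is just another fence device in
fence.ccs, referenced from nodes.ccs. Again from memory and heavily
abbreviated -- fence_apc is the agent for APC power switches, and all
of the addresses, logins, and names here are made up:

    # fence.ccs
    fence_devices {
        powerswitch {
            agent = "fence_apc"
            ipaddr = "10.0.0.50"
            login = "apc"
            passwd = "apc"
        }
    }

    # nodes.ccs (fencing section for one lock server)
    nodes {
        locksrv1 {
            ip_interfaces {
                eth0 = "10.0.0.11"
            }
            fence {
                power {
                    powerswitch {
                        port = 1
                    }
                }
            }
        }
    }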
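
And just to put numbers on Lon's quorum formula: with 5 one-vote nodes,
floor((5+1)/2) = 3 of them have to be online before the cluster is
quorate; with 3 nodes, you need 2.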