<snip>
cluster.ccs:
cluster {
  lock_gulm {
    ....
    lt_high_locks = <int>
  }
}

The highwater mark is an attempt to keep down the amount of memory
lock_gulmd uses. When the highwater mark is hit, the lock server tells all
GFS mounts to try to release locks. It does this every 10 seconds until the
lock count falls below the highwater mark. This takes cycles, so not doing
it means fewer cycles used. The higher the highwater mark, the more memory
the gulm lock servers and GFS will use to store locks. The number is just a
count of locks (in <=6.0), not an actual representation of RAM used.

In short summary, in your case a higher highwater mark may give some
performance gain, at the cost of some memory available to other programs.
</snip>

I just bounced the storage servers using the lt_high_locks directive as
above. The cluster.ccs looks like the following:

cluster {
    name = "hopkins"
    lock_gulm {
        servers = ["front-0", "front-1", "enigma"]
    }
    lt_high_locks = 2097152
    heartbeat_rate = 30
    allowed_misses = 4
}

gulm_tool getstats front-1:lt000 returns the following:

[root@front-0 root]# gulm_tool getstats front-1:lt000
I_am = Master
run time = 831
pid = 4073
verbosity = Default
id = 0
partitions = 1
out_queue = 0
drpb_queue = 0
locks = 80640
unlocked = 9267
exclusive = 19
shared = 71354
deferred = 0
lvbs = 9274
expired = 0
lock ops = 1805398
conflicts = 3
incomming_queue = 0
conflict_queue = 0
reply_queue = 0
free_locks = 87162
free_lkrqs = 60
used_lkrqs = 0
free_holders = 125909
used_holders = 81895
highwater = 1048576

Unless I'm misreading this, the lt_high_locks directive didn't do anything,
unless the bottom number will change once it's breached?

My apologies to the list for my verbosity, btw - I'm just under the gun
trying to get this stable and working.

--
Jerry Gilyeat, RHCE
Systems Administrator
Molecular Microbiology and Immunology
Johns Hopkins Bloomberg School of Public Health
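
P.S. If I'm reading the quoted snippet right, lt_high_locks is shown inside
the lock_gulm block, whereas in my file it sits outside it. A minimal sketch
of what I assume the intended placement is (the values are mine, the
placement is only my guess from the snippet):

cluster {
    name = "hopkins"
    lock_gulm {
        servers = ["front-0", "front-1", "enigma"]
        lt_high_locks = 2097152
    }
    heartbeat_rate = 30
    allowed_misses = 4
}

If that guess is right, it might explain why getstats still reports the
default highwater of 1048576 rather than the 2097152 I set.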