Re: GFS 6.0 lt_high_locks value in cluster.ccs

Yes, issue #2 could definitely be the cause of your first issue. Unfortunately, you'll need to bring the cluster down to change the value of lt_high_locks. What is it set to currently? And how much memory do you have on your gulm lock servers? Figure on about 256M of RAM for gulm for every 1 million locks, plus enough for the kernel and any other processes.
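For reference, the value goes in the lock_gulm section of cluster.ccs. A rough sketch (the cluster name, server names, and the number below are only placeholders; check the GFS 6.0 docs for your exact layout):

    cluster {
        name = "example"
        lock_gulm {
            servers = ["lock01", "lock02", "lock03"]
            lt_high_locks = 1048576
        }
    }

With a value like 1048576 (roughly one million locks), the rule of thumb above works out to at least ~256M of RAM set aside for gulm on each lock server, on top of what the kernel and everything else on the box needs.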

On each of the gulm clients you can also cat /proc/gulm/lockspace to see which client is using the most locks.
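If you have more than a couple of clients, a quick loop from an admin box saves some typing (the hostnames are just examples):

    for host in node01 node02 node03; do
        echo "== $host =="
        ssh $host cat /proc/gulm/lockspace
    done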

Let us know what you find out.

Thanks!
Chris

Jonathan Woytek wrote:
Issue #2 MAY be the cause of issue #1; that's hard to determine right now. Issue #2 is that we are now hitting the high-water mark for locks in lock_gulmd almost all day long. This used to happen only occasionally, so we didn't worry about it too much. When it did happen in the past, Samba users would see hangs while navigating the share (though nobody ever mentioned a problem copying files to the system).

So, now to my question: I read in a previous post on this list about the lt_high_locks value in cluster.ccs. Is this a value that can be changed at runtime, or will I have to bring all of the lock_gulmd daemons down to change it?

jonathan

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
