Re: GFS: drop_count and drop_period tuning

Thanks Michael, I've set this option on my filesystems. How should this affect system performance/behaviour? More or less memory usage? I guess that, by trimming 50% of the unused locks every 5 seconds, it should cut memory usage too... am I right?

If this works, could I also raise the drop_count value?

2007/9/10, Hagmann, Michael <Michael.Hagmann@xxxxxxxxx>:
Hi
 
If you are on RHEL 4.5, then I highly suggest you use the new glock_purge parameter for every GFS filesystem. Add to /etc/rc.local:
-------
gfs_tool settune / glock_purge 50
gfs_tool settune /scratch glock_purge 50
-------
 
Also, this parameter has to be set again on every mount. That means when you umount a filesystem and then mount it again, you must run /etc/rc.local again; otherwise the parameter is gone!
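 
A minimal sketch of how to re-apply this automatically (assuming GFS mounts show up with type "gfs" in /proc/mounts; the 50% figure is just the example value from above):
-------
#!/bin/sh
# Re-apply glock_purge to every mounted GFS filesystem.
# Suitable for /etc/rc.local, or to run again after each mount.
awk '$3 == "gfs" { print $2 }' /proc/mounts | while read mp; do
    gfs_tool settune "$mp" glock_purge 50
done
-------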
 
 
mike


From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@redhat.com] On Behalf Of Claudio Tassini
Sent: Montag, 10. September 2007 13:19
To: linux clustering
Subject: GFS: drop_count and drop_period tuning

Hi all,

 
I have a four-node GFS cluster on RHEL 4.5 (latest versions, updated yesterday). There are three GFS filesystems (1 TB, 450 GB and 5 GB), serving some mail domains with Postfix/Courier IMAP in a "maildir" configuration.

 
As you might suspect, this is not exactly ideal for GFS: we have a lot (thousands) of very small files (emails) spread across a great many directories. I'm trying to tune things to reach the best performance. I found that raising the drop_count parameter in /proc/cluster/lock_dlm/drop_count to a very large value (it was 500000 and now, after a memory upgrade, I've set it to 1500000) uses a lot of memory (about 10 GB out of the 16 installed in every machine) and seems to "boost" performance, limiting iowait CPU usage.
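 
In case it helps, this is roughly what I'm doing (assuming the RHEL4-era lock_dlm proc interface; the values are just my current settings, and the lines need to go in /etc/rc.local since they don't survive a reboot):
-------
#!/bin/sh
# Raise the number of cached DLM locks before lock_dlm starts dropping them.
echo 1500000 > /proc/cluster/lock_dlm/drop_count
# drop_period (seconds) controls how often the drop is attempted.
cat /proc/cluster/lock_dlm/drop_period
-------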

 
The bad thing is that when I umount a filesystem, it must clean up all those locks (I think), and sometimes this causes problems for the whole cluster: the other nodes stop writes to the filesystem while I'm umounting on one node only.
Is this normal? How can I tune this to free memory faster when I umount the FS? I've read something about running more gfs_glockd daemons per filesystem with the num_glockd mount option, but it seems to be rather deprecated because it shouldn't be necessary..
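 
For reference, the option I mean would look something like this (device and mountpoint are just placeholders; I haven't tried it, so take it as a sketch):
-------
# Mount a GFS filesystem with extra glock daemons (example value)
mount -t gfs -o num_glockd=8 /dev/vg0/mail /mail
-------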

 





--
Claudio Tassini
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
