Re: gfs tuning


Ross Vandegrift wrote:
On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
averages on the host that is serving these volumes out via NFS.  I
notice that gfs_scand, dlm_recv, and dlm_scand are running with high
CPU%.  I truly believe the box is I/O bound due to high awaits but
trying to dig into root cause.  99% of the activity on these volumes
is write.  The number of files is around 15 million per TB.   Given
the high number of writes, increasing scand_secs will not help.  Any
other optimizations I can do?


A similar case two years ago was solved by the following two tunables:

shell> gfs_tool settune <mount_point> demote_secs <seconds>
(e.g. "gfs_tool settune /mnt/gfs1 demote_secs 200").
shell> gfs_tool settune <mount_point> glock_purge <percentage>
(e.g. "gfs_tool settune /mnt/gfs1 glock_purge 50")

The example above will trim away 50% of the unused glocks (and their cached inodes) every 200 seconds (the default interval is 300 seconds). Do this on all of the GFS-NFS servers that show this issue. Glock trimming can be turned on (non-zero percentage) and off (zero percentage) dynamically.

As I recall, the customer used a very aggressive percentage (I think it was 100%), but please start from a middle ground (50%) and see how it goes.
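For reference, a minimal sketch of applying and reverting these tunables (assuming a GFS1 filesystem mounted at /mnt/gfs1; note that settune changes do not persist across a remount, so they must be reapplied, e.g. from an init script):

```shell
# Inspect the current values before changing anything
gfs_tool gettune /mnt/gfs1 | grep -E 'demote_secs|glock_purge'

# Apply the middle-ground settings suggested above
gfs_tool settune /mnt/gfs1 demote_secs 200
gfs_tool settune /mnt/gfs1 glock_purge 50

# Turn glock trimming back off if it doesn't help
gfs_tool settune /mnt/gfs1 glock_purge 0
```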

-- Wendy

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

