Re: gfs tuning

On Mon, Jun 16, 2008 at 2:48 PM, Wendy Cheng <s.wendy.cheng@xxxxxxxxx> wrote:
> Ross Vandegrift wrote:
>>
>> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>>
>>>
>>> I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
>>> averages on the host that is serving these volumes out via NFS.  I
>>> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
>>> CPU%.  I truly believe the box is I/O bound due to high awaits, but I
>>> am trying to dig into the root cause.  99% of the activity on these
>>> volumes is write.  The number of files is around 15 million per TB.
>>> Given the high number of writes, increasing scand_secs will not help.
>>> Any other optimizations I can do?
>>>
>>
>>
>
> A similar case two years ago was solved by the following two tunables:
>
> shell> gfs_tool settune <mount_point> demote_secs <seconds>
> (e.g. "gfs_tool settune /mnt/gfs1 demote_secs 200").
> shell> gfs_tool settune <mount_point> glock_purge <percentage>
> (e.g. "gfs_tool settune /mnt/gfs1 glock_purge 50")
>
> The example above will trim 50% of the inodes away every 200-second
> interval (the default is 300 seconds). Do this on all of the GFS-NFS
> servers that show this issue. It can be dynamically turned on (non-zero
> percentage) and off (0 percentage).
>
> As I recall, the customer used a very aggressive percentage (I think it
> was 100%), but please start from a middle ground (50%) and see how it goes.
>
> -- Wendy
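
Based on Wendy's suggestion, here is roughly what I am running on each
volume (a rough sketch; only /data01a is confirmed in the gettune output
below -- the other mount points are placeholders for my remaining volumes):

# Apply the demote/purge tunables on every GFS mount on this node.
# As far as I know, settune values are not persistent, so this needs to be
# re-run after every remount/failover (e.g. from an init script).
for mnt in /data01a /data01b /data01c /data01d; do
    gfs_tool settune $mnt demote_secs 200   # demote unused glocks after 200s
    gfs_tool settune $mnt glock_purge 50    # trim 50% of unused glocks per scan
done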

I am still seeing some high load averages.  Here is an example of one
volume's GFS configuration.  I left statfs_fast off, as it would not apply
to one of my volumes for an unknown reason; not sure it would have helped
anyway.  I do, however, feel that reducing scand_secs helped a little:

[root@omadvnfs01a ~]# gfs_tool gettune /data01a
ilimit1 = 100
ilimit1_tries = 3
ilimit1_min = 1
ilimit2 = 500
ilimit2_tries = 10
ilimit2_min = 3
demote_secs = 200
incore_log_blocks = 1024
jindex_refresh_secs = 60
depend_secs = 60
scand_secs = 30
recoverd_secs = 60
logd_secs = 1
quotad_secs = 5
inoded_secs = 15
glock_purge = 50
quota_simul_sync = 64
quota_warn_period = 10
atime_quantum = 3600
quota_quantum = 60
quota_scale = 1.0000   (1, 1)
quota_enforce = 0
quota_account = 0
new_files_jdata = 0
new_files_directio = 0
max_atomic_write = 4194304
max_readahead = 262144
lockdump_size = 131072
stall_secs = 600
complain_secs = 10
reclaim_limit = 5000
entries_per_readdir = 32
prefetch_secs = 10
statfs_slots = 128
max_mhc = 10000
greedy_default = 100
greedy_quantum = 25
greedy_max = 250
rgrp_try_threshold = 100
statfs_fast = 0
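
One way to see whether the glock trimming is actually keeping the lock
count under control is to sample the per-mount counters (a sketch; the
interval and grep pattern are arbitrary):

# Watch the lock-related counters every 30s; if glock_purge/demote_secs are
# working, the "locks" count should level off between samples while
# gfs_scand CPU usage drops.
watch -n 30 'gfs_tool counters /data01a | egrep "locks|reclaim"'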

Given the high number of files (10-15 million per TB), would it be
smarter to use ext3?  My NFS cluster is set up as active/passive anyway,
so only one node will have access to the data at any one time.
Thoughts?  Opinions?

Anyone have an NFS cluster that is active/active?  Thoughts?  I am not
certain that nfsd and its locking are cluster friendly.  That said, I
don't expect my application (the NFS clients) to request the same file in
write (or even read) mode at the same time, so my locking concerns aren't
high.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
