Re: gfs tuning

On Mon, Jun 16, 2008 at 2:16 PM, Ross Vandegrift <ross@xxxxxxxxxxx> wrote:
> On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
>> I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
>> averages on the host that is serving these volumes out via NFS.  I
>> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
>> CPU%.  I believe the box is I/O bound, given the high awaits, but I
>> am trying to dig into the root cause.  99% of the activity on these
>> volumes is write.  The number of files is around 15 million per TB.
>> Given the high number of writes, increasing scand_secs will not
>> help.  Are there any other optimizations I can do?
>
> Are you running multi-threaded/multi-process writes to the same files
> on various nodes?
>
> During benchmarking and testing a cluster I recently built, I noticed
> a very large performance hit when performing multi-threaded I/O to
> overlapping areas of the filesystem.
>
> If you can randomize the order that different nodes are accessing
> the filesystem, you'll go a long way to reducing contention.  That
> will improve your performance.
>
> However, I suspect with NFS you won't have too much choice, since
> file access will be governed by client read/write patterns...
>
>
> --
> Ross Vandegrift
> ross@xxxxxxxxxxx

I won't have a choice, unfortunately.  Here is what I set so far:

# $i is the GFS mount point, set in a surrounding loop
gfs_tool settune $i statfs_slots 128
gfs_tool settune $i scand_secs 30
gfs_tool settune $i glock_purge 50
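
For anyone following along, the three settune lines imply a loop over
the mount points.  A minimal sketch (the /mnt/gfs* paths are
hypothetical; substitute your actual GFS mount points):

```shell
#!/bin/sh
# Apply the same GFS tunables to every GFS mount point.
# Note: settune values are not persistent across remounts, so this
# would typically run from an init script after the filesystems mount.
for i in /mnt/gfs1 /mnt/gfs2 /mnt/gfs3 /mnt/gfs4; do
    gfs_tool settune $i statfs_slots 128   # more slots for statfs data
    gfs_tool settune $i scand_secs 30      # scan glocks less often
    gfs_tool settune $i glock_purge 50     # purge % of unused glocks
done
```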

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
