Re: gfs2_tool settune demote_secs

Steve,
    Thanks for the prompt reply.  Like Kaerka, I'm running on large-memory servers, and decreasing demote_secs from 300 to 20 gave us a significant performance improvement, presumably because locks were being freed much more quickly.  It could certainly be that changing demote_secs was a workaround for a different bug that has since been fixed, which would be great.  I'll run some tests today and see how "rm -rf" on a large directory behaves.
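For reference, here is roughly what we had been doing on 5.3, and the quick test I plan to run today.  The mount point is just an example from our setup, and the values are the ones that happened to work for us, not recommendations:

  # old RHEL 5.3-era tuning; the settune/gettune interface has since been removed
  gfs2_tool gettune /gfs2/example | grep demote_secs   # default was 300
  gfs2_tool settune /gfs2/example demote_secs 20       # demote unused glocks much sooner

  # test on the stock 5.4 kernel: time an rm -rf over a large directory tree
  time rm -rf /gfs2/example/big-directory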

-- scooter

Kaerka Phillips wrote:
If glocks in gfs2 are purged based on memory pressure, what happens when it runs on a box with a large amount of memory, e.g. RHEL5.x with 128GB of RAM?  We ended up having to move away from GFS2 because of serious performance issues with exactly this setup, largely centered around commands like ls or rm against gfs2 filesystems with large directory structures and millions of files in them.

In our case, something as simple as copying a whole filesystem to another filesystem would push the load average to 50 or more and take 8+ hours to complete; the same copy on NFS or ext3 would usually take 1 to 2 hours.  A Netbackup run over 10 of those filesystems took ~40 hours to complete, so we were getting maybe one good backup per week, and in some cases the backup itself caused a cluster crash.

We are still using our GFS1 clusters, since their performance is very good as long as the network is stable, but we are phasing out most of our GFS2 clusters in favor of NFS.

On Fri, Oct 9, 2009 at 1:01 PM, Steven Whitehouse <swhiteho@xxxxxxxxxx> wrote:
Hi,

On Fri, 2009-10-09 at 09:55 -0700, Scooter Morris wrote:
> Hi all,
>     On RHEL 5.3/5.4(?) we had changed the value of demote_secs to
> significantly improve the performance of our gfs2 filesystem for certain
> tasks (notably rm -r on large directories).  I recently noticed that
> that tuning value is no longer available (part of a recent update, or
> part of 5.4?).  Can someone tell me what, if anything, replaces this?  Is
> it now a mount option, or is there some other way to tune this value?
>
> Thanks in advance.
>
> -- scooter
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

Nothing replaces it. The glocks are now disposed of automatically on an LRU
basis when there is enough memory pressure to require it. You can adjust the
amount of memory pressure applied to the VFS caches (which include the
glocks), but not to the glocks specifically.
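
For example, raising vm.vfs_cache_pressure makes the kernel reclaim the
dentry/inode caches (and with them the glocks) more aggressively. The right
value is workload dependent, so treat the 200 below as nothing more than an
illustration:

  sysctl vm.vfs_cache_pressure                  # the default is 100
  sysctl -w vm.vfs_cache_pressure=200           # reclaim VFS caches sooner under pressure
  echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf   # persist across reboots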

The idea is that it should be self-tuning now, adjusting itself to the
conditions prevailing at the time. If there are any remaining performance
issues, though, we'd like to know so that they can be addressed.

Steve.






--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
