Re: GFS2 and D state HTTPD processes

On Thu, Sep 24, 2009 at 11:30:38AM +0100, Gavin Conway wrote:
>         <gfs_controld plock_ownership="1" plock_rate_limit="0"/>

plock_ownership doesn't work correctly in 5.3; keep it set to 0 until you
upgrade to 5.4.  (And remember that this value can only be changed while the
cluster is offline.)

(plock_ownership also only improves plock performance for highly localized
workloads; otherwise it actually harms plock performance.)
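As a sketch, the gfs_controld line quoted above would become the following
(a cluster.conf fragment; the attribute names are as quoted in the original
message):

```xml
<!-- /etc/cluster/cluster.conf fragment (sketch): keep plock_ownership
     disabled on 5.3; change it only with the whole cluster offline -->
<gfs_controld plock_ownership="0" plock_rate_limit="0"/>
```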


> Can anyone give me some pointers on what we should be investigating for why
> this is failing? I've had our networks team crawl over the networking and
> that all seems fine. The MTU is set correctly on the MD3000i and on the
> individual nodes. I've also used the ping_pong tool and on a single file on
> the GFS cluster we can get around 90K locks on a file. If I run ping_pong
> against the same file from two nodes that then drops to around 70 locks per
> second. I don't think that's the issue though.

What leads you to believe your performance issues are related to POSIX locks?
That would be very surprising to me.  You can use strace to measure how long
system calls are taking; if fcntl(F_SETLK) is at the top, then it's worth
looking at plocks.
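For example (the PID here is hypothetical; substitute one of your D-state
httpd workers):

```shell
# Summarize time spent per syscall for a stuck httpd worker:
# -c aggregates counts and time, -f follows forked children,
# -p attaches to an existing process.
strace -c -f -p 12345

# Or print the duration of each individual fcntl call (-T shows
# per-call time, -e limits tracing to fcntl):
strace -T -e trace=fcntl -p 12345
```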

Dave

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
