Re: Cluster Project FAQ - GFS tuning section


 



On 1/23/07, David Teigland <teigland@xxxxxxxxxx> wrote:
On Tue, Jan 23, 2007 at 08:39:32AM -0500, Wendell Dingus wrote:
> I don't know where that breaking point is, but I believe _we've_ stepped
> over it.

The number of files in the fs is a non-issue; usage/access patterns are
almost always the issue.

> 4-node RHEL3 and GFS 6.0 cluster with two 2TB filesystems (GULM, no
> LVM) versus
> 3-node RHEL4 (x86_64) and GFS 6.1 cluster with one 8TB+ filesystem
> (DLM, LVM, and much faster hardware/disks)
>
> This is a migration from the former to the latter, so quantity/size of
> files/dirs is mostly identical. Files being transferred from customer
> sites to the old servers never cause more than about 20% CPU load, and
> that usually falls quickly to 1% or less after the initial transfer
> begins. The new servers run to 100%, where they usually remain until the
> transfer completes. The current thinking as to the cause is the same
> issue being discussed here.

This is strange; are you mounting with noatime?  Also, try setting this on
each node before it mounts gfs:

echo "0" > /proc/cluster/lock_dlm/drop_count
What does this do?



Dave
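
(For reference, a minimal sketch of what Dave's suggestion amounts to on
each node, run before the gfs mount. The device and mount point below are
made-up examples, and the comment on drop_count reflects my understanding
of the knob rather than anything stated above:)

    # 0 reportedly disables the threshold at which lock_dlm asks GFS
    # to start dropping cached locks, so GFS keeps its lock cache warm.
    echo "0" > /proc/cluster/lock_dlm/drop_count
    # Hypothetical device/mount point; noatime is the option Dave asks about.
    mount -t gfs -o noatime /dev/vg_gfs/lv_data /mnt/gfs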




--
Jon

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

