Re: gfs2 tuning

Hi,

On Tue, 2010-12-07 at 19:03 +0000, yvette hirth wrote:
> hi all,
> 
> we've now defined three nodes with two more being added soon, and the 
> GFS2 filesystems are shared between all nodes.
> 
> and, of course, i have questions.  8^O
> 
> the servers are HP DL380 G6's.  initially i used ipmi_lan as the fence 
> manager, with limited success; now i'm using ILO as the fence manager, 
> and at boot, fenced takes forever (well, 5 min or so, which in IT time 
> is forever) to start.  is this normal?  the ilo2 connections are all on 
> a separate unmanaged dell 2624 switch, which has only the three ILO2 
> node connections, and nothing else.
> 
> next, we've added SANbox2 as a backup fencing agent, and the fibre 
> switch is an HP 8/20q (QLogic).  i'm not sure if the SANbox2 support is 
> usable on the 8/20q.  anyone have any experience with this?  if this is 
> supported, wouldn't it be faster to fence/unfence than ip-based fencing?
> 
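For what it's worth, an iLO primary method with fence_sanbox2 as a
backup is normally wired up in cluster.conf along the lines of the
sketch below; whether fence_sanbox2 actually drives the 8/20q is
something the agent's man page or a dry run will have to answer. All
the names, addresses and credentials are placeholders, and the exact
device attributes vary between releases, so check fence_ilo(8) and
fence_sanbox2(8) on your nodes before copying anything:

  <clusternode name="node1" nodeid="1">
    <fence>
      <!-- primary: power fencing via the node's iLO2 -->
      <method name="1">
        <device name="node1-ilo"/>
      </method>
      <!-- backup: fabric fencing, disabling the node's FC switch port -->
      <method name="2">
        <device name="sanswitch" port="5"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <fencedevice name="node1-ilo" agent="fence_ilo"
                 ipaddr="node1-ilo.example.com" login="fence" passwd="secret"/>
    <fencedevice name="sanswitch" agent="fence_sanbox2"
                 ipaddr="fcswitch.example.com" login="admin" passwd="secret"/>
  </fencedevices>

Fabric fencing avoids waiting for a power cycle, so it can complete
more quickly, but it leaves the node running, which is why it is
usually paired with power fencing rather than substituted for it. It
is worth timing both agents by hand before relying on either.
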
> we've got ping_pong downloaded and tested the cluster.  we're getting 
> about 2500-3000 locks/sec when ping_pong runs on one node; on two, the 
> locks/sec drops a bit; and the most we've seen with ping_pong running 
> on all three nodes is ~1800 locks/sec.  googling has 
> produced claims of 200k-300k locks/sec when running ping_pong on one node...
> 
Don't worry too much about the performance of this test. It probably
isn't that important for most real applications, particularly since you
seem to be using larger files. The total time is likely to be dominated
by the actual data operation on the file, rather than fcntl locking
overhead.
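
For what it's worth, all ping_pong really does is bounce byte-range
fcntl() locks around a small shared file as fast as it can. A minimal
sketch of that kind of loop (not the real ping_pong source, just an
illustration of the fcntl traffic it generates) looks something like:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Repeatedly take and drop a blocking write lock on one byte of the
   * shared file; run the same binary on each node against the same
   * GFS2 file to generate cross-node plock traffic. */
  int main(int argc, char **argv)
  {
      if (argc < 2) {
          fprintf(stderr, "usage: %s <file on gfs2>\n", argv[0]);
          return 1;
      }
      int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
      if (fd < 0) {
          perror("open");
          return 1;
      }
      struct flock fl = { .l_whence = SEEK_SET, .l_start = 0, .l_len = 1 };
      for (unsigned long i = 0; ; i++) {
          fl.l_type = F_WRLCK;
          if (fcntl(fd, F_SETLKW, &fl) < 0) {    /* take the lock */
              perror("lock");
              return 1;
          }
          fl.l_type = F_UNLCK;
          if (fcntl(fd, F_SETLKW, &fl) < 0) {    /* and drop it again */
              perror("unlock");
              return 1;
          }
          if (i && i % 100000 == 0)
              printf("%lu lock/unlock cycles\n", i);
      }
  }

Each cycle is a round trip through the cluster's plock machinery, so
the resulting number measures fcntl overhead only; it says very little
about streaming through a handful of multi-gigabyte files.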

> most of the GFS2 filesystems (600-6000 resource groups) store a 
> relatively small number of very large (2GB+) files.  the extremes among 
> the GFS2 filesystems are:  86 files comprising 800GB, to ~98k files 
> comprising 256GB.  we've googled "gfs2 tuning" but don't seem to be 
> coming up with anything specific, and rather than "experiment" - which 
> on GFS2 filesystems can take "a while" - i thought i'd ask, "have we 
> done something wrong?"
> 
Normally, performance issues tend to relate to the way the workload is
distributed across the nodes and the I/O pattern which arises; that can
result in a single resource becoming a bottleneck. The locking is done
on a per-inode basis, so directories can sometimes become the source of
contention if there are lots of creates/deletes in the same directory
from multiple nodes within a relatively short period.
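
If you do suspect that kind of contention, GFS2 exports its glock state
via debugfs, which is usually quicker than experimenting with mount
options. Very roughly (the directory name is <clustername>:<fsname>, so
adjust the paths to suit):

  # make sure debugfs is mounted
  mount -t debugfs none /sys/kernel/debug 2>/dev/null

  # one glocks file per mounted GFS2 filesystem; "G:" lines are glocks
  # and the indented "H:" lines are holders/waiters.  A glock which
  # keeps showing a queue of waiters while the real workload runs is a
  # likely contention point (n:2/<hex> identifies an inode glock by its
  # disk address).
  less /sys/kernel/debug/gfs2/*/glocks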

> finally, how do the cluster.conf resource definitions interact with 
> GFS2?  is it only for "cluster operation"; i.e., only when fencing / 
> unfencing?  we specified "noatime,noquota,data=writeback" on all GFS2 
> filesystems (journals = 5).  is this causing our lock rate to fall?  and 
> even tho we've changed the resource definition in cluster.conf and set 
> the same parms on /etc/fstab, when mounts are displayed, we do not see 
> "noquota" anywhere...
> 
> thanks in advance for any info y'all can provide us!
> 
> yvette
> 
You might find that the default data=ordered is faster than writeback,
depending on the workload. There shouldn't be anything in cluster.conf
which is likely to affect the filesystem's performance beyond the limit
on fcntl locks, which you must have already set correctly in order to
get the fcntl locking rates that you mention above.
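
As a rough illustration only (the device name and options are just
placeholders, not a recommendation): an fstab entry using the default
ordered data mode would look something like

  # /etc/fstab - data=ordered is the default, so no data= option needed
  /dev/clustervg/data   /data   gfs2   noatime   0 0

and the fcntl (plock) limit referred to above is the plock_rate_limit
setting, e.g.

  <gfs_controld plock_rate_limit="0"/>

(with the cluster2 stack the plock options live on <gfs_controld>; on
cluster3 they moved to the <dlm> section; 0 means no rate limit - see
gfs_controld(8) / dlm_controld(8) for your release).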

Steve.



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

