gfs2 tuning

hi all,

we've now defined a three-node cluster, with two more nodes being added soon, and the GFS2 filesystems are shared among all nodes.

and, of course, i have questions.  8^O

the servers are HP DL380 G6s. initially i used fence_ipmilan as the fence agent, with limited success; now i'm using fence_ilo, and at boot, fenced takes forever (well, 5 minutes or so, which in IT time is forever) to start. is this normal? the iLO2 connections are all on a separate unmanaged Dell 2624 switch, which carries only the three iLO2 node connections and nothing else.
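for context, here's roughly what the fencing config looks like in our cluster.conf (sanitized and trimmed down to one node; the names, address, and credentials below are placeholders, not our real values):

    <clusternode name="node1" nodeid="1">
        <fence>
            <method name="1">
                <device name="ilo-node1"/>
            </method>
        </fence>
    </clusternode>

    <fencedevices>
        <fencedevice agent="fence_ilo" name="ilo-node1"
                     ipaddr="10.0.0.101" login="fence" passwd="XXXXXXXX"/>
    </fencedevices>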

next, we've added fence_sanbox2 as a backup fencing agent; the fibre switch is an HP 8/20q (QLogic). i'm not sure whether the SANbox2 support actually works on the 8/20q. anyone have any experience with this? if it is supported, wouldn't it be faster to fence/unfence at the fabric than via iLO/IPMI?
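the backup method is wired up something like this (again a sketch, not our exact config; the switch address, login, and port number are placeholders):

    <method name="2">
        <device name="sanswitch" port="5"/>
    </method>

    <fencedevices>
        <fencedevice agent="fence_sanbox2" name="sanswitch"
                     ipaddr="10.0.0.50" login="admin" passwd="XXXXXXXX"/>
    </fencedevices>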

we've downloaded ping_pong and tested the cluster with it. we're getting about 2500-3000 locks/sec with ping_pong running on one node; on two nodes, the locks/sec drops a bit; and with it running on all three nodes, the most we've seen is ~1800 locks/sec. googling has turned up claims of 200k-300k locks/sec for ping_pong running on a single node...
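in case we're holding it wrong, this is how we're invoking it (the file path is just an example; the last argument is number of nodes + 1, per the ping_pong docs):

    # run simultaneously on each node, against the same file
    # on the shared GFS2 filesystem
    ./ping_pong /gfs/shared/test.dat 4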

most of the GFS2 filesystems (600-6000 resource groups) store a relatively small number of very large (2GB+) files. the extremes among the GFS2 filesystems are 86 files totalling 800GB at one end, and ~98k files totalling 256GB at the other. we've googled "gfs2 tuning" but haven't come up with anything specific, and rather than "experiment" - which on GFS2 filesystems can take "a while" - i thought i'd ask: "have we done something wrong?"
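one knob we know about but haven't touched is the resource group size chosen at mkfs time (-r is in megabytes, 32 through 2048 per the man page); if larger resource groups would help with big files, a hypothetical rebuild would look something like this (device, cluster, and filesystem names are placeholders):

    mkfs.gfs2 -p lock_dlm -t ourcluster:bigfs -j 5 -r 2048 /dev/vg/lv_big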

finally, how do the cluster.conf resource definitions interact with GFS2? are they only for "cluster operation", i.e., only when fencing/unfencing? we specified "noatime,noquota,data=writeback" on all GFS2 filesystems (journals = 5). is this causing our lock rate to fall? and even though we've changed the resource definition in cluster.conf and set the same parameters in /etc/fstab, when mounts are displayed, we do not see "noquota" anywhere...
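concretely, these are the two places we've set the options (sanitized; device, mountpoint, and fsid below are placeholders):

in cluster.conf:

    <clusterfs name="bigfs" mountpoint="/gfs/big" device="/dev/vg/lv_big"
               fstype="gfs2" force_unmount="1" fsid="12345"
               options="noatime,noquota,data=writeback"/>

and in /etc/fstab:

    /dev/vg/lv_big  /gfs/big  gfs2  noatime,noquota,data=writeback  0 0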

thanks in advance for any info y'all can provide us!

yvette


