Re: GFS performance test

Hi Ray,
thanks for your answer.
We are using GFS1 on a Red Hat 5.4 cluster. The GFS filesystem is mounted on /mnt/gfs, and we created it with the "-p lock_dlm" parameter. In any case, look at this output:

[root@parmenides ~]# gfs_tool getsb /mnt/gfs
 .........................
  no_addr = 26
  sb_lockproto = lock_dlm
  sb_locktable = hr-pm:gfs01
  no_formal_ino = 24
  no_addr = 24
  ...............

For your information, my cluster.conf file is:

-------------------------------------------------------------------------------------------------------------------------------
<?xml version="1.0"?>
<cluster config_version="4" name="hr-pm">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="zipi" nodeid="1" votes="1">
<fence>
<method name="1">
<device modulename="" name="DRAC_heraclito"/>
</method>
</fence>
</clusternode>
<clusternode name="zape" nodeid="2" votes="1">
<fence>
<method name="1">
<device modulename="" name="DRAC_parmenides"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_drac" ipaddr="10.0.0.207" login="root" name="DRAC_heraclito" passwd="*****"/>
<fencedevice agent="fence_drac" ipaddr="10.0.0.208" login="root" name="DRAC_parmenides" passwd="******"/>
<fencedevice agent="fence_ipmilan" auth="md5" ipaddr="10.0.0.207" login="root" name="IPMILan_heraclito" passwd="*"/>
<fencedevice agent="fence_ipmilan" auth="md5" ipaddr="10.0.0.208" login="root" name="IPMILan_parmenides" passwd="*"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
-------------------------------------------------------------------------------------------------------------
The shared disk is a LUN on a Fibre Channel SAN.
The most surprising thing is that we have another similar cluster, and there we always get 98 locks/sec, whether we run ping_pong on one node or on both. I'm lost! What is happening?

Frank


Date: Wed, 2 Dec 2009 06:58:43 -0800
From: Ray Van Dolson <rvandolson@xxxxxxxx>
Subject: Re: GFS performance test
To: linux-cluster@xxxxxxxxxx
Message-ID: <20091202145842.GA16292@xxxxxxxx>
Content-Type: text/plain; charset=us-ascii

On Wed, Dec 02, 2009 at 03:53:46AM -0800, frank wrote:
> Hi,
> after seeing some posts related to GFS performance, we have decided to
> test our two-node GFS filesystem with the ping_pong program.
> We are worried about the results.
>
> Running the program on only one node, without parameters, we get between
> 800000 locks/sec and 900000 locks/sec.
> Running the program on both nodes over the same file on the shared
> filesystem, the lock rate did not drop and it is the same on both nodes!
> What does this mean? Is there any problem with the locks?
>
> Just for your info, the GFS filesystem is /mnt/gfs and what I run on both
> nodes is:
>
> ./ping_pong /mnt/gfs/tmp/test.dat 3
>
> Thanks for your help.
Wow, that doesn't sound right at all (or at least it's not consistent
with the results I've gotten :)

Can you provide details of your setup, and perhaps your cluster.conf
file?  Have you done any other GFS tuning?  Are we talking GFS1 or
GFS2?

I get in the 3000-5000 locks/sec range with my GFS2 filesystem (mounted
with noatime,nodiratime, and with the lock rate limit reduced from 100
to 0 in my cluster.conf file).
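For reference, the cluster.conf knob Ray is presumably referring to is gfs_controld's plock_rate_limit, which caps POSIX lock (plock) operations per second and defaults to 100; setting it to 0 removes the cap. A sketch of the relevant line, to be placed inside the <cluster> element:

```xml
<!-- Hypothetical fragment: removes gfs_controld's POSIX lock (plock)
     rate limit; the default is 100 plocks/sec. -->
<gfs_controld plock_rate_limit="0"/>
```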

The numbers you report are what I'd expect to see on a local filesystem.

Ray



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
