Re: GFS2 locking issues

Actually...I added both

<dlm plock_ownership="1" plock_rate_limit="0"/>
<gfs_controld plock_rate_limit="0"/>

to cluster.conf and rebooted every node. Now running ping_pong gives me
roughly 3500 locks/sec when running it on more than one node (running it
on just one node gives me around 5000 locks/sec), which according to the
Samba wiki is about in line with what it should be.
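
For anyone who finds this later, here is a rough sketch of where those
elements sit in cluster.conf; the cluster name, config_version, and node
entries below are placeholders for whatever your existing config already has:

    <?xml version="1.0"?>
    <cluster name="example" config_version="2">
        <!-- plock_ownership="1" lets a node cache plocks it owns;
             plock_rate_limit="0" removes the default 100 plocks/sec cap -->
        <dlm plock_ownership="1" plock_rate_limit="0"/>
        <gfs_controld plock_rate_limit="0"/>
        <clusternodes>
            <!-- existing clusternode definitions stay as they are -->
        </clusternodes>
        <!-- fencedevices, rm, etc. unchanged -->
    </cluster>

If you edit cluster.conf by hand, remember to bump config_version as usual.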

Thanks,

--Dennis

Quoting "Dennis B. Hopp" <dhopp@xxxxxxxxxx>:

That didn't work, but I changed it to:

        <dlm plock_ownership="1" plock_rate_limit="0"/>

And I'm getting different results, but still not good performance.
Running ping_pong on one node:

[root@sc2 ~]# ./ping_pong /mnt/backup/test.dat 4
    5870 locks/sec

I think that should be much higher, but as soon as I start it on
another node it drops to 97 locks/sec.

Any other ideas?

--Dennis

Quoting Abhijith Das <adas@xxxxxxxxxx>:

Dennis,

You seem to be running with plock_rate_limit=100, which limits the number
of plocks/sec to 100 to avoid network flooding from plock traffic.

Setting this as <gfs_controld plock_rate_limit="0"/> in cluster.conf
should give you better plock performance.

Hope this helps,
Thanks!
--Abhi

Dennis B. Hopp wrote:
We have a three-node NFS/Samba cluster that is getting very poor
performance on GFS2.  We have a Samba share acting as a disk-to-disk
backup target for Backup Exec, and during the backup process the load
on the server goes through the roof until the network requests time
out and the backup job fails.

I downloaded the ping_pong utility and ran it, and I seem to be getting
terrible performance:

[root@sc2 ~]# ./ping_pong /mnt/backup/test.dat 4
      97 locks/sec

The results are the same on all three nodes.
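
For reference, building and running it is roughly this (ping_pong.c comes
from the Samba/ctdb sources; the Samba wiki suggests a lock count of the
number of nodes plus one, which is why I use 4 on a three-node cluster):

# build the test program
gcc -o ping_pong ping_pong.c

# start this on every node at the same time, against the same
# file on the GFS2 mount
./ping_pong /mnt/backup/test.dat 4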

I can't seem to figure out why this is so bad. Some additional information:

[root@sc2 ~]# gfs2_tool gettune /mnt/backup
new_files_directio = 0
new_files_jdata = 0
quota_scale = 1.0000   (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
demote_secs = 600

[root@sc2 ~]# gfs2_tool getargs /mnt/backup
data 2
suiddir 0
quota 0
posix_acl 1
num_glockd 1
upgrade 0
debug 0
localflocks 0
localcaching 0
ignore_local_fs 0
spectator 0
hostdata jid=0:id=262146:first=0
locktable
lockproto lock_dlm

[root@sc2 ~]# rpm -qa | grep gfs
kmod-gfs-0.1.31-3.el5
gfs-utils-0.1.18-1.el5
gfs2-utils-0.1.53-1.el5_3.3

[root@sc2 ~]# uname -r
2.6.18-128.1.10.el5

Thanks,

--Dennis


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
