Hi,

On Mon, 2009-10-12 at 13:07 +0300, Kaloyan Kovachev wrote:
> Hi,
>
> On Fri, 09 Oct 2009 18:01:36 +0100, Steven Whitehouse wrote
> > Hi,
> >
> > The idea is that it should be self-tuning now, adjusting itself to
> > the conditions prevailing at the time. If there are any remaining
> > performance issues though, we'd like to know so that they can be
> > addressed,
> >
>
> I have noticed a possible performance issue while experimenting with
> ping_pong, but the test represents normal operation.
>
The ping_pong test uses fcntl() locks. These go through dlm_controld
and are independent of the filesystem, whether you are using GFS/GFS2
(or maybe even OCFS2 now as well). So these are not the same as the
glocks that the last message was referring to.

> The setup:
> 3 node cluster (Node1, Node2 and Node3) with a shared GFS2 partition
>
> 1. Starting ping_pong on one of the nodes (Node1), I get tens of
> thousands (30k+) of locks per second
>
> 2. Stopping it after a while and immediately starting (moving) it to
> Node2 (just like a shared service resource after failover), the
> number of locks goes below 2000. Probably this is because the locks
> are held on Node1, but even after hours it does not go back to 30k+
> locks per second and stays at <2000
>
> 3. Stopping ping_pong on Node2 and starting it again on the same or
> another node (Node1 or Node3), after 10-20 min there are again 30k+
> locks per second
>
> Not sure if demote_secs would help, because I can't test, but it
> would be great to have the locks released from Node1 to Node2 after
> some time at step 2, not step 3.
>
What options have you got in your cluster.conf relating to plocks?
What kernel are you using?

Steve.
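
For reference, the plock options that usually matter for this kind of
test are plock_rate_limit and plock_ownership. A minimal cluster.conf
fragment might look like the sketch below; the element name depends on
the cluster generation (gfs_controld on cluster2, dlm with cluster3's
dlm_controld), and the values shown are illustrative, not a tuned
recommendation:

    <!-- goes inside the top-level <cluster> element -->
    <!-- cluster2: -->
    <gfs_controld plock_rate_limit="0" plock_ownership="1"/>
    <!-- cluster3 equivalent: -->
    <dlm plock_rate_limit="0" plock_ownership="1"/>

plock_rate_limit="0" removes the per-second cap on plock operations
(older releases shipped with a low default, around 100/s), and
plock_ownership="1" lets a node cache ownership of plock resources so
repeated locks from one node stay local, which is what makes a
single-node ping_pong run so much faster than a contended one. The
related drop_resources_time/count/age options control how long cached
ownership is retained; check the gfs_controld(8)/dlm_controld(8) man
pages for the defaults on your release, as they are the knobs closest
to the step 2 behaviour described above.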
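
The kind of loop ping_pong runs boils down to repeated fcntl() byte
range locks. The real ping_pong (from the ctdb source tree) bounces
locks between processes on different nodes; the sketch below is a
simplified single-process version, assuming a file on the GFS2 mount
(the path is illustrative), that just takes and drops a one-byte write
lock and prints the rate, which is roughly the plock operations/second
the cluster will sustain:

    /* Simplified ping_pong-style fcntl() lock-rate test (not the real
     * ctdb ping_pong; the default path below is illustrative). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>

    /* Take (F_WRLCK) or drop (F_UNLCK) a one-byte lock at offset 'off'. */
    static int lock_byte(int fd, off_t off, short type)
    {
            struct flock fl = {
                    .l_type   = type,
                    .l_whence = SEEK_SET,
                    .l_start  = off,
                    .l_len    = 1,
            };
            return fcntl(fd, F_SETLKW, &fl);  /* blocking request */
    }

    int main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/mnt/gfs2/ping.lck";
            int fd = open(path, O_CREAT | O_RDWR, 0644);
            if (fd < 0) { perror("open"); return 1; }

            unsigned long count = 0;
            time_t start = time(NULL);

            for (;;) {
                    if (lock_byte(fd, 0, F_WRLCK) < 0) { perror("lock"); return 1; }
                    if (lock_byte(fd, 0, F_UNLCK) < 0) { perror("unlock"); return 1; }
                    count++;
                    time_t now = time(NULL);
                    if (now != start) {       /* report once per second */
                            printf("%lu locks/sec\n", count);
                            count = 0;
                            start = now;
                    }
            }
    }

Running one copy per node against the same file gets closer to what
ping_pong actually measures, since each F_SETLKW then has to be
arbitrated through dlm_controld rather than satisfied locally.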