On Thu, Sep 13, 2007 at 09:14:29AM +0200, Marc Grimme wrote:
> Hi Dave,
> you might also want to have a look at:
> http://www.opensharedroot.org/Members/marc/blog/blog-on-dlm/red-hat-dlm-__find_lock_by_id/oprofile-analysis
> Perhaps you have something to add there as well.

Thanks, this is great information.  In this case it's the size of the
"lkbtbl" hash table that you could try increasing.  By default it's
1024; I might start by trying 2048 and see what changes.

This is, of course, all driven by the number of locks that gfs is
using, and it would be interesting to see what that number is.  Over
the last several years, since we originally picked the sizes of these
hash tables, the size of gfs file systems and the amount of memory on
machines have grown quite a lot (the VA Linux machines I was using when
first writing this code had 256 MB of memory), so the number of locks
in a gfs cluster has grown, too.  It may be time to increase the
default sizes of these hash tables.

Another problem is the way the dlm creates and uses lock ids; this
isn't quite as simple to solve.  Because the lock ids are only 32 bits,
the counters easily wrap around, which means that whenever a new lock
id is chosen, we have to search all existing lock ids to prevent
duplicates (these searches are per hash chain).

There may be a smarter technique we could use to do this more
efficiently.  One idea I've had is to keep a list of deleted lkb's and
recycle them -- then we wouldn't often need to search for a new lock id
once the system has been running for a while.  A tree structure instead
of a hash table may also be helpful.

Dave

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
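
[Editor's note: to make the wraparound problem Dave describes concrete,
here is a minimal standalone C sketch of that kind of allocation loop.
It is a simplification, not the actual fs/dlm code: the table size, the
split of the 32-bit id into a bucket index plus a per-bucket counter,
and all names here are illustrative.]

#include <stdint.h>

#define LKBTBL_SIZE 1024	/* default size; must be a power of two */

struct lkb {
	uint32_t id;		/* (bucket << 16) | per-bucket counter */
	struct lkb *next;	/* hash chain link */
};

struct bucket {
	uint16_t counter;	/* wraps after 64K allocations */
	struct lkb *head;	/* chain of live lkb's in this bucket */
};

static struct bucket lkbtbl[LKBTBL_SIZE];

/*
 * Pick a new lock id within one bucket.  Because the counter wraps,
 * every candidate id must be checked against the whole hash chain
 * before it can be handed out; this linear search is the cost being
 * discussed above.
 */
static uint32_t new_lkid(uint16_t bucket)
{
	struct bucket *b;
	struct lkb *t;
	uint32_t lkid = 0;

	bucket &= LKBTBL_SIZE - 1;
	b = &lkbtbl[bucket];

	while (lkid == 0) {
		lkid = ((uint32_t)bucket << 16) | b->counter++;
		for (t = b->head; t; t = t->next) {
			if (t->id == lkid) {	/* in use: try the next */
				lkid = 0;
				break;
			}
		}
	}
	return lkid;
}

[Each allocation walks one chain, so the average search cost scales
with (number of locks / table size); going from 1024 to 2048 buckets
roughly halves the expected chain length.  If your kernel's dlm exposes
its configfs attributes, the lkbtbl_size entry under
/sys/kernel/config/dlm/cluster/ should let you change the default
before the lockspace is created, but verify that on your version.]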
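
[Editor's note: one way to read the recycling idea, again as a sketch
under assumed names rather than an actual patch: keep each bucket's
deleted lkb's on a free list and hand their ids straight back out, so
the duplicate search only runs while the free list is empty.]

#include <stdint.h>
#include <stdlib.h>

struct lkb {
	uint32_t id;
	struct lkb *next;
};

struct bucket {
	uint16_t counter;
	struct lkb *head;	/* live lkb's in this hash chain */
	struct lkb *free;	/* deleted lkb's kept for id reuse */
};

/*
 * Fast path: pop a deleted lkb and reuse its id outright.  No chain
 * search is needed, because the id was unique when first issued and
 * has not been reissued since.  Slow path: counter plus chain search,
 * as in the previous sketch.
 */
static struct lkb *alloc_lkb(struct bucket *b, uint16_t bucket)
{
	struct lkb *lkb = b->free;

	if (lkb) {
		b->free = lkb->next;
	} else {
		lkb = calloc(1, sizeof(*lkb));
		while (lkb->id == 0) {
			uint32_t cand;
			struct lkb *t;

			cand = ((uint32_t)bucket << 16) | b->counter++;
			for (t = b->head; t && t->id != cand; t = t->next)
				;
			if (!t && cand != 0)
				lkb->id = cand;
		}
	}
	lkb->next = b->head;	/* link into the live chain */
	b->head = lkb;
	return lkb;
}

static void free_lkb(struct bucket *b, struct lkb *lkb)
{
	/* caller has already unlinked lkb from the b->head chain */
	lkb->next = b->free;	/* keep the lkb, and its id, for reuse */
	b->free = lkb;
}

[After a warm-up period the free list rarely empties, so most
allocations skip the search entirely; the trade-off is that freed lkb
memory stays pinned for reuse instead of being returned.]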