On Tue, 8 Apr 2008, Wendy Cheng wrote:

> The more memory you have, the more GFS locks (and their associated GFS
> file structures) will be cached on the node. This, in turn, makes both
> DLM and GFS lock queries take longer. The glock_purge tunable (on RHEL
> 4.6, not on RHEL 4.5) should be able to help, but its effect will be
> limited if you ping-pong locks quickly between different GFS nodes.
> Try playing around with this tunable (start with 20%) to see how it
> goes (but please reset gfs_scand and gfs_inoded back to their defaults
> while you are experimenting with glock_purge).
>
> So, assuming this is a build-compile cluster, implying that a large
> number of small files come and go, the tricks I can think of are:
>
> 1. glock_purge ~ 20%
> 2. glock_inode shorter than default (not longer)
> 3. persistent LVS sessions if at all possible

What is glock_inode? Does it, or something equivalent, exist in
cluster-2.01.00?

Isn't GFS_GL_HASH_SIZE too small for a large number of glocks? Being too
small results not only in long linked lists to walk, but glocks clashing
at the same bucket also block otherwise parallel operations. Wouldn't it
help to increase it from 8k to 65k? A toy model of the contention I have
in mind is below.
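To illustrate what I mean (this is a user-space model of how I understand
the per-bucket locking in glock.c, not the actual kernel code; I am
reading "8k" as a hash shift of 13, and "65k" as a shift of 16):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Model of the glock hash table.  1 << 13 = 8192 buckets ("8k");
 * bumping the shift to 16 would give 65536 ("65k"). */
#define GL_HASH_SHIFT 13
#define GL_HASH_SIZE  (1u << GL_HASH_SHIFT)
#define GL_HASH_MASK  (GL_HASH_SIZE - 1)

struct glock {                          /* stand-in for a cached lock */
        uint64_t name;                  /* e.g. inode number */
        struct glock *next;             /* chain within one bucket */
};

struct bucket {
        pthread_rwlock_t lock;          /* one rwlock per bucket */
        struct glock *chain;            /* glocks colliding here */
};

static struct bucket table[GL_HASH_SIZE];

static unsigned hash(uint64_t name)
{
        return (unsigned)(name * 0x9e3779b97f4a7c15ull >> 32) & GL_HASH_MASK;
}

/* Lookups for two *different* glocks that happen to hash to the same
 * bucket contend on the same rwlock, and any writer (insert or purge)
 * blocks all readers of that bucket.  Fewer buckets means longer
 * chains to walk *and* more of this contention. */
static struct glock *lookup(uint64_t name)
{
        struct bucket *b = &table[hash(name)];
        struct glock *gl;

        pthread_rwlock_rdlock(&b->lock);
        for (gl = b->chain; gl != NULL; gl = gl->next)
                if (gl->name == name)
                        break;
        pthread_rwlock_unlock(&b->lock);
        return gl;
}

int main(void)
{
        for (unsigned i = 0; i < GL_HASH_SIZE; i++)
                pthread_rwlock_init(&table[i].lock, NULL);

        /* With, say, 800,000 cached glocks the average chain is
         * 800000 / 8192 ~ 98 entries, all walked under the bucket
         * lock; at 65536 buckets it drops to ~12. */
        printf("buckets: %u\n", GL_HASH_SIZE);
        return lookup(42) != NULL;      /* empty table: not found */
}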
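As an aside, for anyone wanting to try the tuning Wendy describes: if I
remember the gfs_tool interface correctly, these are per-mount tunables,
along the lines of

  gfs_tool gettune /mnt/gfs                 # list current values
  gfs_tool settune /mnt/gfs glock_purge 20  # trim ~20% of unused glocks

(/mnt/gfs stands for your mount point), with the gfs_scand/gfs_inoded
intervals (scand_secs and inoded_secs, if memory serves) reset to their
defaults while experimenting, as Wendy says. Please correct me if I have
the names wrong.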
Best regards,
Jozsef
--
E-mail : kadlec@xxxxxxxxxxxx, kadlec@xxxxxxxxxxxxxxxxx
PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
Address: KFKI Research Institute for Particle and Nuclear Physics
         H-1525 Budapest 114, POB. 49, Hungary

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster