All,

I've been monitoring performance of my GFS cluster and noticed something
weird. I created 16 million directories with a script very similar to this
(not exactly the script I used, but you get the idea):

    #!/usr/bin/perl
    # create /mnt/gfs/00/00/00 through /mnt/gfs/ff/ff/ff (256^3 = ~16.7M dirs)
    for ($a = 0; $a < 256; $a++) {
        for ($b = 0; $b < 256; $b++) {
            for ($c = 0; $c < 256; $c++) {
                system(sprintf("mkdir -p /mnt/gfs/%02x/%02x/%02x", $a, $b, $c));
            }
        }
    }

Anyway, I was running chown on all the directories with a script like this:

    cd /mnt/gfs
    # list the top-level (hex-named) directories
    DIRS=$(ls -la | awk '{print $9}')
    for dir in $DIRS; do
        chown -R newuser "$dir"/*    # recurses over ~64K subdirectories
        echo "$dir"
    done

Each pass of the loop runs chown over about 64K directories and prints the
current parent directory. The script performs slower and slower as time goes
on. I ran gfs_tool -c counters /mnt/gfs to monitor locks, and the number of
locks is over 5.7 million and growing. The first few 64K-directory batches
completed chowning within 45 seconds; now each one takes 8-9 minutes, and the
time keeps increasing with every batch.

Is there a way to remove these locks?

/proc/cluster/lock_dlm/drop_count is 50000. Would setting this value to 0 help?

--
Jon
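
P.S. To make the question concrete, this is the kind of experiment I had in
mind, assuming drop_count can be written with echo the way other /proc
tunables can (I haven't verified that this particular file accepts writes):

    # current drop threshold
    cat /proc/cluster/lock_dlm/drop_count

    # snapshot the lock counters before changing anything
    gfs_tool counters /mnt/gfs > /tmp/counters.before

    # assumption: the tunable accepts writes via echo on this kernel
    echo 0 > /proc/cluster/lock_dlm/drop_count

    # chown one more 64K batch, then compare the counters
    gfs_tool counters /mnt/gfs > /tmp/counters.after
    diff /tmp/counters.before /tmp/counters.after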