Hi,

> However, it took ages to list the subdirectory on an
> absolutely idle cluster node. See below:
>
> # time ls -la | wc -l
> 31767
>
> real 3m5.249s
> user 0m0.628s
> sys 0m5.137s
>
> There are about 3 minutes spent somewhere. Does
> anyone have any clue what the system was waiting for?

Did you tune the glocks? I found that this is very important for GFS performance. I currently apply the following tunings:

gfs_tool settune /export/data/etp quota_account 0
gfs_tool settune /export/data/etp glock_purge 50
gfs_tool settune /export/data/etp demote_secs 200
gfs_tool settune /export/data/etp statfs_fast 1

Of course, only switch off quota if you don't need it. All these tunings have to be repeated after every mount, so do them in an init.d script that runs after the GFS mount, and of course do this on every node (a sketch of such a script is at the end of this mail).

Here is the link to the glock paper:

http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4

The glock tuning (the glock_purge and demote_secs parameters) definitely solved a problem we had here with the Tivoli Backup Client. Before, it ran for days and sometimes even gave up, and we observed heavy lock traffic. After changing the glock parameters the backup times went down dramatically; we can now run an incremental backup on a 4 TByte filesystem in under 4 hours. So give it a try.

There is some more tuning which, unfortunately, can only be done when the filesystem is created: the default number of Resource Groups is way too large for today's TByte filesystems (see the second sketch at the end of this mail).

Sincerely,
Klaus

--
Klaus Steinberger
Beschleunigerlaboratorium, Am Coulombwall 6, D-85748 Garching, Germany
Phone: (+49 89)289 14287
FAX:   (+49 89)289 14280
EMail: Klaus.Steinberger@xxxxxxxxxxxxxxxxxxxxxx
URL:   http://www.physik.uni-muenchen.de/~Klaus.Steinberger/
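Here is a minimal sketch of the kind of init.d script I mean; the mount point is the one from the commands above, so adjust it (and the parameter values) to your own filesystem. It is only an illustration, not the exact script we run here.

    #!/bin/sh
    # gfs-tune: re-apply GFS tunables after the filesystem is mounted.
    # Must be started after the GFS mount, on every node.

    GFSMNT=/export/data/etp    # adjust to your mount point

    case "$1" in
      start)
        # Disable quota accounting (only do this if you don't need quota).
        gfs_tool settune $GFSMNT quota_account 0
        # Trim unused glocks more aggressively and demote them sooner.
        gfs_tool settune $GFSMNT glock_purge 50
        gfs_tool settune $GFSMNT demote_secs 200
        # Fast statfs, so df does not have to walk the whole filesystem.
        gfs_tool settune $GFSMNT statfs_fast 1
        ;;
      stop)
        # Nothing to undo; the tunables are reset at unmount anyway.
        ;;
    esac
    exit 0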
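On the Resource Group remark: the number of RGs is fixed when the filesystem is made, so on a multi-TByte volume you want larger RGs to keep their count down. If I remember correctly, gfs_mkfs takes the RG size in megabytes via -r; the lock table, journal count and device name below are just placeholders, not a recommendation for your setup.

    # Create a GFS filesystem with 2048 MB Resource Groups instead of the
    # (much smaller) default, which keeps the RG count low on a large volume.
    gfs_mkfs -p lock_dlm -t mycluster:etp -j 4 -r 2048 /dev/vg_san/lv_etp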