I understand that a df, or an ls -l that has to stat every file, should be slow. However, I'm seeing ridiculously bad performance from just a plain ls, or from anything doing file globbing in directory reads. For example:

dylanv@iscsi0 /srv/rancid/logs $ time ls
[snip]

real    0m4.434s
user    0m0.000s
sys     0m0.000s
dylanv@iscsi0 /srv/rancid/logs $ ls | wc -l
412
dylanv@iscsi0 /srv/rancid/logs $

I consider 400 files a fairly small directory; 4.5 seconds to do an ls is pretty bad. It's worse in a large directory, although not proportionally to the number of files:

dylanv@iscsi0 /srv/flows/9/13 $ time ls
[snip]

real    0m31.693s
user    0m0.070s
sys     0m0.430s
dylanv@iscsi0 /srv/flows/9/13 $ ls | wc -l
14274
dylanv@iscsi0 /srv/flows/9/13 $

And an even larger directory:

dylanv@iscsi0 /srv/flows/6/7 $ time ls
[snip]

real    1m31.849s
user    0m0.520s
sys     0m1.450s
dylanv@iscsi0 /srv/flows/6/7 $ ls | wc -l
42462
dylanv@iscsi0 /srv/flows/6/7 $

Any ideas why this might be? It's clearly blocking on I/O somewhere, but I'm not sure where. Both kernel and userland are 32-bit for now. Caching doesn't really appear to make much difference: a subsequent ls on the large directory above takes 1m24.274s.

What concerns me even more is that I'm not even using DLM yet! This is with lock_nolock.

Any suggestions or ideas are much appreciated. =)

Thanks,
Dylan
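One way to narrow down where the time goes is to separate the directory read (getdents) from per-file stat calls and sorting. A rough sketch, assuming GNU coreutils ls and strace are available, and a kernel new enough to have /proc/sys/vm/drop_caches (root needed for that step); the paths are the ones from the timings above:

# Read the directory with no sorting and no per-file stat():
# GNU ls -f implies -a and -U, so it mostly just calls getdents().
time ls -f /srv/rancid/logs > /dev/null

# Compare with the plain invocation; if the shell aliases ls to
# "ls --color=auto", ls will also stat/lstat every entry on a tty.
time /bin/ls /srv/rancid/logs > /dev/null

# Summarize which syscalls the wall time is spent in
# (getdents vs. stat); add -T to see per-call latencies instead.
strace -c ls /srv/rancid/logs > /dev/null

# Rule caching in or out: drop the page/dentry/inode caches as root,
# then time a cold read followed immediately by a warm one.
echo 3 > /proc/sys/vm/drop_caches
time ls -f /srv/rancid/logs > /dev/null
time ls -f /srv/rancid/logs > /dev/null

If ls -f is just as slow as plain ls, the cost is in reading the directory itself rather than in stat calls or sorting, which would point at how GFS walks the on-disk directory rather than at anything ls is doing.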