Hi, Andrew:

Thank you very much for the help. Yes, your explanation really makes sense, and
I buy it, but I would like to discuss it a little further. The following was
part of my previous reply to Wendy; I have pasted it here for your convenience.

# stat abc/
  File: `abc/'
  Size: 8192        Blocks: 6024       IO Block: 4096   directory
Device: fc00h/64512d    Inode: 1065226    Links: 2
Access: (0770/drwxrwx---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2008-05-08 06:18:58.000000000 +0000
Modify: 2008-04-15 03:02:24.000000000 +0000
Change: 2008-04-15 07:11:52.000000000 +0000

# cd abc/
# time ls | wc -l
31764

real    0m44.797s
user    0m0.189s
sys     0m2.276s

From the test results, it seems the system really only spent 2.276 seconds in
the kernel doing the disk I/O, reading the directory, and counting the files.
I am not sure whether I missed anything, but I really cannot understand how the
system could spend roughly another 42 seconds processing the locks on this
single directory.

Any further comments?

Thanks again in advance,

Jas


--- "Andrew A. Neuschwander" <andrew@xxxxxxxxxxxx> wrote:

> I've looked at this problem a bit as well. My system is a 4Gb FC SAN with
> a bonded GigE DLM-dedicated network. Stat'ing 30,000 files in 3 minutes on
> GFS isn't unreasonable considering that it must get and release the GFS
> locks. In this scenario, you are averaging about 6ms per file stat. When
> we did our tests, all of our subsystems (FC, net, CPU, memory, disk) were
> near idle. I think the 6ms is simply the accumulated latency of all the
> subsystems involved. There is a lot of work happening in that short period
> of time.
>
> -A
> --
> Andrew A. Neuschwander, RHCE
> Linux Systems/Software Engineer
> College of Forestry and Conservation
> The University of Montana
> http://www.ntsg.umt.edu
> andrew@xxxxxxxxxxxx - 406.243.6310
>
>
> On Thu, May 8, 2008 4:29 pm, Bob Peterson wrote:
> > On Thu, 2008-05-08 at 14:27 -0700, Ja S wrote:
> >> Hi, All:
> >>
> >> I posted this question before but have not received any comments yet,
> >> so please allow me to post it again.
> >>
> >> I have a subdirectory containing more than 30,000 small files on SAN
> >> storage (GFS1+DLM, RAID10). No user application knows the subdirectory
> >> exists; in other words, nothing is accessing it.
> >>
> >> However, it took ages to list the subdirectory on a completely idle
> >> cluster node. See below:
> >>
> >> # time ls -la | wc -l
> >> 31767
> >>
> >> real    3m5.249s
> >> user    0m0.628s
> >> sys     0m5.137s
> >>
> >> About 3 minutes are being spent somewhere. Does anyone have any clue
> >> what the system was waiting for?
> >>
> >> Thanks for your time, and I hope to see your valuable comments soon.
> >>
> >> Jas
> >
> > Hi Jas,
> >
> > I believe the answer to your question is in the FAQ:
> >
> > http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_slow
> >
> > Regards,
> >
> > Bob Peterson
> > Red Hat Clustering & GFS
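
One rough way to see where the per-file time actually goes is to time each
lstat() call on its own, for example with `strace -T ls -la` (which prints the
wall-clock time spent in each syscall), or with a small standalone program
along the lines of the sketch below. This is only a sketch, assuming a plain
Linux/POSIX build environment; the directory path comes from the command line,
and the ~6ms it is meant to be compared against is just Andrew's estimate from
the discussion above, not a measured property of GFS.

/*
 * stat_latency.c -- rough per-file stat timing (a sketch, not a polished tool).
 * Walks one directory, calls lstat() on every entry, and reports the average
 * and worst-case wall-clock latency per call. On GFS each lstat() has to take
 * and drop a cluster lock, so the average here should land near the per-file
 * figure discussed above if lock latency is where the time goes.
 *
 * Build: gcc -O2 -o stat_latency stat_latency.c   (add -lrt on older glibc)
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    DIR *dir = opendir(argv[1]);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *de;
    struct stat st;
    char path[PATH_MAX];
    double total = 0.0, worst = 0.0;
    long count = 0;

    while ((de = readdir(dir)) != NULL) {
        if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
            continue;
        snprintf(path, sizeof(path), "%s/%s", argv[1], de->d_name);

        double t0 = now_ms();
        if (lstat(path, &st) != 0)   /* the call that needs a cluster lock on GFS */
            continue;
        double dt = now_ms() - t0;

        total += dt;
        if (dt > worst)
            worst = dt;
        count++;
    }
    closedir(dir);

    if (count)
        printf("%ld files: avg %.3f ms/stat, worst %.3f ms\n",
               count, total / count, worst);
    return 0;
}

If the average comes out fairly flat and close to the ~6ms estimate, the time
looks like plain per-lock latency; if a handful of entries dominate the total,
it points more towards contention on specific locks than towards steady
per-file overhead.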