That did the trick, thanks for the vote to make it "warm & fuzzy" for us! ;^)
On Tue, 2008-12-09 at 13:56 +0000, Steven Whitehouse wrote:
Hi,

On Tue, 2008-12-09 at 08:57 -0500, Robert Hurst wrote:
> A runaway application print job created this large number of
> identically small files yesterday, which in turn caused mucho
> problems for us trying to do a directory listing (ls) and removal
> (rm). Eventually, we had to fence the node that had incurred a system
> load of over 800(!), and upon reboot, I removed the files using
> something more GFS-friendly, i.e.:
>
>     find spool -name 'PRT_4*' -exec rm -fv {} \;
>
> Temporary print files are now being created and removed cleanly and
> efficiently, as before.
>
> But while that solved the cleanup, and even after umount/mount of the
> other two nodes' GFS spool directory, we are still experiencing
> latency when doing a simple ls in that directory -- not nearly as bad
> as before, but nonetheless a few seconds can go by just to print < 100
> entries. It is naturally concerning to us, because no other directory
> in that same GFS filesystem (or any other) is giving us any such
> latency issues.
>
> The spool directory entry itself grew to 64kb from its usual 2kb, to
> accommodate all those prior filenames ... is that something that is
> required to be re-mkdir'ed in order to avoid this GFS latency?
>
Yes, that's probably the best solution. GFS directories do not shrink
when entries are removed. There is a plan to fix this in GFS2 at some
stage, but at the moment it shares this trait, I'm afraid,

Steve.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
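For readers landing here later: the "re-mkdir" workaround Steve endorses can be sketched roughly as below. This is a minimal sketch, not from the thread; the `SPOOL` path is a hypothetical stand-in for the real GFS spool directory, and on a live cluster you would want printing quiesced on all nodes before doing the swap, since GFS holds cluster-wide locks on the directory.

```shell
#!/bin/sh
# Sketch: replace a bloated GFS directory with a fresh one, since GFS
# never shrinks a directory's on-disk entry after files are removed.
# SPOOL is a hypothetical path; point it at the real spool directory.
SPOOL=${SPOOL:-./spool}

mkdir "${SPOOL}.new"                                # fresh, compact directory entry
mv "$SPOOL"/* "${SPOOL}.new"/ 2>/dev/null || true   # carry over surviving files
mv "$SPOOL" "${SPOOL}.old"                          # park the bloated directory
mv "${SPOOL}.new" "$SPOOL"                          # swap the fresh one into place
rmdir "${SPOOL}.old"                                # empty now, so this succeeds
```

The two `mv` renames at the end keep the window in which `$SPOOL` does not exist as short as possible; anything still writing into the old directory during the move will need to be restarted afterwards.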