On Mon, Mar 12, 2012 at 02:54:20PM -0700, Michael Spiegle wrote:
> I believe we figured out what was going wrong:
> 1) You definitely need inode64 as a mount option
> 2) It seems that the AG metadata was being cached. We had to unmount
> the system and remount it to get updated counts on per-AG usage.

If you were looking at it with xfs_db, then yes, that is what will
happen. Use "echo 1 > /proc/sys/vm/drop_caches" to get the cached
metadata dropped.

> For the moment, I've written a script to copy/rename/delete our files
> so that they are gradually migrated to new AGs. FWIW, I noticed that
> this operation is significantly faster on an EL6.2-based kernel
> (2.6.32) compared to EL5 (2.6.18). I'm also using the 'delaylog'
> mount option, which probably helps a bit. I still have a few other
> curiosities about this particular issue, though:
>
> On Sun, Mar 11, 2012 at 5:56 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >
> > Entirely normal. Some operations require IO to complete (e.g.
> > reading directory blocks to find where to insert the new entry),
> > while adding the first file to a directory generally requires zero
> > IO. You're seeing the difference between cold cache and hot cache
> > performance.
>
> In this situation, any files written to the same directory exhibited
> this issue regardless of cache state. For example:
>
> Takes 300ms to complete:
> touch tmp/0
>
> Takes 600ms to complete:
> touch tmp/0 tmp/1
>
> Takes 1200ms to complete:
> touch tmp/0 tmp/1 tmp/2 tmp/3
>
> I would expect the directory to be cached after the first file is
> created. I don't understand why all subsequent writes were affected
> as well.

I don't have enough information to help you. I don't know what
hardware you are running on, how big the directory is, what the
layout of the directory is, etc. The "needs to do IO" was simply a
SWAG....

Cheers,

Dave.
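[The copy/rename/delete migration the poster describes could look
roughly like the sketch below. This is not the poster's actual script;
the function name and the ".tmp" suffix are illustrative. The idea is
that copying a file allocates fresh blocks (with inode64, new inodes
and extents are free to land in other AGs), and mv is an atomic
rename(2) within a single filesystem, so the data moves without a
window where the file is missing.]

```shell
# Hypothetical sketch of a per-directory copy/rename migration.
# Copy each regular file to a temp name (allocating new blocks,
# possibly in a different AG), then rename it over the original.
migrate_dir() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        cp -p -- "$f" "$f.tmp" &&   # new copy gets freshly allocated extents
        mv -- "$f.tmp" "$f"         # atomic replace on the same filesystem
    done
}
```

[Running it per directory (e.g. `migrate_dir /data/somedir`) keeps the
amount of doubled-up space bounded to one file at a time.]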
--
Dave Chinner
david@xxxxxxxxxxxxx