Re: [PATCH] libxfs: stop caching inode structures

On Wed, Feb 08, 2012 at 04:11:26PM +1100, Dave Chinner wrote:
> Ok, so what does it do to the speed of phase6 and phase7 of repair?
> How much CPU overhead does this add to every inode lookup done in
> these phases?

I'm away from my test system, but on the filesystems with tons of inodes
it actually slightly improved their speed, probably because the box was
swapping less, or because we spent less time taking misses in the inode
cache, given that the inode we cared about was almost never actually cached.

The reason the logical inode cache doesn't help here is that we only ever
touch inodes in phase 7 if we are going to modify them and write them out,
so we absolutely need the backing buffer anyway.
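
To make that concrete, here is a toy model of the phase 7 pattern.  None of
this is the real libxfs/repair code - the structures and helpers below are
made-up placeholders - it just shows that once we decide to fix an inode's
link count, the backing buffer has to be read and dirtied no matter what,
so a cached logical inode on top of it is pure overhead:

    /*
     * Toy model only: stand-ins for the on-disk inode and its backing
     * buffer, not the real libxfs structures.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define TOY_INODES_PER_BUF	64

    struct toy_dinode {
    	uint32_t	nlink;		/* on-disk link count */
    };

    struct toy_buf {
    	struct toy_dinode inodes[TOY_INODES_PER_BUF];
    	int		dirty;		/* needs writing back */
    };

    /* "Read" the buffer that holds @ino from a fake disk array. */
    static struct toy_buf *read_inode_buf(struct toy_buf *disk, uint64_t ino)
    {
    	return &disk[ino / TOY_INODES_PER_BUF];
    }

    /*
     * Phase-7-style fixup: compare the counted link count with what is
     * on disk and dirty the buffer only if they differ.  No separate
     * logical inode structure is needed at any point.
     */
    static void fix_link_count(struct toy_buf *disk, uint64_t ino,
    			   uint32_t counted)
    {
    	struct toy_buf *bp = read_inode_buf(disk, ino);
    	struct toy_dinode *dip = &bp->inodes[ino % TOY_INODES_PER_BUF];

    	if (dip->nlink != counted) {
    		dip->nlink = counted;
    		bp->dirty = 1;		/* would be written back to disk */
    	}
    }

    int main(void)
    {
    	static struct toy_buf disk[2];	/* two buffers worth of inodes */

    	disk[0].inodes[5].nlink = 3;
    	fix_link_count(disk, 5, 2);	/* repair counted only 2 links */
    	printf("nlink=%u dirty=%d\n",
    	       disk[0].inodes[5].nlink, disk[0].dirty);
    	return 0;
    }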

I can't see how phase 6 benefits from the logical inode cache either,
given its structure:

 - in phase 6a we iterate over each inode in the incore inode tree,
   and if it is a directory we check/rebuild it (a rough sketch of this
   walk follows the list)
 - phase 6b then updates the "." and ".." entries for directories
   that need it, which means we require the backing buffers.
 - phase 6c moves disconnected inodes to lost+found, which again needs
   the backing buffer to actually do anything.
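
For reference, here is a similarly made-up sketch of the phase 6a walk
mentioned above - placeholder types and names, not repair's actual incore
tree code - showing that the incore record only tells us which inodes are
directories, while the real checking still has to go through the backing
buffer:

    /* Toy stand-in for repair's incore inode record, not the real type. */
    #include <stdint.h>
    #include <stdio.h>

    #define TOY_INODES_PER_CHUNK	64

    struct toy_ino_rec {
    	uint64_t	startino;	/* first inode in the chunk */
    	uint64_t	isadir_mask;	/* one bit per inode in the chunk */
    	struct toy_ino_rec *next;
    };

    static int rec_isadir(struct toy_ino_rec *irec, int offset)
    {
    	return (int)((irec->isadir_mask >> offset) & 1);
    }

    /* Stand-in for "read the buffer and check/rebuild the directory". */
    static void check_directory(uint64_t ino)
    {
    	printf("would read the backing buffer for dir inode %llu\n",
    	       (unsigned long long)ino);
    }

    /* Walk the incore records; only directories get any further work. */
    static void phase6a_walk(struct toy_ino_rec *head)
    {
    	struct toy_ino_rec *irec;
    	int i;

    	for (irec = head; irec; irec = irec->next)
    		for (i = 0; i < TOY_INODES_PER_CHUNK; i++)
    			if (rec_isadir(irec, i))
    				check_directory(irec->startino + i);
    }

    int main(void)
    {
    	struct toy_ino_rec rec = {
    		.startino = 128, .isadir_mask = 0x5, .next = NULL,
    	};

    	phase6a_walk(&rec);	/* inodes 128 and 130 are directories */
    	return 0;
    }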

In short, there is no code in repair that benefits from logical inode
caching.


