Re: [PATCH] libxfs: stop caching inode structures

On Tue, Feb 07, 2012 at 01:22:28PM -0500, Christoph Hellwig wrote:
> Currently libxfs has a cache for xfs_inode structures.  Unlike in kernelspace,
> where the inode cache and the associated page cache for file data are used
> for all filesystem operations, the libxfs inode cache is only used in a few
> places:
> 
>  - the libxfs init code reads the root and realtime inodes when called from
>    xfs_db using a special flag, but these inode structures are never
>    referenced again
>  - mkfs uses namespace and bmap routines that take the xfs_inode structure
>    to create the root and realtime inodes, as well as any additional files
>    specified in the proto file
>  - the xfs_db attr code uses xfs_inode-based attr routines in the attrset
>    and attrget commands
>  - phase6 of xfs_repair uses xfs_inode-based routines for rebuilding
>    directories and moving files to the lost+found directory.
>  - phase7 of xfs_repair uses struct xfs_inode to modify the nlink count
>    of inodes.
> 
> So except in repair we never ever reuse a cached inode, and in repair we can
> easily read the information from the more compact cached buffers (or, even
> better, rewrite phase7 to operate on the raw on-disk inodes).  Given these
> facts, stop caching the inodes to reduce memory usage, especially in
> xfs_repair.

Ok, so what does it do to the speed of phase6 and phase7 of repair?
How much CPU overhead does this add to every inode lookup done in
these phases?

Indeed, there are cases where individual inode caching is much more
memory efficient than keeping the buffers around (think sparse inode
chunks on disk where only a few of the 64 inodes are actually
allocated). Tracking them in buffers (esp. if the inode size is
large) could use a lot more memory than just caching the active
inodes in a struct xfs_inode. Hence I'm not so sure this is a
clear-cut win for memory usage.
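
To put rough numbers on that (every figure below is an assumption for
illustration: 2k on-disk inodes, ~1k for an in-core struct xfs_inode,
and 4 of the chunk's 64 inodes allocated):

/* Back-of-the-envelope comparison for one sparse inode chunk.
 * Every constant here is an assumption, not a measured size. */
#include <stdio.h>

int main(void)
{
        int chunk_inodes = 64;          /* inodes per XFS inode chunk */
        int dinode_size = 2048;         /* assumed large on-disk inode size */
        int active = 4;                 /* assumed allocated inodes in chunk */
        int icore_size = 1024;          /* assumed struct xfs_inode footprint */

        printf("buffer caching: %6d bytes\n", chunk_inodes * dinode_size);
        printf("inode caching:  %6d bytes\n", active * icore_size);
        return 0;
}

That is 128k pinned in the buffer cache versus 4k for the in-core
inodes in this assumed case.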

Do you have any numbers for memory usage or performance?

The code changes are simple enough, so if it is actually a win, then
I see no problems with doing this. But that's what I need more
information about to be convinced....

> With this we probably could increase the memory available to the buffer
> cache in xfs_repair, but trying to do so I got a bit lost - the current
> formula seems too magic to me to make any sense, and simply doubling the
> buffer cache size causes us to run out of memory given that the data cached

IIRC, that's because the current formula sets the buffer cache size
to 75% of physical RAM on the machine. Doubling it will definitely
cause problems ;)
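
For illustration, that heuristic amounts to something like the sketch
below (illustrative only; the actual formula in xfs_repair differs in
detail):

/* Sketch of the sizing heuristic described above: cap the repair
 * buffer cache at ~75% of physical RAM.  Not the real libxfs code. */
#include <unistd.h>

static long buffer_cache_bytes(void)
{
        long pages = sysconf(_SC_PHYS_PAGES);
        long pagesize = sysconf(_SC_PAGE_SIZE);

        /* divide before multiplying to reduce overflow risk */
        return pages / 4 * 3 * pagesize;
}

Doubling that result asks for roughly 150% of physical RAM, which is
why it runs out of memory.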

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
