On Mon, Aug 23, 2021 at 07:58:41PM -0700, Darrick J. Wong wrote:
> On Tue, Aug 24, 2021 at 12:32:08PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> >
> > Yup, the VFS hoist broke it, and nobody noticed. Bulkstat workloads
> > make it clear that it doesn't work as it should.
>
> Is there an easy way to test the dontcache behavior so that we don't
> screw this up again?
>
> /me's brain is fried, will study this in more detail in the morning.

Perhaps. We can measure how many xfs inodes are cached via the
filesystem stats, e.g.

$ pminfo -t xfs.vnodes.active
xfs.vnodes.active [number of vnodes not on free lists]
$ sudo grep xfs_inode /proc/slabinfo | awk '{ print $2 }'
243440
$ pminfo -f xfs.vnodes.active
xfs.vnodes.active
    value 243166
$

And so we should be able to run a bulkstat from fstests on a
filesystem with a known number of files in it and measure the number
of cached inodes before/after...

I noticed this because I recently re-added the threaded per-ag
bulkstat scan to my scalability workload (via the xfs_io bulkstat
command) after I dropped it ages ago because per-ag threading of
fstests::src/bulkstat.c was really messy. It appears nobody has been
paying attention to bulkstat memory usage (and therefore I_DONTCACHE
behaviour) for some time....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
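
As a rough illustration of the before/after measurement described above,
here is a minimal sketch, not an actual fstests test: it assumes root,
bash, an XFS filesystem mounted at /mnt/scratch (a hypothetical path), a
hypothetical file count, and an xfsprogs new enough to have the xfs_io
bulkstat command.

#!/bin/bash
# Sketch: after a full-filesystem bulkstat, does the inode cache grow by
# roughly the number of files scanned, or does I_DONTCACHE keep it near
# its starting size?

MNT=/mnt/scratch        # assumed XFS mount point
NFILES=100000           # assumed number of files to create

count_cached() {
	# column 2 of /proc/slabinfo is the active xfs_inode object count
	grep xfs_inode /proc/slabinfo | awk '{ print $2 }'
}

# Create a known number of files, then push them out of the inode cache.
mkdir -p $MNT/bstat
for ((i = 0; i < NFILES; i++)); do
	touch $MNT/bstat/f.$i
done
sync
echo 2 > /proc/sys/vm/drop_caches

before=$(count_cached)
xfs_io -c bulkstat $MNT > /dev/null
after=$(count_cached)

# With working I_DONTCACHE the cache should not grow by ~NFILES here.
echo "cached xfs inodes: before=$before after=$after"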