Re: LTP test for fanotify evictable marks

On Mon 13-06-22 08:40:37, Amir Goldstein wrote:
> On Sun, Mar 20, 2022 at 2:54 PM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >
> > On Thu, Mar 17, 2022 at 5:14 PM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> > >
> > > On Thu, Mar 17, 2022 at 4:12 PM Jan Kara <jack@xxxxxxx> wrote:
> > > >
> > > > On Mon 07-03-22 17:57:36, Amir Goldstein wrote:
> > > > > Jan,
> > > > >
> > > > > Following RFC discussion [1], following are the volatile mark patches.
> > > > >
> > > > > Tested both manually and with this LTP test [2].
> > > > > I was struggling with this test for a while because drop caches
> > > > > did not get rid of the unpinned inode when the test was run with
> > > > > ext2 or ext4 on my test VM. With xfs, the test works fine for me,
> > > > > but it may not work for everyone.
> > > > >
> > > > > Perhaps you have a suggestion for a better way to test inode eviction.
> > > >
> > > > Drop caches does not evict dirty inodes. The inode is likely dirty because
> > > > you have chmodded it just before dropping caches. So I think calling sync
> > > > or syncfs before dropping caches should fix your problems with ext2 / ext4.
> > > > I suspect this has worked for XFS only because it does its own private
> > > > inode dirtiness tracking, so the inode stays clean from the VFS's point
> > > > of view.
> > >
> > > I did think of that and tried fsync, which did not help, but maybe
> > > I messed it up somehow.
> > >
> >
> > You were right. fsync did fix the test.
> 
> Hi Jan,
> 
> I was preparing to post the LTP test for FAN_MARK_EVICTABLE [1]
> and I realized the issue we discussed above was not really resolved.
> fsync() + drop_caches is not enough to guarantee reliable inode eviction.
> 
> It "kind of" works for ext2 and xfs, but not for ext4, ext3, btrfs.
> "kind of" because even for ext2 and xfs, dropping only inode cache (2)
> doesn't evict the inode/mark and dropping inode+page cache (3) does work
> most of the time, although I did occasionally see failures.
> I suspect those failures were related to running the test on a system
> with very low page cache usage.
> The fact that I had to tweak vfs_cache_pressure to increase test reliability
> also suggests that there are heuristics at play.

Well, yes, there's no guaranteed way to force an inode out of the cache; it
is all best-effort. When we needed to make sure an inode goes out of the
cache at the nearest opportunity, we introduced d_mark_dontcache(), but
there's no filesystem-independent way to set this flag on a dentry and I
don't think we want to expose one.
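
For context, a kernel-side sketch of how a filesystem can use that helper
(the wrapper function below is hypothetical; only d_mark_dontcache() is
real):

#include <linux/dcache.h>
#include <linux/fs.h>

/* Hypothetical example: a filesystem that knows this inode is unwanted
 * (e.g. it was only touched by a scan) can ask for early eviction.
 * d_mark_dontcache() sets I_DONTCACHE on the inode and DCACHE_DONTCACHE
 * on its aliases, so both are dropped on the last reference instead of
 * being kept in the caches. */
static void example_evict_soon(struct inode *inode)
{
	d_mark_dontcache(inode);
}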

I was thinking about whether we have some more reliable way to test this
functionality and I didn't find one. One other obvious approach to the test
is to create a memcgroup with a low memory limit, tag a large tree with
evictable marks, and see whether the memory gets exhausted; that is roughly
the use case this functionality is aimed at. But there are also variables
in this testing scheme that may be difficult to tame, and the test will
likely take a rather long time to run.
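
A rough sketch of that memcgroup approach, for illustration (cgroup v2;
the cgroup name, the 32M limit and the tree under /mnt/tree are all
assumptions):

#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/fanotify.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
		perror(path);
		exit(1);
	}
}

int main(void)
{
	char buf[4096];
	struct dirent *d;
	DIR *dir;
	int fan_fd;

	/* Confine this process to a tight memory limit. */
	mkdir("/sys/fs/cgroup/fan_evict", 0755);
	write_str("/sys/fs/cgroup/fan_evict/memory.max", "33554432");
	snprintf(buf, sizeof(buf), "%d", getpid());
	write_str("/sys/fs/cgroup/fan_evict/cgroup.procs", buf);

	fan_fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);
	if (fan_fd < 0)
		return 1;

	/* Tag every file in a large tree with an evictable mark.  If the
	 * marks pinned their inodes, the inode cache charged to this
	 * memcgroup would keep growing until the limit is hit; with
	 * evictable marks the kernel can reclaim the inodes instead. */
	dir = opendir("/mnt/tree");
	if (!dir)
		return 1;
	while ((d = readdir(dir))) {
		if (d->d_name[0] == '.')
			continue;
		snprintf(buf, sizeof(buf), "/mnt/tree/%s", d->d_name);
		if (fanotify_mark(fan_fd, FAN_MARK_ADD | FAN_MARK_EVICTABLE,
				  FAN_OPEN, AT_FDCWD, buf))
			perror(buf);
	}
	closedir(dir);
	return 0;
}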

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


