Re: [PATCH v5 11/24] xfs: add XBF_VERITY_SEEN xfs_buf flag

On Mon, Mar 11, 2024 at 08:25:05AM -0700, Darrick J. Wong wrote:

<snip>

> <nod> The latest version of this tries to avoid letting reclaim take the
> top of the tree.  Logically this makes sense to me to reduce read verify
> latency, but I was hoping Eric or Andrey or someone with more
> familiarity with fsverity would chime in on whether or not that made
> sense.
> 
> > > > Also -- the fsverity block interfaces pass in a "u64 pos" argument.  Was
> > > > that done because merkle trees may some day have more than 2^32 blocks
> > > > in them?  That won't play well with things like xarrays on 32-bit
> > > > machines.
> > > > 
> > > > (Granted we've been talking about deprecating XFS on 32-bit for a while
> > > > now but we're not the whole world)
> > > > 
> > > > > i.e.
> > > > > 
> > > > > 	p = xa_load(key);
> > > > > 	if (p)
> > > > > 		return p;
> > > > > 
> > > > > 	xfs_attr_get(key);
> > > > > 	if (!args->value)
> > > > > 		/* error */
> > > > > 
> > > > > 	/*
> > > > > 	 * store the current value, freeing any old value that we
> > > > > 	 * replaced at this key. Don't care about failure to store,
> > > > > 	 * this is optimistic caching.
> > > > > 	 */
> > > > > 	p = xa_store(key, args->value, GFP_NOFS);
> > > > > 	if (p)
> > > > > 		kvfree(p);
> > > > > 	return args->value;
> > > > 
> > > > Attractive.  Will have to take a look at that tomorrow.
> > > 
> > > Done.  I think.  Not sure that I actually got all the interactions
> > > between the shrinker and the xarray correct though.  KASAN and lockdep
> > > don't have any complaints running fstests, so that's a start.
> > > 
> > > https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=fsverity-cleanups-6.9_2024-03-09
> > 
> > My initial impression is "over-engineered".
> 
> Overly focused on unfamiliar data structures -- I've never built
> anything with an xarray before.
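
For anyone following along, the optimistic-cache pattern in Dave's
snippet can be modeled in plain C.  This is a userspace toy, not the
patchset: cache_load()/cache_store() are made-up stand-ins for
xa_load()/xa_store(), and fetch_from_disk() stands in for
xfs_attr_get().  The point is the lookup shape: try the cache, fall
back to the slow path, install the result, and free whatever value we
displaced.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_SLOTS 128

/* Userspace stand-in for the xarray: one pointer per key slot. */
static void *cache[CACHE_SLOTS];

/* Model of xa_load(): return the cached value, or NULL on a miss. */
static void *cache_load(unsigned long key)
{
	return cache[key % CACHE_SLOTS];
}

/*
 * Model of xa_store(): install the new value and hand back whatever
 * was there before so the caller can free it.  Mirrors the "don't
 * care about store failures, this is optimistic caching" idea.
 */
static void *cache_store(unsigned long key, void *value)
{
	void *old = cache[key % CACHE_SLOTS];

	cache[key % CACHE_SLOTS] = value;
	return old;
}

/* Stand-in for the xfs_attr_get() slow path. */
static void *fetch_from_disk(unsigned long key)
{
	char *buf = malloc(16);

	if (buf)
		snprintf(buf, 16, "block-%lu", key);
	return buf;
}

/* The lookup path from the snippet above: try cache, else fetch+store. */
static void *get_block(unsigned long key)
{
	void *p = cache_load(key);

	if (p)
		return p;

	p = fetch_from_disk(key);
	if (!p)
		return NULL;		/* error */

	void *old = cache_store(key, p);
	if (old)
		free(old);	/* free any old value we replaced */
	return p;
}
```

The second lookup of the same key never touches the slow path, which
is the whole latency argument for caching the upper merkle levels.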
> 
> > I personally would have just allocated the xattr value buffer with a
> > little extra size and added all the external cache information (a
> > reference counter is all we need as these are fixed sized blocks) to
> > the tail of the blob we actually pass to fsverity.
> 
> Oho, that's a good idea.  I didn't like the separate allocation anyway.
> Friday was mostly chatting with willy and trying to make sure I got the
> xarray access patterns correct.
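
Concretely, the single-allocation idea might look something like the
sketch below (userspace model; struct and function names are invented,
not from the patchset).  Since merkle tree blocks are power-of-two
sized, the tail stays naturally aligned, and fsverity only ever sees
the data portion.

```c
#include <assert.h>
#include <stdlib.h>

/* Cache bookkeeping hidden past the end of the data fsverity sees. */
struct blob_tail {
	unsigned int	refcount;
};

/*
 * Allocate datalen bytes of blob plus the hidden tail in one shot.
 * datalen is a merkle block size (power of two), so the tail struct
 * lands on an aligned boundary.
 */
static void *blob_alloc(size_t datalen)
{
	char *p = malloc(datalen + sizeof(struct blob_tail));

	if (!p)
		return NULL;

	struct blob_tail *tail = (struct blob_tail *)(p + datalen);
	tail->refcount = 1;
	return p;	/* fsverity only ever sees [p, p + datalen) */
}

/* Recover the bookkeeping from the pointer fsverity handed back. */
static struct blob_tail *blob_to_tail(void *p, size_t datalen)
{
	return (struct blob_tail *)((char *)p + datalen);
}
```

One malloc per block instead of two, and the refcount travels with the
data without fsverity ever knowing it's there.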
> 
> >                                                    If we tag the
> > inode in the radix tree as having verity blobs that can be freed, we
> > can then just extend the existing fs superblock shrinker callout to
> > also walk all the verity inodes with cached data to try to reclaim
> > some objects...
> 
> This too is a wonderful suggestion -- use the third radix tree tag to
> mark inodes with extra incore caches that can be reclaimed, then teach
> xfs_reclaim_inodes_{nr,count} to scan them.  Allocating a per-inode
> shrinker was likely to cause problems with flooding debugfs with too
> many knobs anyway.
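
In case it helps to see the shape of it, here's a toy userspace model
of the tag-and-walk idea.  All names here are invented for
illustration; the real thing would use the perag radix tree tags and
hang off xfs_reclaim_inodes_nr().

```c
#include <assert.h>
#include <stdbool.h>

#define NR_INODES 8

/* Toy per-inode state: a tag bit plus a count of cached blobs. */
struct toy_inode {
	bool	verity_tag;	/* models the third radix tree tag */
	int	cached_blobs;
};

static struct toy_inode inodes[NR_INODES];

/* Mark an inode as holding reclaimable verity blobs. */
static void tag_inode(struct toy_inode *ip, int blobs)
{
	ip->cached_blobs = blobs;
	ip->verity_tag = blobs > 0;
}

/*
 * Model of the shrinker callout: walk only the tagged inodes,
 * dropping cached blobs until the scan target is met, and clear the
 * tag once an inode has nothing left to give back.
 */
static int reclaim_verity_blobs(int nr_to_scan)
{
	int freed = 0;

	for (int i = 0; i < NR_INODES && freed < nr_to_scan; i++) {
		struct toy_inode *ip = &inodes[i];

		if (!ip->verity_tag)
			continue;	/* untagged inodes are skipped */

		while (ip->cached_blobs > 0 && freed < nr_to_scan) {
			ip->cached_blobs--;
			freed++;
		}
		if (ip->cached_blobs == 0)
			ip->verity_tag = false;
	}
	return freed;
}
```

The win is that untagged inodes cost nothing to skip, so the walk
scales with the number of verity inodes holding caches, not the total
inode count.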
> 
> > But, if a generic blob cache is what it takes to move this forwards,
> > so be it.
> 
> Not necessarily. ;)

And here's today's branch, with xfs_blobcache.[ch] removed and a few
more cleanups:
https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/tag/?h=fsverity-cleanups-6.9_2024-03-11

--D

> 
> --D
> 
> > -Dave.
> > -- 
> > Dave Chinner
> > david@xxxxxxxxxxxxx
> > 
> 



