Re: [PATCH] fs/9p: Add new options to Documentation

On Sun, 2023-04-02 at 16:07 +0200, Christian Schoenebeck wrote:
> +CC: Jeff for experience on this caching issue with NFS ...
> 
> On Tuesday, March 28, 2023 5:51:51 PM CEST Eric Van Hensbergen wrote:
> > As I work through the documentation rework and to some extent the
> > testing matrix -- I am reconsidering some choices and wanted to open
> > up the discussion here.
> > 
> > TLDR; I'm thinking of reworking the cache options before the merge
> > window to keep things simple while setting up for some of the future
> > options.
> 
> Yeah, revising the 9p cache options definitely makes sense!
> 
> So what is the plan now? I saw you sent a new patch with the "old"
> options today, so is this one deferred?
> 
> > While we have a bunch of new options, in practice I expect users to
> > probably consolidate around three models: no caching, tight caches,
> > and expiring caches with fscache being an orthogonal add-on to the
> > last two.
> 
> Actually, as of today I don't know why somebody would want to use fscache
> instead of loose. Does it actually make sense to keep fscache, and if
> so, why?
> 
> > The ultimate goal is to simplify the options based on expected use models:
> > 
> > - cache=[ none, file, all ] (none is currently default)
> 
> dir?
> 
> > - write_policy = [ *writethrough, writeback ] (writethrough would be default)
> > - cache_validate = [ never, *open, x (seconds) ]  (cache_validate
> > would default to open)
> 
> Jeff made the point that NFS uses a sliding time window for attribute
> caching. So the question is, would it make sense to add an "x seconds"
> option that might be dropped again soon anyway?

See the acregmin/acregmax/acdirmin/acdirmax settings in nfs(5). What
you're talking about is basically adding an actimeo= option.
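
For reference, that's spelled like this in nfs(5) today (those are the
default values):

        mount -t nfs -o acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 srv:/export /mnt

or just actimeo=30 to pin all four to one value. There's no such option
for 9p today, of course; this is only to illustrate the naming.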

If you're revising the options for this stuff, then consider following NFS's
naming. Not that those names are better in any sense, but they are at least
familiar to administrators.

As far as the sliding window goes, the way NFS tracks it is a bit
arcane, but it's documented in include/linux/nfs_fs.h:


        /*
         * read_cache_jiffies is when we started read-caching this inode.
         * attrtimeo is for how long the cached information is assumed
         * to be valid. A successful attribute revalidation doubles
         * attrtimeo (up to acregmax/acdirmax), a failure resets it to
         * acregmin/acdirmin.
         *
         * We need to revalidate the cached attrs for this inode if
         *
         *      jiffies - read_cache_jiffies >= attrtimeo
         *
         * Please note the comparison is greater than or equal
         * so that zero timeout values can be specified.
         */


That's probably what I'd aim for here.
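
Translated to 9p, the core of that scheme would look something like the
sketch below. To be clear, the struct and field names are made up purely
for illustration; they are not existing v9fs fields:

        #include <linux/types.h>    /* bool */
        #include <linux/jiffies.h>  /* jiffies, time_after_eq() */
        #include <linux/minmax.h>   /* min(), max() */

        /* Hypothetical per-inode state mirroring the NFS scheme above. */
        struct v9fs_attr_cache {
                unsigned long read_cache_jiffies;  /* when the attrs were cached */
                unsigned long attrtimeo;           /* current validity window    */
                unsigned long attrtimeo_min;       /* acregmin/acdirmin analog   */
                unsigned long attrtimeo_max;       /* acregmax/acdirmax analog   */
        };

        static bool v9fs_attrs_need_revalidation(const struct v9fs_attr_cache *ac)
        {
                /* >= comparison so that a zero timeout means "always revalidate" */
                return time_after_eq(jiffies, ac->read_cache_jiffies + ac->attrtimeo);
        }

        static void v9fs_attrs_revalidated(struct v9fs_attr_cache *ac, bool unchanged)
        {
                if (unchanged) {
                        /* attrs were still valid: back off (double), capped at
                         * the max; restart from the min if the window was zero */
                        ac->attrtimeo = max(ac->attrtimeo << 1, ac->attrtimeo_min);
                        ac->attrtimeo = min(ac->attrtimeo, ac->attrtimeo_max);
                } else {
                        /* attrs changed on the server: fall back to the min */
                        ac->attrtimeo = ac->attrtimeo_min;
                }
                ac->read_cache_jiffies = jiffies;
        }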

> > - fscache
> > 
> > So, mapping of existing (deprecated) legacy modes:
> > - none -> (obvious) write_policy=writethrough
> > - *readahead -> cache=file cache_validate_open write_policy=writethrough
> > - mmap -> cache=file cache_validate=open write_policy=writeback
> 
> Mmm, why is that "file"? To me "file" sounds like any access to files is
> cached, whereas cache=mmap just uses the cache if mmap() was called, not for
> any other file access.
> 
> > - loose -> cache=all cache_validate=never write_policy=writeback
> > - fscache -> cache=all cache_validate=never write_policy=writeback &
> > fscache enabled
> > 
> > Some things I'm less certain of: cache_validation is probably an
> > imperfect term, as is using 'open' as one of the options. In this case
> > I'm envisioning 'open' to mean open-to-close coherency for file
> > caching (the cache is only validated on open) and validation on lookup
> > for dir-cache coherency (using qid.version). Specifying a number here
> > expires existing caches and requires validation after a certain number
> > of seconds (is that the right granularity?).
> 
> Personally I would then really call it open-to-close or opentoclose and waste
> some more characters in favour of clarity.
> 
> > So, I think this is clearer from a documentation standpoint, but
> > unfortunately I haven't reduced the test matrix much - in fact I've
> > probably made it worse. I expect the common cases to basically be:
> > - cache=none
> > - new default? (cache=all, write_policy=writeback, cache_validate=open)
> > - fscache w/(cache=all, write_policy=writeback, cache_validate=5)
> > 
> > Which would give us 3 configurations to test against versus 25
> > (assuming we test one time value for cache_validate=x). It's important
> > to remember that this is just the cache mode tests; the other mount
> > options act as multipliers.
> > 
> > Thoughts?  Alternatives?
> > 
> >         -eric
> > 
> > On Mon, Mar 27, 2023 at 10:38 AM Christian Schoenebeck
> > <linux_oss@xxxxxxxxxxxxx> wrote:
> > > 
> > > On Monday, March 27, 2023 5:05:52 AM CEST Eric Van Hensbergen wrote:
> > > > Need to update the documentation for new mount flags
> > > > and cache modes.
> > > > 
> > > > Signed-off-by: Eric Van Hensbergen <ericvh@xxxxxxxxxx>
> > > > ---
> > > >  Documentation/filesystems/9p.rst | 29 ++++++++++++++++-------------
> > > >  1 file changed, 16 insertions(+), 13 deletions(-)
> > > > 
> > > > diff --git a/Documentation/filesystems/9p.rst b/Documentation/filesystems/9p.rst
> > > > index 0e800b8f73cc..6d257854a02a 100644
> > > > --- a/Documentation/filesystems/9p.rst
> > > > +++ b/Documentation/filesystems/9p.rst
> > > > @@ -78,19 +78,18 @@ Options
> > > >               offering several exported file systems.
> > > > 
> > > >    cache=mode specifies a caching policy.  By default, no caches are used.
> > > > -
> > > > -                        none
> > > > -                             default no cache policy, metadata and data
> > > > -                                alike are synchronous.
> > > > -                     loose
> > > > -                             no attempts are made at consistency,
> > > > -                                intended for exclusive, read-only mounts
> > > > -                        fscache
> > > > -                             use FS-Cache for a persistent, read-only
> > > > -                             cache backend.
> > > > -                        mmap
> > > > -                             minimal cache that is only used for read-write
> > > > -                                mmap.  Northing else is cached, like cache=none
> > > > +             Modes are progressive and inclusive.  For example, specifying fscache
> > > > +             will use loose caches, writeback, and readahead.  Due to their
> > > > +             inclusive nature, only one cache mode can be specified per mount.
> > > 
> > > I would highly recommend instead specifying below for each option "this
> > > option implies writeback, readahead ..." etc., as it is otherwise not
> > > obvious which option implies exactly what. It is worth those extra few
> > > lines IMO to avoid confusion.
> > > 
> > > > +
> > > > +                     =========       =============================================
> > > > +                     none            no cache of file or metadata
> > > > +                     readahead       readahead caching of files
> > > > +                     writeback       delayed writeback of files
> > > > +                     mmap            support mmap operations read/write with cache
> > > > +                     loose           meta-data and file cache with no coherency
> > > > +                     fscache         use FS-Cache for a persistent cache backend
> > > > +                     =========       =============================================
> > > > 
> > > >    debug=n    specifies debug level.  The debug level is a bitmask.
> > > > 
> > > > @@ -137,6 +136,10 @@ Options
> > > >               This can be used to share devices/named pipes/sockets between
> > > >               hosts.  This functionality will be expanded in later versions.
> > > > 
> > > > +  directio   bypass page cache on all read/write operations
> > > > +
> > > > +  ignoreqv   ignore qid.version==0 as a marker to ignore cache
> > > > +
> > > >    noxattr    do not offer xattr functions on this mount.
> > > > 
> > > >    access     there are four access modes.
> > > > 
> > > 
> > > 
> > > 
> > > 
> > 
> 
> 
> 
> 

-- 
Jeff Layton <jlayton@xxxxxxxxxx>



