Re: [PATCH] 9p: Convert to new fscache API

David Howells wrote on Wed, Nov 18, 2020:
> > What's the current schedule/plan for the fscache branch merging? Will
> > you be trying this merge window next month?
> 
> That's the aim.  We have afs, ceph and nfs about ready; I've had a go at
> doing the 9p conversion, which I'll have to leave to you now, I think, and am
> having a poke at cifs.

Ok, will try to polish it up by then.
Worst case, as discussed, is to have fscache be an alias for cache=loose
until it's ready, but with the first version you gave me that hopefully
won't be needed.
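(For reference, that would just mean a mount along the lines of

	mount -t 9p -o trans=virtio,cache=fscache testshare /mnt

behaving exactly like cache=loose until the conversion lands; the mount
line is only an illustrative example and "testshare" is a made-up tag.)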

> > >  (*) I have made an assumption that 9p_client_read() and write can handle I/Os
> > >      larger than a page.  If this is not the case, v9fs_req_ops will need
> > >      clamp_length() implementing.
> > 
> > There's a max driven by the client's msize
> 
> The netfs read helpers provide you with a couple of options here:
> 
>  (1) You can use ->clamp_length() to do multiple slices of at least 1 byte
>      each.  The assumption being that these represent parallel operations.  A
>      new subreq will be generated for each slice.
> 
>  (2) You can go with large slices that are larger than msize, and just read
>      part of it with each read.  After reading, the netfs helper will keep
>      calling you again to read some more of it, provided you didn't return an
>      error and you at least read something.

clamp_length looks good for that; if we can get parallel requests out,
it'll all come back faster.
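Something along these lines, maybe (completely untested sketch; the
struct and field names are from my reading of your netfs branch, so
they may not match what finally gets merged):

static bool v9fs_req_clamp_length(struct netfs_read_subrequest *subreq)
{
	/* fid stashed in the request's netfs_priv when the rreq was set up */
	struct p9_fid *fid = subreq->rreq->netfs_priv;
	size_t max = fid->clnt->msize - P9_IOHDRSZ;

	/* keep each slice within what one RREAD can carry */
	if (subreq->len > max)
		subreq->len = max;
	return true;
}

That way every subrequest fits in a single round trip and the helper is
free to issue them in parallel.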

> > (client->msize - P9_IOHDRSZ; unfortunately msize is just guaranteed to be >=
> > 4k so that means the actual IO size would be smaller in that case even if
> > that's not intended to be common)
> 
> Does that mean you might be limited to reads of less than PAGE_SIZE on some
> systems (ppc64 for example)?  This isn't a problem for the read helper, but
> might be an issue for writing from THPs.

Quite likely; the actual size used varies depending on the backend (64k
for tcp, 500k for virtio) but it can definitely be less than PAGE_SIZE.

I take it the read helper would just iterate as long as there's data
still left to read, while writing from THPs wouldn't do that?
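(To put numbers on it, assuming I'm reading the headers right: with the
minimum msize of 4096 and P9_IOHDRSZ of 24, a single read moves at most
4072 bytes, so filling one 64k page on ppc64 already takes 17 requests.)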


> > >  (*) The cache needs to be invalidated if a 3rd-party change happens, but I
> > >      haven't done that.
> > 
> > There's no concurrent access logic in 9p as far as I'm aware (like NFS
> > does if the mtime changes for example), so I assume we can keep ignoring
> > this.
> 
> By that, I presume you mean concurrent accesses are just not permitted?

Sorry - I meant that coherency isn't guaranteed if files are modified
from multiple clients - there are voluntary locks but that's about it;
nothing will detect e.g. remote file size modifications.
Concurrency on a given client works fine and should be used if possible.

-- 
Dominique


