Re: [PATCH v5 37/40] netfs: Optimise away reads above the point at which there can be no data

Nathan Chancellor <nathan@xxxxxxxxxx> wrote:

> It appears that ctx->inode.i_mapping is NULL in netfs_inode_init(). This
> patch appears to cure the problem for me but I am not sure if it is
> proper or not.

I'm not sure that's the best way.  It kind of indicates that
nfs_netfs_inode_init() is not being called in the right place - it should
really be called after alloc_inode() has called inode_init_always().
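
To illustrate the ordering (a loose paraphrase of fs/inode.c, error handling
elided): ->alloc_inode() runs first, and i_mapping only gets pointed at
&inode->i_data inside inode_init_always(), so anything that dereferences
inode->i_mapping has to run after that:

static struct inode *alloc_inode(struct super_block *sb)
{
        struct inode *inode;

        inode = sb->s_op->alloc_inode(sb);      /* e.g. nfs_alloc_inode() */
        if (!inode)
                return NULL;

        /* inode->i_mapping is still NULL at this point; it's
         * inode_init_always() that sets it.
         */
        inode_init_always(sb, inode);
        return inode;
}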

However, mapping_set_release_always() causes ->release_folio() and
->invalidate_folio() to always be called for an inode's folios, even if
PG_private is not set - the idea being that this allows netfslib to update the
"zero_point" when a page we've written to the server gets invalidated locally,
thereby requiring us to go and fetch it again.
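
With this patch, netfs_inode_init() ends up doing roughly the following (a
simplified sketch, not the exact code), which is where the NULL i_mapping
dereference comes from if it's called too early:

static inline void netfs_inode_init(struct netfs_inode *ctx,
                                    const struct netfs_request_ops *ops)
{
        ctx->ops = ops;
        ctx->remote_i_size = i_size_read(&ctx->inode);
        ctx->zero_point = ctx->remote_i_size;
        /* Oopses if we get here before inode_init_always() has set
         * ctx->inode.i_mapping.
         */
        mapping_set_release_always(ctx->inode.i_mapping);
}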

Now, NFS doesn't make use of this feature, and fscache and cachefiles don't use
it directly, so we might not want to call mapping_set_release_always() for
NFS.

I'm not sure NFS can even reliably make use of it unless it's using a lease or
gets change notifications from the server.

So I'm thinking of applying your patch, but adding a comment to say why we're
doing it.  A better way, though, would be to move the call to
nfs_netfs_inode_init() and give it a flag to say whether or not we want the
facility.
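
Something along these lines, perhaps (just a sketch; the parameter name is
illustrative, not settled):

static inline void netfs_inode_init(struct netfs_inode *ctx,
                                    const struct netfs_request_ops *ops,
                                    bool use_zero_point)
{
        ctx->ops = ops;
        ctx->remote_i_size = i_size_read(&ctx->inode);
        ctx->zero_point = ctx->remote_i_size;
        if (use_zero_point)
                /* Only opt in to unconditional ->release_folio() and
                 * ->invalidate_folio() calls for filesystems that want
                 * zero_point tracking.
                 */
                mapping_set_release_always(ctx->inode.i_mapping);
}

NFS could then pass false and avoid touching i_mapping entirely, while
filesystems that do want the zero_point optimisation would need to make the
call after inode_init_always() has run.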

David




