Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field

On Fri, 09 Sep 2022, Jeff Layton wrote:
> On Fri, 2022-09-09 at 08:29 +1000, NeilBrown wrote:
> > On Thu, 08 Sep 2022, Jeff Layton wrote:
> > > On Thu, 2022-09-08 at 10:40 +1000, NeilBrown wrote:
> > > > On Thu, 08 Sep 2022, Jeff Layton wrote:
> > > > > On Wed, 2022-09-07 at 13:55 +0000, Trond Myklebust wrote:
> > > > > > On Wed, 2022-09-07 at 09:12 -0400, Jeff Layton wrote:
> > > > > > > On Wed, 2022-09-07 at 08:52 -0400, J. Bruce Fields wrote:
> > > > > > > > On Wed, Sep 07, 2022 at 08:47:20AM -0400, Jeff Layton wrote:
> > > > > > > > > On Wed, 2022-09-07 at 21:37 +1000, NeilBrown wrote:
> > > > > > > > > > On Wed, 07 Sep 2022, Jeff Layton wrote:
> > > > > > > > > > > +The change to \fIstatx.stx_ino_version\fP is not atomic with respect to the
> > > > > > > > > > > +other changes in the inode. On a write, for instance, the i_version is usually
> > > > > > > > > > > +incremented before the data is copied into the pagecache. Therefore it is
> > > > > > > > > > > +possible to see a new i_version value while a read still shows the old data.
> > > > > > > > > > 
> > > > > > > > > > Doesn't that make the value useless?
> > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > No, I don't think so. It's only really useful for comparing to an
> > > > > > > > > older sample anyway. If you do "statx; read; statx" and the value
> > > > > > > > > hasn't changed, then you know that things are stable.
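For illustration only -- a rough sketch of that check, assuming the
stx_ino_version field and STATX_INO_VERSION mask proposed in this series
(neither is in released headers yet, so this won't build as-is):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* "statx; read; statx" pattern: if the change attribute is the same on
 * both sides of the read, nothing changed while we were reading. */
static int read_was_stable(int fd, void *buf, size_t len)
{
	struct statx before, after;

	if (statx(fd, "", AT_EMPTY_PATH, STATX_INO_VERSION, &before) < 0)
		return -1;
	if (read(fd, buf, len) < 0)
		return -1;
	if (statx(fd, "", AT_EMPTY_PATH, STATX_INO_VERSION, &after) < 0)
		return -1;

	/* An unchanged value means the data was stable over the read;
	 * a changed value does not tell you which data the read saw. */
	return before.stx_ino_version == after.stx_ino_version;
}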
> > > > > > > > 
> > > > > > > > I don't see how that helps.  It's still possible to get:
> > > > > > > > 
> > > > > > > >                 reader          writer
> > > > > > > >                 ------          ------
> > > > > > > >                                 i_version++
> > > > > > > >                 statx
> > > > > > > >                 read
> > > > > > > >                 statx
> > > > > > > >                                 update page cache
> > > > > > > > 
> > > > > > > > right?
> > > > > > > > 
> > > > > > > 
> > > > > > > Yeah, I suppose so -- the statx wouldn't necessitate any locking. In
> > > > > > > that case, maybe this is useless then other than for testing purposes
> > > > > > > and userland NFS servers.
> > > > > > > 
> > > > > > > Would it be better to not consume a statx field with this if so? What
> > > > > > > could we use as an alternate interface? ioctl? Some sort of global
> > > > > > > virtual xattr? It does need to be something per-inode.
> > > > > > 
> > > > > > I don't see how a non-atomic change attribute is remotely useful even
> > > > > > for NFS.
> > > > > > 
> > > > > > The main problem is not so much the above (although NFS clients are
> > > > > > vulnerable to that too) but the behaviour w.r.t. directory changes.
> > > > > > 
> > > > > > If the server can't guarantee that file/directory/... creation and
> > > > > > unlink are atomically recorded with change attribute updates, then the
> > > > > > client has to always assume that the server is lying, and that it has
> > > > > > to revalidate all its caches anyway. Cue endless readdir/lookup/getattr
> > > > > > requests after each and every directory modification in order to check
> > > > > > that some other client didn't also sneak in a change of their own.
> > > > > > 
> > > > > 
> > > > > We generally hold the parent dir's inode->i_rwsem exclusively over most
> > > > > important directory changes, and the times/i_version are also updated
> > > > > while holding it. What we don't do is serialize reads of this value vs.
> > > > > the i_rwsem, so you could see new directory contents alongside an old
> > > > > i_version. Maybe we should be taking it for read when we query it on a
> > > > > directory?
> > > > 
> > > > We do hold i_rwsem today.  I'm working on changing that.  Preserving
> > > > atomic directory changeinfo will be a challenge.  The only mechanism I
> > > > can think of is to pass a "u64*" to all the directory modification ops,
> > > > and they fill in the version number at the point where it is incremented
> > > > (inode_maybe_inc_iversion_return()).  The (nfsd) caller assumes that
> > > > "before" was one less than "after".  If you don't want to internally
> > > > require single increments, then you would need to pass a 'u64 [2]' to
> > > > get two iversions back.
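i.e. something like the following -- a sketch only, the names and the
exact signature of inode_maybe_inc_iversion_return() are invented here:

/* Sketch, not existing VFS code.  The directory op records the i_version
 * at the point where it is actually incremented, so the (nfsd) caller
 * gets an atomic pre/post pair even when the parent isn't held
 * exclusively for the whole operation.  Assumes a helper
 * inode_maybe_inc_iversion_return() that increments and returns the new
 * value. */
static int example_dir_op(struct inode *dir, struct dentry *dentry,
			  u64 changeid[2])
{
	/* ... make the namespace change under whatever internal locking
	 * the filesystem needs ... */

	changeid[1] = inode_maybe_inc_iversion_return(dir);
	/* With single increments the caller can assume "one less than";
	 * otherwise the op would record changeid[0] separately, before
	 * the change becomes visible. */
	changeid[0] = changeid[1] - 1;

	return 0;
}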
> > > > 
> > > 
> > > That's a major redesign of what the i_version counter is today. It may
> > > very well end up being needed, but that's going to touch a lot of stuff
> > > in the VFS. Are you planning to do that as a part of your locking
> > > changes?
> > > 
> > 
> > "A major design"?  How?  The "one less than" might be, but allowing a
> > directory morphing op to fill in a "u64 [2]" is just a new interface to
> > existing data.  One that allows fine grained atomicity.
> > 
> > This would actually be really good for NFS.  nfs_mkdir (for example)
> > could easily have access to the atomic pre/post changeid provided by
> > the server, and so could easily provide them to nfsd.
> > 
> > I'm not planning to do this as part of my locking changes.  In the first
> > instance only NFS changes behaviour, and it doesn't provide atomic
> > changeids, so there is no loss of functionality.
> > 
> > When some other filesystem wants to opt-in to shared-locking on
> > directories - that would be the time to push through a better interface.
> > 
> 
> I think nfsd does provide atomic changeids for directory operations
> currently. AFAICT, any operation where we're changing directory contents
> is done while holding the i_rwsem exclusively, and we hold that lock
> over the pre and post i_version fetch for the change_info4.
> 
> If you change nfsd to allow parallel directory morphing operations
> without addressing this, then I think that would be a regression.

Of course.

As I said, in the first instance only NFS allows parallel directory
morphing ops, and NFS doesn't provide atomic pre/post already.  No
regression.

Parallel directory morphing is opt-in - at least until all file systems
can be converted and these other issues are resolved.

> 
> > 
> > > > > 
> > > > > Achieving atomicity with file writes though is another matter entirely.
> > > > > I'm not sure that's even doable or how to approach it if so.
> > > > > Suggestions?
> > > > 
> > > > Call inode_maybe_inc_iversion(page->host) in __folio_mark_dirty() ??
> > > > 
> > > 
> > > Writes can cover multiple folios so we'd be doing several increments per
> > > write. Maybe that's ok? Should we also be updating the ctime at that
> > > point as well?
> > 
> > You would only do several increments if something was reading the value
> > concurrently, and then you really should do several increments for
> > correctness.
> > 
> 
> Agreed.
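Roughly this (sketch only -- __folio_mark_dirty() has the mapping, so it
would be mapping->host rather than page->host):

#include <linux/fs.h>
#include <linux/iversion.h>

/* Sketch, not a real patch: bump i_version at the point the folio is
 * dirtied, so a reader cannot see new data paired with an old i_version.
 * inode_maybe_inc_iversion() with force == false only increments if the
 * value has been queried since the last bump, so the extra bumps on a
 * multi-folio write are cheap unless someone is actually watching. */
static void mark_dirty_bump_iversion(struct address_space *mapping)
{
	struct inode *inode = mapping->host;

	if (inode)
		inode_maybe_inc_iversion(inode, false);
}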
> 
> > > 
> > > Fetching the i_version under the i_rwsem is probably sufficient to fix
> > > this though. Most of the write_iter ops already bump the i_version while
> > > holding that lock, so this wouldn't add any extra locking to the write
> > > codepaths.
> > 
> > Adding new locking doesn't seem like a good idea.  It's bound to have
> > performance implications.  It may well end up serialising the directory
> > op that I'm currently trying to make parallelisable.
> > 
> 
> The new locking would only be in the NFSv4 GETATTR codepath:
> 
>     https://lore.kernel.org/linux-nfs/20220908172448.208585-9-jlayton@xxxxxxxxxx/T/#u
> 
> Maybe we'd still be better off taking a hit in the write codepath instead
> of doing this, but with this, most of the penalty would be paid by nfsd,
> which I would think would be preferred here.

inode_lock_shared() would be acceptable here.  inode_lock() is unnecessary.
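i.e. something like this in the attribute-encoding path (a sketch, not
the actual nfsd code):

#include <linux/fs.h>
#include <linux/iversion.h>

/* Sample i_version under the shared inode lock, so it cannot race with a
 * write that bumps it while holding the lock exclusively.  Readers of the
 * attribute don't block each other; they only wait for an exclusive
 * holder. */
static u64 sample_change_attr(struct inode *inode)
{
	u64 version;

	inode_lock_shared(inode);
	version = inode_query_iversion(inode);
	inode_unlock_shared(inode);

	return version;
}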

> 
> The problem of mmap writes is another matter though. Not sure what we
> can do about that without making i_version bumps a lot more expensive.
> 

Agreed.  We need to document our way out of that one.

NeilBrown

> -- 
> Jeff Layton <jlayton@xxxxxxxxxx>
> 



