Re: [PATCH RFC 2/9] timekeeping: new interfaces for multigrain timestamp handling

On Thu, 2023-11-02 at 10:29 +1100, Dave Chinner wrote:
> On Wed, Nov 01, 2023 at 09:34:57PM +0000, Trond Myklebust wrote:
> > On Wed, 2023-11-01 at 10:10 -1000, Linus Torvalds wrote:
> > > The above does not expose *any* changes to timestamps to users, and
> > > should work across a wide variety of filesystems, without requiring
> > > any special code from the filesystem itself.
> > > 
> > > And now please all jump on me and say "No, Linus, that won't work,
> > > because XYZ".
> > > 
> > > Because it is *entirely* possible that I missed something truly
> > > fundamental, and the above is completely broken for some obvious
> > > reason that I just didn't think of.
> > > 
> > 
> > My client writes to the file and immediately reads the ctime. A 3rd
> > party client then writes immediately after my ctime read.
> > A reboot occurs (maybe minutes later), then I re-read the ctime, and
> > get the same value as before the 3rd party write.
> > 
> > Yes, most of the time that is better than the naked ctime, but not
> > across a reboot.
> 
> This sort of "crash immediately after 3rd party data write" scenario
> has never worked properly, even with i_version.
> 
> The issue is that 3rd party (local) buffered writes or metadata
> changes do not require any integrity or metadata stability
> operations to be performed by the filesystem unless O_[D]SYNC is set
> on the fd, RWF_[D]SYNC is set on the IO, or f{data}sync() is
> performed on the file.
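
To make that concrete: a minimal userspace sketch of the point above.
Nothing about a plain buffered write forces the resulting
ctime/i_version bump to stable storage; only the explicit integrity
call at the end carries that guarantee.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            /* A plain buffered fd: no O_SYNC/O_DSYNC. */
            int fd = open("testfile", O_WRONLY);

            /*
             * Bumps ctime/i_version in memory only; the filesystem
             * makes no promise about when (or whether) that metadata
             * change reaches disk.
             */
            write(fd, "data", 4);

            /* The only thing here that guarantees persistence. */
            fsync(fd);
            close(fd);
            return 0;
    }
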
> 
> Hence no local filesystem currently persists i_version or ctime
> outside of operations with specific data integrity semantics.
> 
> nfsd-based modifications have application-specific persistence
> requirements, and those are met by nfsd calling
> ->commit_metadata prior to returning the operation result to the
> client. That call is what persists i_version/timestamp changes made
> during the nfsd operation - this persistence behaviour is not
> driven by the local filesystem.
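
For reference, the server-side step looks roughly like this - a
sketch from memory of commit_metadata() in fs/nfsd/vfs.c, not
verbatim:

    /*
     * Paraphrase of nfsd's persistence step (cf. commit_metadata()
     * in fs/nfsd/vfs.c); details here are from memory.
     */
    static int commit_metadata(struct svc_fh *fhp)
    {
            struct inode *inode = d_inode(fhp->fh_dentry);
            const struct export_operations *export_ops =
                    inode->i_sb->s_export_op;

            if (!EX_ISSYNC(fhp->fh_export))
                    return 0;

            /* Let the filesystem persist the change its own way... */
            if (export_ops->commit_metadata)
                    return export_ops->commit_metadata(inode);
            /* ...or fall back to synchronous inode writeback. */
            return sync_inode_metadata(inode, 1);
    }
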
> 
> IOWs, this "change attribute failure" scenario is an existing
> problem with the current i_version implementation.  It has always
> been flawed in this way but this didn't matter a decade ago because
> its only purpose (and user) was nfsd and that had the required
> persistence semantics to hide these flaws within the application's
> context.
>
> Now that we are trying to expose i_version as a "generic change
> attribute", these persistence flaws get exposed because local
> filesystem operations do not have the same enforced persistence
> semantics as the NFS server.
> 
> This is another reason I want i_version to die.
> 
> What we need is a clear set of well-defined semantics around statx
> change attribute sampling. Correct crash-recovery/integrity behaviour
> requires this rule:
> 
>   If the change attribute has been sampled, then the next
>   modification to the filesystem that bumps change attribute *must*
>   persist the change attribute modification atomically with the
>   modification that requires it to change, or submit and complete
>   persistence of the change attribute modification before the
>   modification that requires it starts.
> 
> e.g. a truncate can bump the change attribute atomically with the
> metadata changes in a transaction-based filesystem (ext4, XFS,
> btrfs, bcachefs, etc).
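
As pseudocode, that rule might look something like the following;
every helper here is hypothetical, nothing like this exists in the
tree today:

    /*
     * Hypothetical sketch of the sampling rule above; all of these
     * helpers are made up for illustration.
     */
    void fs_apply_modification(struct inode *inode, struct mod *mod)
    {
            if (!change_attr_was_sampled(inode)) {
                    apply(mod);
                    return;
            }

            if (fs_has_atomic_transactions(inode)) {
                    /*
                     * Bump the change attribute in the same
                     * transaction as the modification itself.
                     */
                    fs_transaction(inode, bump_change_attr, mod);
            } else {
                    /*
                     * Otherwise persist the bump, and wait for it
                     * to complete, before the modification that
                     * requires it starts.
                     */
                    persist_change_attr_bump(inode);
                    apply(mod);
            }
    }
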
> 
> Data writes are much harder, though. Some filesystem structures can
> write data and metadata in a single update, e.g. log-structured or
> COW filesystems that can mix data and metadata, like btrfs.
> Journalling filesystems require ordering between journal writes and
> the data writes to guarantee the change attribute is persistent
> before we write the data. Non-journalling filesystems require inode
> vs data write ordering.
> 
> Hence I strongly doubt that a persistent change attribute is best
> implemented at the VFS - optimal, efficient implementations are
> highly filesystem-specific regardless of how the change attribute is
> encoded in filesystem metadata.
> 
> This is another reason I want to change how the inode timestamp code
> is structured to call into the filesystem first rather than last.
> Different filesystems will need to do different things to persist
> a "ctime change counter" attribute correctly and efficiently -
> it's not a one-size-fits-all situation....

FWIW, the big danger for nfsd is i_version rollback after a crash:

We can end up handing out an i_version value to the client before it
ever makes it to disk. If the server then crashes before that value is
persisted, the client can see the old i_version when it queries it
again (assuming the earlier write was lost).

That, in and of itself, is not a _huge_ problem for NFS clients. They'll
typically just invalidate their cache if that occurs and reread any data
they need.

The real danger is a write that occurs after the reboot, differing
from the earlier one, for which we hand out a change attribute that
duplicates the one viewed earlier. Now the same change attribute
refers to two different states of the file (and potential data
corruption).

We mitigate that today by factoring in the ctime on regular files when
generating the change attribute (see nfsd4_change_attribute()). In
theory, even with that, an i_version rollback plus a backward clock
jump could still generate change attribute collisions, but that's a
bit harder to contrive, so we mostly don't worry about it.
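
To make the mixing concrete, the scheme is roughly this (a
paraphrase of nfsd4_change_attribute(); don't take the exact shift
widths as gospel):

    /*
     * Paraphrase of nfsd4_change_attribute() for IS_I_VERSION
     * inodes; shift widths here are from memory.
     */
    static u64 nfsd4_change_attribute(struct kstat *stat,
                                      struct inode *inode)
    {
            u64 chattr;

            /* Fold the ctime into the high bits... */
            chattr = (u64)stat->ctime.tv_sec << 30;
            chattr += stat->ctime.tv_nsec;
            /*
             * ...so an i_version rollback after a crash only
             * produces a collision if the clock also jumped
             * backward.
             */
            chattr += inode_query_iversion(inode);
            return chattr;
    }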

I'm all for coming up with a way to make this more resilient, though. If
we can offer the guarantee that you're proposing above, then that would
be a very nice thing.
-- 
Jeff Layton <jlayton@xxxxxxxxxx>
