Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field

On Thu, 2022-09-15 at 17:49 +0000, Trond Myklebust wrote:
> On Thu, 2022-09-15 at 12:45 -0400, Jeff Layton wrote:
> > On Thu, 2022-09-15 at 15:08 +0000, Trond Myklebust wrote:
> > > On Thu, 2022-09-15 at 10:06 -0400, J. Bruce Fields wrote:
> > > > On Tue, Sep 13, 2022 at 09:14:32AM +1000, NeilBrown wrote:
> > > > > On Mon, 12 Sep 2022, J. Bruce Fields wrote:
> > > > > > On Sun, Sep 11, 2022 at 08:13:11AM +1000, NeilBrown wrote:
> > > > > > > On Fri, 09 Sep 2022, Jeff Layton wrote:
> > > > > > > > 
> > > > > > > > The machine crashes and comes back up, and we get a query
> > > > > > > > for i_version, and it comes back as X. Fine, it's an old
> > > > > > > > version. Now there is a write. What do we do to ensure that
> > > > > > > > the new value doesn't collide with X+1?
> > > > > > > 
> > > > > > > (I missed this bit in my earlier reply..)
> > > > > > > 
> > > > > > > How is it "Fine" to see an old version?
> > > > > > > The file could have changed without the version changing.
> > > > > > > And I thought one of the goals of the crash-count was to be
> > > > > > > able to provide a monotonic change id.
> > > > > > 
> > > > > > I was still mainly thinking about how to provide reliable
> > > > > > close-to-open semantics between NFS clients.  In the case the
> > > > > > writer was an NFS client, it wasn't done writing (or it would
> > > > > > have COMMITted), so those writes will come in and bump the
> > > > > > change attribute soon, and as long as we avoid the small chance
> > > > > > of reusing an old change attribute, we're OK, and I think it'd
> > > > > > even still be OK to advertise CHANGE_TYPE_IS_MONOTONIC_INCR.
> > > > > 
> > > > > You seem to be assuming that the client doesn't crash at the
> > > > > same time as the server (maybe they are both VMs on a host that
> > > > > lost power...)
> > > > > 
> > > > > If client A reads and caches, client B writes, the server crashes
> > > > > after writing some data (to already allocated space, so no inode
> > > > > update needed) but before writing the new i_version, then client B
> > > > > crashes.  When the server comes back, the i_version will be
> > > > > unchanged but the data has changed.  Client A will cache old data
> > > > > indefinitely...
> > > > 
> > > > I guess I assume that if all we're promising is close-to-open,
> > > > then a client isn't allowed to trust its cache in that situation.
> > > > Maybe that's an overly draconian interpretation of close-to-open.
> > > > 
> > > > Also, I'm trying to think about how to improve things
> > > > incrementally.  Incorporating something like a crash count into the
> > > > on-disk i_version fixes some cases without introducing any new ones
> > > > or regressing performance after a crash.
> > > > 
> > > > If we subsequently wanted to close those remaining holes, I think
> > > > we'd need the change attribute increment to be seen as atomic with
> > > > respect to its associated change, both to clients and (separately)
> > > > on disk.  (That would still allow the change attribute to go
> > > > backwards after a crash, to the value it held as of the on-disk
> > > > state of the file.  I think clients should be able to deal with
> > > > that case.)
> > > > 
> > > > But, I don't know, maybe a bigger hammer would be OK:
> > > > 
> > > 
> > > If you're not going to meet the minimum bar of data integrity, then
> > > this whole exercise is just a massive waste of everyone's time. The
> > > answer then going forward is just to recommend never using Linux as
> > > an NFS server. Makes my life much easier, because I no longer have to
> > > debug any of the issues.
> > > 
> > > 
> > 
> > To be clear, you believe any scheme that would allow the client to
> > see an old change attr after a crash is insufficient?
> > 
> 
> Correct. If an NFSv4 client or userspace application cannot trust that
> it will always see a change to the change attribute value when the file
> data changes, then you will eventually see data corruption due to the
> cached data no longer matching the stored data.
> 
> A false positive update of the change attribute (i.e. a case where the
> change attribute changes despite the data/metadata staying the same) is
> not desirable because it causes performance issues, but false negatives
> are far worse because they mean your data backup, cache, etc... are not
> consistent. Applications that have strong consistency requirements will
> have no option but to revalidate by always reading the entire file data
> + metadata.
> 
> > The only way I can see to fix that (at least with only a crash
> > counter) would be to factor it in at presentation time like Neil
> > suggested.  Basically we'd just mask off the top 16 bits and plop the
> > crash counter in there before presenting it.
> > 
> > In principle, I suppose we could do that at the nfsd level as well
> > (and that might be the simplest way to fix this). We probably wouldn't
> > be able to advertise a change attr type of MONOTONIC with this scheme
> > though.
> 
> Why would you want to limit the crash counter to 16 bits?
> 

To leave more room for the "real" counter. Otherwise, an inode that gets
frequent writes after a long period with no crashes could see its
counter wrap.
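
Back of the envelope, just to put numbers on that (the rates here are
made up for illustration, not measurements): 2^47 is roughly 1.4e14, so
even an inode seeing a sustained 100,000 increments per second would
take around 45 years to wrap a 47-bit counter. Hand the crash counter
32 bits instead and the same workload wraps the remaining 31 bits in
under a day.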

IOW, we have 63 bits to play with. Whatever part we dedicate to the
crash counter will not be available for the actual version counter.

I'm proposing a 16+47+1 split (16 bits of crash counter, 47 bits for the
version counter itself, plus the 1 bit that's already spoken for), but
I'm happy to hear arguments for a different one.
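
Something like this is what I have in mind for the presentation side
(untested sketch, not an actual patch; the helper name and the way the
crash counter gets plumbed in are made up for illustration):

#include <stdint.h>

#define CRASH_CNTR_BITS		16
#define VERSION_BITS		47
#define VERSION_MASK		((1ULL << VERSION_BITS) - 1)

/*
 * Fold the crash counter into the top 16 bits of the value we hand out
 * as the change attribute.  "version" is the on-disk counter with the
 * reserved low bit already shifted out, so it's at most 63 bits wide;
 * anything above the low 47 bits gets masked off here.
 */
static inline uint64_t present_change_attr(uint64_t version,
					    uint16_t crash_cntr)
{
	return ((uint64_t)crash_cntr << VERSION_BITS) |
	       (version & VERSION_MASK);
}

The idea being that anything handed out before a crash can't collide
with anything handed out afterward, since the top 16 bits will differ
(until the crash counter itself wraps, anyway).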
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



