> On Sep 8, 2022, at 11:56 AM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> 
> On Thu, Sep 08, 2022 at 11:44:33AM -0400, Jeff Layton wrote:
>> On Thu, 2022-09-08 at 11:21 -0400, Theodore Ts'o wrote:
>>> On Thu, Sep 08, 2022 at 10:33:26AM +0200, Jan Kara wrote:
>>>> It boils down to the fact that we don't want to call mark_inode_dirty()
>>>> from the IOCB_NOWAIT path because for lots of filesystems that means a
>>>> journal operation, and there's a high chance that it may block.
>>>> 
>>>> Presumably we could treat inode dirtying after an i_version change
>>>> similarly to how we handle timestamp updates with the lazytime mount
>>>> option (i.e., not dirty the inode immediately but only with a delay),
>>>> but then the time window for i_version inconsistencies due to a crash
>>>> would be much larger.
>>> 
>>> Perhaps this is a radical suggestion, but a lot of these problems seem
>>> to come from the concern "what if the file system crashes" (and so we
>>> need to worry about making sure that any increment to i_version MUST
>>> be persisted once it has happened).
>>> 
>>> Well, if we assume that unclean shutdowns are rare, then perhaps we
>>> shouldn't be optimizing for that case. So... what if a file system
>>> had a counter which got incremented each time its journal is replayed,
>>> representing an unclean shutdown? That shouldn't happen often, but if
>>> it does, any number of i_version updates may have been lost. So in
>>> that case, the NFS client should invalidate all of its caches.
>>> 
>>> If the i_version field were large enough, we could just prefix the
>>> existing i_version number with the "unclean shutdown counter" when it
>>> is sent over the NFS protocol to the client. But if that field is too
>>> small, and if (as I understand things) NFS just needs to know when
>>> i_version is different, we could simply hash the "unclean shutdown
>>> counter" with the inode's i_version counter, and let that be the
>>> version which is sent from the server to the client.
>>> 
>>> If we could do that, then it would no longer be critical that every
>>> single i_version bump is persisted to disk, and we could treat it like
>>> a lazytime update: it's guaranteed to be updated when we do a clean
>>> unmount of the file system (and when the file system is frozen), but
>>> on a crash there is no guarantee that all i_version bumps will be
>>> persisted. We do, however, have this "unclean shutdown" counter to
>>> deal with that case.
>>> 
>>> Would this make life easier for folks?
>>> 
>>> 					- Ted
>> 
>> Thanks for chiming in, Ted. That's part of the problem, but we're
>> actually not too worried about that case:
>> 
>> nfsd mixes the ctime in with i_version, so you'd have to crash and then
>> have the clock jump backward by juuuust enough to get the i_version and
>> ctime back into a state they were in before the crash, but with
>> different data. We're assuming that that is difficult to achieve in
>> practice.
> 
> But a change in the clock could still cause our returned change
> attribute to go backwards (even without a crash). Not sure how to
> evaluate the risk, but it was enough that Trond hasn't been comfortable
> with nfsd advertising NFS4_CHANGE_TYPE_IS_MONOTONIC.
> 
> Ted's idea would be sufficient to allow us to turn that flag on, which I
> think allows some client-side optimizations.
> 
>> The issue with a reboot counter (or similar) is that on an unclean
>> crash the NFS client would end up invalidating every inode in the
>> cache, as all of the i_versions would change.
>> That's probably excessive.
> 
> But if we use the crash counter on write instead of read, we don't
> invalidate caches unnecessarily. And I think the monotonicity would
> still be close enough for our purposes?
> 
>> The bigger issue (at the moment) is atomicity: when we fetch an
>> i_version, the natural inclination is to associate that with the state
>> of the inode at some point in time, so we need this to be updated
>> atomically with certain other attributes of the inode. That's the part
>> I'm trying to sort through at the moment.
> 
> That may be, but I still suspect the crash counter would help.

Fwiw, I like the crash counter idea too.

--
Chuck Lever
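
For concreteness, a minimal user-space sketch of the idea being discussed,
assuming the filesystem exposes a per-filesystem counter that is bumped on
journal replay. The function names, field widths, and the FNV-1a hash are
illustrative assumptions, not an existing kernel or nfsd interface. The
prefix variant keeps the reported value increasing across crashes (the
property that would let nfsd advertise NFS4_CHANGE_TYPE_IS_MONOTONIC); the
hashed variant only guarantees the value changes when either input changes.

/*
 * Hypothetical sketch (not kernel code): two ways a per-filesystem
 * "unclean shutdown" counter could be folded into the value reported
 * as the NFSv4 change attribute.
 */
#include <stdint.h>
#include <stdio.h>

/*
 * Variant 1: if the on-the-wire field is wide enough, put the crash
 * counter in the high bits and the per-inode i_version in the low bits.
 * Bumping the counter after journal replay changes every inode's change
 * attribute at once, forcing clients to revalidate, and the combined
 * value never goes backwards as long as the counter only increments.
 */
static uint64_t change_attr_prefixed(uint16_t crash_counter, uint64_t i_version)
{
	return ((uint64_t)crash_counter << 48) | (i_version & ((1ULL << 48) - 1));
}

/*
 * Variant 2: if there is no room for a clean split, mix the two values
 * with a hash (FNV-1a here, purely as an example). Clients that only
 * compare change attributes for equality still see a different value
 * whenever either input changes, but monotonicity is lost.
 */
static uint64_t change_attr_hashed(uint64_t crash_counter, uint64_t i_version)
{
	uint64_t h = 0xcbf29ce484222325ULL;	/* FNV offset basis */
	uint64_t in[2] = { crash_counter, i_version };

	for (int i = 0; i < 2; i++) {
		for (int b = 0; b < 8; b++) {
			h ^= (in[i] >> (b * 8)) & 0xff;
			h *= 0x100000001b3ULL;	/* FNV prime */
		}
	}
	return h;
}

int main(void)
{
	/* Same on-disk i_version before and after a simulated unclean shutdown. */
	uint64_t i_version = 12345;

	printf("prefixed, boot 1: %#018llx\n",
	       (unsigned long long)change_attr_prefixed(1, i_version));
	printf("prefixed, boot 2: %#018llx\n",
	       (unsigned long long)change_attr_prefixed(2, i_version));
	printf("hashed,   boot 1: %#018llx\n",
	       (unsigned long long)change_attr_hashed(1, i_version));
	printf("hashed,   boot 2: %#018llx\n",
	       (unsigned long long)change_attr_hashed(2, i_version));
	return 0;
}

Run as-is, the "boot 1" and "boot 2" lines differ for both variants even
though i_version itself is unchanged, which is the "invalidate after an
unclean shutdown" behavior discussed above.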