On Wed, Mar 30, 2016 at 02:43:38PM -0400, Trond Myklebust wrote:
> On Wed, Mar 30, 2016 at 2:39 PM, Olga Kornievskaia <aglo@xxxxxxxxx> wrote:
> > On Wed, Mar 30, 2016 at 2:20 PM, Trond Myklebust
> > <trond.myklebust@xxxxxxxxxxxxxxx> wrote:
> >> On Wed, Mar 30, 2016 at 1:40 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> >>> If we assume no other writers until we close, couldn't you on close wait
> >>> for all writes, send a final getattr for change attribute, and trust
> >>> that?  If the extra getattr's too much, then you'd need some algorithm
> >>> like the above to determine which change attribute is the last.  Or
> >>> implement
> >>> https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-12.2.3
> >>> on client and server and just track the maximum returned value when the
> >>> server returns something other than NFS4_CHANGE_TYPE_IS_UNDEFINED.
> >>>
> >>
> >> The correct tool to use for resolving these caching issues is
> >> ultimately a write delegation.
> >>
> >> You can also eliminate a lot of invalidations if you know that the
> >> server implements change_attr_type ==
> >> NFS4_CHANGE_TYPE_IS_VERSION_COUNTER or
> >> NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS, since that allows you to
> >> predict what the attribute should be after a change.
> >
> > Thanks for all the info. But let me highlight that I was asking about
> > v3. I don't see that the code has issues with cache invalidation for
> > nfsv4 when receiving out-of-order RPCs.
> >
> > I am not sure if it's worth implementing something that Bruce
> > suggests. I just wanted to make sure that what i'm seeing is
> > "expected" behavior (caz it's v3) and not a bug.
>
> Yes. The design does expect the occasional false positive cache
> invalidation due to RPC request reordering.

In the v3 and close-to-open case, since the ctime's monotonically
increasing, why couldn't we just keep track of the maximum ctime seen
before close?

--b.
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
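
[Editor's note: to make the "track the maximum ctime seen before close"
idea above concrete, here is a rough, self-contained userspace sketch.
It is not taken from the actual NFS client code; the struct and helper
names below (toy_open_ctx, note_write_reply, etc.) are invented purely
for illustration of the bookkeeping being discussed.]

/*
 * Toy model: remember the largest post-op ctime returned by our own
 * WRITE replies, in whatever order they arrive, and at close only
 * invalidate the cache if the final ctime is newer than anything our
 * writes produced (i.e. someone else appears to have touched the file).
 */
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

struct toy_open_ctx {
	struct timespec max_write_ctime;  /* largest ctime seen in our WRITE replies */
	bool have_write_ctime;
};

static bool ts_after(const struct timespec *a, const struct timespec *b)
{
	if (a->tv_sec != b->tv_sec)
		return a->tv_sec > b->tv_sec;
	return a->tv_nsec > b->tv_nsec;
}

/* Call once per WRITE reply; reordering of replies does not matter. */
static void note_write_reply(struct toy_open_ctx *ctx,
			     const struct timespec *post_op_ctime)
{
	if (!ctx->have_write_ctime ||
	    ts_after(post_op_ctime, &ctx->max_write_ctime)) {
		ctx->max_write_ctime = *post_op_ctime;
		ctx->have_write_ctime = true;
	}
}

/* At close: compare the final GETATTR ctime against our maximum. */
static bool should_invalidate_at_close(const struct toy_open_ctx *ctx,
				       const struct timespec *final_ctime)
{
	if (!ctx->have_write_ctime)
		return true;
	return ts_after(final_ctime, &ctx->max_write_ctime);
}

int main(void)
{
	struct toy_open_ctx ctx = { 0 };
	struct timespec r1  = { .tv_sec = 100, .tv_nsec = 5 };  /* reply for write #1 */
	struct timespec r2  = { .tv_sec = 100, .tv_nsec = 9 };  /* reply for write #2 */
	struct timespec fin = { .tv_sec = 100, .tv_nsec = 9 };  /* ctime at close */

	/* Replies arrive out of order: #2 first, then #1. */
	note_write_reply(&ctx, &r2);
	note_write_reply(&ctx, &r1);

	printf("invalidate at close? %s\n",
	       should_invalidate_at_close(&ctx, &fin) ? "yes" : "no");
	return 0;
}

Because only the maximum is kept, the stale ctime carried by the
late-arriving reply for write #1 never wins, which is exactly the false
positive being discussed in the thread.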