On Thu, 7 Mar 2024 at 11:36, David Howells <dhowells@xxxxxxxxxx> wrote:

> (2) invalidate_inode_pages2() is used in some places to effect invalidation
>     of the pagecache in the case where the server tells us that a third party
>     modified the server copy of a file.  What the right behaviour should be
>     here, I'm not sure, but at the moment, any dirty data will get laundered
>     back to the server.  Possibly it should be simply invalidated locally or
>     the user asked how they want to handle the divergence.

Skipping ->launder_page will mean there's a window where the data *will* be
lost, AFAICS.

Of course, concurrent cached writes on different hosts against the same
region (the size of which depends on how the caching is done) will conflict.
But if the concurrent writes are to different regions, then they shouldn't be
lost, no?  Without the current ->launder_page mechanism I don't see how that
could be guaranteed.

Thanks,
Miklos