On Tue, May 16, 2023 at 3:29 PM David Wysochanski <dwysocha@xxxxxxxxxx> wrote:
>
> On Thu, Feb 16, 2023 at 10:07 AM David Howells <dhowells@xxxxxxxxxx> wrote:
> >
> > Hi Willy,
> >
> > Is this okay by you?  You said you wanted to look at the remaining uses of
> > page_has_private(), of which there are then three after these patches, not
> > counting folio_has_private():
> >
> >     arch/s390/kernel/uv.c:  if (page_has_private(page))
> >     mm/khugepaged.c:        1 + page_mapcount(page) + page_has_private(page)) {
> >     mm/migrate_device.c:    extra += 1 + page_has_private(page);
> >
> > --
> > I've split the folio_has_private()/filemap_release_folio() call pair
> > merging into its own patch, separate from the actual bugfix, pulled the
> > folio_needs_release() function out into mm/internal.h and made
> > filemap_release_folio() use it.  I've also got rid of the bit clearances
> > from the network filesystem evict_inode functions as they don't seem to
> > be necessary.
> >
> > Note that the last vestiges of try_to_release_page() got swept away, so I
> > rebased and dealt with that.  One comment remained, which is removed by
> > the first patch.
> >
> > David
> >
> > Changes:
> > ========
> > ver #6)
> >  - Drop the third patch which removes a duplicate check in vmscan().
> >
> > ver #5)
> >  - Rebased on linus/master.  try_to_release_page() has now been entirely
> >    replaced by filemap_release_folio(), barring one comment.
> >  - Cleaned up some pairs in ext4.
> >
> > ver #4)
> >  - Split has_private/release call pairs into their own patch.
> >  - Moved folio_needs_release() to mm/internal.h and removed the
> >    open-coded version from filemap_release_folio().
> >  - Don't need to clear AS_RELEASE_ALWAYS in ->evict_inode().
> >  - Added experimental patch to reduce shrink_folio_list().
> >
> > ver #3)
> >  - Fixed mapping_clear_release_always() to use clear_bit(), not set_bit().
> >  - Moved a '&&' to the correct line.
> >
> > ver #2)
> >  - Rewrote entirely according to Willy's suggestion[1].
> >
> > Link: https://lore.kernel.org/r/Yk9V/03wgdYi65Lb@xxxxxxxxxxxxxxxxxxxx/ [1]
> > Link: https://lore.kernel.org/r/164928630577.457102.8519251179327601178.stgit@xxxxxxxxxxxxxxxxxxxxxx/ # v1
> > Link: https://lore.kernel.org/r/166844174069.1124521.10890506360974169994.stgit@xxxxxxxxxxxxxxxxxxxxxx/ # v2
> > Link: https://lore.kernel.org/r/166869495238.3720468.4878151409085146764.stgit@xxxxxxxxxxxxxxxxxxxxxx/ # v3
> > Link: https://lore.kernel.org/r/1459152.1669208550@xxxxxxxxxxxxxxxxxxxxxx/ # v3 also
> > Link: https://lore.kernel.org/r/166924370539.1772793.13730698360771821317.stgit@xxxxxxxxxxxxxxxxxxxxxx/ # v4
> > Link: https://lore.kernel.org/r/167172131368.2334525.8569808925687731937.stgit@xxxxxxxxxxxxxxxxxxxxxx/ # v5
> > ---
> > %(shortlog)s
> > %(diffstat)s
> >
> > David Howells (2):
> >   mm: Merge folio_has_private()/filemap_release_folio() call pairs
> >   mm, netfs, fscache: Stop read optimisation when folio removed from
> >     pagecache
> >
> >  fs/9p/cache.c           |  2 ++
> >  fs/afs/internal.h       |  2 ++
> >  fs/cachefiles/namei.c   |  2 ++
> >  fs/ceph/cache.c         |  2 ++
> >  fs/cifs/fscache.c       |  2 ++
> >  fs/ext4/move_extent.c   | 12 ++++--------
> >  fs/splice.c             |  3 +--
> >  include/linux/pagemap.h | 16 ++++++++++++++++
> >  mm/filemap.c            |  2 ++
> >  mm/huge_memory.c        |  3 +--
> >  mm/internal.h           | 11 +++++++++++
> >  mm/khugepaged.c         |  3 +--
> >  mm/memory-failure.c     |  8 +++-----
> >  mm/migrate.c            |  3 +--
> >  mm/truncate.c           |  6 ++----
> >  mm/vmscan.c             |  8 ++++----
> >  16 files changed, 56 insertions(+), 29 deletions(-)
> >
> > --
> > Linux-cachefs mailing list
> > Linux-cachefs@xxxxxxxxxx
> > https://listman.redhat.com/mailman/listinfo/linux-cachefs
>
> Willy, and David,
>
> Can this series move forward?
> This just got mentioned again [1] after Chris tested the NFS netfs
> patches that were merged in 6.4-rc1
>
> [1] https://lore.kernel.org/linux-nfs/CAAmbk-f_U8CPcTQM866L572uUHdK4p5iWKnUQs4r8fkW=6RW9g@xxxxxxxxxxxxxx/

Sorry about the timing of the original email; I forgot it lined up with
LSF/MM.

FYI, I tested with 6.4-rc1 plus these two patches, then added the NFS
hunk that is needed (see below).  All my tests pass now[1], and this
makes sense given the ftraces I've seen on failing test runs (the
cachefiles_prep_read trace event would show "DOWN no-data" even after
data had been written previously).

This small NFS hunk needs to be added to patch #2 in this series:

diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index 8c35d88a84b1..d4a20748b14f 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -180,6 +180,10 @@ void nfs_fscache_init_inode(struct inode *inode)
 			       &auxdata,	/* aux_data */
 			       sizeof(auxdata),
 			       i_size_read(inode));
+
+	if (netfs_inode(inode)->cache)
+		mapping_set_release_always(inode->i_mapping);
+
 }

 /*

[1] https://lore.kernel.org/linux-nfs/CALF+zOn_qX4tcT2ucq4jD3G-1ERqZkL6Cw7hx75OnQF0ivqSeA@xxxxxxxxxxxxxx/