Hi,

On Fri, Jan 12, 2024 at 11:20:53PM -0600, Steve French wrote:
> Here is a patch similar to what David suggested. Seems
> straightforward fix. See attached.
> I did limited testing on it tonight with 6.1 (will do more tomorrow,
> but feedback welcome) but it did fix the regression in xfstest
> generic/001 mentioned in this thread.
>
> On Fri, Jan 12, 2024 at 8:26 AM David Howells <dhowells@xxxxxxxxxx> wrote:
> >
> > gregkh@xxxxxxxxxxxxxxxxxxx <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> >
> > > I guess I can just revert the single commit here? Can someone send me
> > > the revert that I need to do so as I get it right?
> >
> > In cifs_flush_folio() the error check for filemap_get_folio() just needs
> > changing to check !folio instead of IS_ERR(folio).
> >
> > David
>
> --
> Thanks,
>
> Steve

> From ba288a873fb8ac3d1bf5563366558a905620c071 Mon Sep 17 00:00:00 2001
> From: Steve French <stfrench@xxxxxxxxxxxxx>
> Date: Fri, 12 Jan 2024 23:08:51 -0600
> Subject: [PATCH] cifs: fix flushing folio regression for 6.1 backport
>
> filemap_get_folio works differently in 6.1 vs. later kernels
> (returning NULL in 6.1 instead of an error).
> Add this minor correction which addresses the regression in the patch:
> cifs: Fix flushing, invalidation and file size with copy_file_range()
>
> Suggested-by: David Howells <dhowells@xxxxxxxxxx>
> Reported-by: Salvatore Bonaccorso <carnil@xxxxxxxxxx>
> Signed-off-by: Steve French <stfrench@xxxxxxxxxxxxx>
> ---
>  fs/smb/client/cifsfs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
> index 2e15b182e59f..ac0b7f229a23 100644
> --- a/fs/smb/client/cifsfs.c
> +++ b/fs/smb/client/cifsfs.c
> @@ -1240,7 +1240,7 @@ static int cifs_flush_folio(struct inode *inode, loff_t pos, loff_t *_fstart, lo
>  	int rc = 0;
>
>  	folio = filemap_get_folio(inode->i_mapping, index);
> -	if (IS_ERR(folio))
> +	if ((!folio) || (IS_ERR(folio)))
>  		return 0;
>
>  	size = folio_size(folio);

I was able to test the patch with the case from the Debian bug report,
and it seems to resolve the issue. Even if late, as Greg has already
queued it up:

Tested-by: Salvatore Bonaccorso <carnil@xxxxxxxxxx>

I wonder, but cannot(!) confirm, whether there might have been other
such incorrect backports in 6.1.y. At least it does not seem to be the
case so far for the other places in v6.1.72 that check IS_ERR(folio),
as none of the others were using filemap_get_folio(). So I guess the
other cases are all okay.
In v6.1.72 I find these:

block/partitions/core.c-
block/partitions/core.c-	folio = read_mapping_folio(mapping, n >> PAGE_SECTORS_SHIFT, NULL);
block/partitions/core.c:	if (IS_ERR(folio))
--
drivers/scsi/scsicam.c-
drivers/scsi/scsicam.c-	folio = read_mapping_folio(mapping, 0, NULL);
drivers/scsi/scsicam.c:	if (IS_ERR(folio))
--
fs/ceph/addr.c-
fs/ceph/addr.c-	folio = read_mapping_folio(inode->i_mapping, 0, file);
fs/ceph/addr.c:	if (IS_ERR(folio)) {
--
fs/erofs/data.c-	folio = read_cache_folio(mapping, index, NULL, NULL);
fs/erofs/data.c-	memalloc_nofs_restore(nofs_flag);
fs/erofs/data.c:	if (IS_ERR(folio))
--
fs/ext2/dir.c-	struct folio *folio = read_mapping_folio(mapping, n, NULL);
fs/ext2/dir.c-
fs/ext2/dir.c:	if (IS_ERR(folio))
--
fs/smb/client/cifsfs.c-
fs/smb/client/cifsfs.c-	folio = filemap_get_folio(inode->i_mapping, index);
fs/smb/client/cifsfs.c:	if (IS_ERR(folio))
--
fs/verity/enable.c-	page_cache_sync_ra(&ractl, remaining_pages);
fs/verity/enable.c-	folio = read_cache_folio(ractl.mapping, index, NULL, file);
fs/verity/enable.c:	if (IS_ERR(folio))
--
mm/filemap.c-
mm/filemap.c-	folio = do_read_cache_folio(mapping, index, filler, file, gfp);
mm/filemap.c:	if (IS_ERR(folio))
--
mm/shmem.c-	huge_gfp = limit_gfp_mask(huge_gfp, gfp);
mm/shmem.c-	folio = shmem_alloc_and_acct_folio(huge_gfp, inode, index, true);
mm/shmem.c:	if (IS_ERR(folio)) {
--
mm/shmem.c-	folio = shmem_alloc_and_acct_folio(gfp, inode, index, false);
mm/shmem.c-	}
mm/shmem.c:	if (IS_ERR(folio)) {

where only fs/smb/client/cifsfs.c was using filemap_get_folio() in this
context; the other callers use read_mapping_folio(), read_cache_folio()
and similar helpers, which already return ERR_PTR values in v6.1.

Regards,
Salvatore