The quilt patch titled
     Subject: nilfs2: remove calls to folio_set_error() and folio_clear_error()
has been removed from the -mm tree.  Its filename was
     nilfs2-remove-calls-to-folio_set_error-and-folio_clear_error.patch

This patch was dropped because it was merged into the mm-nonmm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: nilfs2: remove calls to folio_set_error() and folio_clear_error()
Date: Tue, 30 Apr 2024 14:09:01 +0900

Nobody checks this flag on nilfs2 folios, so stop setting and clearing it.
That lets us simplify nilfs_end_folio_io() slightly.

Link: https://lkml.kernel.org/r/20240420025029.2166544-17-willy@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20240430050901.3239-1-konishi.ryusuke@xxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@xxxxxxxxx>
Cc: kernel test robot <lkp@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Song Liu <song@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/nilfs2/dir.c     |    1 -
 fs/nilfs2/segment.c |    8 +-------
 2 files changed, 1 insertion(+), 8 deletions(-)

--- a/fs/nilfs2/dir.c~nilfs2-remove-calls-to-folio_set_error-and-folio_clear_error
+++ a/fs/nilfs2/dir.c
@@ -174,7 +174,6 @@ Eend:
 		dir->i_ino, (folio->index << PAGE_SHIFT) + offs,
 		(unsigned long)le64_to_cpu(p->inode));
 fail:
-	folio_set_error(folio);
 	return false;
 }

--- a/fs/nilfs2/segment.c~nilfs2-remove-calls-to-folio_set_error-and-folio_clear_error
+++ a/fs/nilfs2/segment.c
@@ -1725,14 +1725,8 @@ static void nilfs_end_folio_io(struct fo
 		return;
 	}

-	if (!err) {
-		if (!nilfs_folio_buffers_clean(folio))
-			filemap_dirty_folio(folio->mapping, folio);
-		folio_clear_error(folio);
-	} else {
+	if (err || !nilfs_folio_buffers_clean(folio))
 		filemap_dirty_folio(folio->mapping, folio);
-		folio_set_error(folio);
-	}
 	folio_end_writeback(folio);
 }
_
Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are