The quilt patch titled
     Subject: mm/filemap: drop streaming/uncached pages when writeback completes
has been removed from the -mm tree.  Its filename was
     mm-filemap-drop-streaming-uncached-pages-when-writeback-completes.patch

This patch was dropped because it was merged into the mm-stable branch of
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Jens Axboe <axboe@xxxxxxxxx>
Subject: mm/filemap: drop streaming/uncached pages when writeback completes
Date: Fri, 20 Dec 2024 08:47:47 -0700

If the folio is marked as streaming, drop pages when writeback completes.
Intended to be used with RWF_DONTCACHE, to avoid needing sync writes for
uncached IO.

Link: https://lkml.kernel.org/r/20241220154831.1086649-10-axboe@xxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Cc: Brian Foster <bfoster@xxxxxxxxxx>
Cc: Chris Mason <clm@xxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/filemap.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

--- a/mm/filemap.c~mm-filemap-drop-streaming-uncached-pages-when-writeback-completes
+++ a/mm/filemap.c
@@ -1571,6 +1571,27 @@ int folio_wait_private_2_killable(struct
 }
 EXPORT_SYMBOL(folio_wait_private_2_killable);
 
+/*
+ * If folio was marked as dropbehind, then pages should be dropped when writeback
+ * completes. Do that now. If we fail, it's likely because of a big folio -
+ * just reset dropbehind for that case and latter completions should invalidate.
+ */
+static void folio_end_dropbehind_write(struct folio *folio)
+{
+	/*
+	 * Hitting !in_task() should not happen off RWF_DONTCACHE writeback,
+	 * but can happen if normal writeback just happens to find dirty folios
+	 * that were created as part of uncached writeback, and that writeback
+	 * would otherwise not need non-IRQ handling. Just skip the
+	 * invalidation in that case.
+	 */
+	if (in_task() && folio_trylock(folio)) {
+		if (folio->mapping)
+			folio_unmap_invalidate(folio->mapping, folio, 0);
+		folio_unlock(folio);
+	}
+}
+
 /**
  * folio_end_writeback - End writeback against a folio.
  * @folio: The folio.
@@ -1581,6 +1602,8 @@ EXPORT_SYMBOL(folio_wait_private_2_killa
  */
 void folio_end_writeback(struct folio *folio)
 {
+	bool folio_dropbehind = false;
+
 	VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio);
 
 	/*
@@ -1602,9 +1625,14 @@ void folio_end_writeback(struct folio *f
 	 * reused before the folio_wake_bit().
 	 */
 	folio_get(folio);
+	if (!folio_test_dirty(folio))
+		folio_dropbehind = folio_test_clear_dropbehind(folio);
 	if (__folio_end_writeback(folio))
 		folio_wake_bit(folio, PG_writeback);
 	acct_reclaim_writeback(folio);
+
+	if (folio_dropbehind)
+		folio_end_dropbehind_write(folio);
 	folio_put(folio);
 }
 EXPORT_SYMBOL(folio_end_writeback);
_

Patches currently in -mm which might be from axboe@xxxxxxxxx are
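
For reference, RWF_DONTCACHE is the per-I/O flag this series targets, passed
to preadv2()/pwritev2().  A minimal userspace sketch of an uncached buffered
write might look like the program below; the fallback #define, the file name
and the buffer size are illustrative assumptions, and the flag only takes
effect on kernels that carry the uncached buffered I/O series.

/*
 * Illustrative sketch (not part of the patch above): issue a buffered write
 * with RWF_DONTCACHE so the written folios are dropped from the page cache
 * once writeback completes, rather than lingering until reclaim.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_DONTCACHE
#define RWF_DONTCACHE	0x00000080	/* fallback if the uapi headers predate the flag */
#endif

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "dontcache-test.dat";
	static char buf[1 << 20];	/* 1 MiB payload, arbitrary */
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	ssize_t ret;
	int fd;

	memset(buf, 'a', sizeof(buf));

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Buffered (non-O_DIRECT) write; with the dropbehind marking, the
	 * pages are invalidated when writeback on them completes instead
	 * of staying cached.
	 */
	ret = pwritev2(fd, &iov, 1, 0, RWF_DONTCACHE);
	if (ret < 0)
		perror("pwritev2(RWF_DONTCACHE)");
	else
		printf("wrote %zd bytes uncached\n", ret);

	close(fd);
	return ret < 0;
}

Kernels that do not recognize the flag are expected to reject it (typically
with EOPNOTSUPP from the rw_flags validation), so checking the return value
rather than assuming the uncached behavior is advisable.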