The quilt patch titled
     Subject: iomap: hold state_lock over call to ifs_set_range_uptodate()
has been removed from the -mm tree.  Its filename was
     iomap-hold-state_lock-over-call-to-ifs_set_range_uptodate.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: iomap: hold state_lock over call to ifs_set_range_uptodate()
Date: Wed, 4 Oct 2023 17:53:01 +0100

Patch series "Add folio_end_read", v2.

The core of this patchset is the new folio_end_read() call which
filesystems can use when finishing a page cache read instead of separate
calls to mark the folio uptodate and unlock it.  As an illustration of
its use, I converted ext4, iomap & mpage; more can be converted.

I think that's useful by itself, but the interesting optimisation is that
we can implement that with a single XOR instruction that sets the
uptodate bit, clears the lock bit, tests the waiter bit and provides a
write memory barrier.  That removes one memory barrier and one atomic
instruction from each page read, which seems worth doing.  That's in
patch 15.

The last two patches could be a separate series, but basically we can do
the same thing with the writeback flag that we do with the unlock flag;
clear it and test the waiters bit at the same time.


This patch (of 17):

This is really preparation for the next patch, but it lets us call
folio_mark_uptodate() in just one place instead of two.

Link: https://lkml.kernel.org/r/20231004165317.1061855-1-willy@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20231004165317.1061855-2-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: "Theodore Ts'o" <tytso@xxxxxxx>
Cc: Andreas Dilger <adilger.kernel@xxxxxxxxx>
Cc: Richard Henderson <richard.henderson@xxxxxxxxxx>
Cc: Ivan Kokshaysky <ink@xxxxxxxxxxxxxxxxxxxx>
Cc: Matt Turner <mattst88@xxxxxxxxx>
Cc: Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Cc: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/iomap/buffered-io.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/fs/iomap/buffered-io.c~iomap-hold-state_lock-over-call-to-ifs_set_range_uptodate
+++ a/fs/iomap/buffered-io.c
@@ -57,30 +57,32 @@ static inline bool ifs_block_is_uptodate
 	return test_bit(block, ifs->state);
 }
 
-static void ifs_set_range_uptodate(struct folio *folio,
+static bool ifs_set_range_uptodate(struct folio *folio,
 		struct iomap_folio_state *ifs, size_t off, size_t len)
 {
 	struct inode *inode = folio->mapping->host;
 	unsigned int first_blk = off >> inode->i_blkbits;
 	unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
 	unsigned int nr_blks = last_blk - first_blk + 1;
-	unsigned long flags;
 
-	spin_lock_irqsave(&ifs->state_lock, flags);
 	bitmap_set(ifs->state, first_blk, nr_blks);
-	if (ifs_is_fully_uptodate(folio, ifs))
-		folio_mark_uptodate(folio);
-	spin_unlock_irqrestore(&ifs->state_lock, flags);
+	return ifs_is_fully_uptodate(folio, ifs);
 }
 
 static void iomap_set_range_uptodate(struct folio *folio, size_t off,
 		size_t len)
 {
 	struct iomap_folio_state *ifs = folio->private;
+	unsigned long flags;
+	bool uptodate = true;
+
+	if (ifs) {
+		spin_lock_irqsave(&ifs->state_lock, flags);
+		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
+		spin_unlock_irqrestore(&ifs->state_lock, flags);
+	}
 
-	if (ifs)
-		ifs_set_range_uptodate(folio, ifs, off, len);
-	else
+	if (uptodate)
 		folio_mark_uptodate(folio);
 }
_
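
A minimal, illustrative C11 sketch of the single-XOR idea from the cover
letter above: one atomic fetch-xor sets the uptodate bit, clears the lock
bit and hands back the old flags so the waiters bit can be tested, with
release ordering standing in for the write memory barrier.  The flag
values, struct folio_sketch and folio_end_read_sketch() are made-up names
for illustration only; they are not the API that patch 15 of the series
adds.

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative flag bits; the real folio flags live in page->flags. */
#define SKETCH_locked		(1UL << 0)
#define SKETCH_uptodate		(1UL << 1)
#define SKETCH_waiters		(1UL << 2)

struct folio_sketch {
	_Atomic unsigned long flags;
};

/*
 * Finish a read: mark uptodate (on success) and unlock in one atomic
 * XOR.  The folio is locked and not yet uptodate here, so the XOR
 * clears the lock bit and sets the uptodate bit.  Returns true if a
 * waiter needs to be woken.
 */
static bool folio_end_read_sketch(struct folio_sketch *folio, bool success)
{
	unsigned long mask = SKETCH_locked;
	unsigned long old;

	if (success)
		mask |= SKETCH_uptodate;
	old = atomic_fetch_xor_explicit(&folio->flags, mask,
					memory_order_release);
	return old & SKETCH_waiters;
}

Combining the set and the clear into a single operation is what removes
the extra atomic and memory barrier that separate calls to mark the folio
uptodate and unlock it would need, which is the saving the cover letter
describes.
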
Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

buffer-make-folio_create_empty_buffers-return-a-buffer_head.patch
mpage-convert-map_buffer_to_folio-to-folio_create_empty_buffers.patch
ext4-convert-to-folio_create_empty_buffers.patch
buffer-add-get_nth_bh.patch
gfs2-convert-inode-unstuffing-to-use-a-folio.patch
gfs2-convert-gfs2_getbuf-to-folios.patch
gfs2-convert-gfs2_getjdatabuf-to-use-a-folio.patch
gfs2-convert-gfs2_write_buf_to_page-to-use-a-folio.patch
nilfs2-convert-nilfs_mdt_freeze_buffer-to-use-a-folio.patch
nilfs2-convert-nilfs_grab_buffer-to-use-a-folio.patch
nilfs2-convert-nilfs_copy_page-to-nilfs_copy_folio.patch
nilfs2-convert-nilfs_mdt_forget_block-to-use-a-folio.patch
nilfs2-convert-nilfs_mdt_get_frozen_buffer-to-use-a-folio.patch
nilfs2-remove-nilfs_page_get_nth_block.patch
nilfs2-convert-nilfs_lookup_dirty_data_buffers-to-use-folio_create_empty_buffers.patch
ntfs-convert-ntfs_read_block-to-use-a-folio.patch
ntfs-convert-ntfs_writepage-to-use-a-folio.patch
ntfs-convert-ntfs_prepare_pages_for_non_resident_write-to-folios.patch
ntfs3-convert-ntfs_zero_range-to-use-a-folio.patch
ocfs2-convert-ocfs2_map_page_blocks-to-use-a-folio.patch
reiserfs-convert-writepage-to-use-a-folio.patch
ufs-add-ufs_get_locked_folio-and-ufs_put_locked_folio.patch
ufs-use-ufs_get_locked_folio-in-ufs_alloc_lastblock.patch
ufs-convert-ufs_change_blocknr-to-use-folios.patch
ufs-remove-ufs_get_locked_page.patch
buffer-remove-folio_create_empty_buffers.patch