Patch "btrfs: mark all dirty sectors as locked inside writepage_delalloc()" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    btrfs: mark all dirty sectors as locked inside writepage_delalloc()

to the 6.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     btrfs-mark-all-dirty-sectors-as-locked-inside-writep.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 182a65d6168ea39c695e59c044242a703263b0ed
Author: Qu Wenruo <wqu@xxxxxxxx>
Date:   Mon Sep 16 08:12:40 2024 +0930

    btrfs: mark all dirty sectors as locked inside writepage_delalloc()
    
    [ Upstream commit c96d0e3921419bd3e5d8a1f355970c8ae3047ef4 ]
    
    Currently we only mark sectors as locked if there is a *NEW* delalloc
    range covering them.
    
    But a NEW delalloc range is not the same as the set of dirty sectors we
    want to submit, e.g.:
    
            0       32K      64K      96K       128K
            |       |////////||///////|    |////|
                                           120K
    
    For the above case with a 64K page size, writepage_delalloc() for page 0
    will find and lock the delalloc range [32K, 96K), which goes beyond the
    page boundary.
    
    Then when writepage_delalloc() is called for the page at 64K, since
    [64K, 96K) is already locked, only [120K, 128K) will be locked.
    
    This means that, although the range [64K, 96K) is dirty and will be
    submitted later by extent_writepage_io(), it will not be marked as
    locked.
    
    This is fine for now, as we call btrfs_folio_end_writer_lock_bitmap() to
    free every non-compressed sector, and compression is only allowed for the
    full page range.
    
    But this is not safe for future sector perfect compression support, as
    it can lead to a double folio unlock:
    
                  Thread A                 |           Thread B
    ---------------------------------------+--------------------------------
                                           | submit_one_async_extent()
                                           | |- extent_clear_unlock_delalloc()
    extent_writepage()                     |    |- btrfs_folio_end_writer_lock()
    |- btrfs_folio_end_writer_lock_bitmap()|       |- btrfs_subpage_end_and_test_writer()
       |                                   |       |  |- atomic_sub_and_test()
       |                                   |       |     /* Now the atomic value is 0 */
       |- if (atomic_read() == 0)          |       |
       |- folio_unlock()                   |       |- folio_unlock()
    
    The root cause is that the above range [64K, 96K) is dirtied and should
    also be locked, but it isn't.
    
    So to make everything more consistent and prepare for the incoming
    sector perfect compression, mark all dirty sectors as locked.
    
    Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
    Signed-off-by: David Sterba <dsterba@xxxxxxxx>
    Stable-dep-of: 8bf334beb349 ("btrfs: fix double accounting race when extent_writepage_io() failed")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index ba34b92d48c2f..8222ae6f29af5 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1174,6 +1174,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 	u64 delalloc_end = page_end;
 	u64 delalloc_to_write = 0;
 	int ret = 0;
+	int bit;
 
 	/* Save the dirty bitmap as our submission bitmap will be a subset of it. */
 	if (btrfs_is_subpage(fs_info, inode->vfs_inode.i_mapping)) {
@@ -1183,6 +1184,12 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 		bio_ctrl->submit_bitmap = 1;
 	}
 
+	for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->sectors_per_page) {
+		u64 start = page_start + (bit << fs_info->sectorsize_bits);
+
+		btrfs_folio_set_writer_lock(fs_info, folio, start, fs_info->sectorsize);
+	}
+
 	/* Lock all (subpage) delalloc ranges inside the folio first. */
 	while (delalloc_start < page_end) {
 		delalloc_end = page_end;
@@ -1193,9 +1200,6 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 		}
 		set_delalloc_bitmap(folio, &delalloc_bitmap, delalloc_start,
 				    min(delalloc_end, page_end) + 1 - delalloc_start);
-		btrfs_folio_set_writer_lock(fs_info, folio, delalloc_start,
-					    min(delalloc_end, page_end) + 1 -
-					    delalloc_start);
 		last_delalloc_end = delalloc_end;
 		delalloc_start = delalloc_end + 1;
 	}



