Patch "iomap: pass byte granular end position to iomap_add_to_ioend" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    iomap: pass byte granular end position to iomap_add_to_ioend

to the 6.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     iomap-pass-byte-granular-end-position-to-iomap_add_t.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit b3d04c479944b549209c7b8b67402f6fa76a5615
Author: Long Li <leo.lilong@xxxxxxxxxx>
Date:   Mon Dec 9 19:42:39 2024 +0800

    iomap: pass byte granular end position to iomap_add_to_ioend
    
    [ Upstream commit b44679c63e4d3ac820998b6bd59fba89a72ad3e7 ]
    
    This is a preparatory patch for fixing zero padding issues in concurrent
    append write scenarios. The following patches need the byte-granular
    writeback end position for io_size trimming after EOF handling.
    
    Due to concurrent writeback and truncate operations, the inode size may
    shrink. Resampling the inode size would force the writeback code to
    handle newly appearing post-EOF blocks, which is undesirable. As Dave
    explained in [1]:
    
    "Really, the issue is that writeback mappings have to be able to
    handle the range being mapped suddenly appear to be beyond EOF.
    This behaviour is a longstanding writeback constraint, and is what
    iomap_writepage_handle_eof() is attempting to handle.
    
    We handle this by only sampling i_size_read() whilst we have the
    folio locked and can determine the action we should take with that
    folio (i.e. nothing, partial zeroing, or skip altogether). Once
    we've made the decision that the folio is within EOF and taken
    action on it (i.e. moved the folio to writeback state), we cannot
    then resample the inode size because a truncate may have started
    and changed the inode size."
    
    To avoid resampling the inode size after EOF handling, we convert
    end_pos to a byte-granular writeback position and return it from the
    EOF handling function.
    
    Since iomap_set_range_dirty() can handle unaligned lengths, this
    conversion has no impact on it. However, iomap_find_dirty_range()
    requires a block-aligned start and end to find dirty blocks within the
    given range, so the end position needs to be rounded up before being
    passed to it.
    
    LINK [1]: https://lore.kernel.org/linux-xfs/Z1Gg0pAa54MoeYME@localhost.localdomain/
    
    Signed-off-by: Long Li <leo.lilong@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20241209114241.3725722-2-leo.lilong@xxxxxxxxxx
    Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
    Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
    Stable-dep-of: 51d20d1dacbe ("iomap: fix zero padding data issue in concurrent append writes")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
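
For illustration only, and not part of the patch: below is a minimal
userspace C sketch of the alignment idea described above. The block size,
inode size, and folio size values are made up, and the round_up() macro is
a portable stand-in for the kernel helper of the same name. It only
demonstrates that the byte-granular end position is capped at EOF while
the value handed to the dirty-range lookup is rounded up to the block
size, mirroring end_pos and end_aligned in the diff below.

/* Standalone sketch with hypothetical values; not kernel code. */
#include <stdint.h>
#include <stdio.h>

/* Portable equivalent of the kernel's round_up() for positive y. */
#define round_up(x, y)	((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	uint64_t blocksize = 4096;	/* stands in for i_blocksize(inode) */
	uint64_t isize = 10000;		/* i_size sampled under the folio lock */
	uint64_t folio_end = 16384;	/* pos + folio_size(folio) */

	/* Byte-granular writeback end: capped at EOF, left unaligned. */
	uint64_t end_pos = isize < folio_end ? isize : folio_end;

	/* Block-aligned end used only when scanning for dirty ranges. */
	uint64_t end_aligned = round_up(end_pos, blocksize);

	/* Prints end_pos = 10000, end_aligned = 12288. */
	printf("end_pos = %llu, end_aligned = %llu\n",
	       (unsigned long long)end_pos,
	       (unsigned long long)end_aligned);
	return 0;
}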

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ce73d2a48c1e..05e5cc3bf976 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1764,7 +1764,8 @@ static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos)
  */
 static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, loff_t pos, unsigned len)
+		struct inode *inode, loff_t pos, loff_t end_pos,
+		unsigned len)
 {
 	struct iomap_folio_state *ifs = folio->private;
 	size_t poff = offset_in_folio(folio, pos);
@@ -1790,8 +1791,8 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
 
 static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct folio *folio,
-		struct inode *inode, u64 pos, unsigned dirty_len,
-		unsigned *count)
+		struct inode *inode, u64 pos, u64 end_pos,
+		unsigned dirty_len, unsigned *count)
 {
 	int error;
 
@@ -1816,7 +1817,7 @@ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
 			break;
 		default:
 			error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos,
-					map_len);
+					end_pos, map_len);
 			if (!error)
 				(*count)++;
 			break;
@@ -1887,11 +1888,11 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
 		 *    remaining memory is zeroed when mapped, and writes to that
 		 *    region are not written out to the file.
 		 *
-		 * Also adjust the writeback range to skip all blocks entirely
-		 * beyond i_size.
+		 * Also adjust the end_pos to the end of file and skip writeback
+		 * for all blocks entirely beyond i_size.
 		 */
 		folio_zero_segment(folio, poff, folio_size(folio));
-		*end_pos = round_up(isize, i_blocksize(inode));
+		*end_pos = isize;
 	}
 
 	return true;
@@ -1904,6 +1905,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	struct inode *inode = folio->mapping->host;
 	u64 pos = folio_pos(folio);
 	u64 end_pos = pos + folio_size(folio);
+	u64 end_aligned = 0;
 	unsigned count = 0;
 	int error = 0;
 	u32 rlen;
@@ -1945,9 +1947,10 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	/*
 	 * Walk through the folio to find dirty areas to write back.
 	 */
-	while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) {
+	end_aligned = round_up(end_pos, i_blocksize(inode));
+	while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) {
 		error = iomap_writepage_map_blocks(wpc, wbc, folio, inode,
-				pos, rlen, &count);
+				pos, end_pos, rlen, &count);
 		if (error)
 			break;
 		pos += rlen;



