Patch "netfs: Fix trimming of streaming-write folios in netfs_inval_folio()" has been added to the 6.10-stable tree

This is a note to let you know that I've just added the patch titled

    netfs: Fix trimming of streaming-write folios in netfs_inval_folio()

to the 6.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     netfs-fix-trimming-of-streaming-write-folios-in-netf.patch
and it can be found in the queue-6.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 74e5aa03f1e98a0daeff4dece1f2b6a8a4d3bd45
Author: David Howells <dhowells@xxxxxxxxxx>
Date:   Fri Aug 23 21:08:12 2024 +0100

    netfs: Fix trimming of streaming-write folios in netfs_inval_folio()
    
    [ Upstream commit cce6bfa6ca0e30af9927b0074c97fe6a92f28092 ]
    
    When netfslib writes to a folio that it doesn't have data for, but that
    data exists on the server, it will make a 'streaming write' whereby it
    stores data in a folio that is marked dirty, but not uptodate.  When it
    does this, it attaches a record to folio->private to track the dirty
    region.
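    
    For context, the folio->private record in question is netfs's
    struct netfs_folio; a minimal sketch of its shape, abridged from
    include/linux/netfs.h:
    
            struct netfs_folio {
                    struct netfs_group *netfs_group; /* Writeback group or NULL */
                    unsigned int dirty_offset; /* Offset of dirty data in folio */
                    unsigned int dirty_len;    /* Length of the dirty data */
            };
    
    The half-open span [dirty_offset, dirty_offset + dirty_len) is the
    only part of the folio that holds valid data, which is why
    invalidation must trim the record rather than simply drop the folio
    contents.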
    
    When truncate() or fallocate() wants to invalidate part of such a folio, it
    will call into ->invalidate_folio(), specifying the part of the folio that
    is to be invalidated.  netfs_invalidate_folio(), on behalf of the
    filesystem, must then determine how to trim the streaming write record.  In
    two of the cases, however, it does this incorrectly: the reduce-length and
    move-start cases are transposed, and neither calculates the surviving
    dirty region correctly.
    
    Fix this by making the logic tree more obvious and fixing the cases.
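    
    To make the logic tree concrete, here is a hypothetical,
    self-contained restatement of the case analysis (classify_trim() is
    an illustrative helper, not part of the patch).  Offsets are within
    a single folio, with the dirty data at [fstart, fend) and the
    invalidated span at [offset, iend):
    
            enum trim_case {
                    TRIM_NONE,        /* No overlap: nothing to do */
                    TRIM_ERASE,       /* Data entirely covered: erase record */
                    TRIM_MOVE_START,  /* Head of data covered: move the start */
                    TRIM_REDUCE_LEN,  /* Tail of data covered: reduce the length */
                    TRIM_ABSORB_HOLE, /* Hole in the middle: absorb it */
            };
    
            static enum trim_case classify_trim(unsigned int fstart,
                                                unsigned int fend,
                                                unsigned int offset,
                                                unsigned int iend)
            {
                    if (offset >= fend || iend <= fstart)
                            return TRIM_NONE;
                    if (offset <= fstart)
                            return iend >= fend ? TRIM_ERASE : TRIM_MOVE_START;
                    if (iend >= fend)
                            return TRIM_REDUCE_LEN;
                    return TRIM_ABSORB_HOLE;
            }
    
    For example, with dirty data at [100, 300), invalidating [50, 200)
    now takes the move-start path and invalidating [200, 400) the
    reduce-length path.  Under the old tree, each of these jumped to the
    other's label, and the length computed there described the
    invalidated overlap rather than the surviving data.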
    
    Fixes: 9ebff83e6481 ("netfs: Prep to use folio->private for write grouping and streaming write")
    Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/20240823200819.532106-5-dhowells@xxxxxxxxxx
    cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
    cc: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
    cc: Jeff Layton <jlayton@xxxxxxxxxx>
    cc: Marc Dionne <marc.dionne@xxxxxxxxxxxx>
    cc: linux-afs@xxxxxxxxxxxxxxxxxxx
    cc: netfs@xxxxxxxxxxxxxxx
    cc: linux-mm@xxxxxxxxx
    cc: linux-fsdevel@xxxxxxxxxxxxxxx
    Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 21acf4b092a46..a46bf569303fc 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -97,10 +97,20 @@ EXPORT_SYMBOL(netfs_clear_inode_writeback);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
 	struct netfs_folio *finfo;
+	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
 	size_t flen = folio_size(folio);
 
 	kenter("{%lx},%zx,%zx", folio->index, offset, length);
 
+	if (offset == 0 && length == flen) {
+		unsigned long long i_size = i_size_read(&ctx->inode);
+		unsigned long long fpos = folio_pos(folio), end;
+
+		end = umin(fpos + flen, i_size);
+		if (fpos < i_size && end > ctx->zero_point)
+			ctx->zero_point = end;
+	}
+
 	folio_wait_private_2(folio); /* [DEPRECATED] */
 
 	if (!folio_test_private(folio))
@@ -115,18 +125,34 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 		/* We have a partially uptodate page from a streaming write. */
 		unsigned int fstart = finfo->dirty_offset;
 		unsigned int fend = fstart + finfo->dirty_len;
-		unsigned int end = offset + length;
+		unsigned int iend = offset + length;
 
 		if (offset >= fend)
 			return;
-		if (end <= fstart)
+		if (iend <= fstart)
+			return;
+
+		/* The invalidation region overlaps the data.  If the region
+		 * covers the start of the data, we either move along the start
+		 * or just erase the data entirely.
+		 */
+		if (offset <= fstart) {
+			if (iend >= fend)
+				goto erase_completely;
+			/* Move the start of the data. */
+			finfo->dirty_len = fend - iend;
+			finfo->dirty_offset = offset;
+			return;
+		}
+
+		/* Reduce the length of the data if the invalidation region
+		 * covers the tail part.
+		 */
+		if (iend >= fend) {
+			finfo->dirty_len = offset - fstart;
 			return;
-		if (offset <= fstart && end >= fend)
-			goto erase_completely;
-		if (offset <= fstart && end > fstart)
-			goto reduce_len;
-		if (offset > fstart && end >= fend)
-			goto move_start;
+		}
+
 		/* A partial write was split.  The caller has already zeroed
 		 * it, so just absorb the hole.
 		 */
@@ -139,12 +165,6 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 	folio_clear_uptodate(folio);
 	kfree(finfo);
 	return;
-reduce_len:
-	finfo->dirty_len = offset + length - finfo->dirty_offset;
-	return;
-move_start:
-	finfo->dirty_len -= offset - finfo->dirty_offset;
-	finfo->dirty_offset = offset;
 }
 EXPORT_SYMBOL(netfs_invalidate_folio);
 
@@ -164,7 +184,7 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
 	if (folio_test_dirty(folio))
 		return false;
 
-	end = folio_pos(folio) + folio_size(folio);
+	end = umin(folio_pos(folio) + folio_size(folio), i_size_read(&ctx->inode));
 	if (end > ctx->zero_point)
 		ctx->zero_point = end;
 



