Patch "netfs: Fix non-contiguous donation between completed reads" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    netfs: Fix non-contiguous donation between completed reads

to the 6.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     netfs-fix-non-contiguous-donation-between-completed-.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit d712fb757e6ea8e0a3bd69826237cc850d986808
Author: David Howells <dhowells@xxxxxxxxxx>
Date:   Fri Dec 13 13:50:02 2024 +0000

    netfs: Fix non-contiguous donation between completed reads
    
    [ Upstream commit c8b90d40d5bba8e6fba457b8a7c10d3c0d467e37 ]
    
    When a read subrequest finishes, if it doesn't have sufficient coverage to
    complete the folio(s) covering either side of it, it will donate the excess
    coverage to the adjacent subrequests on either side, offloading
    responsibility for unlocking the folio(s) covered to them.
    
    Now, preference is given to donating down to a lower file offset over
    donating up because that check is done first - but there's no check that
    the lower subreq is actually contiguous, and so we can end up donating
    incorrectly.
    
    The scenario seen[1] is that an 8MiB readahead request spanning four 2MiB
    folios is split into eight 1MiB subreqs (numbered 1 through 8).  These
    terminate in the order 1,6,2,5,3,7,4,8.  What happens is:
    
            - 1 donates to 2
            - 6 donates to 5
            - 2 completes, unlocking the first folio (with 1).
            - 5 completes, unlocking the third folio (with 6).
            - 3 donates to 4
            - 7 donates to 4 incorrectly
            - 4 completes, unlocking the second folio (with 3), but can't use
              the excess from 7.
            - 8 donates to 4, also incorrectly.
    
    Fix this by preventing downward donation if the subreqs are not contiguous
    (in the example above, 7 donates to 4 across the gap left by 5 and 6).
    
    Reported-by: Shyam Prasad N <nspmangalore@xxxxxxxxx>
    Closes: https://lore.kernel.org/r/CANT5p=qBwjBm-D8soFVVtswGEfmMtQXVW83=TNfUtvyHeFQZBA@xxxxxxxxxxxxxx/
    Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/526707.1733224486@xxxxxxxxxxxxxxxxxxxxxx/ [1]
    Link: https://lore.kernel.org/r/20241213135013.2964079-3-dhowells@xxxxxxxxxx
    cc: Steve French <sfrench@xxxxxxxxx>
    cc: Paulo Alcantara <pc@xxxxxxxxxxxxx>
    cc: Jeff Layton <jlayton@xxxxxxxxxx>
    cc: linux-cifs@xxxxxxxxxxxxxxx
    cc: netfs@xxxxxxxxxxxxxxx
    cc: linux-fsdevel@xxxxxxxxxxxxxxx
    Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index e70eb4ea21c03..a44132c986538 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -249,16 +249,17 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
 
 	/* Deal with the trickiest case: that this subreq is in the middle of a
 	 * folio, not touching either edge, but finishes first.  In such a
-	 * case, we donate to the previous subreq, if there is one, so that the
-	 * donation is only handled when that completes - and remove this
-	 * subreq from the list.
+	 * case, we donate to the previous subreq, if there is one and if it is
+	 * contiguous, so that the donation is only handled when that completes
+	 * - and remove this subreq from the list.
 	 *
 	 * If the previous subreq finished first, we will have acquired their
 	 * donation and should be able to unlock folios and/or donate nextwards.
 	 */
 	if (!subreq->consumed &&
 	    !prev_donated &&
-	    !list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
+	    !list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
+	    subreq->start == prev->start + prev->len) {
 		prev = list_prev_entry(subreq, rreq_link);
 		WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
 		subreq->start += subreq->len;
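
For readers who want to see the new contiguity condition in isolation, here is a minimal
userspace sketch.  It is not kernel code: the struct and helper names are invented for
illustration and do not exist in fs/netfs/; only the comparison itself mirrors the
"subreq->start == prev->start + prev->len" test the patch adds before a subrequest is
allowed to donate downwards.

	/* Minimal userspace model of the downward-donation decision. */
	#include <stdbool.h>
	#include <stdio.h>

	struct subreq_model {
		unsigned long long start;	/* file offset covered by this subrequest */
		unsigned long long len;		/* number of bytes it covers */
	};

	/*
	 * A middle-of-folio subrequest may donate its coverage to the previous
	 * subrequest in the list only if that subrequest ends exactly where
	 * this one begins (i.e. they are contiguous in the file).
	 */
	static bool can_donate_down(const struct subreq_model *prev,
				    const struct subreq_model *subreq)
	{
		return subreq->start == prev->start + prev->len;
	}

	int main(void)
	{
		/* Subreqs 4 and 7 from the example above: once 5 and 6 have
		 * completed and left the list, 4 is the list predecessor of 7,
		 * but the two are not contiguous in the file. */
		struct subreq_model sr4 = { .start = 3 << 20, .len = 1 << 20 };
		struct subreq_model sr7 = { .start = 6 << 20, .len = 1 << 20 };

		printf("7 may donate down to 4: %s\n",
		       can_donate_down(&sr4, &sr7) ? "yes" : "no");	/* prints "no" */
		return 0;
	}

With the check in place, subrequest 7 in the scenario above would keep (or donate upwards)
its excess instead of handing it across the gap left by 5 and 6 to subrequest 4.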

