Patch "netfs: Fix missing wakeup after issuing writes" has been added to the 6.11-stable tree

This is a note to let you know that I've just added the patch titled

    netfs: Fix missing wakeup after issuing writes

to the 6.11-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     netfs-fix-missing-wakeup-after-issuing-writes.patch
and it can be found in the queue-6.11 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 2e0d0ce01e8e37cbe02264d1be6f3bd64c99cad7
Author: David Howells <dhowells@xxxxxxxxxx>
Date:   Wed Oct 2 15:45:50 2024 +0100

    netfs: Fix missing wakeup after issuing writes
    
    [ Upstream commit 1ca4169c391c370e0f3a92938df2862900575096 ]
    
    After dividing up a proposed write into subrequests, netfslib sets
    NETFS_RREQ_ALL_QUEUED to indicate to the collector that it can move on to
    the final cleanup once it has emptied the subrequest queues.
    
    Now, whilst the collector will normally end up running at least once after
    this bit is set, simply because it takes a while to process all the write
    subrequests before it runs out of them, there exists the possibility that
    the issuing thread will be forced to sleep and the collector thread will
    clean up all the subrequests before ALL_QUEUED gets set.
    
    In such a case, the collector thread will not get triggered again and will
    never clear NETFS_RREQ_IN_PROGRESS, thus leaving the request uncompleted
    and causing a potential future hang.
    
    Fix this by scheduling the write collector if all the subrequest queues are
    empty (and thus no writes are pending issuance).
    
    Note that ideally we'd do this before queuing the subrequest, but in the
    case of buffered writeback, at least, we can't find out that we've run out
    of folios until after we've called writeback_iter() and it has returned
    NULL - at which point we might not actually have any subrequests still
    under construction.
    
    Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
    Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/3317784.1727880350@xxxxxxxxxxxxxxxxxxxxxx
    cc: Jeff Layton <jlayton@xxxxxxxxxx>
    cc: netfs@xxxxxxxxxxxxxxx
    cc: linux-fsdevel@xxxxxxxxxxxxxxx
    Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 3f7e37e50c7d0..9486e54b1e563 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -494,6 +494,30 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
 	return 0;
 }
 
+/*
+ * End the issuing of writes, letting the collector know we're done.
+ */
+static void netfs_end_issue_write(struct netfs_io_request *wreq)
+{
+	bool needs_poke = true;
+
+	smp_wmb(); /* Write subreq lists before ALL_QUEUED. */
+	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+
+	for (int s = 0; s < NR_IO_STREAMS; s++) {
+		struct netfs_io_stream *stream = &wreq->io_streams[s];
+
+		if (!stream->active)
+			continue;
+		if (!list_empty(&stream->subrequests))
+			needs_poke = false;
+		netfs_issue_write(wreq, stream);
+	}
+
+	if (needs_poke)
+		netfs_wake_write_collector(wreq, false);
+}
+
 /*
  * Write some of the pending data back to the server
  */
@@ -541,10 +565,7 @@ int netfs_writepages(struct address_space *mapping,
 			break;
 	} while ((folio = writeback_iter(mapping, wbc, folio, &error)));
 
-	for (int s = 0; s < NR_IO_STREAMS; s++)
-		netfs_issue_write(wreq, &wreq->io_streams[s]);
-	smp_wmb(); /* Write lists before ALL_QUEUED. */
-	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+	netfs_end_issue_write(wreq);
 
 	mutex_unlock(&ictx->wb_lock);
 
@@ -632,10 +653,7 @@ int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_contr
 	if (writethrough_cache)
 		netfs_write_folio(wreq, wbc, writethrough_cache);
 
-	netfs_issue_write(wreq, &wreq->io_streams[0]);
-	netfs_issue_write(wreq, &wreq->io_streams[1]);
-	smp_wmb(); /* Write lists before ALL_QUEUED. */
-	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+	netfs_end_issue_write(wreq);
 
 	mutex_unlock(&ictx->wb_lock);
 
@@ -680,13 +698,7 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t
 			break;
 	}
 
-	netfs_issue_write(wreq, upload);
-
-	smp_wmb(); /* Write lists before ALL_QUEUED. */
-	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
-	if (list_empty(&upload->subrequests))
-		netfs_wake_write_collector(wreq, false);
-
+	netfs_end_issue_write(wreq);
 	_leave(" = %d", error);
 	return error;
 }
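

For readers unfamiliar with the netfs write-collection machinery, the lost
wakeup that netfs_end_issue_write() closes can be modelled in a few lines of
ordinary userspace C.  This is only an illustrative sketch, not kernel code:
the names pending, all_queued, in_progress and collector are invented
stand-ins for the per-stream subrequest lists, NETFS_RREQ_ALL_QUEUED and
NETFS_RREQ_IN_PROGRESS, and the final "queue already empty, so poke the
collector" check in main() mirrors the check the new helper adds.

/*
 * Illustrative sketch only (not part of the patch): a userspace model of
 * the lost wakeup that the new netfs_end_issue_write() helper prevents.
 * All names here are invented for the sake of the example.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static int  pending;            /* models the subrequest queues */
static bool all_queued;         /* models NETFS_RREQ_ALL_QUEUED */
static bool in_progress = true; /* models NETFS_RREQ_IN_PROGRESS */

/* The collector: drain the queue, then finish once ALL_QUEUED is set. */
static void *collector(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	for (;;) {
		while (pending > 0)
			pending--;              /* "process" a subrequest */
		if (all_queued) {
			in_progress = false;    /* request completes */
			break;
		}
		/* Queue empty but issuing not finished: sleep until poked. */
		pthread_cond_wait(&wake, &lock);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, collector, NULL);

	/* The issuer: queue some subrequests, waking the collector as we go. */
	for (int i = 0; i < 3; i++) {
		pthread_mutex_lock(&lock);
		pending++;
		pthread_cond_signal(&wake);
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	all_queued = true;
	/*
	 * The essence of the fix: if the collector has already drained the
	 * queue, nothing else will ever wake it, so poke it here.  Drop this
	 * signal and the collector can sleep forever with in_progress still
	 * set - the hang described in the commit message.
	 */
	if (pending == 0)
		pthread_cond_signal(&wake);
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	printf("request complete, in_progress=%d\n", in_progress);
	return 0;
}

Built with "cc -pthread" and run, the model completes and prints
"request complete, in_progress=0"; remove the final signal and, depending on
scheduling, the program may sleep forever in the same way the unfixed kernel
path could.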



