Patch "xfs: fix xfs_inodegc_stop racing with mod_delayed_work" has been added to the 5.15-stable tree

This is a note to let you know that I've just added the patch titled

    xfs: fix xfs_inodegc_stop racing with mod_delayed_work

to the 5.15-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     xfs-fix-xfs_inodegc_stop-racing-with-mod_delayed_wor.patch
and it can be found in the queue-5.15 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 40c124cf66ae4ddfa82dbe3c34a357a0cae38c66
Author: Darrick J. Wong <djwong@xxxxxxxxxx>
Date:   Thu Sep 21 18:01:56 2023 -0700

    xfs: fix xfs_inodegc_stop racing with mod_delayed_work
    
    [ Upstream commit 2254a7396a0ca6309854948ee1c0a33fa4268cec ]
    
    syzbot reported this warning from the faux inodegc shrinker that tries
    to kick off inodegc work:
    
    ------------[ cut here ]------------
    WARNING: CPU: 1 PID: 102 at kernel/workqueue.c:1445 __queue_work+0xd44/0x1120 kernel/workqueue.c:1444
    RIP: 0010:__queue_work+0xd44/0x1120 kernel/workqueue.c:1444
    Call Trace:
     __queue_delayed_work+0x1c8/0x270 kernel/workqueue.c:1672
     mod_delayed_work_on+0xe1/0x220 kernel/workqueue.c:1746
     xfs_inodegc_shrinker_scan fs/xfs/xfs_icache.c:2212 [inline]
     xfs_inodegc_shrinker_scan+0x250/0x4f0 fs/xfs/xfs_icache.c:2191
     do_shrink_slab+0x428/0xaa0 mm/vmscan.c:853
     shrink_slab+0x175/0x660 mm/vmscan.c:1013
     shrink_one+0x502/0x810 mm/vmscan.c:5343
     shrink_many mm/vmscan.c:5394 [inline]
     lru_gen_shrink_node mm/vmscan.c:5511 [inline]
     shrink_node+0x2064/0x35f0 mm/vmscan.c:6459
     kswapd_shrink_node mm/vmscan.c:7262 [inline]
     balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7452
     kswapd+0x677/0xd60 mm/vmscan.c:7712
     kthread+0x2e8/0x3a0 kernel/kthread.c:376
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
    
    This warning corresponds to this code in __queue_work:
    
            /*
             * For a draining wq, only works from the same workqueue are
             * allowed. The __WQ_DESTROYING helps to spot the issue that
             * queues a new work item to a wq after destroy_workqueue(wq).
             */
            if (unlikely(wq->flags & (__WQ_DESTROYING | __WQ_DRAINING) &&
                         WARN_ON_ONCE(!is_chained_work(wq))))
                    return;
    
    For this to trip, we must have a thread draining the inodegc workqueue
    and a second thread trying to queue inodegc work to that workqueue.
    This can happen if freezing or a ro remount race with reclaim poking our
    faux inodegc shrinker and another thread dropping an unlinked O_RDONLY
    file:
    
    Thread 0        Thread 1        Thread 2
    
    xfs_inodegc_stop
    
                                    xfs_inodegc_shrinker_scan
                                    xfs_is_inodegc_enabled
                                    <yes, will continue>
    
    xfs_clear_inodegc_enabled
    xfs_inodegc_queue_all
    <list empty, do not queue inodegc worker>
    
                    xfs_inodegc_queue
                    <add to list>
                    xfs_is_inodegc_enabled
                    <no, returns>
    
    drain_workqueue
    <set WQ_DRAINING>
    
                                    llist_empty
                                    <no, will queue list>
                                    mod_delayed_work_on(..., 0)
                                    __queue_work
                                    <sees WQ_DRAINING, kaboom>
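
    Condensed, the shrinker side of that window looks roughly like this
    (a paraphrased sketch of xfs_inodegc_shrinker_scan, not the verbatim
    5.15 source):

            if (!xfs_is_inodegc_enabled(mp))
                    return SHRINK_STOP;

            /* xfs_inodegc_stop can clear the flag and start draining here */

            for_each_online_cpu(cpu) {
                    gc = per_cpu_ptr(mp->m_inodegc, cpu);
                    if (!llist_empty(&gc->list))
                            /* pokes a workqueue that may now be WQ_DRAINING */
                            mod_delayed_work_on(cpu, mp->m_inodegc_wq,
                                            &gc->work, 0);
            }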
    
    In other words, everything between sampling the inodegc_enabled state
    and deciding to poke the inodegc workqueue requires some kind of
    coordination to avoid the WQ_DRAINING state.  We could perhaps
    introduce a lock here, but we could also try to eliminate WQ_DRAINING
    from the picture.
    
    We could replace the drain_workqueue call with a loop that flushes the
    workqueue and queues workers as long as there is at least one inode
    present in the per-cpu inodegc llists.  We've disabled inodegc at this
    point, so we know that the number of queued inodes will eventually hit
    zero as long as xfs_inodegc_start cannot reactivate the workers.
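
    Concretely, the drain_workqueue call becomes the following loop (the
    full context is in the diff below):

            xfs_inodegc_queue_all(mp);
            do {
                    flush_workqueue(mp->m_inodegc_wq);
                    rerun = xfs_inodegc_queue_all(mp);
            } while (rerun);

    xfs_inodegc_queue_all now returns true if it queued any work, so the
    loop terminates once a flush completes with every per-cpu llist empty.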
    
    There are four callers of xfs_inodegc_start.  Three of them come from the
    VFS with s_umount held: filesystem thawing, failed filesystem freezing,
    and the rw remount transition.  The fourth caller is mounting rw (no
    remount or freezing possible).
    
    There are three callers of xfs_inodegc_stop.  One is unmounting (no
    remount or thaw possible).  Two of them come from the VFS with s_umount
    held: fs freezing and the ro remount transition.
    
    Hence, it is correct to replace the drain_workqueue call with a loop
    that drains the inodegc llists.
    
    Fixes: 6191cf3ad59f ("xfs: flush inodegc workqueue tasks before cancel")
    Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
    Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>
    Signed-off-by: Dave Chinner <david@xxxxxxxxxxxxx>
    Signed-off-by: Leah Rumancik <leah.rumancik@xxxxxxxxx>
    Acked-by: Darrick J. Wong <djwong@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 02022164772de..eab98d76dbe1f 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -448,18 +448,23 @@ xfs_iget_check_free_state(
 }
 
 /* Make all pending inactivation work start immediately. */
-static void
+static bool
 xfs_inodegc_queue_all(
 	struct xfs_mount	*mp)
 {
 	struct xfs_inodegc	*gc;
 	int			cpu;
+	bool			ret = false;
 
 	for_each_online_cpu(cpu) {
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
-		if (!llist_empty(&gc->list))
+		if (!llist_empty(&gc->list)) {
 			mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0);
+			ret = true;
+		}
 	}
+
+	return ret;
 }
 
 /*
@@ -1902,24 +1907,41 @@ xfs_inodegc_flush(
 
 /*
  * Flush all the pending work and then disable the inode inactivation background
- * workers and wait for them to stop.
+ * workers and wait for them to stop.  Caller must hold sb->s_umount to
+ * coordinate changes in the inodegc_enabled state.
  */
 void
 xfs_inodegc_stop(
 	struct xfs_mount	*mp)
 {
+	bool			rerun;
+
 	if (!xfs_clear_inodegc_enabled(mp))
 		return;
 
+	/*
+	 * Drain all pending inodegc work, including inodes that could be
+	 * queued by racing xfs_inodegc_queue or xfs_inodegc_shrinker_scan
+	 * threads that sample the inodegc state just prior to us clearing it.
+	 * The inodegc flag state prevents new threads from queuing more
+	 * inodes, so we queue pending work items and flush the workqueue until
+	 * all inodegc lists are empty.  IOWs, we cannot use drain_workqueue
+	 * here because it does not allow other unserialized mechanisms to
+	 * reschedule inodegc work while this draining is in progress.
+	 */
 	xfs_inodegc_queue_all(mp);
-	drain_workqueue(mp->m_inodegc_wq);
+	do {
+		flush_workqueue(mp->m_inodegc_wq);
+		rerun = xfs_inodegc_queue_all(mp);
+	} while (rerun);
 
 	trace_xfs_inodegc_stop(mp, __return_address);
 }
 
 /*
  * Enable the inode inactivation background workers and schedule deferred inode
- * inactivation work if there is any.
+ * inactivation work if there is any.  Caller must hold sb->s_umount to
+ * coordinate changes in the inodegc_enabled state.
  */
 void
 xfs_inodegc_start(


