On Mon, Dec 30, 2024 at 04:43:39PM GMT, Greg Kroah-Hartman wrote:
> 6.12-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Filipe Manana <fdmanana@xxxxxxxx>
>
> commit 2c8507c63f5498d4ee4af404a8e44ceae4345056 upstream.
>
> During swap activation we iterate over the extents of a file and we can
> have many thousands of them, so we can end up in a busy loop monopolizing
> a core. Avoid this by doing a voluntary reschedule after processing each
> extent.
>
> CC: stable@xxxxxxxxxxxxxxx # 5.4+
> Reviewed-by: Qu Wenruo <wqu@xxxxxxxx>
> Signed-off-by: Filipe Manana <fdmanana@xxxxxxxx>
> Signed-off-by: David Sterba <dsterba@xxxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> ---
>  fs/btrfs/inode.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -7117,6 +7117,8 @@ noinline int can_nocow_extent(struct ino
>  			ret = -EAGAIN;
>  			goto out;
>  		}
> +
> +		cond_resched();
>  	}
>
>  	if (file_extent)

Hi,

Please let me confirm: is this backport really OK? Shouldn't the
cond_resched() have been added to the loop in btrfs_swap_activate(), as in
the upstream commit, rather than to can_nocow_extent(), where this hunk
landed? I was able to reproduce how it ended up there:

$ git rev-parse HEAD
319addc2ad901dac4d6cc931d77ef35073e0942f

$ b4 mbox --single-message c37ea7a8de12e996091ba295b2f201fbe680c96c.1733929328.git.fdmanana@xxxxxxxx
1 messages in the thread
Saved ./c37ea7a8de12e996091ba295b2f201fbe680c96c.1733929328.git.fdmanana@xxxxxxxxxxxx

$ patch -p1 < ./c37ea7a8de12e996091ba295b2f201fbe680c96c.1733929328.git.fdmanana@xxxxxxxxxxxx
patching file fs/btrfs/inode.c
Hunk #1 succeeded at 7117 with fuzz 1 (offset -2961 lines).

$ git diff
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 58ffe78132d9..6fe2ac620464 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7117,6 +7117,8 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
 			ret = -EAGAIN;
 			goto out;
 		}
+
+		cond_resched();
 	}
 
 	if (file_extent)

The "ret = -EAGAIN; goto out;" context is ambiguous, so patch(1) matched
it with fuzz 1 at an offset of -2961 lines and placed the hunk in
can_nocow_extent() instead of btrfs_swap_activate(). The same goes for all
the other stable branches this was applied to.

Sorry if I'm missing something.

Thanks,
-Koichiro
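P.S. For reference, my understanding from the commit message is that the
reschedule belongs in the per-extent loop of btrfs_swap_activate() in
fs/btrfs/inode.c. A rough sketch of that placement (loop body abbreviated
and the loop condition approximate; this is not the exact upstream
context):

	while (start < isize) {
		/*
		 * ... look up the next file extent, reject holes and
		 * copy-on-write extents, and record the physical range
		 * with btrfs_add_swap_extent() ...
		 */

		/*
		 * Voluntary reschedule after each extent, so that walking
		 * many thousands of extents does not monopolize a core.
		 */
		cond_resched();
	}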