This is a note to let you know that I've just added the patch titled

    ext4: avoid unnecessary spreading of allocations among groups

to the 5.19-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ext4-avoid-unnecessary-spreading-of-allocations-among-groups.patch
and it can be found in the queue-5.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 1940265ede6683f6317cba0d428ce6505eaca944 Mon Sep 17 00:00:00 2001
From: Jan Kara <jack@xxxxxxx>
Date: Thu, 8 Sep 2022 11:21:25 +0200
Subject: ext4: avoid unnecessary spreading of allocations among groups

From: Jan Kara <jack@xxxxxxx>

commit 1940265ede6683f6317cba0d428ce6505eaca944 upstream.

mb_set_largest_free_order() updates lists containing groups with largest
chunk of free space of given order. The way it updates it leads to
always moving the group to the tail of the list. Thus allocations
looking for free space of given order effectively end up cycling through
all groups (and due to initialization in last to first order). This
spreads allocations among block groups which reduces performance for
rotating disks or low-end flash media. Change
mb_set_largest_free_order() to only update lists if the order of the
largest free chunk in the group changed.

Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@xxxxxxxxxx
Reported-and-tested-by: Stefan Wahren <stefan.wahren@xxxxxxxx>
Tested-by: Ojaswin Mujoo <ojaswin@xxxxxxxxxxxxx>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
Signed-off-by: Jan Kara <jack@xxxxxxx>
Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@xxxxxxxx/
Link: https://lore.kernel.org/r/20220908092136.11770-2-jack@xxxxxxx
Signed-off-by: Theodore Ts'o <tytso@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/ext4/mballoc.c |   24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1077,23 +1077,25 @@ mb_set_largest_free_order(struct super_b
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
 	int i;
 
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && grp->bb_largest_free_order >= 0) {
+	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
+		if (grp->bb_counters[i] > 0)
+			break;
+	/* No need to move between order lists? */
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
+	    i == grp->bb_largest_free_order) {
+		grp->bb_largest_free_order = i;
+		return;
+	}
+
+	if (grp->bb_largest_free_order >= 0) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_del_init(&grp->bb_largest_free_order_node);
 		write_unlock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 	}
-	grp->bb_largest_free_order = -1; /* uninit */
-
-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) {
-		if (grp->bb_counters[i] > 0) {
-			grp->bb_largest_free_order = i;
-			break;
-		}
-	}
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) &&
-	    grp->bb_largest_free_order >= 0 && grp->bb_free) {
+	grp->bb_largest_free_order = i;
+	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_add_tail(&grp->bb_largest_free_order_node,


Patches currently in stable-queue which might be from jack@xxxxxxx are

queue-5.19/ext4-avoid-unnecessary-spreading-of-allocations-among-groups.patch
queue-5.19/ext4-use-buckets-for-cr-1-block-scan-instead-of-rbtree.patch
queue-5.19/ext4-fix-bug-in-extents-parsing-when-eh_entries-0-and-eh_depth-0.patch
queue-5.19/ext4-make-mballoc-try-target-group-first-even-with-mb_optimize_scan.patch
queue-5.19/ext4-use-locality-group-preallocation-for-small-closed-files.patch
queue-5.19/ext4-make-directory-inode-spreading-reflect-flexbg-size.patch
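
For anyone reviewing the backport, the behavioural change can also be seen in
isolation with the minimal user-space sketch below. This is not the kernel
code: MAX_ORDER, struct group, order_lists and set_largest_free_order are all
invented for illustration, and the per-order locks and the mb_optimize_scan
mount option are deliberately left out. The point it demonstrates is the same
as the patch: a group is only relinked between per-order lists when its
largest-free order actually changes, so it keeps its position instead of being
rotated to the tail on every free-space update.

/*
 * Minimal user-space sketch of "only relink when the largest-free order
 * changes".  All identifiers are invented for illustration.
 */
#include <stdio.h>

#define MAX_ORDER 4

struct group {
	int id;
	int largest_free_order;		/* -1 means "not on any order list" */
	int counters[MAX_ORDER];	/* free chunks of each order */
	struct group *prev, *next;	/* node in the per-order list */
};

/* One circular doubly linked list per order, standing in for
 * sbi->s_mb_largest_free_orders. */
static struct group order_lists[MAX_ORDER];

static void list_init(struct group *head)
{
	head->prev = head->next = head;
}

static void list_del(struct group *g)
{
	g->prev->next = g->next;
	g->next->prev = g->prev;
	g->prev = g->next = g;
}

static void list_add_tail(struct group *g, struct group *head)
{
	g->prev = head->prev;
	g->next = head;
	head->prev->next = g;
	head->prev = g;
}

/*
 * Compute the new largest-free order first and return early when it did not
 * change, so the group keeps its place in the list.
 */
static void set_largest_free_order(struct group *g)
{
	int i;

	for (i = MAX_ORDER - 1; i >= 0; i--)
		if (g->counters[i] > 0)
			break;

	if (i == g->largest_free_order)
		return;			/* order unchanged: nothing to do */

	if (g->largest_free_order >= 0)
		list_del(g);
	g->largest_free_order = i;
	if (i >= 0)
		list_add_tail(g, &order_lists[i]);
}

int main(void)
{
	struct group a = { .id = 0, .largest_free_order = -1 };
	struct group b = { .id = 1, .largest_free_order = -1 };

	for (int i = 0; i < MAX_ORDER; i++)
		list_init(&order_lists[i]);
	list_init(&a);
	list_init(&b);

	/* Both groups end up on the order-2 list: a first, then b. */
	a.counters[2] = 3;
	b.counters[2] = 5;
	set_largest_free_order(&a);
	set_largest_free_order(&b);

	/*
	 * An allocation consumes one order-2 chunk of 'a' but leaves its
	 * largest order unchanged, so 'a' stays ahead of 'b' rather than
	 * being moved to the tail of the list.
	 */
	a.counters[2] = 2;
	set_largest_free_order(&a);

	for (struct group *g = order_lists[2].next; g != &order_lists[2]; g = g->next)
		printf("order-2 list: group %d\n", g->id);
	return 0;
}

Compiled and run, the sketch prints group 0 ahead of group 1 even after group
0's free space shrinks, mirroring how the patched mb_set_largest_free_order()
stops cycling allocations through all block groups.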