The patch titled
     Subject: mm: lru_cache_disable: use synchronize_rcu_expedited
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-lru_cache_disable-use-synchronize_rcu_expedited.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-lru_cache_disable-use-synchronize_rcu_expedited.patch

This patch will later appear in the mm-hotfixes-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
Subject: mm: lru_cache_disable: use synchronize_rcu_expedited
Date: Mon, 30 May 2022 12:51:56 -0300

commit ff042f4a9b050 ("mm: lru_cache_disable: replace work queue
synchronization with synchronize_rcu") replaced lru_cache_disable's usage
of work queues with synchronize_rcu.

Some users reported large performance regressions due to this commit, for
example:
https://lore.kernel.org/all/20220521234616.GO1790663@paulmck-ThinkPad-P17-Gen-1/T/

Switching to synchronize_rcu_expedited fixes the problem.

Link: https://lkml.kernel.org/r/YpToHCmnx/HEcVyR@xxxxxxxxxxx
Fixes: ff042f4a9b050 ("mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu")
Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
Tested-by: Stefan Wahren <stefan.wahren@xxxxxxxx>
Tested-by: Michael Larabel <Michael@xxxxxxxxxxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Nicolas Saenz Julienne <nsaenzju@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Juri Lelli <juri.lelli@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Cc: Phil Elwell <phil@xxxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/swap.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/swap.c~mm-lru_cache_disable-use-synchronize_rcu_expedited
+++ a/mm/swap.c
@@ -881,7 +881,7 @@ void lru_cache_disable(void)
 	 * lru_disable_count = 0 will have exited the critical
 	 * section when synchronize_rcu() returns.
 	 */
-	synchronize_rcu();
+	synchronize_rcu_expedited();
 #ifdef CONFIG_SMP
 	__lru_add_drain_all(true);
 #else
_

Patches currently in -mm which might be from mtosatti@xxxxxxxxxx are

mm-lru_cache_disable-use-synchronize_rcu_expedited.patch
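
For readers following along without the tree checked out, here is a minimal
sketch of how lru_cache_disable() in mm/swap.c reads with this patch applied.
It is reconstructed from the hunk context above; anything outside that hunk
(the atomic_inc() on lru_disable_count and the !CONFIG_SMP drain call) is a
best-effort assumption rather than a quote from the source file.

/*
 * Sketch of mm/swap.c::lru_cache_disable() after this patch.
 * Reconstructed from the hunk above; the atomic_inc() and the
 * !CONFIG_SMP drain call are assumptions, not copied verbatim.
 */
void lru_cache_disable(void)
{
	atomic_inc(&lru_disable_count);

	/*
	 * Readers of lru_disable_count run with preemption disabled or
	 * under rcu_read_lock(), so any CPU that observed
	 * lru_disable_count == 0 has left its critical section once the
	 * grace period below completes.  Using the expedited variant
	 * keeps that wait short, which is the point of this patch.
	 */
	synchronize_rcu_expedited();

#ifdef CONFIG_SMP
	__lru_add_drain_all(true);
#else
	lru_add_and_bh_lrus_drain();	/* assumed; elided from the hunk */
#endif
}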