The patch titled
     Subject: mm/swap_slots.c: don't disable preemption while taking the per-CPU cache
has been removed from the -mm tree.  Its filename was
     mm-swap-dont-disable-preemption-while-taking-the-per-cpu-cache.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Subject: mm/swap_slots.c: don't disable preemption while taking the per-CPU cache

get_cpu_var() disables preemption and returns the per-CPU version of the
variable.  Disabling preemption is useful to ensure atomic access to the
variable within the critical section.

In this case, however, the ->free_lock is acquired right after the per-CPU
variable is obtained, so the raw accessor can be used instead.  Only
->slots_ret then needs to be retested under the lock: with preemption
enabled it may be set to NULL between the lockless check and the lock
acquisition, which could not happen while preemption was disabled.

This popped up during PREEMPT-RT testing, because there spinlocks are
sleeping locks and must not be taken inside a preempt-disabled section.

Link: http://lkml.kernel.org/r/20170623114755.2ebxdysacvgxzott@xxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ying Huang <ying.huang@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/swap_slots.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff -puN mm/swap_slots.c~mm-swap-dont-disable-preemption-while-taking-the-per-cpu-cache mm/swap_slots.c
--- a/mm/swap_slots.c~mm-swap-dont-disable-preemption-while-taking-the-per-cpu-cache
+++ a/mm/swap_slots.c
@@ -273,11 +273,11 @@ int free_swap_slot(swp_entry_t entry)
 {
 	struct swap_slots_cache *cache;
 
-	cache = &get_cpu_var(swp_slots);
+	cache = raw_cpu_ptr(&swp_slots);
 	if (use_swap_slot_cache && cache->slots_ret) {
 		spin_lock_irq(&cache->free_lock);
 		/* Swap slots cache may be deactivated before acquiring lock */
-		if (!use_swap_slot_cache) {
+		if (!use_swap_slot_cache || !cache->slots_ret) {
 			spin_unlock_irq(&cache->free_lock);
 			goto direct_free;
 		}
@@ -297,7 +297,6 @@ int free_swap_slot(swp_entry_t entry)
 direct_free:
 		swapcache_free_entries(&entry, 1);
 	}
-	put_cpu_var(swp_slots);
 
 	return 0;
 }
_

Patches currently in -mm which might be from bigeasy@xxxxxxxxxxxxx are
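
[Editor's note] For readers unfamiliar with the idiom the fix relies on, the
sketch below is a minimal userspace analogue (not kernel code) of the
"recheck under the lock" pattern: once the fast path no longer runs with
preemption disabled, the condition observed before taking the lock may have
changed by the time the lock is held, so it must be tested again.  All names
here (cache_t, slots_ret, drain_cache(), free_slot()) are hypothetical
stand-ins for the kernel's swap_slots_cache machinery, not the real API.

/*
 * Userspace sketch of "optimistic check, then recheck under the lock".
 * The names are made up for illustration only.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
	pthread_mutex_t free_lock;
	int *slots_ret;		/* NULL once the cache has been drained */
	int nr;
} cache_t;

static cache_t cache = {
	.free_lock = PTHREAD_MUTEX_INITIALIZER,
};

/* Analogue of deactivating/draining the swap slots cache. */
static void drain_cache(void)
{
	pthread_mutex_lock(&cache.free_lock);
	free(cache.slots_ret);
	cache.slots_ret = NULL;
	cache.nr = 0;
	pthread_mutex_unlock(&cache.free_lock);
}

/* Analogue of free_swap_slot(): lockless check, then recheck under the lock. */
static void free_slot(int entry)
{
	if (cache.slots_ret) {		/* lockless; may race with drain_cache() */
		pthread_mutex_lock(&cache.free_lock);
		if (!cache.slots_ret) {	/* drained while we waited for the lock */
			pthread_mutex_unlock(&cache.free_lock);
			goto direct_free;
		}
		cache.slots_ret[cache.nr++] = entry;
		pthread_mutex_unlock(&cache.free_lock);
		return;
	}
direct_free:
	printf("freeing entry %d directly\n", entry);
}

int main(void)
{
	cache.slots_ret = calloc(64, sizeof(int));
	free_slot(1);		/* cached path */
	drain_cache();
	free_slot(2);		/* falls back to the direct path */
	return 0;
}

Building with "gcc -pthread" and running it exercises both the cached path
and the direct-free fallback after the cache is drained, which is exactly
the window the added "|| !cache->slots_ret" test closes in the kernel patch.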