Re: [PATCH] mm, swap: don't disable preemption while taking the per-CPU cache

On Fri 23-06-17 13:47:55, Sebastian Andrzej Siewior wrote:
> get_cpu_var() disables preemption and returns the per-CPU version of
> the variable. Disabling preemption is useful to ensure atomic access to
> the variable within the critical section.
> In this case, however, the ->free_lock is acquired after the per-CPU
> version of the variable is obtained, so the raw accessor can be used
> instead. The only additional change needed is that ->slots_ret must be
> retested under the lock: with preemption disabled it could not change,
> but without it the variable may have been set to NULL in the meantime.
> This popped up during PREEMPT-RT testing, because on PREEMPT-RT
> spinlocks become sleeping locks and must not be taken in a
> preempt-disabled section.

Ohh, because the spinlock can sleep with PREEMPT-RT, right? Don't we
have many more places like that? It is perfectly valid to take a
spinlock while preemption is disabled. E.g. we take the ptl lock inside
kmap_atomic sections, which disable preemption on 32-bit systems.

> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

> ---
> On 2017-06-23 12:34:23 [+0200], Michal Hocko wrote:
> > The changelog doesn't explain why this change matters. Disabling
> > preemption shortly before taking a spinlock shouldn't make much
> > difference. I suspect you care because of RT, right? In that case,
> > spell that out in the changelog and explain why it matters.
> 
> yes, it is bad for RT. I added the RT pieces as explanation.
> 
> > Other than that the patch looks good to me.
> 
> Thank you. +akpm.
> 
>  mm/swap_slots.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 58f6c78f1dad..51c304477482 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -272,11 +272,11 @@ int free_swap_slot(swp_entry_t entry)
>  {
>  	struct swap_slots_cache *cache;
>  
> -	cache = &get_cpu_var(swp_slots);
> +	cache = raw_cpu_ptr(&swp_slots);
>  	if (use_swap_slot_cache && cache->slots_ret) {
>  		spin_lock_irq(&cache->free_lock);
>  		/* Swap slots cache may be deactivated before acquiring lock */
> -		if (!use_swap_slot_cache) {
> +		if (!use_swap_slot_cache || !cache->slots_ret) {
>  			spin_unlock_irq(&cache->free_lock);
>  			goto direct_free;
>  		}
> @@ -296,7 +296,6 @@ int free_swap_slot(swp_entry_t entry)
>  direct_free:
>  		swapcache_free_entries(&entry, 1);
>  	}
> -	put_cpu_var(swp_slots);
>  
>  	return 0;
>  }
> -- 
> 2.13.1
> 

-- 
Michal Hocko
SUSE Labs
