Re: [PATCH] SUNRPC: Don't disable preemption while calling svc_pool_for_cpu().

Hi Sebastian-


> On May 4, 2022, at 1:24 PM, Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx> wrote:
> 
> svc_xprt_enqueue() disables preemption via get_cpu() and then asks for a
> pool of a specific CPU (current) via svc_pool_for_cpu().
> With disabled preemption it acquires svc_pool::sp_lock, a spinlock_t,
> which is a sleeping lock on PREEMPT_RT and can't be acquired with
> disabled preemption.

I found this paragraph a little unclear. If you repost, I'd suggest:

    svc_xprt_enqueue() disables preemption via get_cpu() and then asks
    for a pool of a specific CPU (current) via svc_pool_for_cpu().
    While preemption is disabled, svc_xprt_enqueue() acquires
    svc_pool::sp_lock with bottom-halves disabled, which can sleep on
    PREEMPT_RT.


> Disabling preemption is not required here. The pool is protected with a
> lock so the following list access is safe even cross-CPU. The following
> iteration through svc_pool::sp_all_threads is under RCU-readlock and
> remaining operations within the loop are atomic and do not rely on
> disabled-preemption.
> 
> Use raw_smp_processor_id() as the argument for the requested CPU in
> svc_pool_for_cpu().
> 
> Reported-by: Mike Galbraith <umgwanakikbuti@xxxxxxxxx>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
> net/sunrpc/svc_xprt.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index 5b59e2103526e..79965deec5b12 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -448,7 +448,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
> {
> 	struct svc_pool *pool;
> 	struct svc_rqst	*rqstp = NULL;
> -	int cpu;
> 
> 	if (!svc_xprt_ready(xprt))
> 		return;
> @@ -461,8 +460,7 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
> 	if (test_and_set_bit(XPT_BUSY, &xprt->xpt_flags))
> 		return;
> 
> -	cpu = get_cpu();
> -	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
> +	pool = svc_pool_for_cpu(xprt->xpt_server, raw_smp_processor_id());

The pre-2014 code here was this:

	cpu = get_cpu();
	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
	put_cpu();

Your explanation covers the rationale for leaving preemption
enabled in the body of svc_xprt_enqueue(), but it's not clear
to me that the rationale also applies inside svc_pool_for_cpu().

Protecting the svc_pool data structures with RCU might be
better, but that would be a more extensive change. To address
the preemption issue quickly, you could simply revert this
call site to its pre-2014 form and postpone deeper changes.
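On top of your patch, that reversion would look roughly like the
untested sketch below (the put_cpu() you removed from out_unlock
would stay removed, since the pair is now local to this hunk):

```
-	pool = svc_pool_for_cpu(xprt->xpt_server, raw_smp_processor_id());
+	cpu = get_cpu();
+	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
+	put_cpu();
```

along with restoring the "int cpu;" local that your patch drops.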

I have an NFS server in my lab on NUMA hardware and can run
some tests this week with this patch.


> 	atomic_long_inc(&pool->sp_stats.packets);
> 
> @@ -485,7 +483,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
> 	rqstp = NULL;
> out_unlock:
> 	rcu_read_unlock();
> -	put_cpu();
> 	trace_svc_xprt_enqueue(xprt, rqstp);
> }
> EXPORT_SYMBOL_GPL(svc_xprt_enqueue);
> -- 
> 2.36.0
> 

--
Chuck Lever
