Re: [PATCH v3 5/9] SUNRPC: Count pool threads that were awoken but found no work to do

> On Jul 10, 2023, at 6:29 PM, NeilBrown <neilb@xxxxxxx> wrote:
> 
> On Tue, 11 Jul 2023, Chuck Lever wrote:
>> From: Chuck Lever <chuck.lever@xxxxxxxxxx>
>> 
>> Measure a source of thread scheduling inefficiency -- count threads
>> that were awoken but found that the transport queue had already been
>> emptied.
>> 
>> An empty transport queue is possible when threads that run between
>> the wake_up_process() call and the woken thread returning from the
>> scheduler have pulled all remaining work off the transport queue
>> using the first svc_xprt_dequeue() in svc_get_next_xprt().
> 
> I'm in two minds about this.  The data being gathered here is
> potentially useful

It's actually pretty shocking: in my measurements, more than
15% of thread wake-ups find no work to do.
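
If it helps to see the mechanism in isolation, here is a minimal
user-space sketch of the same race (pthreads; illustrative only, not
the SUNRPC code). The producer wakes one sleeping worker per queued
item, but a worker that is already running can drain the queue before
the wakee reacquires the lock, so the wakee finds nothing to do:

/*
 * Illustrative sketch, not kernel code: worker threads sleep on a
 * condition variable; the producer queues one item and wakes one
 * thread per item. A worker that is already running can consume the
 * item first, so the woken thread finds the queue empty -- the event
 * the new sp_threads_no_work counter would record in the kernel.
 *
 * Build with: cc -pthread race.c
 */
#include <pthread.h>
#include <stdio.h>

#define NR_WORKERS	4
#define NR_ITEMS	100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t more = PTHREAD_COND_INITIALIZER;
static int queued;		/* stand-in for the transport queue depth */
static int done;
static long wakeups, no_work;	/* both protected by @lock */

static void *worker(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&lock);
	while (!done) {
		if (queued > 0) {
			queued--;	/* "dequeue" one work item */
			continue;
		}
		pthread_cond_wait(&more, &lock);
		if (done)
			break;
		wakeups++;
		if (queued == 0)
			no_work++;	/* woken, but a peer drained the queue */
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_WORKERS];
	int i;

	for (i = 0; i < NR_WORKERS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);

	for (i = 0; i < NR_ITEMS; i++) {
		pthread_mutex_lock(&lock);
		queued++;			/* "enqueue" work ... */
		pthread_cond_signal(&more);	/* ... and wake one thread */
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_cond_broadcast(&more);	/* shut the workers down */
	pthread_mutex_unlock(&lock);

	for (i = 0; i < NR_WORKERS; i++)
		pthread_join(tid[i], NULL);

	printf("wakeups: %ld, woke to an empty queue: %ld\n",
	       wakeups, no_work);
	return 0;
}

Depending on scheduling (and the occasional spurious wakeup), some
fraction of the wake-ups will find the queue already empty, which is
exactly what the kernel counter is meant to measure.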


> - but who is it useful to?
> I think it is primarily useful for us - to understand the behaviour of
> the implementation so we can know what needs to be optimised.
> It isn't really of any use to a sysadmin who wants to understand how
> their system is performing.
> 
> But then .. who are tracepoints for?  Developers or admins?
> I guess the fact that we feel free to modify them whenever we need
> means they are primarily for developers?  In which case this is a good
> patch, and maybe we'll revert the functionality one day if it turns out
> that we can change the implementation so that a thread is never woken
> when there is no work to do ....

A reasonable question to ask. The new "starved" metric
is similar: possibly useful while we are developing the
code, but not particularly valuable for system
administrators.

How are the pool_stats used by administrators?

(And, why are they in /proc/fs/nfsd/ rather than under
something RPC-related?)
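
For reference, after this patch each line of /proc/fs/nfsd/pool_stats
would look something like this (the values are made up; the columns
follow the seq_printf() in the patch below):

    0 63372 0 63240 0 2 9486

That is: pool id, messages arrived, sockets queued, threads woken,
threads timed out, threads starved, and threads woken that found no
work to do.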


>> Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
>> ---
>> include/linux/sunrpc/svc.h |    1 +
>> net/sunrpc/svc.c           |    2 ++
>> net/sunrpc/svc_xprt.c      |    7 ++++---
>> 3 files changed, 7 insertions(+), 3 deletions(-)
>> 
>> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
>> index 74ea13270679..9dd3b16cc4c2 100644
>> --- a/include/linux/sunrpc/svc.h
>> +++ b/include/linux/sunrpc/svc.h
>> @@ -43,6 +43,7 @@ struct svc_pool {
>>  	struct percpu_counter	sp_threads_woken;
>>  	struct percpu_counter	sp_threads_timedout;
>>  	struct percpu_counter	sp_threads_starved;
>> +	struct percpu_counter	sp_threads_no_work;
>> 
>>  #define SP_TASK_PENDING	(0)		/* still work to do even if no
>>  					 * xprt is queued. */
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index 88b7b5fb6d75..b7a02309ecb1 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -518,6 +518,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
>>  		percpu_counter_init(&pool->sp_threads_woken, 0, GFP_KERNEL);
>>  		percpu_counter_init(&pool->sp_threads_timedout, 0, GFP_KERNEL);
>>  		percpu_counter_init(&pool->sp_threads_starved, 0, GFP_KERNEL);
>> +		percpu_counter_init(&pool->sp_threads_no_work, 0, GFP_KERNEL);
>>  	}
>> 
>>  	return serv;
>> @@ -595,6 +596,7 @@ svc_destroy(struct kref *ref)
>>  		percpu_counter_destroy(&pool->sp_threads_woken);
>>  		percpu_counter_destroy(&pool->sp_threads_timedout);
>>  		percpu_counter_destroy(&pool->sp_threads_starved);
>> +		percpu_counter_destroy(&pool->sp_threads_no_work);
>>  	}
>>  	kfree(serv->sv_pools);
>>  	kfree(serv);
>> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
>> index ecbccf0d89b9..6c2a702aa469 100644
>> --- a/net/sunrpc/svc_xprt.c
>> +++ b/net/sunrpc/svc_xprt.c
>> @@ -776,9 +776,9 @@ static struct svc_xprt *svc_get_next_xprt(struct svc_rqst *rqstp, long timeout)
>> 
>>  	if (!time_left)
>>  		percpu_counter_inc(&pool->sp_threads_timedout);
>> -
>>  	if (signalled() || kthread_should_stop())
>>  		return ERR_PTR(-EINTR);
>> +	percpu_counter_inc(&pool->sp_threads_no_work);
>>  	return ERR_PTR(-EAGAIN);
>>  out_found:
>>  	/* Normally we will wait up to 5 seconds for any required
>> @@ -1445,13 +1445,14 @@ static int svc_pool_stats_show(struct seq_file *m, void *p)
>>  		return 0;
>>  	}
>> 
>> -	seq_printf(m, "%u %llu %llu %llu %llu %llu\n",
>> +	seq_printf(m, "%u %llu %llu %llu %llu %llu %llu\n",
>>  			pool->sp_id,
>>  			percpu_counter_sum_positive(&pool->sp_messages_arrived),
>>  			percpu_counter_sum_positive(&pool->sp_sockets_queued),
>>  			percpu_counter_sum_positive(&pool->sp_threads_woken),
>>  			percpu_counter_sum_positive(&pool->sp_threads_timedout),
>> -			percpu_counter_sum_positive(&pool->sp_threads_starved));
>> +			percpu_counter_sum_positive(&pool->sp_threads_starved),
>> +			percpu_counter_sum_positive(&pool->sp_threads_no_work));
>> 
>>  	return 0;
>>  }
>> 
>> 
>> 
> 

--
Chuck Lever
