Hello Trond,

The workqueue WQ_UNBOUND flag is also needed. A customer hit a problem where an RT thread caused rpciod starvation. It is easy to reproduce by running a CPU-intensive workload at a lower nice value than the rpciod workqueue on the CPU where the network interrupt is received. I've also run iozone and fio tests with the WQ_UNBOUND|WQ_SYSFS flags enabled for NFS/RDMA and NFS/IPoIB; the results are better than with a bound workqueue.

Thanks,
Shirley

On 01/24/2015 04:18 PM, Trond Myklebust wrote:
> Increase the concurrency level for rpciod threads to allow for allocations
> etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range
> locks may now need to allocate memory from inside rpciod.
>
> Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it.
>
> Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> ---
>  net/sunrpc/sched.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
> index d20f2329eea3..4f65ec28d2b4 100644
> --- a/net/sunrpc/sched.c
> +++ b/net/sunrpc/sched.c
> @@ -1069,7 +1069,8 @@ static int rpciod_start(void)
>  	 * Create the rpciod thread and wait for it to start.
>  	 */
>  	dprintk("RPC: creating workqueue rpciod\n");
> -	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM, 1);
> +	/* Note: highpri because network receive is latency sensitive */
> +	wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
>  	rpciod_workqueue = wq;
>  	return rpciod_workqueue != NULL;
>  }
> --
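
For reference, here is a rough sketch (not a tested patch against your tree) of what the alloc_workqueue() call could look like with the extra flags I tried on top of this change:

	/* WQ_UNBOUND lets rpciod work run on any CPU instead of the one
	 * that took the network interrupt; WQ_SYSFS exposes the workqueue
	 * attributes under /sys/devices/virtual/workqueue/ for tuning.
	 */
	wq = alloc_workqueue("rpciod",
			     WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND | WQ_SYSFS,
			     0);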