Bound workqueues might be too restrictive, since they allow only a
single core per session for processing completions. WQ_UNBOUND allows
a work item to bounce to another CPU if the submitting CPU is
currently busy. Luckily, unbound workqueues are NUMA-aware and will
first try to bounce within the same NUMA socket. My measurements with
NULL backend devices show no noticeable additional latency as a
result of the change. I'd even expect a performance gain when working
with fast devices that also allocate MSI-X interrupt vectors.

While we're at it, make it WQ_HIGHPRI, since processing completions
really is a high priority for performance.

This one is an RFC since I'd like to ask users to try out this patch
and report the results.

Signed-off-by: Sagi Grimberg <sagig@xxxxxxxxxxxx>
---
 drivers/infiniband/ulp/isert/ib_isert.c | 3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index dafb3c5..6b5ce34 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -3320,7 +3320,8 @@ static int __init isert_init(void)
 {
 	int ret;
 
-	isert_comp_wq = alloc_workqueue("isert_comp_wq", 0, 0);
+	isert_comp_wq = alloc_workqueue("isert_comp_wq",
+					WQ_UNBOUND | WQ_HIGHPRI, 0);
 	if (!isert_comp_wq) {
 		isert_err("Unable to allocate isert_comp_wq\n");
 		ret = -ENOMEM;
-- 
1.7.1
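
For readers less familiar with the workqueue API, here is a minimal
standalone sketch (hypothetical "demo" names, not part of this patch)
of allocating an unbound, high-priority workqueue and queueing a
completion handler on it, mirroring the flags the patch uses:

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_comp_wq;
static struct work_struct demo_comp_work;

static void demo_comp_handler(struct work_struct *work)
{
	/*
	 * Completion processing runs here. With WQ_UNBOUND the worker
	 * is not pinned to the submitting CPU; the NUMA-aware pool
	 * prefers a CPU in the same node before going farther.
	 */
}

static int __init demo_init(void)
{
	/*
	 * WQ_UNBOUND: don't pin work to the queueing CPU, so a busy
	 * CPU doesn't delay completion processing.
	 * WQ_HIGHPRI: serve work from high-priority worker threads.
	 * max_active == 0 selects the default concurrency limit.
	 */
	demo_comp_wq = alloc_workqueue("demo_comp_wq",
				       WQ_UNBOUND | WQ_HIGHPRI, 0);
	if (!demo_comp_wq)
		return -ENOMEM;

	INIT_WORK(&demo_comp_work, demo_comp_handler);
	queue_work(demo_comp_wq, &demo_comp_work);
	return 0;
}

static void __exit demo_exit(void)
{
	/* Wait for pending work, then tear the queue down. */
	flush_workqueue(demo_comp_wq);
	destroy_workqueue(demo_comp_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");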