Re: [PATCH net-next v5 1/5] net/smc: Make smc_tcp_listen_work() independent

It is indeed okay to use system_wq at present. But due to the load-balancing issue we found, queue_work() always submits work items to the worker pool of the current CPU. A single run of tcp_listen_work() may submit a large number of items to the current CPU's pool, causing unnecessary queuing delays even though the workers on other CPUs are totally idle. I was planning to make tcp_listen_work() do a blocking wait on the workers of every CPU, so I created a new workqueue; that is the only reason for it. But this problem is not very urgent, and I don't have a strong opinion either...
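To illustrate the point about CPU placement: a minimal kernel-space sketch (not the actual patch; the flags and error handling here are assumptions) of how allocating the listen workqueue with WQ_UNBOUND would let the workqueue subsystem place items on any idle CPU, instead of queue_work()'s default of the submitting CPU's per-CPU pool:

```c
/* Hedged sketch, not the submitted patch: an unbound workqueue for
 * tcp_listen_work so items are not pinned to the submitting CPU.
 */
static struct workqueue_struct *smc_tcp_ls_wq;

static int __init smc_tcp_ls_wq_init(void)
{
	/* WQ_UNBOUND: work items may run on any CPU's worker pool,
	 * avoiding a pile-up behind the CPU that ran the listen work.
	 */
	smc_tcp_ls_wq = alloc_workqueue("smc_tcp_ls_wq", WQ_UNBOUND, 0);
	if (!smc_tcp_ls_wq)
		return -ENOMEM;
	return 0;
}
```

With a queue like this, the existing queue_work(smc_tcp_ls_wq, &lsmc->tcp_listen_work) call needs no change; only the allocation flags differ.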


On 2022/2/9 1:06 AM, Karsten Graul wrote:
On 08/02/2022 13:53, D. Wythe wrote:
+static struct workqueue_struct	*smc_tcp_ls_wq;	/* wq for tcp listen work */
  struct workqueue_struct	*smc_hs_wq;	/* wq for handshake work */
  struct workqueue_struct	*smc_close_wq;	/* wq for close work */
@@ -2227,7 +2228,7 @@ static void smc_clcsock_data_ready(struct sock *listen_clcsock)
  	lsmc->clcsk_data_ready(listen_clcsock);
  	if (lsmc->sk.sk_state == SMC_LISTEN) {
  		sock_hold(&lsmc->sk); /* sock_put in smc_tcp_listen_work() */
-		if (!queue_work(smc_hs_wq, &lsmc->tcp_listen_work))
+		if (!queue_work(smc_tcp_ls_wq, &lsmc->tcp_listen_work))
  			sock_put(&lsmc->sk);

It works well this way, but given the fact that there is one tcp_listen worker per
listen socket and these workers finish relatively quickly, wouldn't it be okay to
use the system_wq instead of its own queue? But I have no strong opinion about that...


