> On Thu, Apr 06, 2017 at 02:29:03PM +0200, Marta Rybczynska wrote:
>> >>>> You say above "we post *up to* 2 work requests", unless you wish to
>> >>>> change that to "we always post at least 2 work requests per queue
>> >>>> entry", Jason is right, your frequency of signaling needs to be X/2
>> >>>> regardless of your CQ size, you need the signaling to control the queue
>> >>>> depth tracking.
>> >>>
>> >>> If you would like to spread things out farther between signaling, then
>> >>> you can modify your send routine to only increment the send counter for
>> >>> actual send requests, ignoring registration WQEs and invalidate WQEs,
>> >>> and then signal every X/2 sends.
>> >>
>> >> Yeah, you're right, and not only did I get it wrong, I even contradicted my
>> >> own suggestion, which was exactly what you and Jason suggested (where is
>> >> the nearest rat-hole...)
>> >>
>> >> So I suggested signaling every X/2, and Marta reported SQ overflows for
>> >> high queue-depth. Marta, at what queue-depth have you seen this?
>> >
>> > The remote side had a queue depth of 16 or 32, and it is the WQ on the
>> > initiator side that overflows (mlx5_wq_overflow). We're testing with
>> > signalling every X/2 and it seems to work.
>>
>> Update on the situation: signalling every X/2 seems to work fine in
>> practice. To clarify, it is the send queue that overflows
>> (mlx5_wq_overflow in begin_wqe of drivers/infiniband/hw/mlx5/qp.c).
>>
>> However, I still have doubts about how this will behave with
>> higher queue depths (i.e. the typical case). If we signal every X/2,
>> we'll do it much more rarely than today (every 32 messages). I'm not
>> sure what system-wide effect this would have.
>>
>> Mellanox guys, do you have an idea what it might do?
>
> It will continue to work as expected with large queue depths too.
> All you need is to not forget to issue a signal when the queue is terminated.
>

Thanks Leon. I will then submit the v2.

Marta
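
For reference, below is a minimal sketch of the policy that came out of this
thread: count only real data sends (not MR registration or invalidation WQEs)
and request a signaled completion every queue_size/2 sends, so the completion
can be used to track send-queue occupancy. This is not the patch under
discussion; the example_queue structure, its fields, and the helper names are
hypothetical and only illustrate the counting. IB_SEND_SIGNALED and
struct ib_send_wr are from the kernel verbs API.

	/*
	 * Sketch of "signal every queue_size/2 real sends", assuming a
	 * hypothetical per-queue counter.  Registration and invalidation
	 * WQEs never go through this helper, so they do not advance
	 * sig_count and do not affect the signaling cadence.
	 */
	#include <linux/kernel.h>
	#include <rdma/ib_verbs.h>

	struct example_queue {
		int	queue_size;	/* negotiated queue depth */
		int	sig_count;	/* counts data sends only */
	};

	/* True when this data send should carry a signaled completion. */
	static inline bool example_queue_sig_limit(struct example_queue *queue)
	{
		int limit = max(queue->queue_size / 2, 1);

		return (++queue->sig_count % limit) == 0;
	}

	static void example_set_send_flags(struct example_queue *queue,
					   struct ib_send_wr *wr)
	{
		if (example_queue_sig_limit(queue))
			wr->send_flags |= IB_SEND_SIGNALED;
		else
			wr->send_flags &= ~IB_SEND_SIGNALED;
	}

With a queue depth of 32 this signals every 16th data send, which keeps the
producer from outrunning the unreclaimed send-queue slots that caused the
mlx5_wq_overflow reports above.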