> On 13 Jun 2019, at 19:23, Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Jun 13, 2019 at 06:58:30PM +0200, Håkon Bugge wrote:
>
>> If you refer to the backlog parameter in rdma_listen(), I cannot see
>> it being used at all for IB.
>>
>> For CX-3, which is paravirtualized wrt. MAD packets, it is the proxy
>> UD receive queue length for the PF driver that can be construed as a
>> backlog.
>
> No, in IB you can drop UD packets if your RQ is full - so the proxy RQ
> is really part of the overall RQ on QP1.
>
> The backlog starts once packets are taken off the RQ and begin the
> connection accept processing.

I do think we are saying the same thing. If incoming REQ processing is
severely delayed, the backlog is the number of entries in the QP1
receive queue in the PF. I can call rdma_listen() with a backlog of a
zillion, but it will not help.

>> The customer configures the number of VMs, and different workloads
>> may lead to very different numbers of CM connections. The proxying of
>> MAD packets through the PF driver has a finite packet rate. With 64
>> VMs, 10,000 QPs on each, all going down due to a switch failing or
>> similar, you have 640,000 DREQs to be sent, and with the finite
>> packet rate of MAD packets through the PF, this takes more than the
>> current CM timeout. And then you re-transmit and increase the burden
>> of the PF proxying.
>
> I feel like the performance of all this proxying is too low to support
> such a large work load :(

That is what I am aiming at, for example to spread the
completion_vector(s) for said QPs ;-)


-h

>
> Can it be improved?
>
> Jason
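
For readers following the thread, a minimal user-space sketch of the
listen path under discussion (hypothetical port, no cleanup or error
reporting); the backlog argument to rdma_listen() is the parameter
referred to above:

#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
	struct rdma_event_channel *ec;
	struct rdma_cm_id *id;
	struct sockaddr_in addr = { 0 };

	ec = rdma_create_event_channel();
	if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP))
		return 1;

	addr.sin_family = AF_INET;
	addr.sin_port = htons(20079);	/* arbitrary example port */

	if (rdma_bind_addr(id, (struct sockaddr *)&addr))
		return 1;

	/*
	 * backlog bounds connection requests pending accept in the CM;
	 * it does not grow the QP1/proxy receive queue on the PF, which
	 * is why a huge value does not help in the scenario above.
	 */
	if (rdma_listen(id, 1024))
		return 1;

	printf("listening\n");
	return 0;
}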
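
And a rough sketch of what spreading the completion vectors could look
like on the verbs side (function name, CQ depth and layout are
illustrative, not taken from any actual patch):

#include <infiniband/verbs.h>

/*
 * Illustrative only: one CQ per QP, with the comp_vector chosen
 * round-robin so completion interrupts are spread over all vectors
 * the device exposes instead of landing on vector 0 for every CQ.
 */
static int create_spread_cqs(struct ibv_context *ctx, int num_qps,
			     struct ibv_cq **cqs)
{
	int nvec = ctx->num_comp_vectors;
	int i;

	for (i = 0; i < num_qps; i++) {
		cqs[i] = ibv_create_cq(ctx, 256, NULL, NULL, i % nvec);
		if (!cqs[i])
			return -1;
	}
	return 0;
}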