Hi all,
Sorry for re-opening an old discussion, but I cannot find any final
decision on how to handle this.
See inline for more comments.
On 6/14/19 7:44 AM, Håkon Bugge wrote:
On 13 Jun 2019, at 22:25, Doug Ledford <dledford@xxxxxxxxxx> wrote:
On Thu, 2019-06-13 at 18:58 +0200, Håkon Bugge wrote:
On 13 Jun 2019, at 16:25, Doug Ledford <dledford@xxxxxxxxxx> wrote:
On Tue, 2019-02-26 at 08:57 +0100, Håkon Bugge wrote:
During certain workloads, the default CM response timeout is too
short, leading to excessive retries. Hence, make it configurable
through sysctl. While at it, also make number of CM retries
configurable.
The defaults are not changed.
Signed-off-by: Håkon Bugge <haakon.bugge@xxxxxxxxxx>
---
v1 -> v2:
* Added unregister_net_sysctl_table() in cma_cleanup()
---
 drivers/infiniband/core/cma.c | 52 ++++++++++++++++++++++++++++++-----
 1 file changed, 45 insertions(+), 7 deletions(-)
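For context, the shape of such a sysctl hook in cma.c could look roughly like the sketch below. The variable names and the "net/rdma_cm" path are guesses for illustration, not necessarily what the patch does; the defaults shown are the existing compile-time constants.

/*
 * Rough sketch only -- names and the sysctl path are assumptions.
 * The idea: make the values behind CMA_CM_RESPONSE_TIMEOUT (20) and
 * CMA_MAX_CM_RETRIES (15) writable at run time.
 */
static unsigned int cma_response_timeout = CMA_CM_RESPONSE_TIMEOUT;
static unsigned int cma_max_cm_retries = CMA_MAX_CM_RETRIES;

static struct ctl_table cma_ctl_table[] = {
	{
		.procname	= "cma_response_timeout",
		.data		= &cma_response_timeout,
		.maxlen		= sizeof(cma_response_timeout),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{
		.procname	= "cma_max_cm_retries",
		.data		= &cma_max_cm_retries,
		.maxlen		= sizeof(cma_max_cm_retries),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{ }
};

static struct ctl_table_header *cma_ctl_table_hdr;

/* in cma_init(): */
	cma_ctl_table_hdr = register_net_sysctl(&init_net, "net/rdma_cm",
						cma_ctl_table);
/* in cma_cleanup(), the part added in v2: */
	unregister_net_sysctl_table(cma_ctl_table_hdr);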
This has been sitting on patchworks since forever, presumably because
neither Jason nor I really wanted it, but we also couldn't justify
flat-out refusing it.
I thought the agreement was to use NL and iproute2. But I haven't had
the capacity.
To be fair, the email thread was gone from my linux-rdma folder, so I
just had to review the entry in patchworks, and there was no captured
discussion there. If the agreement was made, it must have been face to
face at some point, and if I was involved, I had certainly forgotten by
now. But I still needed to clean up patchworks, hence my email ;-).
This is the "agreement" I was referring to:
On 4 Mar 2019, at 07:27, Parav Pandit <parav@xxxxxxxxxxxx> wrote:
[]
I think we should use rdma_nl_register(RDMA_NL_RDMA_CM, cma_cb_table), which was removed as part of the ID stats removal, for the following reasons:
1. An rdma netlink command auto-loads the module.
2. We don't need to write any extra code to call register_net_sysctl() in each netns.
The caller's skb's netns will read/write the value of response_timeout in 'struct cma_pernet'.
3. The last sysctl added to ipv6 was in 2017 (net/ipv6/addrconf.c); for ipv4 it was 2018.
Currently rdma_cm/rdma_ucma has configfs and sysctl.
We are adding netlink sys params to ib_core.
We already have 3 clients and the infrastructure built around rdma_nl_register(), so hooking into netlink will provide a unified way to set rdma params.
Let's just use netlink for any new params unless it is not doable.
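As a rough illustration of the netlink route Parav describes, a sketch could look like the following. The op name, the callback, and the per-netns response_timeout field are hypothetical and do not exist upstream; only rdma_nl_register()/rdma_nl_unregister() and cma_pernet() are existing interfaces.

/* Hypothetical sketch -- nothing like this exists in cma.c today. */
static int cma_nl_set_response_timeout(struct sk_buff *skb,
				       struct nlmsghdr *nlh,
				       struct netlink_ext_ack *extack)
{
	/* the caller's skb netns picks the right per-net state */
	struct cma_pernet *pernet = cma_pernet(sock_net(skb->sk));
	u8 timeout = 0;

	/* parsing of the timeout attribute from nlh omitted */
	pernet->response_timeout = timeout;	/* hypothetical field */
	return 0;
}

static const struct rdma_nl_cbs cma_cb_table[] = {
	[RDMA_NL_RDMA_CM_SET_TIMEOUT] = {	/* hypothetical op */
		.doit	= cma_nl_set_response_timeout,
		.flags	= RDMA_NL_ADMIN_PERM,
	},
};

/* in cma_init() / cma_cleanup(): */
	rdma_nl_register(RDMA_NL_RDMA_CM, cma_cb_table);
	rdma_nl_unregister(RDMA_NL_RDMA_CM);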
Well, I've made up my mind, so unless Jason wants to argue the other
side, I'm rejecting this patch. Here's why: the whole concept of a
timeout is to help recovery in a situation that overloads one end of
the connection. There is a relationship between the max queue backlog
on the one host and the timeout on the other host.
If you refer to the backlog parameter in rdma_listen(), I cannot see
it being used at all for IB.
No, not exactly. I was more referring to heavy load causing an
overflow in the mad packet receive processing. We have
IB_MAD_QP_RECV_SIZE set to 512 by default, but it can be changed at
module load time of the ib_core module and that represents the maximum
number of backlogged mad packets we can have waiting to be processed
before we just drop them on the floor. There can be other places to
drop them too, but this is the one I was referring to.
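For reference, the backlog Doug refers to is the ib_mad receive queue depth, which is wired up roughly as below in drivers/infiniband/core/mad.c (paraphrased from memory; check the source for the exact form). It can only be changed at ib_core load time, which is the point made above.

static int mad_recvq_size = IB_MAD_QP_RECV_SIZE;	/* 512 by default */
module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");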
How can we determine the CM response timeout based on the MAD QP
receive queue size? As far as I can see we would need to know the
processing time for the requests. An incoming connection request will
be sent to and handled by the listener before the listener calls
rdma_accept, so the processing time would need to include this delay.
Maybe the best approach would be to let IB rdma users modify the CM
response timeout, either by adding it to the rdma_conn_param struct or
by adding a setter function similar to rdma_set_ack_timeout. What do
you think?
Regards,
-Dag
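For illustration, a setter along the lines Dag suggests might look like the sketch below; the function name and the field it sets are hypothetical, modelled loosely on the existing rdma_set_ack_timeout().

/* Hypothetical -- no such helper exists upstream today. */
int rdma_set_cm_response_timeout(struct rdma_cm_id *id, u8 timeout)
{
	struct rdma_id_private *id_priv;

	id_priv = container_of(id, struct rdma_id_private, id);
	id_priv->cm_response_timeout = timeout;	/* hypothetical field, used
						 * instead of the constant
						 * CMA_CM_RESPONSE_TIMEOUT
						 * when sending the REQ */
	return 0;
}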
That is another scenario than what I try to solve. What I see, is that the MAD packets are delayed, not lost. The delay is longer than the CMA timeout. Hence, the MAD packets are retried, adding more burden to the PF proxying and inducing even longer delays. And excessive CM retries are observed. See 2612d723aadc ("IB/mlx4: Increase the timeout for CM cache") where I have some quantification thereof.
Back to your scenario above, yes indeed, the queue sizes are module params. If the MADs are tossed, we will see rq_num_udsdprd incrementing on a CX-3.
But I do not understand how the dots are connected. Assume one client does rdma_listen(, backlog = 1000); where are those 1000 REQs stored, assuming an "infinitely slow processor"?
Thxs, Håkon
For CX-3, which is paravirtualized wrt. MAD packets, it is the proxy
UD receive queue length for the PF driver that can be construed as a
backlog. Remember that any MAD packet being sent from a VF, or from the
PF itself, is sent to a proxy UD QP in the PF. Those packets are then
multiplexed out on the real QP0/1. Incoming MAD packets are
demultiplexed and sent once more to the proxy QP in the VF.
Generally, in order for a request to get dropped and for us to need to
retransmit, the queue must already have a full backlog. So, how long
does it take a heavily loaded system to process a full backlog? That,
plus some fuzz for a margin of error, should be our timeout. We
shouldn't be asking users to configure it.
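As a back-of-the-envelope illustration of that calculation (the drain rate and fuzz factor below are pure assumptions; the CM encodes the response timeout as 4.096 us * 2^t):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double backlog = 512;		/* default IB_MAD_QP_RECV_SIZE      */
	double mads_per_sec = 2000;	/* assumed drain rate under load    */
	double margin = 2.0;		/* fuzz factor                      */
	double drain = backlog / mads_per_sec * margin;	/* seconds          */
	int t = (int)ceil(log2(drain / 4.096e-6));

	printf("drain ~%.2fs -> CM response timeout index t=%d (%.2fs)\n",
	       drain, t, 4.096e-6 * pow(2, t));
	return 0;
}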
The customer configures the number of VMs, and different workloads may
lead to very different numbers of CM connections. The proxying of MAD
packets through the PF driver has a finite packet rate. With 64 VMs,
10,000 QPs on each, all going down due to a switch failing or similar,
you have 640,000 DREQs to be sent, and with the finite packet rate of
MAD packets through the PF, this takes more than the current CM timeout.
And then you re-transmit and increase the burden of the PF proxying.
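To put rough numbers on that scenario (only the DREQ count comes from the example above; the proxy rate is an assumption, and the default timeout index of 20 gives about 4.3 s):

#include <stdio.h>

int main(void)
{
	double dreqs = 64 * 10000.0;		/* 64 VMs x 10,000 QPs        */
	double proxy_rate = 50000.0;		/* assumed MADs/s through PF  */
	double drain = dreqs / proxy_rate;	/* seconds to send them all   */
	double cm_timeout = 4.096e-6 * (1 << 20);	/* default index 20   */

	printf("DREQ drain ~%.1fs vs default CM timeout ~%.1fs\n",
	       drain, cm_timeout);
	return 0;
}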
So, we can change the default to cope with this. But a MAD packet is
unreliable and we may have transient loss; in that case, we want a
short timeout.
However, if users change the default backlog queue on their systems,
*then* it would make sense to have the users also change the timeout
here, but I think guidance would be helpful.
So, to revive this patch, what I'd like to see is some attempt to
actually quantify a reasonable timeout for the default backlog depth,
then the patch should actually change the default to that reasonable
timeout, and then put in the ability to adjust the timeout with some
sort of doc guidance on how to calculate a reasonable timeout based on
configured backlog depth.
I can agree to this :-)
Thxs, Håkon
--
Doug Ledford <dledford@xxxxxxxxxx>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD