On 2024-01-04 20:29:02 [+0100], Toke Høiland-Jørgensen wrote:
> Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> writes:
>
> >> @@ -3925,6 +3926,7 @@ struct sk_buff *tcf_qevent_handle(struct tcf_qevent *qe, struct Qdisc *sch, stru
> >>
> >>  	fl = rcu_dereference_bh(qe->filter_chain);
> >>
> >> +	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
> >>  	switch (tcf_classify(skb, NULL, fl, &cl_res, false)) {
> >>  	case TC_ACT_SHOT:
> >>  		qdisc_qstats_drop(sch);
> >
> > Here and in all other places this patch adds locks that
> > will kill performance of XDP, tcx and everything else in networking.
> >
> > I'm surprised Jesper and other folks are not jumping in with nacks.
> > We measure performance in nanoseconds here.
> > Extra lock is no go.
> > Please find a different way without ruining performance.
>
> I'll add that while all this compiles out as no-ops on !PREEMPT_RT, I do
> believe there are people who are using XDP on PREEMPT_RT kernels and
> still expect decent performance. And to achieve that it is absolutely
> imperative that we can amortise expensive operations (such as locking)
> over multiple packets.
>
> I realise there's a fundamental trade-off between the amount of
> amortisation and the latency hit that we take from holding locks for
> longer, but tuning the batch size (while still keeping some amount of
> batching) may be a way forward? I suppose Jakub's suggestion in the
> other part of the thread, of putting the locks around napi->poll(), is a
> step towards something like this.

The RT requirements are usually different. Networking as in CAN might be
important, while Ethernet might only be used for remote communication and
so be "not" important. People complained that they had to wait for
Ethernet processing to finish before the CAN packet could be injected
into the stack. With that expectation you would like to pause Ethernet
immediately and switch over to the CAN interrupt thread.

But if someone managed to set up XDP, then it is likely to be important.
With RT traffic it is usually not the throughput that matters but the
latency. You are likely in the position to receive a packet, say every
1ms, and need to respond immediately. XDP would be used to inspect the
packet and either hand it over to the stack or process it.

I expected the lock operation (under RT) to always succeed and not cause
any delay because it should not be contended. It should only block if
something with higher priority preempted the current interrupt thread
_and_ also happens to use XDP on the same CPU. In that case (XDP is
needed) it would flush the current user out of the locked section before
the higher-prio thread could take over. Doing bulk processing and
allowing the low-priority thread to complete would delay the
high-priority thread. Maybe I am too pessimistic here and having two XDP
programs on one CPU is unlikely to happen.

Adding the lock on a per-NAPI basis would allow batching packets (a
rough sketch of what I mean is at the end of this mail). Acquiring the
lock only if XDP is supported would not block the CAN drivers since they
don't support XDP. But it sounds like a hack.

Daniel said netkit doesn't need this locking because it does not support
this redirect, and that made me think. Would it work to make the
redirect structures part of the bpf_prog structure instead of per-CPU?
My understanding is that an eBPF program's data structures are part of
the program itself and contain locking which allows one eBPF program to
preempt another. Having the redirect structures be part of the program
would make this locking obsolete. Am I missing anything?

> -Toke

Sebastian
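
PS: The per-NAPI idea, as a rough sketch only and not a tested patch:
__napi_poll() is heavily simplified from net/core/dev.c, bpf_run_lock
and the local_lock_nested_bh guard are the ones introduced by this
series, and the "does this driver support XDP at all" check that the
CAN argument relies on is left out entirely:

	/* net/core/dev.c, heavily simplified sketch */
	static int __napi_poll(struct napi_struct *n, bool *repoll)
	{
		int work = 0, weight = n->weight;

		if (test_bit(NAPI_STATE_SCHED, &n->state)) {
			/*
			 * One lock/unlock per poll round instead of one per
			 * packet amortises the cost over up to 'weight'
			 * packets. On !PREEMPT_RT this still compiles away;
			 * on PREEMPT_RT a higher-priority XDP user on the
			 * same CPU waits for at most one budget.
			 */
			guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);

			work = n->poll(n, weight);
			trace_napi_poll(n, work, weight);
		}

		/* GRO flush and repoll handling of the real function omitted */
		return work;
	}

The obvious downside is exactly the trade-off Toke mentions: the lock is
now held for a whole budget (up to 'weight' packets) even if only one of
them ends up in XDP.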