Alexandre Cassen <acassen@xxxxxxxxx> writes:

>>> Watching your LPC 2022 presentation, at the end there was discussion
>>> around using the existing qdisc kernel framework and finding a way to
>>> share the path between XDP and the netstack. Is that a target for
>>> adding PIFO, or more generally for getting queueing support for XDP?
>>
>> I don't personally consider it feasible to have forwarded XDP frames
>> share the qdisc path. The presence of an sk_buff is simply too
>> fundamentally baked into the qdisc layer. I'm hoping that the addition
>> of an eBPF-based qdisc will instead make it feasible to share queueing
>> algorithm code between the two layers (and even build forwarding paths
>> that can handle both by having the different BPF implementations
>> cooperate). And of course co-existence between XDP and stack
>> forwarding is important to avoid starvation, but that is already an
>> issue for XDP forwarding today.
>
> Agreed too, an eBPF-backed Qdisc 'proxy' sounds like a great idea. Any
> forecast of the latency impact?

Of writing the qdisc in eBPF instead of as a regular kernel module?
Negligible; the overhead shown in the last posting of those patches[0]
is not nil, but it seems there's a path to getting rid of it (teaching
BPF how to put skbs directly into list/rbtree data structures instead
of having to allocate a container for them).

The latency impact of mixing XDP and qdisc traffic? Dunno, that depends
on the traffic and the algorithms managing it. I don't think there's
anything inherent in the BPF side of things that would impact latency
(it's all just code in the end), as long as we make sure that the APIs
and primitives can express all the things we need to effectively
implement good algorithms. Which is why I'm asking for examples of use
cases :)

-Toke

[0] https://lore.kernel.org/r/cover.1705432850.git.amery.hung@xxxxxxxxxxxxx
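
P.S. To illustrate the per-packet container allocation mentioned above,
here is a rough sketch of a minimal FIFO enqueue using the list kfuncs
that exist upstream today. Treat it as a sketch only: the
bpf_qdisc_skb_drop() kfunc, the struct bpf_sk_buff_ptr argument and the
struct_ops wiring are assumed from the qdisc series and may not match
the exact code in [0]; the dequeue side and the Qdisc_ops struct_ops
map are omitted.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

char _license[] SEC("license") = "GPL";

/* Values from include/linux/netdevice.h (macros, so not in vmlinux.h) */
#define NET_XMIT_SUCCESS 0x00
#define NET_XMIT_DROP    0x01

/* Assumed kfunc from the bpf qdisc series: hand the skb back for dropping */
extern void bpf_qdisc_skb_drop(struct sk_buff *skb,
			       struct bpf_sk_buff_ptr *to_free) __ksym;

/* Today every enqueued skb needs a BPF-allocated container like this,
 * because the list/rbtree kfuncs can only link objects that embed a
 * bpf_list_node/bpf_rb_node. Removing this per-packet allocation is the
 * "path to getting rid of it" mentioned above. */
struct skb_node {
	struct sk_buff __kptr *skb;
	struct bpf_list_node node;
};

/* Keep the lock and the list head in the same datasec (selftest-style) */
#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))

private(FIFO) struct bpf_spin_lock fifo_lock;
private(FIFO) struct bpf_list_head fifo __contains(skb_node, node);

SEC("struct_ops")
int BPF_PROG(sketch_enqueue, struct sk_buff *skb, struct Qdisc *sch,
	     struct bpf_sk_buff_ptr *to_free)
{
	struct skb_node *skbn;

	/* The per-packet container allocation */
	skbn = bpf_obj_new(typeof(*skbn));
	if (!skbn) {
		bpf_qdisc_skb_drop(skb, to_free);
		return NET_XMIT_DROP;
	}

	/* Move the skb kptr into the container, then link the container */
	skb = bpf_kptr_xchg(&skbn->skb, skb);
	if (skb)	/* old value is NULL, but the verifier wants it released */
		bpf_qdisc_skb_drop(skb, to_free);

	bpf_spin_lock(&fifo_lock);
	bpf_list_push_back(&fifo, &skbn->node);
	bpf_spin_unlock(&fifo_lock);

	return NET_XMIT_SUCCESS;
}

Once BPF learns to link the skb itself into the list/rbtree, the
skb_node allocation and the kptr_xchg dance above should go away, which
is where the remaining overhead goes.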