Re: XDP packet queueing and scheduling capabilities

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 





On 13/02/2024 17:07, Toke Høiland-Jørgensen wrote:
Alexandre Cassen <acassen@xxxxxxxxx> writes:

Hi Toke,

here is a target with lot of interest for it : www.gtp-guard.org

Ah, seems like a cool project; thanks for the pointer!

Well, right now focus is much more on code rather than documentation :D

but doc will happen at one time. In a hurry right now adding support to XDP routing data-path that will deal with GTP-U decap on one side and PPPoE encap the other side, and reciprocally. This is one use-case to make mobile access network converge to existing isp access network infrastructure (not L2TP since it just create scalling issues for bunch of customers). another one will be SRv6 later on... Anyway, having fun hacking around on it :)


When dealing with mobile network data-plane, at some point you have
ordering issues and shaping needs, so queuing is truly needed.
Alternatively ones can implement PIFO or others built on AF_XDP but if
dedicated bpf map covers the use-case, would be nice.

Right, I'm kinda thinking about the map type that is part of the XDP
queueing series as a general-purpose packet buffer that will enable all
kinds of features, not just queueing for forwarding. Whether it'll end
up being the PIFO map type, or a simpler one, I'm less certain about.
The PIFO abstraction may end up being too special-purpose. Opinions
welcome!


Read the code last night, that is exactly what is required here, bpf_timer trick is fun.

(IP fragmentation in gtp-guard is using bpf_timer for ephemeral id tracking handling, this is just here re-ordering pop-up... on constructor vendors equipements where normal path is using diffrent processing-unit than frag handling this happen !)

Agreed on your point, having a general purpose map type enable inheritage option for specific feature instead of multiplying meaningful map type for each feature/use-case.


Watching at your LPC 2022 presentation, at the end, discussions where
made around using existing Qdisc kernel framework and find a way to
share the path between XDP and netstack. Is it a target for adding
PIFO, or more generally getting queuing support for XDP ?

I don't personally consider it feasible to have forwarded XDP frames
share the qdisc path. The presence of an sk_buff is too simply too
fundamentally baked into the qdisc layer. I'm hoping that the addition
of an eBPF-based qdisc will instead make it feasible to share queueing
algorithm code between the two layers (and even build forwarding paths
that can handle both by having the different BPF implementations
cooperate). And of course co-existence between XDP and stack forwarding
is important to avoid starvation, but that is already an issue for XDP
forwarding today.

Agreed too, eBPF backed Qdisc 'proxy' sounds great idea. latency forecast impact ?


- Alexandre




[Index of Archives]     [Linux Networking Development]     [Fedora Linux Users]     [Linux SCTP]     [DCCP]     [Gimp]     [Yosemite Campsites]

  Powered by Linux