Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:

> Hi Toke,
>
> Sorry for the delay.
>
> On 2019/10/31 21:12, Toke Høiland-Jørgensen wrote:
>> Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:
>>
>>> On 2019/10/28 0:21, Toke Høiland-Jørgensen wrote:
>>>> Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:
>>>>>> Yeah, you are right that it's something we're thinking about. I'm
>>>>>> not sure we'll actually have the bandwidth to implement a complete
>>>>>> solution ourselves, but we are very much interested in helping
>>>>>> others do this, including smoothing out any rough edges (or adding
>>>>>> missing features) in the core XDP feature set that is needed to
>>>>>> achieve this :)
>>>>>
>>>>> I'm very interested in general usability solutions.
>>>>> I'd appreciate it if you could join the discussion.
>>>>>
>>>>> The basic idea of my approach is to reuse the HW-offload
>>>>> infrastructure in the kernel. Typical networking features in the
>>>>> kernel have an offload mechanism (TC flower, nftables, bridge,
>>>>> routing, and so on). In general these are what users want to
>>>>> accelerate, so IMO easy XDP use should also support these features.
>>>>> With this in mind, reusing the existing HW-offload mechanism is a
>>>>> natural way to go for me. OVS uses TC to offload flows, so use TC
>>>>> for XDP as well...
>>>>
>>>> I agree that XDP should be able to accelerate existing kernel
>>>> functionality. However, this does not necessarily mean that the
>>>> kernel has to generate an XDP program and install it, like your
>>>> patch does. Rather, what we should be doing is exposing the
>>>> functionality through helpers so XDP can hook into the data
>>>> structures already present in the kernel and make decisions based on
>>>> what is contained there. We already have that for routing; L2
>>>> bridging, and some kind of connection tracking, are obvious
>>>> contenders for similar additions.
>>>
>>> Thanks, adding helpers itself should be good, but how does this let
>>> users start using XDP without having them write their own BPF code?
>>
>> It wouldn't in itself. But it would make it possible to write XDP
>> programs that could provide the same functionality; people would then
>> need to run those programs to actually opt in to this.
>>
>> For some cases this would be a simple "on/off switch", e.g.,
>> "xdp-route-accel --load <dev>", which would install an XDP program
>> that uses the regular kernel routing table (and the same with
>> bridging). We are planning to collect such utilities in the xdp-tools
>> repo - I am currently working on a simple packet filter:
>> https://github.com/xdp-project/xdp-tools/tree/xdp-filter
>
> Let me confirm how this tool adds filter rules.
> Is this adding another command-line tool for firewalling?
>
> If so, that is different from my goal.
> Introducing another command-line tool will require people to learn
> more.
>
> My proposal is to reuse the kernel interface to minimize such a need
> for learning.

I wasn't proposing that this particular tool should be a replacement for
the kernel packet filter; it's deliberately fairly limited in
functionality. My point was that we could create other such tools for
specific use cases which could be more or less drop-in (similar to how
nftables has a command-line tool that is compatible with the iptables
syntax).

I'm all for exposing more of the existing kernel capabilities to XDP.
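To make that concrete, the routing case shows roughly what I have in
mind: an XDP program can already consult the live kernel FIB through the
bpf_fib_lookup() helper. A minimal sketch along the lines of
samples/bpf/xdp_fwd_kern.c (IPv6, TTL handling and neighbour misses
omitted for brevity):

/* Forward IPv4 packets by asking the kernel FIB for a next hop,
 * instead of duplicating the routing table in a BPF map. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef AF_INET
#define AF_INET 2
#endif

SEC("xdp")
int xdp_fib_fwd(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);
	struct bpf_fib_lookup fib = {};

	if ((void *)(iph + 1) > data_end ||
	    eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	fib.family      = AF_INET;
	fib.tos         = iph->tos;
	fib.l4_protocol = iph->protocol;
	fib.tot_len     = bpf_ntohs(iph->tot_len);
	fib.ipv4_src    = iph->saddr;
	fib.ipv4_dst    = iph->daddr;
	fib.ifindex     = ctx->ingress_ifindex;

	/* The lookup happens in the kernel's own routing table; on
	 * success the helper fills in the next-hop MAC addresses and
	 * the egress ifindex for us. */
	if (bpf_fib_lookup(ctx, &fib, sizeof(fib), 0) !=
	    BPF_FIB_LKUP_RET_SUCCESS)
		return XDP_PASS;

	__builtin_memcpy(eth->h_dest, fib.dmac, ETH_ALEN);
	__builtin_memcpy(eth->h_source, fib.smac, ETH_ALEN);
	return bpf_redirect(fib.ifindex, 0);
}

char _license[] SEC("license") = "GPL";

Note that nothing here replicates routing state into a map: the program
stays correct as routes change, because every packet does a lookup in
the same table the stack uses.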
However, I think it's the wrong approach to do this by reimplementing
the functionality in an eBPF program and replicating the state in maps;
instead, it's better to refactor the existing kernel functionality so it
can be called directly from an eBPF helper function, and then ship a
tool as part of xdp-tools that installs an XDP program making use of
these helpers to accelerate the functionality.

Take your example of TC rules: you were proposing a flow like this:

  Userspace TC rule -> kernel rule table -> eBPF map -> generated XDP
  program

Whereas what I mean is that we could do this instead:

  Userspace TC rule -> kernel rule table

and separately:

  XDP program -> bpf helper -> lookup in kernel rule table
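To illustrate that second flow, here is a purely hypothetical sketch of
what the XDP side could look like. No such helper exists today:
bpf_tc_flower_lookup(), its helper ID and struct bpf_flower_result are
invented names, standing in for whatever refactored TC lookup we would
end up exposing:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct bpf_flower_result {
	__u32 action;	/* TC_ACT_* verdict from the matching rule */
	__u32 ifindex;	/* egress device for a redirect action */
};

/* Invented helper ID, declared old-style so the sketch compiles. */
static long (*bpf_tc_flower_lookup)(struct xdp_md *ctx,
				    struct bpf_flower_result *res,
				    __u32 res_size) = (void *)200;

SEC("xdp")
int xdp_tc_accel(struct xdp_md *ctx)
{
	struct bpf_flower_result res = {};

	/* Ask the kernel's own flower classifier tables for a verdict,
	 * instead of consulting a BPF-map copy of the rules. */
	if (bpf_tc_flower_lookup(ctx, &res, sizeof(res)) < 0)
		return XDP_PASS;	/* no match: let the stack see it */

	switch (res.action) {
	case TC_ACT_SHOT:
		return XDP_DROP;
	case TC_ACT_REDIRECT:
		return bpf_redirect(res.ifindex, 0);
	default:
		return XDP_PASS;	/* anything else: punt to the stack */
	}
}

char _license[] SEC("license") = "GPL";

The point is that the rules keep living in the kernel's own tables,
configured through the existing tc(8) interface, and XDP just becomes a
fast consumer of them.

-Toke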