Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:

> On Mon, Mar 23, 2020 at 4:24 AM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> writes:
>>
>> > On Fri, Mar 20, 2020 at 11:31 AM John Fastabend
>> > <john.fastabend@xxxxxxxxx> wrote:
>> >>
>> >> Jakub Kicinski wrote:
>> >> > On Fri, 20 Mar 2020 09:48:10 +0100 Toke Høiland-Jørgensen wrote:
>> >> > > Jakub Kicinski <kuba@xxxxxxxxxx> writes:
>> >> > > > On Thu, 19 Mar 2020 14:13:13 +0100 Toke Høiland-Jørgensen wrote:
>> >> > > >> From: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
>> >> > > >>
>> >> > > >> While it is currently possible for userspace to specify that an existing
>> >> > > >> XDP program should not be replaced when attaching to an interface, there is
>> >> > > >> no mechanism to safely replace a specific XDP program with another.
>> >> > > >>
>> >> > > >> This patch adds a new netlink attribute, IFLA_XDP_EXPECTED_FD, which can be
>> >> > > >> set along with IFLA_XDP_FD. If set, the kernel will check that the program
>> >> > > >> currently loaded on the interface matches the expected one, and fail the
>> >> > > >> operation if it does not. This corresponds to a 'cmpxchg' memory operation.
>> >> > > >>
>> >> > > >> A new companion flag, XDP_FLAGS_EXPECT_FD, is also added to explicitly
>> >> > > >> request checking of the EXPECTED_FD attribute. This is needed for userspace
>> >> > > >> to discover whether the kernel supports the new attribute.
>> >> > > >>
>> >> > > >> Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
>> >> > > >
>> >> > > > I didn't know we wanted to go ahead with this...
>> >> > >
>> >> > > Well, I'm aware of the bpf_link discussion, obviously. Not sure what's
>> >> > > happening with that, though. So since this is a straight-forward
>> >> > > extension of the existing API, that doesn't carry a high implementation
>> >> > > cost, I figured I'd just go ahead with this.
>> >> > > Doesn't mean we can't have
>> >> > > something similar in bpf_link as well, of course.
>> >> >
>> >> > I'm not really in the loop, but from what I overheard - I think the
>> >> > bpf_link may be targeting something non-networking first.
>> >>
>> >> My preference is to avoid building two different APIs one for XDP and another
>> >> for everything else. If we have userlands that already understand links and
>> >> pinning support is on the way imo lets use these APIs for networking as well.
>> >
>> > I agree here. And yes, I've been working on extending bpf_link into
>> > cgroup and then to XDP. We are still discussing some cgroup-specific
>> > details, but the patch is ready. I'm going to post it as an RFC to get
>> > the discussion started, before we do this for XDP.
>>
>> Well, my reason for being skeptical about bpf_link and proposing the
>> netlink-based API is actually exactly this, but in reverse: With
>> bpf_link we will be in the situation that everything related to a netdev
>> is configured over netlink *except* XDP.
>
> One can argue that everything related to use of BPF is going to be
> uniform and done through BPF syscall? Given variety of possible BPF
> hooks/targets, using custom ways to attach for all those many cases is
> really bad as well, so having a unifying concept and single entry to
> do this is good, no?

Well, it depends on how you view the BPF subsystem's relation to the
rest of the kernel, I suppose. I tend to view it as a subsystem that
provides a bunch of functionality which you can set up (using "internal"
BPF APIs) and then attach to a different subsystem (networking) using
that subsystem's configuration APIs.

Seeing as this really boils down to a matter of taste, though, I'm not
sure we'll find agreement on this :)

>> Other than that, I don't see any reason why the bpf_link API won't work.
>> So I guess that if no one else has any problem with BPF insisting on
>> being a special snowflake, I guess I can live with it as well...
>> *shrugs* :)
>
> Apart from derogatory remark,

Yeah, should have left out the 'snowflake' bit, sorry about that...

> BPF is a bit special here, because it requires every potential BPF
> hook (be it cgroups, xdp, perf_event, etc) to be aware of BPF
> program(s) and execute them with special macro. So like it or not, it
> is special and each driver supporting BPF needs to implement this BPF
> wiring.

All that is about internal implementation, though. I'm bothered by the
API discrepancy (i.e., from the user PoV we'll end up with: "netlink is
what you use to configure your netdev, except if you want to attach an
XDP program to it").

-Toke
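[Editorial note: a sketch of the 'cmpxchg' semantics the patch describes may help. The attribute and flag names (IFLA_XDP_EXPECTED_FD, XDP_FLAGS_EXPECT_FD) come from the patch itself, but the flag's bit value, the `fake_dev` struct, and the `xdp_attach()` helper below are illustrative userspace assumptions, not the kernel implementation.]

```c
#include <errno.h>

/* Illustrative value; the real flag lives in uapi/linux/if_link.h. */
#define XDP_FLAGS_EXPECT_FD (1U << 4)

/* Stand-in for a netdev; 0 means no XDP program attached. */
struct fake_dev {
	int attached_prog_id;
};

/*
 * Model of the check the patch adds: when XDP_FLAGS_EXPECT_FD is set,
 * only install the new program if the currently attached one matches
 * the caller's expectation -- compare-and-exchange, applied to the
 * attach point. Otherwise fail without touching the attachment.
 */
static int xdp_attach(struct fake_dev *dev, int new_prog_id,
		      int expected_prog_id, unsigned int flags)
{
	if (flags & XDP_FLAGS_EXPECT_FD) {
		if (dev->attached_prog_id != expected_prog_id)
			return -EEXIST;
	}
	dev->attached_prog_id = new_prog_id;
	return 0;
}
```

With this, an agent that attached program A can race-free replace A with B by passing A as the expected program; if some other process swapped in C in the meantime, the replace fails instead of clobbering C.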