Re: [PATCH bpf-next 1/4] xdp: Support specifying expected existing program when attaching XDP

On 3/23/20 10:53 PM, Andrii Nakryiko wrote:
> On Mon, Mar 23, 2020 at 6:01 PM David Ahern <dsahern@xxxxxxxxx> wrote:
>>
>> On 3/23/20 1:23 PM, Toke Høiland-Jørgensen wrote:
>>>>>> I agree here. And yes, I've been working on extending bpf_link into
>>>>>> cgroup and then to XDP. We are still discussing some cgroup-specific
>>>>>> details, but the patch is ready. I'm going to post it as an RFC to get
>>>>>> the discussion started, before we do this for XDP.
>>>>>
>>>>> Well, my reason for being skeptical about bpf_link and proposing the
>>>>> netlink-based API is actually exactly this, but in reverse: With
>>>>> bpf_link we will be in the situation that everything related to a netdev
>>>>> is configured over netlink *except* XDP.
>>
>> +1
> 
> Hm... so using **libbpf**'s bpf_set_link_xdp_fd() API (notice "bpf" in
> the name of the library and function, and notice no "netlink"), which
> exposes absolutely nothing about netlink (it's just an internal
> implementation detail and can easily change), is ok. But actually
> switching to libbpf's bpf_link would be out of the ordinary? Especially
> considering that to use freplace programs (for libxdp and chaining)
> with libbpf you will use bpf_program and bpf_link abstractions
> anyways.

It seems to me you are conflating the libbpf API with the kernel uapi.
Making libbpf user-friendly certainly encourages standardizing on it,
but there is no requirement that using bpf means using libbpf.
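
For reference, this is roughly what the libbpf path looks like from the
caller's side today (a minimal sketch; the interface name, object path and
section title are made up, and error handling is trimmed):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/if_link.h>
#include <net/if.h>

int attach_xdp(const char *ifname, const char *obj_path)
{
    struct bpf_object *obj;
    struct bpf_program *prog;
    int ifindex, prog_fd;

    ifindex = if_nametoindex(ifname);
    if (!ifindex)
        return -1;

    obj = bpf_object__open_file(obj_path, NULL);
    if (libbpf_get_error(obj))
        return -1;
    if (bpf_object__load(obj))
        return -1;

    prog = bpf_object__find_program_by_title(obj, "xdp");
    if (!prog)
        return -1;
    prog_fd = bpf_program__fd(prog);

    /* the RTM_SETLINK/IFLA_XDP netlink message is hidden behind this call */
    return bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_UPDATE_IF_NOEXIST);
}

The netlink details never surface in the caller's code, which is Andrii's
point; the kernel still sees ordinary rtnetlink link configuration, which is
the point Toke and David are making.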

> 
>>
>>>>
>>>> One can argue that everything related to the use of BPF is going to be
>>>> uniform and done through the BPF syscall? Given the variety of possible
>>>> BPF hooks/targets, using a custom way to attach for each of those many
>>>> cases is really bad as well, so having a unifying concept and a single
>>>> entry point to do this is good, no?
>>>
>>> Well, it depends on how you view the BPF subsystem's relation to the
>>> rest of the kernel, I suppose. I tend to view it as a subsystem that
>>> provides a bunch of functionality, which you can set up (using "internal"
>>> BPF APIs), and then attach that object to a different subsystem
>>> (networking) using that subsystem's configuration APIs.
>>>
>>
>> again, +1.
>>
>> bpf syscall is used for program-related manipulations like load and
> 
> bpf syscall is used for way more than that, actually...
> 
>> unload. Attaching that program to an object has a solution specific to the
>> object type - e.g., netlink for XDP and ioctl for perf_events.
> 
> That's not true and hasn't been true for at least a while now. cgroup
> programs, flow_dissector, lirc_mode2 (whatever that is, I have no
> idea) are attached with BPF_PROG_ATTACH. raw_tracepoint and all the
> fentry/fexit/fmod_ret/freplace attachments are done also through bpf
> syscall. For perf_event related stuff it's done through ioctls right
> now, but with bpf_link unification I wouldn't be surprised if it will

and it always will be possible to. Kernel uapi will not be revoked just
because a new way to do something comes along.
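
For completeness, the existing ioctl-based attach looks roughly like this
(a minimal sketch; it assumes prog_fd is an already-loaded
BPF_PROG_TYPE_TRACEPOINT program and tp_id was read from tracefs):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int attach_to_tracepoint(int tp_id, int prog_fd)
{
    struct perf_event_attr attr;
    int pfd;

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_TRACEPOINT;
    attr.size = sizeof(attr);
    attr.config = tp_id;    /* id from .../tracing/events/<grp>/<event>/id */

    /* one event on CPU 0, all processes; real code opens one per CPU */
    pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
    if (pfd < 0)
        return -1;

    /* the perf subsystem's own attach mechanism is a pair of ioctls */
    if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) ||
        ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0)) {
        close(pfd);
        return -1;
    }
    return pfd;
}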

> be done through the same LINK_CREATE command soon, as is done for
> cgroup and *other* tracing bpf_links. Because consistent API and
> semantics is good, rather than having to do it N different ways for N
> different subsystems.
> 

That's a bpf / libbpf centric perspective. What Toke and I are saying is
that the networking-centric perspective matters too, and networking uses
netlink for configuration.
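
To make that concrete: the netlink path being discussed is just an
RTM_SETLINK message with a nested IFLA_XDP attribute carrying the program
fd. A rough sketch, assuming ifindex and prog_fd are already known and
leaving out ack/extack handling:

#include <linux/if_link.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int xdp_attach_netlink(int ifindex, int prog_fd)
{
    struct {
        struct nlmsghdr nh;
        struct ifinfomsg ifinfo;
        char attrbuf[64];
    } req;
    struct sockaddr_nl sa;
    struct nlattr *nla, *nla_xdp;
    int sock;

    sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (sock < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.nl_family = AF_NETLINK;
    if (bind(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        goto err;

    memset(&req, 0, sizeof(req));
    req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
    req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
    req.nh.nlmsg_type = RTM_SETLINK;
    req.ifinfo.ifi_family = AF_UNSPEC;
    req.ifinfo.ifi_index = ifindex;

    /* nested IFLA_XDP attribute ... */
    nla = (struct nlattr *)((char *)&req + NLMSG_ALIGN(req.nh.nlmsg_len));
    nla->nla_type = NLA_F_NESTED | IFLA_XDP;
    nla->nla_len = NLA_HDRLEN;

    /* ... carrying the program fd in IFLA_XDP_FD */
    nla_xdp = (struct nlattr *)((char *)nla + nla->nla_len);
    nla_xdp->nla_type = IFLA_XDP_FD;
    nla_xdp->nla_len = NLA_HDRLEN + sizeof(prog_fd);
    memcpy((char *)nla_xdp + NLA_HDRLEN, &prog_fd, sizeof(prog_fd));
    nla->nla_len += nla_xdp->nla_len;

    req.nh.nlmsg_len += NLA_ALIGN(nla->nla_len);

    if (send(sock, &req, req.nh.nlmsg_len, 0) < 0)
        goto err;

    /* a real caller would recv() and check the netlink ack here */
    close(sock);
    return 0;
err:
    close(sock);
    return -1;
}

This is essentially what bpf_set_link_xdp_fd() constructs internally, and
it is the same configuration channel ip(8) and other netlink-speaking
tools already use for the netdev.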


