On Tue, Nov 21, 2023 at 8:06 AM Jiri Pirko <jiri@xxxxxxxxxxx> wrote:
>
> Mon, Nov 20, 2023 at 11:56:50PM CET, jhs@xxxxxxxxxxxx wrote:
> >On Mon, Nov 20, 2023 at 4:49 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
> >>
> >> On 11/20/23 8:56 PM, Jamal Hadi Salim wrote:
> >> > On Mon, Nov 20, 2023 at 1:10 PM Jiri Pirko <jiri@xxxxxxxxxxx> wrote:
> >> >> Mon, Nov 20, 2023 at 03:23:59PM CET, jhs@xxxxxxxxxxxx wrote:
>
> [...]
>
> >> tc BPF and XDP already have widely used infrastructure and can be
> >> developed against libbpf or other user space libraries for a user
> >> space control plane. With 'control plane' you refer here to the tc /
> >> netlink shim you've built, but looking at the tc command line
> >> examples, this doesn't really provide a good user experience (you
> >> call it p4 but people load bpf obj files). If the expectation is
> >> that an operator should run tc commands, then it's not a nice
> >> experience for either p4 or BPF folks. From a BPF PoV, we moved over
> >> to bpf_mprog and plan to also extend this for XDP to have a common
> >> look and feel wrt networking for developers. Why can't this be
> >> reused?
> >
> >The filter loading which loads the program is considered pipeline
> >instantiation - consider it "provisioning" more than "control", which
> >happens at runtime. "Control" is purely netlink based. The iproute2
> >code we use links libbpf, for example, for the filter. If we can
> >achieve the same with bpf_mprog then sure - we just don't want to
> >lose functionality. Off the top of my head, some sample space:
> >- we could have multiple pipelines with different priorities (which
> >tc provides to us) - and each pipeline may have its own logic with
> >many tables etc (and the choice to iterate the next one is
> >essentially encoded in the tc action codes)
> >- we use tc blocks to map groups of ports (which I don't think bpf
> >has internal access to)
> >
> >In regards to usability: no, I don't expect someone doing things at
> >scale to use the tc command line. The APIs are via netlink. But the
> >tc cli is a must for the rest of the masses, per our traditions. Also
> >I really
>
> I don't follow. You repeatedly mention "the must of the traditional tc
> cli", but what of the existing traditional cli do you actually use for
> p4tc? If I look at the examples, pretty much everything looks new to
> me. Example:
>
>   tc p4ctrl create myprog/table/mytable dstAddr 10.0.1.2/32 \
>     action send_to_port param port eno1
>
> This is just TC/RTnetlink used as a channel to pass new things over. If
> that is the case, what's traditional here?

What is not traditional about it?

> >didn't even want to use ebpf at all for operator experience reasons -
> >it requires a compilation of the code and an extra loading step
> >compared to what our original u32/pedit code offered.
> >
> >> I don't quite follow why most of this couldn't be implemented
> >> entirely in user space without the detour of this, and you would
> >> provide a developer library which could then be integrated into a
> >> p4 runtime/frontend? This way users never interface with ebpf parts
> >> nor tc, given they also shouldn't have to - it's an implementation
> >> detail. This is what John was also pointing out earlier.
> >
> >Netlink is the API. We will provide a library for object manipulation
> >which abstracts away the need to know netlink. Someone who for their
> >own reasons wants to use p4runtime or TDI could write on top of this.
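To make the provisioning/control split and the multiple-pipelines point
above concrete, here is a rough sketch of the flow. The shared-block
commands are standard tc; the p4 classifier and p4ctrl invocations are
illustrative, modeled on the examples posted with this series, and the
exact flags may still change:

  # Provisioning: create a shared block over a group of ports.
  tc qdisc add dev eth0 ingress_block 22 ingress
  tc qdisc add dev eth1 ingress_block 22 ingress

  # Instantiate two pipelines at different priorities on that block;
  # prog1's action verdict decides whether prog2 (next in prio order)
  # gets to run.
  tc filter add block 22 protocol all prio 10 p4 pname prog1 \
      action bpf obj prog1.o section p4tc/main
  tc filter add block 22 protocol all prio 20 p4 pname prog2 \
      action bpf obj prog2.o section p4tc/main

  # Runtime control: pure netlink, here driven through the tc cli.
  tc p4ctrl create prog1/table/mytable dstAddr 10.0.1.2/32 \
      action send_to_port param port eno1

Anything written on top of the object-manipulation library would emit
the same netlink messages that last command does.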
> >I would not design a kernel interface just to meet p4runtime (we
> >already have TDI, which came later and does things differently). So I
> >expect us to support both of those. And if I were to do something on
> >SDN that was more robust, I would write my own that still uses these
> >netlink interfaces.
>
> Actually, what Daniel says about the p4 library used as a backend to a
> p4 frontend is pretty much aligned with what I claimed on the p4 calls
> a couple of times. If you have this p4 userspace tooling, it is easy
> for offloads to replace the backend with a vendor-specific library,
> which allows p4 offload suitable for all vendors (your plan of p4tc
> offload does not work well for our hw, as we have repeatedly claimed).

That's you - NVIDIA. You have chosen a path away from the kernel
towards DOCA. I understand NVIDIA's frustration with dealing with the
upstream process (which has been cited to me as a good reason for
DOCA), but please don't impose these values and your politics on other
vendors (Intel and AMD, for example) who are more than willing to
invest in making the kernel interfaces the path forward. Your choice.
Nobody is stopping you from offering your customers proprietary
solutions which include a specific ebpf approach alongside DOCA. We
believe that a singular interface, regardless of vendor, is the right
way forward. IMHO, this siloing - which eBPF, being a double-edged
sword, unfortunately also enables - is not good for the community.

> As I also said on the p4 call a couple of times, I don't see the
> kernel as the correct place to do the p4 abstractions. Why don't you
> do it in userspace and give vendors the possibility to have p4
> backends with compilers, runtime optimizations etc. in userspace,
> talking to the HW in the vendor-suitable way too? Then the SW
> implementation could easily be eBPF, and the main reason (I believe)
> why you need to have this in TC (offload) is then void.
>
> The "everyone wants to use TC/netlink" claim does not seem correct
> to me. Why not have one Linux p4 solution that fits everyone's needs?

You mean one more fitting to the DOCA world? No - because I am a
kernel-first person, and kernel interfaces are good for everyone.

cheers,
jamal