Re: [PATCH] um: vector: fix BPF loading in vector drivers

On 30/11/2019 07:29, Anton Ivanov wrote:
On 29/11/2019 23:12, Daniel Borkmann wrote:
On 11/29/19 12:54 PM, Anton Ivanov wrote:
On 29/11/2019 09:15, Daniel Borkmann wrote:
On 11/28/19 6:44 PM, anton.ivanov@xxxxxxxxxxxxxxxxxx wrote:
From: Anton Ivanov <anton.ivanov@xxxxxxxxxxxxxxxxxx>

This fixes a possible hang in BPF firmware loading in the
UML vector I/O drivers due to the use of GFP_KERNEL while holding
a spinlock.

Based on a proposed fix by weiyongjun1@xxxxxxxxxx and suggestions for
improving it by dan.carpenter@xxxxxxxxxx

Signed-off-by: Anton Ivanov <anton.ivanov@xxxxxxxxxxxxxxxxxx>

Any reason why this BPF firmware loading mechanism in the UML vector driver that was recently added [0] uses plain old classic BPF? Quoting your commit log [0]:

It will allow whatever is allowed by the socket filter mechanism. Looking at the socket filter implementation in the kernel, it takes eBPF internally; however, even the kernel docs still say BPF.

You are using SO_ATTACH_FILTER in uml_vector_attach_bpf(), which is the old classic BPF (not eBPF). The kernel internally translates that into eBPF insns, but you will be constrained forever to the abilities of cBPF. The later-added SO_ATTACH_BPF is the one for eBPF, where you pass the prog fd obtained from bpf().
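
To make the distinction concrete, here is a minimal sketch (not the driver's actual code) of attaching a classic BPF filter with SO_ATTACH_FILTER, the mechanism uml_vector_attach_bpf() relies on; the "accept everything" filter and the function name are invented for illustration:

#include <sys/socket.h>
#include <linux/filter.h>

static int attach_cbpf(int sock_fd)
{
	/* Hypothetical "accept all" classic BPF program: a single
	 * BPF_RET instruction with a large accept length. */
	struct sock_filter insns[] = {
		{ BPF_RET | BPF_K, 0, 0, 0x00040000 },
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};

	/* SO_ATTACH_FILTER takes raw cBPF instructions; the kernel
	 * converts them to eBPF internally, but only cBPF semantics
	 * are expressible through this interface. */
	return setsockopt(sock_fd, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}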

I will switch to that in the next version.


   All vector drivers now allow a BPF program to be loaded and
   associated with the RX socket in the host kernel.

   1. The program can be loaded as an extra kernel command line
   option to any of the vector drivers.

   2. The program can also be loaded as "firmware", using the
   ethtool flash option. It is possible to turn this facility
   on or off using a command line option.

   A simplistic wrapper for generating the BPF firmware for the raw
   socket driver out of a tcpdump/libpcap filter expression can be
   found at: https://github.com/kot-begemot-uk/uml_vector_utilities/

... it tells what it does but /nothing/ about the original rationale / use case why it is needed. So what is the use case? And why is this only classic BPF? Is there any discussion to read up on that led you to the decision of only implementing handling for classic BPF?

Moving processing out of the GUEST onto the HOST using a safe language. The firmware load is done on the GUEST, and the BPF is the virtual NIC "firmware", which runs on the HOST (in the host kernel, in fact).

As an idea, it is identical to what Netronome cards do in hardware.

I'm asking because classic BPF is /legacy/ stuff that is on feature freeze and very limited in functionality compared to native (e)BPF, which is why you need this weird 'firmware' loader [1] that wraps around tcpdump to parse the -ddd output into BPF insns ...

Because, in any of the common scripting languages, there is no other mechanism for retrieving the filter after libpcap has compiled it.

The pcap Perl, Python, Go (or whatever else) wrappers do not give you access to the compiled code after the filter has been compiled.

Why that is such ingenious design is something you would have to take up with their maintainers.

So if you want to start from pcap/tcpdump syntax and do not want to rewrite that part of tcpdump as a dumper in C, you have no other choice.

This starting point was chosen because the idea is, at some point, to replace the existing and very aged pcap network transport in UML, which takes pcap syntax on the kernel command line.

I admit it is a kludge, I will probably do the "do not want" bit and rewrite that in C.

Yeah, it would probably be about the same # of LOC in C.
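
For reference, a rough sketch of what such a dumper in C could look like, assuming libpcap; the usage and output format here are illustrative and not the actual uml_vector_utilities wrapper:

#include <stdio.h>
#include <pcap/pcap.h>

int main(int argc, char **argv)
{
	struct bpf_program prog;
	pcap_t *p;
	unsigned int i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s 'filter expression'\n", argv[0]);
		return 1;
	}

	/* No live capture is needed: a dead handle with an Ethernet
	 * link type and a dummy snaplen is enough to compile. */
	p = pcap_open_dead(DLT_EN10MB, 65535);
	if (!p)
		return 1;

	if (pcap_compile(p, &prog, argv[1], 1, PCAP_NETMASK_UNKNOWN) < 0) {
		fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(p));
		pcap_close(p);
		return 1;
	}

	/* Emit one classic BPF instruction per line: code jt jf k. */
	for (i = 0; i < prog.bf_len; i++)
		printf("%u %u %u %u\n",
		       (unsigned int)prog.bf_insns[i].code,
		       (unsigned int)prog.bf_insns[i].jt,
		       (unsigned int)prog.bf_insns[i].jf,
		       (unsigned int)prog.bf_insns[i].k);

	pcap_freecode(&prog);
	pcap_close(p);
	return 0;
}

The emitted lines map directly onto struct sock_filter entries, so they can be turned into an SO_ATTACH_FILTER program with minimal parsing.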

In any case - the "loader" is only an example, you can compile BPF using LLVM or whatever else you like.

But did you try that with the code you have? Seems not, which is perhaps why there are some
wrong assumptions.

All of my tests were done using BPF generated by tcpdump out of a pcap expression. So the answer is no - I did not try LLVM, because I did not need to for what I was aiming to achieve.

The pcap route matches 1:1 the existing functionality in the UML pcap driver, as well as existing functionality in the vector drivers for the cases where they need to avoid seeing their own xmits and cannot use features like QDISC_BYPASS.


You can't use LLVM's BPF backend here since you only allow passing in cBPF, and LLVM emits an object file with native eBPF insns (you could use libbpf (in-tree under tools/lib/bpf/) for loading that).
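
As a minimal sketch of that route, assuming libbpf and an object file whose program is marked as a socket filter (e.g. placed in a "socket" ELF section); the exact libbpf entry points vary between versions, and the file name and error handling here are illustrative only:

#include <sys/socket.h>
#include <bpf/libbpf.h>

static int attach_ebpf_object(int sock_fd, const char *path)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int prog_fd;

	obj = bpf_object__open(path);
	if (libbpf_get_error(obj))
		return -1;

	if (bpf_object__load(obj))
		goto err;

	/* Take the first (and, here, only) program in the object.
	 * Newer libbpf spells this bpf_object__next_program(). */
	prog = bpf_program__next(NULL, obj);
	if (!prog)
		goto err;

	prog_fd = bpf_program__fd(prog);

	/* SO_ATTACH_BPF takes the prog fd obtained via bpf(), unlike
	 * SO_ATTACH_FILTER, which takes raw cBPF instructions. */
	if (setsockopt(sock_fd, SOL_SOCKET, SO_ATTACH_BPF,
		       &prog_fd, sizeof(prog_fd)) < 0)
		goto err;

	return 0;
err:
	bpf_object__close(obj);
	return -1;
}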

My initial aim was the same feature set as pcap, achieved using a virtual analogue of what cards like Netronome do - via the firmware route.

Switching to SO_ATTACH_BPF will come in the next revision.

A.


A.

_______________________________________________
linux-um mailing list
linux-um@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/linux-um


After reviewing what is needed for switching from SO_ATTACH_FILTER to SO_ATTACH_BPF, IMHO it will have to wait for a while.

1. I am not sticking yet another direct host syscall invocation into the userspace portion of the UML kernel, and we cannot add extra userspace libraries like libbpf at present because that is not supported by kbuild.

I have a patch in the queue for that, but it will need to be approved by the kernel build people and merged before this can be done.

2. On top of that, in order to make proper use of eBPF for vNIC firmware, I will need to figure out the correct abstractions. The "program" part is quite clear - an eBPF program fits exactly into the role of virtual NIC firmware; it is identical in concept to classic BPF and the way it is used at present.

How the maps fit in with the "program firmware", however, is something which will need to be figured out. It may require a more complex load mechanism and a proper firmware packer/unpacker (not a 5-line wrapper around pcap or tcpdump).
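
To illustrate what needs figuring out: unlike the program itself, a map needs a live handle (an fd, or a pin in bpffs) that something on the host side has to keep and update after the "firmware" has been flashed. The following sketch is purely hypothetical - the pin path and map layout are invented - and only shows the kind of userspace interaction a map adds on top of the plain program load:

#include <linux/bpf.h>
#include <bpf/bpf.h>

static int update_fw_map(void)
{
	__u32 key = 0;
	__u64 value = 1;
	int map_fd;

	/* Hypothetical map pinned in bpffs by whatever loaded the
	 * "firmware" object; the path is invented for this example. */
	map_fd = bpf_obj_get("/sys/fs/bpf/uml_vec_fw_cfg");
	if (map_fd < 0)
		return -1;

	/* Ongoing interaction with the running "firmware": something
	 * must own this fd and drive updates after flashing. */
	return bpf_map_update_elem(map_fd, &key, &value, BPF_ANY);
}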

Once I have figured it out and it can fit into kbuild, I will send the next revision. I suspect that it will happen at about the same time as I finish the AF_XDP UML vNIC transport (it has the same requirements, needs the same calls and uses the same libraries).

--
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661
https://www.cambridgegreys.com/


