Re: [RFC PATCH v2 bpf-next 00/15] xdp_flow: Flow offload to XDP

On 2019/11/14 21:41, Toke Høiland-Jørgensen wrote:
Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:

On 2019/11/13 1:53, Toke Høiland-Jørgensen wrote:
Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:

Hi Toke,

Sorry for the delay.

On 2019/10/31 21:12, Toke Høiland-Jørgensen wrote:
Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:

On 2019/10/28 0:21, Toke Høiland-Jørgensen wrote:
Toshiaki Makita <toshiaki.makita1@xxxxxxxxx> writes:
Yeah, you are right that it's something we're thinking about. I'm not
sure we'll actually have the bandwidth to implement a complete solution
ourselves, but we are very much interested in helping others do this,
including smoothing out any rough edges (or adding missing features) in
the core XDP feature set that is needed to achieve this :)

I'm very interested in general usability solutions.
I'd appreciate it if you could join the discussion.

The basic idea of my approach here is to reuse the HW-offload infrastructure
already in the kernel.
Typical networking features in the kernel have an offload mechanism (TC flower,
nftables, bridge, routing, and so on).
In general these are what users want to accelerate, so easy XDP use should
also support these features IMO. With this idea, reusing the existing
HW-offload mechanism is the natural way to me: OVS uses TC to offload
flows, so use TC for XDP as well...

I agree that XDP should be able to accelerate existing kernel
functionality. However, this does not necessarily mean that the kernel
has to generate an XDP program and install it, like your patch does.
Rather, what we should be doing is exposing the functionality through
helpers so XDP can hook into the data structures already present in the
kernel and make decisions based on what is contained there. We already
have that for routing; L2 bridging and some kind of connection tracking
are obvious contenders for similar additions.
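
For the routing case this already works today via the bpf_fib_lookup() helper.
A minimal IPv4-only sketch of such a program, loosely modeled on
samples/bpf/xdp_fwd_kern.c (TTL and checksum handling omitted for brevity),
might look something like this:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef AF_INET
#define AF_INET 2
#endif

SEC("xdp")
int xdp_route_accel(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct bpf_fib_lookup fib_params = {};
	struct ethhdr *eth = data;
	struct iphdr *iph;

	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_DROP;

	/* Fill the lookup key from the parsed headers and ask the
	 * kernel FIB for the next hop.
	 */
	fib_params.family      = AF_INET;
	fib_params.tos         = iph->tos;
	fib_params.l4_protocol = iph->protocol;
	fib_params.tot_len     = bpf_ntohs(iph->tot_len);
	fib_params.ipv4_src    = iph->saddr;
	fib_params.ipv4_dst    = iph->daddr;
	fib_params.ifindex     = ctx->ingress_ifindex;

	if (bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), 0) !=
	    BPF_FIB_LKUP_RET_SUCCESS)
		return XDP_PASS;	/* let the regular stack handle it */

	/* Rewrite the MAC addresses with the next-hop result and redirect. */
	__builtin_memcpy(eth->h_dest, fib_params.dmac, ETH_ALEN);
	__builtin_memcpy(eth->h_source, fib_params.smac, ETH_ALEN);
	return bpf_redirect(fib_params.ifindex, 0);
}

char _license[] SEC("license") = "GPL";

The bridging and conntrack cases would follow the same pattern: parse, fill a
lookup struct, and call a helper that consults the kernel's own tables.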

Thanks, adding helpers itself should be good, but how does this let users
start using XDP without having them write their own BPF code?

It wouldn't in itself. But it would make it possible to write XDP
programs that could provide the same functionality; people would then
need to run those programs to actually opt-in to this.

For some cases this would be a simple "on/off switch", e.g.,
"xdp-route-accel --load <dev>", which would install an XDP program that
uses the regular kernel routing table (and the same with bridging). We
are planning to collect such utilities in the xdp-tools repo - I am
currently working on a simple packet filter:
https://github.com/xdp-project/xdp-tools/tree/xdp-filter

Let me confirm how this tool adds filter rules.
Is this adding another command-line tool for configuring a firewall?

If so, that is different from my goal.
Introducing another command-line tool will require people to learn
more.

My proposal is to reuse the existing kernel interfaces to minimize the
need for such learning.

I wasn't proposing that this particular tool should be a replacement for
the kernel packet filter; it's deliberately fairly limited in
functionality. My point was that we could create other such tools for
specific use cases which could be more or less drop-in (similar to how
nftables has a command line tool that is compatible with the iptables
syntax).

I'm all for exposing more of the existing kernel capabilities to XDP.
However, I think it's the wrong approach to do this by reimplementing
the functionality in an eBPF program and replicating the state in maps;
instead, it's better to refactor the existing kernel functionality so it
can be called directly from an eBPF helper function. And then ship a
tool as part of xdp-tools that installs an XDP program to make use of
these helpers to accelerate the functionality.

Take your example of TC rules: You were proposing a flow like this:

Userspace TC rule -> kernel rule table -> eBPF map -> generated XDP
program

Whereas what I mean is that we could do this instead:

Userspace TC rule -> kernel rule table

and separately

XDP program -> bpf helper -> lookup in kernel rule table

Thanks, now I see what you mean.
You expect an XDP program like this, right?

/* bpf_xdp_tc_filter() here is the proposed helper, not an existing one */
int xdp_tc(struct xdp_md *ctx)
{
	int act = bpf_xdp_tc_filter(ctx);
	return act;
}

Yes, basically, except that the XDP program would need to parse the
packet first, and bpf_xdp_tc_filter() would take a parameter struct with
the parsed values. See the usage of bpf_fib_lookup() in
samples/bpf/xdp_fwd_kern.c.
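
Concretely, a sketch of that shape might look like the following. Note that
bpf_xdp_tc_filter() and struct bpf_xdp_tc_params are hypothetical stand-ins
for the proposed helper and its parameter struct (neither exists in the
kernel today); only IPv4 is parsed here, mirroring the bpf_fib_lookup()
sketch earlier in the thread:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct bpf_xdp_tc_params {	/* hypothetical lookup key */
	__u32 ifindex;
	__be32 saddr;
	__be32 daddr;
	__u8 protocol;
};

/* Hypothetical helper: look up the parsed flow in the TC flower tables
 * attached to this device and return an XDP action. The helper ID below
 * is a placeholder; no such helper exists yet.
 */
static long (*bpf_xdp_tc_filter)(struct xdp_md *ctx,
				 struct bpf_xdp_tc_params *params,
				 __u32 params_len, __u64 flags) = (void *)0xffff;

SEC("xdp")
int xdp_tc(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct bpf_xdp_tc_params params = {};

	/* Parse only what this deployment needs (IPv4 here); everything
	 * else goes up to the normal stack.
	 */
	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_DROP;

	params.ifindex  = ctx->ingress_ifindex;
	params.saddr    = iph->saddr;
	params.daddr    = iph->daddr;
	params.protocol = iph->protocol;

	return bpf_xdp_tc_filter(ctx, &params, sizeof(params), 0);
}

char _license[] SEC("license") = "GPL";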

But doesn't this approach lose the chance to reduce/minimize the program
so it only uses the features necessary for this device?

Not necessarily. Since the BPF program does the packet parsing and fills
in the TC filter lookup data structure, it can limit what features are
used that way (e.g., if I only want to do IPv6, I just parse the v6
header, ignore TCP/UDP, and drop everything that's not IPv6). The lookup
helper could also have a flag argument to disable some of the lookup
features.
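
For example (with purely hypothetical flag names, reusing ctx and params from
the sketch above), the final call could become:

/* Hypothetical flags; these constants do not exist and only illustrate
 * disabling parts of the lookup.
 */
#define BPF_XDP_TC_F_L3_ONLY	(1ULL << 0)	/* skip L4 matching */
#define BPF_XDP_TC_F_NO_CT	(1ULL << 1)	/* skip conntrack lookup */

	return bpf_xdp_tc_filter(ctx, &params, sizeof(params),
				 BPF_XDP_TC_F_L3_ONLY | BPF_XDP_TC_F_NO_CT);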

It's unclear to me how to configure that.
Use options when attaching the program? Something like
$ xdp_tc attach eth0 --only-with ipv6
But can users always determine the features they need in advance?
Frequent manual reconfiguration whenever the TC rules change does not sound nice.
Or do we add a hook to the kernel so that a daemon can listen for TC filter events
and automatically reload the attached program?

Another concern is key size. If we use the TC core then TC will use its hash table
with a fixed key size, so we cannot decrease the size of the hash table key in this way?


It would probably require a bit of refactoring in the kernel data
structures so they can be used without being tied to an skb. David Ahern
did something similar for the fib. For the routing table case, that
resulted in a significant speedup: About 2.5x-3x the performance when
using it via XDP (depending on the number of routes in the table).

I'm curious how much the helper function can improve performance compared to
XDP programs that emulate the kernel feature without using such helpers.
2.5x-3x sounds a bit slow for XDP to me, but it may be a routing-specific problem.

Toshiaki Makita


