Re: [PATCH bpf-next v2 2/6] xdp: introduce xdp_call

Björn Töpel <bjorn.topel@xxxxxxxxx> writes:

> On Mon, 25 Nov 2019 at 16:56, Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>>
>> Björn Töpel <bjorn.topel@xxxxxxxxx> writes:
>>
>> > On Mon, 25 Nov 2019 at 12:18, Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>> >>
>> >> Björn Töpel <bjorn.topel@xxxxxxxxx> writes:
>> >>
>> >> > From: Björn Töpel <bjorn.topel@xxxxxxxxx>
>> >> >
>> >> > The xdp_call.h header wraps a more user-friendly API around the BPF
>> >> > dispatcher. A user adds a trampoline/XDP caller using the
>> >> > DEFINE_XDP_CALL macro, and updates the BPF dispatcher via
>> >> > xdp_call_update(). The actual dispatch is done via xdp_call().
>> >> >
>> >> > Note that xdp_call() is only supported for builtin drivers. Module
>> >> > builds will fall back to bpf_prog_run_xdp().
>> >>
>> >> I don't like this restriction. Distro kernels are not likely to start
>> >> shipping all the network drivers builtin, so they won't see the
>> >> performance gains from this dispatcher.
>> >>
>> >> What is the reason these dispatcher blocks have to reside in the driver?
>> >> Couldn't we just allocate one system-wide, and then simply change
>> >> bpf_prog_run_xdp() to make use of it transparently (from the driver
>> >> PoV)? That would also remove the need to modify every driver...
>> >>
>> >
>> > Good idea! I'll try that out. Thanks for the suggestion!
>>
>> Awesome! I guess the table may need to be a bit bigger if it's
>> system-wide? But since you've already gone to all that trouble with the
>> binary search, I guess that shouldn't have too much of a performance
>> impact? Maybe the size could even be a config option so users/distros
>> can make their own size tradeoff?
>>
>
> My bigger concern is not the dispatcher size, but that any XDP update
> will be a system-wide text-poke. OTOH, this is still the case even if
> there are multiple dispatchers. No more "quickly swap XDP program in
> one packet latency".

Ah, right. I don't actually know the details of how all this kernel text
rewriting happens. I just assumed it was magic faerie dust that made
everything faster; but now you're telling me there are tradeoffs?! ;)

When you say "no more quickly swap XDP programs", you mean that the
attach operation itself will take longer, right? I.e., it's not that it
will disrupt packet flow to the old program while it's happening? Also,
how much longer?
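
For reference, here is roughly how I picture the two shapes we are
discussing. It's only a sketch; the identifiers and argument lists below
are guessed from your commit message, not taken from the actual patch:

/* Shape 1 -- the patch as posted: each builtin driver defines its own
 * dispatcher and calls through it on the fast path, with module builds
 * falling back to plain bpf_prog_run_xdp().
 */
#include <net/xdp_call.h>		/* header added by the patch; path guessed */

DEFINE_XDP_CALL(mydrv_xdp_call);	/* hypothetical driver-local dispatcher */

static u32 mydrv_run_xdp(struct bpf_prog *prog, struct xdp_buff *xdp)
{
	return xdp_call(mydrv_xdp_call, prog, xdp);
}

static void mydrv_set_prog(struct bpf_prog *old_prog, struct bpf_prog *new_prog)
{
	/* attach/detach: re-poke the dispatcher trampoline */
	xdp_call_update(mydrv_xdp_call, old_prog, new_prog);
}

/* Shape 2 -- what I was suggesting: one system-wide dispatcher hidden
 * behind bpf_prog_run_xdp() (signature as in include/linux/filter.h), so
 * drivers need no changes at all. "xdp_global_call" is a made-up name.
 */
static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog,
					    struct xdp_buff *xdp)
{
	return xdp_call(xdp_global_call, prog, xdp);
}

In the second shape the attach path would end up poking the single global
dispatcher instead of a per-driver one, which I suppose is exactly the
system-wide text-poke you're worried about.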

-Toke




