Re: [PATCH net-next] bpf, net: Support redirecting to ifb with bpf

Daniel Borkmann <daniel@xxxxxxxxxxxxx> writes:

> On 4/13/23 4:43 PM, Toke Høiland-Jørgensen wrote:
>> Daniel Borkmann <daniel@xxxxxxxxxxxxx> writes:
>> 
>>>> 2) We can't redirect ingress packets to ifb with bpf.
>>>> While analyzing whether it is possible to redirect ingress packets to
>>>> ifb with a bpf program, we found that the ifb device is not yet
>>>> supported by bpf redirect.
>>>
>>> You actually can: just let the BPF program return TC_ACT_UNSPEC for this
>>> case and then add a matchall filter with a higher prio (so it runs after
>>> bpf) that contains a mirred egress redirect action pushing to the ifb
>>> dev - there is no change needed.
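
For reference, that workaround would look something like the following. This
is only a sketch: the device names and the BPF object/section names are
made-up examples.

    # create the ifb device (name is just an example) and bring it up
    ip link add ifb0 type ifb && ip link set ifb0 up
    tc qdisc add dev eth0 clsact
    # bpf filter at prio 1; in direct-action (da) mode the program's return
    # code is the verdict, and TC_ACT_UNSPEC means "continue with the next filter"
    tc filter add dev eth0 ingress prio 1 bpf da obj my_prog.o sec ingress
    # matchall at prio 2 runs after the bpf filter and pushes to the ifb
    tc filter add dev eth0 ingress prio 2 matchall \
        action mirred egress redirect dev ifb0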
>> 
>> I wasn't aware that BPF couldn't redirect directly to an IFB; any reason
>> why we shouldn't merge this patch in any case?
>> 
>>>> This patch tries to resolve that by supporting redirection to ifb from a
>>>> bpf program.
>>>>
>>>> An ingress bandwidth limit is useful in some scenarios. For example, a
>>>> TCP-based service may have lots of clients connecting to it, so it is not
>>>> practical to limit each client's egress. Limiting the server side's
>>>> ingress instead lowers the clients' send rate, because TCP shrinks its
>>>> cwnd once the ingress bandwidth limit is reached. If we don't limit it,
>>>> the clients will keep sending requests at a high rate.
>>>
>>> Adding artificial queueing for the inbound traffic, aren't you worried
>>> about DoS'ing your node?
>> 
>> Just as an aside, the ingress filter -> ifb -> qdisc on the ifb
>> interface does work surprisingly well, and we've been using that over in
>> OpenWrt land for years[0]. It does have some overhead associated with it,
>> but I wouldn't expect it to be a source of self-DoS in itself (assuming
>> well-behaved TCP traffic).
>
> Out of curiosity, wrt the OpenWrt case, can you elaborate on the use case,
> i.e. why do this on ingress via ifb rather than on the egress side? I
> presume in this case it's a regular router, so pkts would be forwarded
> anyway, and in your case they traverse the qdisc layer / queuing twice
> (ingress phys dev -> ifb, egress phys dev), right? What is the rationale
> that would justify such a setup, aka why can it not be solved differently?

Because there's not always a single egress on the other side. These are
mainly home routers, which tend to have one or more WiFi devices bridged
to one or more ethernet ports on the LAN side, and a single upstream WAN
port. And the objective is to control the total amount of traffic going
over the WAN link (in both directions), to deal with bufferbloat in the
ISP network (which is sadly still all too prevalent).

In this setup, the traffic can be split arbitrarily between the links on
the LAN side, and the only "single bottleneck" is the WAN link. So we
install both egress and ingress shapers on this, configured to something
like 95-98% of the true link bandwidth, thus moving the queues into the
qdisc layer in the router. It's usually necessary to set the ingress
bandwidth shaper a bit lower than the egress due to being "downstream"
of the bottleneck link, but it does work surprisingly well.
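
Concretely, the shaping part of that setup boils down to something like the
following. This is only a sketch: the interface names are made up, the rates
assume a nominal 20 Mbit up / 100 Mbit down link, and a real deployment has a
lot more knobs.

    ip link add name ifb4wan type ifb
    ip link set dev ifb4wan up
    # egress: shape uploads on the WAN device itself, a bit below the uplink rate
    tc qdisc replace dev wan root cake bandwidth 19mbit
    # ingress: redirect everything arriving on wan to the ifb ...
    tc qdisc add dev wan handle ffff: ingress
    tc filter add dev wan parent ffff: matchall \
        action mirred egress redirect dev ifb4wan
    # ... and shape it there, slightly below the true downlink rate
    tc qdisc replace dev ifb4wan root cake bandwidth 95mbit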

We usually use something like a matchall filter to put all ingress
traffic on the ifb, so doing the redirect from BPF has not been an
immediate requirement thus far. However, it does seem a bit odd that
this is not possible. We do have a BPF-based filter that layers on top
of this kind of setup; it currently uses u32 as the ingress filter, so
it could presumably be improved to use BPF instead if that were
available:
https://git.openwrt.org/?p=project/qosify.git;a=blob;f=README
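
With redirect-to-ifb supported from BPF, the redirect step could presumably
move into that program instead of keeping a separate u32/matchall filter
around just for it. Roughly (the object, section and device names are made
up, and this assumes bpf_redirect() towards an ifb ends up behaving like
mirred egress redirect does today):

    tc filter add dev wan parent ffff: bpf da obj qosify-like.o sec classify
    # ... where the program classifies the packet and then finishes with
    # something like: return bpf_redirect(ifindex_of_ifb4wan, 0);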

-Toke




