On Thu, Sep 29, 2022 at 12:04 AM Martin KaFai Lau <martin.lau@xxxxxxxxx> wrote:
>
> From: Martin KaFai Lau <martin.lau@xxxxxxxxxx>
>
> When a bad bpf prog '.init' calls
> bpf_setsockopt(TCP_CONGESTION, "itself"), it will trigger this loop:
>
> .init => bpf_setsockopt(tcp_cc) => .init => bpf_setsockopt(tcp_cc)
> ... => .init => bpf_setsockopt(tcp_cc).
>
> It was prevented by the prog->active counter before, but the
> prog->active detection cannot be used in struct_ops, as explained in
> an earlier patch of this set.
>
> In this patch, the second bpf_setsockopt(tcp_cc) is not allowed, in
> order to break the loop. This is done by using a bit of an existing
> 1-byte hole in tcp_sock to check whether there is an on-going
> bpf_setsockopt(TCP_CONGESTION) in this tcp_sock.
>
> Note that this essentially limits it so that only the first '.init'
> can call bpf_setsockopt(TCP_CONGESTION) to pick a fallback cc (e.g.
> the peer does not support ECN), and the second '.init' cannot fall
> back to another cc. This applies even if the second
> bpf_setsockopt(TCP_CONGESTION) would not cause a loop.
>
> Signed-off-by: Martin KaFai Lau <martin.lau@xxxxxxxxxx>

Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>