On Thu, Jul 2, 2020 at 2:25 AM Jakub Sitnicki <jakub@xxxxxxxxxxxxxx> wrote:
>
> Add a new program type BPF_PROG_TYPE_SK_LOOKUP with a dedicated attach
> type BPF_SK_LOOKUP. The new program kind is to be invoked by the
> transport layer when looking up a listening socket for a new connection
> request (connection-oriented protocols), or when looking up an
> unconnected socket for a packet (connection-less protocols).
>
> When called, an SK_LOOKUP BPF program can select a socket that will
> receive the packet. This serves as a mechanism to overcome the limits
> of what the bind() API can express. Two use-cases driving this work
> are:
>
>  (1) steer packets destined to an IP range, on a fixed port, to a
>      socket
>
>      192.0.2.0/24, port 80 -> NGINX socket
>
>  (2) steer packets destined to an IP address, on any port, to a socket
>
>      198.51.100.1, any port -> L7 proxy socket
>
> In its run-time context the program receives information about the
> packet that triggered the socket lookup: the IP version, the L4
> protocol identifier, and the address 4-tuple. The context can be
> further extended to include the ingress interface identifier.
>
> To select a socket, the BPF program fetches it from a map holding
> socket references, such as SOCKMAP or SOCKHASH, and calls the
> bpf_sk_assign(ctx, sk, ...) helper to record the selection. The
> transport layer then uses the selected socket as the result of the
> socket lookup.
>
> This patch only enables the user to attach an SK_LOOKUP program to a
> network namespace. Subsequent patches hook it up to run on the local
> delivery path in the ipv4 and ipv6 stacks.
>
> Suggested-by: Marek Majkowski <marek@xxxxxxxxxxxxxx>
> Signed-off-by: Jakub Sitnicki <jakub@xxxxxxxxxxxxxx>
> ---
>
> Notes:
>     v3:
>     - Allow the bpf_sk_assign helper to replace a previously selected
>       socket only when the BPF_SK_LOOKUP_F_REPLACE flag is set, as a
>       precaution against multiple programs running in series
>       accidentally overriding each other's verdict.
>     - Let the BPF program decide that load-balancing within a reuseport
>       socket group should be skipped for the socket selected with
>       bpf_sk_assign() by passing the BPF_SK_LOOKUP_F_NO_REUSEPORT flag.
>       (Martin)
>     - Extend the struct bpf_sk_lookup program context with an 'sk'
>       field containing the selected socket, with the intention that
>       multiple attached programs running in series can see each other's
>       choices. However, currently the verifier doesn't allow checking
>       whether the pointer is set.
>     - Use the bpf-netns infra for link-based multi-program attachment.
>       (Alexei)
>     - Get rid of macros in convert_ctx_access to make it easier to
>       read.
>     - Disallow 1- and 2-byte access to context fields containing IP
>       addresses.
>
>     v2:
>     - Make bpf_sk_assign reject sockets that don't use RCU freeing, and
>       update the bpf_sk_assign docs accordingly. (Martin)
>     - Change the bpf_sk_assign proto to take PTR_TO_SOCKET as an
>       argument. (Martin)
>     - Fix broken build when CONFIG_INET is not selected. (Martin)
>     - Rename the bpf_sk_lookup{} src_/dst_* fields to remote_/local_*.
>       (Martin)
>     - Enforce the BPF_SK_LOOKUP attach point on load & attach. (Martin)
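The two use-cases make this easy to picture as code, so for anyone
following along, here is a minimal, untested sketch of use-case (2):
steering every port on one IP to a single proxy socket. The map
definition, key, and "sk_lookup" SEC() name are my own guesses, not
something this patch defines; the context field names follow the
remote_/local_* renaming mentioned in the v2 notes.

#include <linux/bpf.h>
#include <sys/socket.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical single-slot map holding the L7 proxy socket; user
 * space would populate slot 0 with the proxy's listening socket FD.
 */
struct {
        __uint(type, BPF_MAP_TYPE_SOCKMAP);
        __uint(max_entries, 1);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u64));
} proxy_sock SEC(".maps");

SEC("sk_lookup")
int select_proxy(struct bpf_sk_lookup *ctx)
{
        const __u32 vip = bpf_htonl(0xc6336401); /* 198.51.100.1 */
        __u32 key = 0;
        struct bpf_sock *sk;
        long err;

        /* Not our VIP: let the regular socket lookup proceed. */
        if (ctx->family != AF_INET || ctx->local_ip4 != vip)
                return SK_PASS;

        sk = bpf_map_lookup_elem(&proxy_sock, &key);
        if (!sk)
                return SK_PASS;

        err = bpf_sk_assign(ctx, sk, 0); /* record the selection */
        bpf_sk_release(sk);              /* lookup took a reference */
        return err ? SK_DROP : SK_PASS;
}

char _license[] SEC("license") = "GPL";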
>
>  include/linux/bpf-netns.h  |   3 +
>  include/linux/bpf_types.h  |   2 +
>  include/linux/filter.h     |  19 ++++
>  include/uapi/linux/bpf.h   |  74 ++++++++++++++
>  kernel/bpf/net_namespace.c |   5 +
>  kernel/bpf/syscall.c       |   9 ++
>  net/core/filter.c          | 186 +++++++++++++++++++++++++++++++++++++
>  scripts/bpf_helpers_doc.py |   9 +-
>  8 files changed, 306 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf-netns.h b/include/linux/bpf-netns.h
> index 4052d649f36d..cb1d849c5d4f 100644
> --- a/include/linux/bpf-netns.h
> +++ b/include/linux/bpf-netns.h
> @@ -8,6 +8,7 @@
>  enum netns_bpf_attach_type {
>         NETNS_BPF_INVALID = -1,
>         NETNS_BPF_FLOW_DISSECTOR = 0,
> +       NETNS_BPF_SK_LOOKUP,
>         MAX_NETNS_BPF_ATTACH_TYPE
>  };
>

[...]

> +struct bpf_sk_lookup_kern {
> +       u16 family;
> +       u16 protocol;
> +       union {
> +               struct {
> +                       __be32 saddr;
> +                       __be32 daddr;
> +               } v4;
> +               struct {
> +                       const struct in6_addr *saddr;
> +                       const struct in6_addr *daddr;
> +               } v6;
> +       };
> +       __be16 sport;
> +       u16 dport;
> +       struct sock *selected_sk;
> +       bool no_reuseport;
> +};
> +
>  #endif /* __LINUX_FILTER_H__ */
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 0cb8ec948816..8dd6e6ce5de9 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -189,6 +189,7 @@ enum bpf_prog_type {
>         BPF_PROG_TYPE_STRUCT_OPS,
>         BPF_PROG_TYPE_EXT,
>         BPF_PROG_TYPE_LSM,
> +       BPF_PROG_TYPE_SK_LOOKUP,
>  };
>
>  enum bpf_attach_type {
> @@ -226,6 +227,7 @@ enum bpf_attach_type {
>         BPF_CGROUP_INET4_GETSOCKNAME,
>         BPF_CGROUP_INET6_GETSOCKNAME,
>         BPF_XDP_DEVMAP,
> +       BPF_SK_LOOKUP,

This point is not specific to your changes, but I've been meaning to
bring it up for a while now, and this seemed as good an opportunity as
any.

It seems like enum bpf_attach_type was originally intended for cgroup
BPF programs only. To that end, cgroup_bpf has a bunch of fields whose
sizes are proportional to MAX_BPF_ATTACH_TYPE, which costs at least
8+4+16=28 bytes for each attach type, *per cgroup*. At this point we
have 22 cgroup-specific attach types, and this will be the 13th
non-cgroup one, so cgroups pay a price every time we extend
bpf_attach_type with a new non-cgroup attach type: cgroup_bpf is
already 336 (12 * 28) bytes bigger than it needs to be.

So I wanted to propose that we do the same thing for cgroup_bpf that
you did for net_ns with netns_bpf_attach_type: keep a densely-packed
enum just for cgroup attach types, and translate the now-generic
bpf_attach_type into a cgroup-specific cgroup_bpf_attach_type. (I put a
rough sketch of what I mean at the bottom of this mail.)

I wonder what people think? Is that a good idea? Is anyone up for doing
this?

>  __MAX_BPF_ATTACH_TYPE
>  };
>

[...]

> +
> +static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
> +                                       const struct bpf_insn *si,
> +                                       struct bpf_insn *insn_buf,
> +                                       struct bpf_prog *prog,
> +                                       u32 *target_size)

Would it be too extreme to rely on BTF and direct memory access
(similar to tp_raw, fentry/fexit, etc.) for accessing context fields,
instead of all these assembly rewrites? So instead of having both
bpf_sk_lookup and bpf_sk_lookup_kern, there would always be the one
full variant (bpf_sk_lookup_kern, or however we'd want to name it
then), and the verifier would just ensure that direct memory reads go
to the right field boundaries?

> +{
> +       struct bpf_insn *insn = insn_buf;
> +#if IS_ENABLED(CONFIG_IPV6)
> +       int off;
> +#endif
> +

[...]
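P.S. To make the cgroup_bpf proposal above a bit more concrete, here is
a rough, uncompiled sketch of the shape I have in mind, mirroring
netns_bpf_attach_type from this very patch. The enumerator names are
made up and the list is abbreviated:

/* Densely-packed enum covering only cgroup attach types. Arrays in
 * struct cgroup_bpf would be sized by MAX_CGROUP_BPF_ATTACH_TYPE
 * instead of MAX_BPF_ATTACH_TYPE, so new non-cgroup attach types
 * would no longer inflate every cgroup.
 */
enum cgroup_bpf_attach_type {
        CGROUP_BPF_ATTACH_TYPE_INVALID = -1,
        CGROUP_INET_INGRESS = 0,
        CGROUP_INET_EGRESS,
        CGROUP_INET_SOCK_CREATE,
        /* ... the remaining cgroup-specific attach types ... */
        MAX_CGROUP_BPF_ATTACH_TYPE
};

/* Translate the UAPI-visible bpf_attach_type to the dense,
 * kernel-internal index, the same way the bpf-netns code maps
 * bpf_attach_type to netns_bpf_attach_type today.
 */
static inline enum cgroup_bpf_attach_type
to_cgroup_bpf_attach_type(enum bpf_attach_type attach_type)
{
        switch (attach_type) {
        case BPF_CGROUP_INET_INGRESS:
                return CGROUP_INET_INGRESS;
        case BPF_CGROUP_INET_EGRESS:
                return CGROUP_INET_EGRESS;
        case BPF_CGROUP_INET_SOCK_CREATE:
                return CGROUP_INET_SOCK_CREATE;
        /* ... */
        default:
                return CGROUP_BPF_ATTACH_TYPE_INVALID;
        }
}

The attach/detach/query syscall paths would translate once at the
boundary and use the dense index everywhere inside cgroup.c.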