On Tue, Jan 14, 2020 at 12:51 AM CET, Martin Lau wrote:
> On Fri, Jan 10, 2020 at 11:50:25AM +0100, Jakub Sitnicki wrote:
>> SOCKMAP now supports storing references to listening sockets. Nothing keeps
>> us from using it as an array of sockets to select from in SK_REUSEPORT
>> programs.
>>
>> Whitelist the map type with the BPF helper for selecting a socket.
>>
>> The restriction that the socket has to be a member of a reuseport group
>> still applies. A socket from a SOCKMAP that does not have sk_reuseport_cb
>> set is not a valid target, and we signal it with -EINVAL.
>>
>> Signed-off-by: Jakub Sitnicki <jakub@xxxxxxxxxxxxxx>
>> ---
>>  kernel/bpf/verifier.c | 6 ++++--
>>  net/core/filter.c     | 15 ++++++++++-----
>>  2 files changed, 14 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index f5af759a8a5f..0ee5f1594b5c 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -3697,7 +3697,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>>  		if (func_id != BPF_FUNC_sk_redirect_map &&
>>  		    func_id != BPF_FUNC_sock_map_update &&
>>  		    func_id != BPF_FUNC_map_delete_elem &&
>> -		    func_id != BPF_FUNC_msg_redirect_map)
>> +		    func_id != BPF_FUNC_msg_redirect_map &&
>> +		    func_id != BPF_FUNC_sk_select_reuseport)
>>  			goto error;
>>  		break;
>>  	case BPF_MAP_TYPE_SOCKHASH:
>> @@ -3778,7 +3779,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>>  			goto error;
>>  		break;
>>  	case BPF_FUNC_sk_select_reuseport:
>> -		if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY)
>> +		if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY &&
>> +		    map->map_type != BPF_MAP_TYPE_SOCKMAP)
>>  			goto error;
>>  		break;
>>  	case BPF_FUNC_map_peek_elem:
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index a702761ef369..c79c62a54167 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -8677,6 +8677,7 @@ struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
>>  BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
>>  	   struct bpf_map *, map, void *, key, u32, flags)
>>  {
>> +	bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY;
>
> A nit. Since map_type is tested, reuseport_array_lookup_elem() or
> sock_map_lookup() can directly be called also. Mostly for consideration;
> will not insist.

sock_map_lookup() isn't global currently.

If I'm following your thinking, you're suggesting an optimization against
retpoline overhead along the lines of the INDIRECT_CALL_$n wrappers:

/*
 * INDIRECT_CALL_$NR - wrapper for indirect calls with $NR known builtins
 * @f: function pointer
 * @f$NR: builtin function names, up to $NR of them
 * @__VA_ARGS__: arguments for @f
 *
 * Avoid retpoline overhead for known builtins, checking @f vs each of them
 * and eventually invoking the builtin function directly. The functions are
 * checked in the given order. Fallback is the indirect call.
 */
#define INDIRECT_CALL_1(f, f1, ...)					\
	({								\
		likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__);	\
	})
#define INDIRECT_CALL_2(f, f2, f1, ...)					\
	({								\
		likely(f == f2) ? f2(__VA_ARGS__) :			\
				  INDIRECT_CALL_1(f, f1, __VA_ARGS__);	\
	})

Will resist the temptation to optimize it as part of this series, because
the indirect call is already there.
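
For completeness, here is a rough sketch of what the lookup in
sk_select_reuseport() could look like with such a wrapper. This is
hypothetical and untested: it assumes reuseport_array_lookup_elem() and
sock_map_lookup() have both been made global (both are static today) and
that <linux/indirect_call_wrapper.h> is pulled in:

	/* Hypothetical replacement for the plain indirect call in
	 * sk_select_reuseport(); selected_sk is the existing local there.
	 *
	 * Compare the ops pointer against the two known lookup functions
	 * and call the match directly, skipping the retpoline. Anything
	 * else falls back to the indirect call.
	 */
	selected_sk = INDIRECT_CALL_2(map->ops->map_lookup_elem,
				      reuseport_array_lookup_elem,
				      sock_map_lookup,
				      map, key);
	if (!selected_sk)
		return -ENOENT;

Exporting the two lookup functions is what makes this a bigger change than
it looks, which is another reason it fits better as a follow-up than as
part of this series.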