On Fri, Jun 10, 2022 at 09:57:59AM -0700, Stanislav Fomichev wrote:
> I don't see how to make it nice without introducing btf id lists
> for the hooks where these helpers are allowed. Some LSM hooks
> work on the locked sockets, some are triggering early and
> don't grab any locks, so have two lists for now:
>
> 1. LSM hooks which trigger under socket lock - minority of the hooks,
>    but ideal case for us, we can expose existing BTF-based helpers
> 2. LSM hooks which trigger without socket lock, but they trigger
>    early in the socket creation path where it should be safe to
>    do setsockopt without any locks
> 3. The rest are prohibited. I'm thinking that this use-case might
>    be a good gateway to sleeping lsm cgroup hooks in the future.
>    We can either expose lock/unlock operations (and add tracking
>    to the verifier) or have another set of bpf_setsockopt
>    wrapper that grab the locks and might sleep.
Another possibility is to acquire/release the sk lock in
__bpf_prog_{enter,exit}_lsm_cgroup().  However, it will unnecessarily
acquire it even when the prog is not doing any get/setsockopt.  It
could probably do some checking to avoid the lock in that case,
etc. :/  A sleepable bpf-prog is a cleaner way out.

From a quick look, cgroup_storage is not safe for sleepable bpf-prog.
All other BPF_MAP_TYPE_{SK,INODE,TASK}_STORAGE maps are already safe
once their common infra in bpf_local_storage.c was made sleepable-safe.

>
> Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
> ---
>  include/linux/bpf.h  |  2 ++
>  kernel/bpf/bpf_lsm.c | 40 +++++++++++++++++++++++++++++
>  net/core/filter.c    | 60 ++++++++++++++++++++++++++++++++++++++------
>  3 files changed, 95 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 503f28fa66d2..c0a269269882 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2282,6 +2282,8 @@ extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
>  extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
>  extern const struct bpf_func_proto bpf_sk_setsockopt_proto;
>  extern const struct bpf_func_proto bpf_sk_getsockopt_proto;
> +extern const struct bpf_func_proto bpf_unlocked_sk_setsockopt_proto;
> +extern const struct bpf_func_proto bpf_unlocked_sk_getsockopt_proto;
>  extern const struct bpf_func_proto bpf_kallsyms_lookup_name_proto;
>  extern const struct bpf_func_proto bpf_find_vma_proto;
>  extern const struct bpf_func_proto bpf_loop_proto;
> diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> index 83aa431dd52e..52b6e3067986 100644
> --- a/kernel/bpf/bpf_lsm.c
> +++ b/kernel/bpf/bpf_lsm.c
> @@ -45,6 +45,26 @@ BTF_ID(func, bpf_lsm_sk_alloc_security)
>  BTF_ID(func, bpf_lsm_sk_free_security)
>  BTF_SET_END(bpf_lsm_current_hooks)
>
> +/* List of LSM hooks that trigger while the socket is properly locked.
> + */
> +BTF_SET_START(bpf_lsm_locked_sockopt_hooks)
> +BTF_ID(func, bpf_lsm_socket_sock_rcv_skb)
> +BTF_ID(func, bpf_lsm_sk_clone_security)
From looking at how security_sk_clone() is used in sock_copy(), it has
two sk args: one is the listen sk and one is the clone.  I think
neither of them is locked.

The bpf_lsm_inet_csk_clone below should be enough to do setsockopt on
the new clone?

> +BTF_ID(func, bpf_lsm_sock_graft)
> +BTF_ID(func, bpf_lsm_inet_csk_clone)
> +BTF_ID(func, bpf_lsm_inet_conn_established)
> +BTF_ID(func, bpf_lsm_sctp_bind_connect)
I didn't look at this one, so I can't comment.  Do you have a use case?
> +BTF_SET_END(bpf_lsm_locked_sockopt_hooks)
> +
> +/* List of LSM hooks that trigger while the socket is _not_ locked,
> + * but it's ok to call bpf_{g,s}etsockopt because the socket is still
> + * in the early init phase.
> + */
> +BTF_SET_START(bpf_lsm_unlocked_sockopt_hooks)
> +BTF_ID(func, bpf_lsm_socket_post_create)
> +BTF_ID(func, bpf_lsm_socket_socketpair)
> +BTF_SET_END(bpf_lsm_unlocked_sockopt_hooks)
> +
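
btw, to check my understanding of how these two sets are meant to be
consumed: I am assuming the {g,s}etsockopt proto resolution for a
BPF_LSM_CGROUP prog ends up looking roughly like the sketch below.
The function name here is made up, just to map the three cases from
the commit message onto the two sets:

/* Hypothetical sketch, not taken from the patch: pick the setsockopt
 * proto based on which LSM hook the prog is attached to.
 */
static const struct bpf_func_proto *
lsm_cgroup_setsockopt_proto(const struct bpf_prog *prog)
{
        /* Hook runs with the sk already locked: reuse the existing
         * lock-asserting helper.
         */
        if (btf_id_set_contains(&bpf_lsm_locked_sockopt_hooks,
                                prog->aux->attach_btf_id))
                return &bpf_sk_setsockopt_proto;

        /* Hook runs early in socket init without the lock: use the
         * new unlocked wrapper.
         */
        if (btf_id_set_contains(&bpf_lsm_unlocked_sockopt_hooks,
                                prog->aux->attach_btf_id))
                return &bpf_unlocked_sk_setsockopt_proto;

        /* Any other hook: bpf_setsockopt() stays prohibited. */
        return NULL;
}

If that matches the intent of the filter.c changes, I assume the same
split applies to the getsockopt protos as well.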