On Tue, Dec 17, 2024 at 3:45 PM Juntong Deng <juntong.deng@xxxxxxxxxxx> wrote:
>
> -static int bpf_fs_kfuncs_filter(const struct bpf_prog *prog, u32 kfunc_id)
> -{
> -	if (!btf_id_set8_contains(&bpf_fs_kfunc_set_ids, kfunc_id) ||
> -	    prog->type == BPF_PROG_TYPE_LSM)
> -		return 0;
> -	return -EACCES;
> -}
> -
>  static const struct btf_kfunc_id_set bpf_fs_kfunc_set = {
>  	.owner = THIS_MODULE,
>  	.set = &bpf_fs_kfunc_set_ids,
> -	.filter = bpf_fs_kfuncs_filter,
>  };
>
>  static int __init bpf_fs_kfuncs_init(void)
>  {
> -	return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM, &bpf_fs_kfunc_set);
> +	int ret;
> +
> +	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM, &bpf_fs_kfunc_set);
> +	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_fs_kfunc_set);
>  }
>
>  late_initcall(bpf_fs_kfuncs_init);
> diff --git a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
> index d6d3f4fcb24c..5aab75fd2fa5 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
> @@ -148,14 +148,4 @@ int BPF_PROG(path_d_path_kfunc_invalid_buf_sz, struct file *file)
>  	return 0;
>  }
>
> -SEC("fentry/vfs_open")
> -__failure __msg("calling kernel function bpf_path_d_path is not allowed")

This is incorrect. You have to keep bpf_fs_kfuncs_filter() and its
prog->type == BPF_PROG_TYPE_LSM check, because bpf_prog_type_to_kfunc_hook()
aliases LSM and fentry programs into the same BTF_KFUNC_HOOK_TRACING
category. Without the filter these kfuncs also become callable from fentry
programs. It's been an annoying quirk. (Rough sketch of keeping the filter
at the end of this mail.)

We're figuring out the details of a significant refactoring of
register_btf_kfunc_id_set() and the whole registration process.
Maybe you would be interested in working on it?

The main goal is to get rid of the run-time mask check in SCX_CALL_OP() and
make it static, enforced by the verifier. To make that happen, the
scx_kf_mask flags would need to become KF_* flags, while each struct-ops
callback would specify the expected mask. Then, at struct-ops prog attach
time, the verifier would see the expected mask and could check that all
kfunc calls of that particular program satisfy it. At that point all of the
runtime overhead of current->scx.kf_mask and scx_kf_allowed() goes away.
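
For the fs kfuncs, something along these lines (untested, just to illustrate
the direction; whether syscall progs should be admitted by the same filter or
a separate one is your call for the next revision):

static int bpf_fs_kfuncs_filter(const struct bpf_prog *prog, u32 kfunc_id)
{
	/* Still reject fentry and friends, which alias into
	 * BTF_KFUNC_HOOK_TRACING together with LSM. Only the prog types
	 * the set is explicitly registered for are admitted.
	 */
	if (!btf_id_set8_contains(&bpf_fs_kfunc_set_ids, kfunc_id) ||
	    prog->type == BPF_PROG_TYPE_LSM ||
	    prog->type == BPF_PROG_TYPE_SYSCALL)
		return 0;
	return -EACCES;
}

static const struct btf_kfunc_id_set bpf_fs_kfunc_set = {
	.owner = THIS_MODULE,
	.set = &bpf_fs_kfunc_set_ids,
	.filter = bpf_fs_kfuncs_filter,
};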
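
And to make the sched_ext idea a bit more concrete, a purely hypothetical
sketch. None of these names exist today (the KF_SCX_* flags, the per-op mask
table, scx_kfunc_allowed_static()); they only illustrate the direction:

/* scx_kf_mask contexts recast as kfunc flags (hypothetical values): */
#define KF_SCX_DISPATCH		(1U << 16)
#define KF_SCX_ENQUEUE		(1U << 17)
#define KF_SCX_UNLOCKED		(1U << 18)

/* Each sched_ext struct-ops callback declares which contexts it provides,
 * here indexed with a hypothetical SCX_OP_IDX()-style macro:
 */
static const u32 scx_op_kf_mask[] = {
	[SCX_OP_IDX(dispatch)]	= KF_SCX_DISPATCH,
	[SCX_OP_IDX(enqueue)]	= KF_SCX_ENQUEUE,
	/* ... one entry per callback ... */
};

/* At struct-ops attach time the verifier would walk the kfunc calls of the
 * program and reject any kfunc whose KF_SCX_* flags don't intersect the
 * mask of the callback it is being attached to, roughly:
 */
static bool scx_kfunc_allowed_static(u32 kfunc_flags, u32 member_idx)
{
	u32 need = kfunc_flags & (KF_SCX_DISPATCH | KF_SCX_ENQUEUE | KF_SCX_UNLOCKED);

	/* kfuncs without scx context flags stay unrestricted */
	return !need || (need & scx_op_kf_mask[member_idx]);
}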