Re: [PATCH v5 bpf-next 4/5] bpf: fs/xattr: Add BPF kfuncs to set and remove xattrs

On Wed, Dec 18, 2024 at 1:47 PM Song Liu <songliubraving@xxxxxxxx> wrote:
>
> Hi Alexei,
>
> Thanks for the review!
>
> > On Dec 18, 2024, at 1:20 PM, Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > On Tue, Dec 17, 2024 at 8:48 PM Song Liu <song@xxxxxxxxxx> wrote:
> >>
> >>
> >> BTF_KFUNCS_START(bpf_fs_kfunc_set_ids)
> >> @@ -170,6 +330,10 @@ BTF_ID_FLAGS(func, bpf_put_file, KF_RELEASE)
> >> BTF_ID_FLAGS(func, bpf_path_d_path, KF_TRUSTED_ARGS)
> >> BTF_ID_FLAGS(func, bpf_get_dentry_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> BTF_ID_FLAGS(func, bpf_get_file_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> +BTF_ID_FLAGS(func, bpf_set_dentry_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> +BTF_ID_FLAGS(func, bpf_remove_dentry_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> +BTF_ID_FLAGS(func, bpf_set_dentry_xattr_locked, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> +BTF_ID_FLAGS(func, bpf_remove_dentry_xattr_locked, KF_SLEEPABLE | KF_TRUSTED_ARGS)
> >> BTF_KFUNCS_END(bpf_fs_kfunc_set_ids)
> >
> > The _locked() versions shouldn't be exposed to bpf prog.
> > Don't add them to the above set.
> >
> > Also we need to somehow exclude them from being dumped into vmlinux.h
> >
> >> static int bpf_fs_kfuncs_filter(const struct bpf_prog *prog, u32 kfunc_id)
> >> @@ -186,6 +350,37 @@ static const struct btf_kfunc_id_set bpf_fs_kfunc_set = {
> >>        .filter = bpf_fs_kfuncs_filter,
> >> };
>
> [...]
>
> >> + */
> >> +static void remap_kfunc_locked_func_id(struct bpf_verifier_env *env, struct bpf_insn *insn)
> >> +{
> >> +       u32 func_id = insn->imm;
> >> +
> >> +       if (bpf_lsm_has_d_inode_locked(env->prog)) {
> >> +               if (func_id == special_kfunc_list[KF_bpf_set_dentry_xattr])
> >> +                       insn->imm =  special_kfunc_list[KF_bpf_set_dentry_xattr_locked];
> >> +               else if (func_id == special_kfunc_list[KF_bpf_remove_dentry_xattr])
> >> +                       insn->imm = special_kfunc_list[KF_bpf_remove_dentry_xattr_locked];
> >> +       } else {
> >> +               if (func_id == special_kfunc_list[KF_bpf_set_dentry_xattr_locked])
> >> +                       insn->imm =  special_kfunc_list[KF_bpf_set_dentry_xattr];
> >
> > This part is not necessary.
> > _locked() shouldn't be exposed and it should be an error
> > if bpf prog attempts to use invalid kfunc.
>
> I implemented this in a different way than the solution you and Kumar
> suggested. Instead of updating this in add_kfunc_call, check_kfunc_call,
> and fixup_kfunc_call, remap_kfunc_locked_func_id happens before
> add_kfunc_call. Then, for the rest of the process, the verifier handles
> the _locked and non-_locked versions as two different kfuncs. This is
> why we need the _locked versions in bpf_fs_kfunc_set_ids. I personally
> think this approach is a lot cleaner.

I see. A blind rewrite in add_kfunc_call() looks simpler,
but allowing progs to call the _locked() version directly is not clean.

See specialize_kfunc() as an existing approach that does polymorphism.
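
For illustration, roughly along these lines (a sketch, not the actual patch:
the exact hook point and the declarations of the _locked helpers in the
verifier are assumptions; bpf_lsm_has_d_inode_locked() is from your patch):

/* existing kfunc polymorphism spot in the verifier */
static void specialize_kfunc(struct bpf_verifier_env *env,
			     u32 func_id, u16 offset, unsigned long *addr)
{
	...
	if (bpf_lsm_has_d_inode_locked(env->prog)) {
		/* the attach hook already holds inode_lock(), so redirect
		 * to the variant that does not take it again
		 */
		if (func_id == special_kfunc_list[KF_bpf_set_dentry_xattr])
			*addr = (unsigned long)bpf_set_dentry_xattr_locked;
		else if (func_id == special_kfunc_list[KF_bpf_remove_dentry_xattr])
			*addr = (unsigned long)bpf_remove_dentry_xattr_locked;
	}
}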

The _locked() versions don't need to be __bpf_kfunc annotated.
They can be handled just like bpf_dynptr_from_skb_rdonly.
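
On the fs side, something like this (again just a sketch; the exact
prototype and the common __bpf_set_dentry_xattr() helper with its "lock"
argument are schematic, only meant to show the shape):

/* registered kfunc, visible to progs and in vmlinux.h */
__bpf_kfunc int bpf_set_dentry_xattr(struct dentry *dentry, const char *name__str,
				     const struct bpf_dynptr *value_p, int flags)
{
	/* takes inode_lock(d_inode(dentry)) itself */
	return __bpf_set_dentry_xattr(dentry, name__str, value_p, flags, true);
}

/* no __bpf_kfunc, not in bpf_fs_kfunc_set_ids: neither callable by progs
 * nor dumped into vmlinux.h; the verifier swaps it in when the attach
 * hook already holds the inode lock
 */
int bpf_set_dentry_xattr_locked(struct dentry *dentry, const char *name__str,
				const struct bpf_dynptr *value_p, int flags)
{
	return __bpf_set_dentry_xattr(dentry, name__str, value_p, flags, false);
}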

There will be no issue with vmlinux.h either.




