> On Dec 17, 2024, at 10:32 AM, Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> On Tue, 17 Dec 2024 at 19:25, Song Liu <songliubraving@xxxxxxxx> wrote:
>>
>> Hi Alexei,
>>
>> Thanks for the review!
>>
>>> On Dec 17, 2024, at 8:50 AM, Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
>>>
>>> On Mon, Dec 16, 2024 at 10:38 PM Song Liu <song@xxxxxxxxxx> wrote:
>>>>
>>>> Add the following kfuncs to set and remove xattrs from BPF programs:
>>>>
>>>>   bpf_set_dentry_xattr
>>>>   bpf_remove_dentry_xattr
>>>>   bpf_set_dentry_xattr_locked
>>>>   bpf_remove_dentry_xattr_locked
>>>>
>>>> The _locked version of these kfuncs are called from hooks where
>>>> dentry->d_inode is already locked.
>>>
>>> ...
>>>
>>>> + *
>>>> + * Setting and removing xattr requires exclusive lock on dentry->d_inode.
>>>> + * Some hooks already locked d_inode, while some hooks have not locked
>>>> + * d_inode. Therefore, we need different kfuncs for different hooks.
>>>> + * Specifically, hooks in the following list (d_inode_locked_hooks)
>>>> + * should call bpf_[set|remove]_dentry_xattr_locked; while other hooks
>>>> + * should call bpf_[set|remove]_dentry_xattr.
>>>> + */
>>>
>>> the inode locking rules might change, so let's hide this
>>> implementation detail from the bpf progs by making kfunc polymorphic.
>>>
>>> To struct bpf_prog_aux add:
>>>   bool use_locked_kfunc:1;
>>> and set it in bpf_check_attach_target() if it's attaching
>>> to one of d_inode_locked_hooks
>>>
>>> Then in fixup_kfunc_call() call some helper that
>>>   if (prog->aux->use_locked_kfunc &&
>>>       insn->imm == special_kfunc_list[KF_bpf_remove_dentry_xattr])
>>>     insn->imm = special_kfunc_list[KF_bpf_remove_dentry_xattr_locked];
>>>
>>> The progs will be simpler and will suffer less churn
>>> when the kernel side changes.
>>
>> I was thinking about something in similar direction.
>>
>> If we do this, shall we somehow hide the _locked version of the
>> kfuncs, so that the user cannot use it? If so, what's the best
>> way to do it?
>
> Just don't add BTF_ID_FLAGS entries for them.
> You'd also need to make an extra call to add_kfunc_call to add its
> details before you can do the fixup.
> That allows find_kfunc_desc to work.
> I did something similar in earlier versions of resilient locks.
> In add_kfunc_call's end (instead of directly returning):
>   func_id = get_shadow_kfunc_id(func_id, offset);
>   if (!func_id)
>     return err;
>   return add_kfunc_call(env, func_id, offset);
>
> Then check in fixup_kfunc_call to find shadow kfunc id and substitute imm.
> Can use some other naming instead of "shadow".
> Probably need to take a prog pointer to make a decision to find the
> underlying kfunc id in your case.

Thanks for the hints! They helped a lot. I ended up doing this with a
slightly different logic, which I think is cleaner. I will send v5
shortly.

Song
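
For reference, a minimal sketch of the remapping Alexei describes above.
It assumes a use_locked_kfunc bit in struct bpf_prog_aux, set from
bpf_check_attach_target() for hooks in d_inode_locked_hooks; the helper
name remap_dentry_xattr_kfunc() and the KF_bpf_set_dentry_xattr* entries
are illustrative and may not match what lands in v5:

  /* Illustrative sketch only, kernel/bpf/verifier.c style. */

  /* Called from fixup_kfunc_call() once the kfunc insn is known. */
  static void remap_dentry_xattr_kfunc(struct bpf_verifier_env *env,
  				     struct bpf_insn *insn)
  {
  	/* Only remap for programs attached to hooks that already
  	 * hold the exclusive lock on dentry->d_inode.
  	 */
  	if (!env->prog->aux->use_locked_kfunc)
  		return;

  	/* Transparently substitute the hidden _locked variants. */
  	if (insn->imm == special_kfunc_list[KF_bpf_set_dentry_xattr])
  		insn->imm = special_kfunc_list[KF_bpf_set_dentry_xattr_locked];
  	else if (insn->imm == special_kfunc_list[KF_bpf_remove_dentry_xattr])
  		insn->imm = special_kfunc_list[KF_bpf_remove_dentry_xattr_locked];
  }

With the _locked variants left out of the BTF_ID_FLAGS set and registered
through an extra add_kfunc_call(), as Kumar suggests, find_kfunc_desc()
can still resolve them at fixup time while BPF programs only ever see the
unlocked kfunc names.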