Hi, Song

On 2021/11/5 5:31 AM, Song Liu wrote:
> In some profiler use cases, it is necessary to map an address to the
> backing file, e.g., a shared library. bpf_find_vma helper provides a
> flexible way to achieve this. bpf_find_vma maps an address of a task to
> the vma (vm_area_struct) for this address, and feeds the vma to a callback
> BPF function. The callback function is necessary here, as we need to
> ensure mmap_sem is unlocked.
>
> It is necessary to lock mmap_sem for find_vma. To lock and unlock mmap_sem
> safely when irqs are disabled, we use the same mechanism as stackmap with
> build_id. Specifically, when irqs are disabled, the unlock is postponed
> in an irq_work. Refactor stackmap.c so that the irq_work is shared among
> bpf_find_vma and stackmap helpers.
>
> Reported-by: kernel test robot <lkp@xxxxxxxxx>
> Signed-off-by: Song Liu <songliubraving@xxxxxx>
> ---

[...]

>
> -BTF_ID_LIST(btf_task_file_ids)
> -BTF_ID(struct, file)
> -BTF_ID(struct, vm_area_struct)
> -
>  static const struct bpf_iter_seq_info task_seq_info = {
>  	.seq_ops		= &task_seq_ops,
>  	.init_seq_private	= init_seq_pidns,
> @@ -586,9 +583,74 @@ static struct bpf_iter_reg task_vma_reg_info = {
>  	.seq_info		= &task_vma_seq_info,
>  };
>
> +BPF_CALL_5(bpf_find_vma, struct task_struct *, task, u64, start,
> +	   bpf_callback_t, callback_fn, void *, callback_ctx, u64, flags)
> +{
> +	struct mmap_unlock_irq_work *work = NULL;
> +	struct vm_area_struct *vma;
> +	bool irq_work_busy = false;
> +	struct mm_struct *mm;
> +	int ret = -ENOENT;
> +
> +	if (flags)
> +		return -EINVAL;
> +
> +	if (!task)
> +		return -ENOENT;
> +
> +	mm = task->mm;
> +	if (!mm)
> +		return -ENOENT;
> +
> +	irq_work_busy = bpf_mmap_unlock_get_irq_work(&work);
> +
> +	if (irq_work_busy || !mmap_read_trylock(mm))
> +		return -EBUSY;
> +
> +	vma = find_vma(mm, start);
> +

I found that when a BPF program attaches to security_file_open, which is
in the bpf_d_path helper's allowlist, the bpf_d_path helper is also
allowed to be called inside the callback function. So we can have this
in the callback function:

	bpf_d_path(&vma->vm_file->f_path, path, sizeof(path));

I wonder whether there is a guarantee that vma->vm_file will never be
NULL, given that the commit message mentions a backing file (a fuller
sketch of the case I have in mind is appended below my signature).

If that is not something to be concerned about, feel free to add:

Tested-by: Hengqi Chen <hengqi.chen@xxxxxxxxx>

> +	if (vma && vma->vm_start <= start && vma->vm_end > start) {
> +		callback_fn((u64)(long)task, (u64)(long)vma,
> +			    (u64)(long)callback_ctx, 0, 0);
> +		ret = 0;
> +	}
> +	bpf_mmap_unlock_mm(work, mm);
> +	return ret;
> +}
> +
> +const struct bpf_func_proto bpf_find_vma_proto = {
> +	.func		= bpf_find_vma,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_BTF_ID,
> +	.arg1_btf_id	= &btf_task_struct_ids[0],
> +	.arg2_type	= ARG_ANYTHING,
> +	.arg3_type	= ARG_PTR_TO_FUNC,
> +	.arg4_type	= ARG_PTR_TO_STACK_OR_NULL,
> +	.arg5_type	= ARG_ANYTHING,
> +};
> +

[...]

Cheers,
--
Hengqi
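
P.S. To make the question concrete, here is a minimal sketch of the kind of
program I am describing (not part of the patch; the program and callback
names, the section, and the lookup address are placeholders of mine): an
fentry program on security_file_open, which is on bpf_d_path's allowlist,
calling bpf_find_vma and resolving the VMA's backing file in the callback.
The vma->vm_file check is exactly the point I am unsure about; anonymous
mappings have no backing file, so I added it defensively here.

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char vma_path[256];

	/* Callback invoked by bpf_find_vma() with the task and matching vma. */
	static long check_vma(struct task_struct *task, struct vm_area_struct *vma,
			      void *ctx)
	{
		/* Defensive check: anonymous mappings have no backing file. */
		if (vma->vm_file)
			bpf_d_path(&vma->vm_file->f_path, vma_path, sizeof(vma_path));
		return 0;
	}

	SEC("fentry/security_file_open")
	int BPF_PROG(on_file_open, struct file *file)
	{
		struct task_struct *task = bpf_get_current_task_btf();
		/* Placeholder address; a real profiler would pass a sampled IP. */
		unsigned long addr = 0x400000;

		bpf_find_vma(task, addr, check_vma, NULL, 0);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";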