On Fri, Aug 04, 2023 at 02:55:29PM -0700, Yonghong Song wrote:

SNIP

> > +static int uprobe_prog_run(struct bpf_uprobe *uprobe,
> > +                           unsigned long entry_ip,
> > +                           struct pt_regs *regs)
> > +{
> > +        struct bpf_uprobe_multi_link *link = uprobe->link;
> > +        struct bpf_uprobe_multi_run_ctx run_ctx = {
> > +                .entry_ip = entry_ip,
> > +        };
> > +        struct bpf_prog *prog = link->link.prog;
> > +        bool sleepable = prog->aux->sleepable;
> > +        struct bpf_run_ctx *old_run_ctx;
> > +        int err = 0;
> > +
> > +        might_fault();
>
> Could you explain what you try to protect here
> with might_fault()?
>
> In my opinion, might_fault() is unnecessary here
> since the calling context is process context and
> there is no mmap_lock held, so might_fault()
> won't capture anything.
>
> might_fault() is used in iter.c and trampoline.c
> since their calling context is more complex
> than here and might_fault() may actually capture
> issues.

hum, I followed bpf_prog_run_array_sleepable, which is
called the same way.. will check

>
> > +
> > +        migrate_disable();
> > +
> > +        if (sleepable)
> > +                rcu_read_lock_trace();
> > +        else
> > +                rcu_read_lock();
>
> Looking at trampoline.c and iter.c, typical
> usage is
>     rcu_read_lock_trace()/rcu_read_lock()
>     migrate_disable()
>
> Your above sequence could be correct too. But it
> is great if we can keep consistency here.

ok, will switch that

SNIP

> > +        link->cnt = cnt;
> > +        link->uprobes = uprobes;
> > +        link->path = path;
> > +
> > +        bpf_link_init(&link->link, BPF_LINK_TYPE_UPROBE_MULTI,
> > +                      &bpf_uprobe_multi_link_lops, prog);
> > +
> > +        err = bpf_link_prime(&link->link, &link_primer);
> > +        if (err)
> > +                goto error_free;
> > +
> > +        for (i = 0; i < cnt; i++) {
> > +                err = uprobe_register_refctr(d_real_inode(link->path.dentry),
> > +                                             uprobes[i].offset,
> > +                                             ref_ctr_offsets ? ref_ctr_offsets[i] : 0,
> > +                                             &uprobes[i].consumer);
> > +                if (err) {
> > +                        bpf_uprobe_unregister(&path, uprobes, i);
> > +                        bpf_link_cleanup(&link_primer);
> > +                        kvfree(ref_ctr_offsets);
>
> Is it possible we may miss some of below 'error_free' cleanups?
> In my opinion, we should replace
>     kvfree(ref_ctr_offsets);
>     return err;
> with
>     goto error_free;
>
> Could you double check?

the problem here is that bpf_link_cleanup calls link's dealloc
callback, so it gets released in bpf_uprobe_multi_link_dealloc..
which is missing task release :-\

I think we could init the link only after we create all the
uprobes, and have a single release function for the dealloc
callback and the error path in here

thanks,
jirka

>
> > +                        return err;
> > +                }
> > +        }
> > +
> > +        kvfree(ref_ctr_offsets);
> > +        return bpf_link_settle(&link_primer);
> > +
> > +error_free:
> > +        kvfree(ref_ctr_offsets);
> > +        kvfree(uprobes);
> > +        kfree(link);
> > +error_path_put:
> > +        path_put(&path);
> > +        return err;
> > +}
> > +#else /* !CONFIG_UPROBES */
> > +int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
> > +{
> > +        return -EOPNOTSUPP;
> > +}
> > +#endif /* CONFIG_UPROBES */
> [...]
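
PS: just to spell out the ordering change for v5, the idea is to take
the RCU lock before disabling migration, mirroring what trampoline.c
and iter.c do (untested sketch, only to illustrate):

        if (sleepable)
                rcu_read_lock_trace();
        else
                rcu_read_lock();

        migrate_disable();

        /* ... set run_ctx and run the program ... */

        migrate_enable();

        if (sleepable)
                rcu_read_unlock_trace();
        else
                rcu_read_unlock();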
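
and a rough sketch of the reordering I had in mind for the attach path:
register all the uprobes first and only then init/prime the link, so
the error path stays local and the dealloc callback is only reachable
once everything is set up. Completely untested, just to show the idea;
the extra bpf_uprobe_unregister on bpf_link_prime failure is my
assumption:

        /* register all uprobes before exposing the link */
        for (i = 0; i < cnt; i++) {
                err = uprobe_register_refctr(d_real_inode(link->path.dentry),
                                             uprobes[i].offset,
                                             ref_ctr_offsets ? ref_ctr_offsets[i] : 0,
                                             &uprobes[i].consumer);
                if (err) {
                        bpf_uprobe_unregister(&path, uprobes, i);
                        goto error_free;
                }
        }

        bpf_link_init(&link->link, BPF_LINK_TYPE_UPROBE_MULTI,
                      &bpf_uprobe_multi_link_lops, prog);

        err = bpf_link_prime(&link->link, &link_primer);
        if (err) {
                bpf_uprobe_unregister(&path, uprobes, cnt);
                goto error_free;
        }

        kvfree(ref_ctr_offsets);
        return bpf_link_settle(&link_primer);

error_free:
        kvfree(ref_ctr_offsets);
        kvfree(uprobes);
        kfree(link);
error_path_put:
        path_put(&path);
        return err;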