On Wed, Dec 11, 2024 at 5:35 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
>
> Adding support to add special mapping for for user space trampoline

typo: for for

> with following functions:
>
>   uprobe_trampoline_get - find or add related uprobe_trampoline
>   uprobe_trampoline_put - remove ref or destroy uprobe_trampoline
>
> The user space trampoline is exported as architecture specific user space
> special mapping, which is provided by arch_uprobe_trampoline_mapping
> function.
>
> The uprobe trampoline needs to be callable/reachable from the probe address,
> so while searching for available address we use arch_uprobe_is_callable
> function to decide if the uprobe trampoline is callable from the probe address.
>
> All uprobe_trampoline objects are stored in uprobes_state object and
> are cleaned up when the process mm_struct goes down.
>
> Locking is provided by callers in following changes.
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> ---
>  include/linux/uprobes.h |  12 +++++
>  kernel/events/uprobes.c | 114 ++++++++++++++++++++++++++++++++++++++++
>  kernel/fork.c           |   1 +
>  3 files changed, 127 insertions(+)
>

Ran out of time for today, will continue tomorrow for the rest of
patches. Some comments below. The numbers are really encouraging,
though!

> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 8843b7f99ed0..c4ee755ca2a1 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -16,6 +16,7 @@
>  #include <linux/types.h>
>  #include <linux/wait.h>
>  #include <linux/timer.h>
> +#include <linux/mutex.h>
>
>  struct uprobe;
>  struct vm_area_struct;
> @@ -172,6 +173,13 @@ struct xol_area;
>
>  struct uprobes_state {
>  	struct xol_area *xol_area;
> +	struct hlist_head tramp_head;
> +};
> +

should we make uprobe_state be linked by a pointer from mm_struct
instead of increasing mm for each added field?
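As an aside on the "callable/reachable from the probe address" constraint in the commit message: on x86-64 that presumably reduces to whether the trampoline is within rel32 range of a 5-byte near call at the probe site. A hypothetical userspace model of such a check (illustrative only, not the kernel's actual arch implementation; the function name follows the renaming suggested below):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of an x86-64 reachability check: a 5-byte "call rel32"
 * placed at vaddr can reach vtramp only if the displacement,
 * measured from the end of the call instruction, fits in a signed
 * 32-bit immediate. Purely a sketch, not the actual kernel code.
 */
static bool is_reachable_by_call(unsigned long vtramp, unsigned long vaddr)
{
	long delta = (long)(vtramp - (vaddr + 5));

	return delta == (int32_t)delta;
}
```

Any gap whose page passes this test for the probe address would be a candidate spot for the trampoline mapping.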
right now it's embedded, I don't think it's problematic to allocate it
on demand and keep it until mm_struct is freed

> +struct uprobe_trampoline {
> +	struct hlist_node node;
> +	unsigned long vaddr;
> +	atomic64_t ref;
> +};
>
>  extern void __init uprobes_init(void);
> @@ -220,6 +228,10 @@ extern int arch_uprobe_verify_opcode(struct arch_uprobe *auprobe, struct page *p
>  				     unsigned long vaddr, uprobe_opcode_t *new_opcode,
>  				     int nbytes);
>  extern bool arch_uprobe_is_register(uprobe_opcode_t *insn, int nbytes);
> +extern struct uprobe_trampoline *uprobe_trampoline_get(unsigned long vaddr);
> +extern void uprobe_trampoline_put(struct uprobe_trampoline *area);
> +extern bool arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr);
> +extern const struct vm_special_mapping *arch_uprobe_trampoline_mapping(void);
>  #else /* !CONFIG_UPROBES */
>  struct uprobes_state {
>  };
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 8068f91de9e3..f57918c624da 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -615,6 +615,118 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
>  			(uprobe_opcode_t *)&auprobe->insn, UPROBE_SWBP_INSN_SIZE);
>  }
>
> +bool __weak arch_uprobe_is_callable(unsigned long vtramp, unsigned long vaddr)

bikeshedding some more, I still find "is_callable" confusing. How
about "is_reachable_by_call"? slightly verbose, but probably more
meaningful?

> +{
> +	return false;
> +}
> +
> +const struct vm_special_mapping * __weak arch_uprobe_trampoline_mapping(void)
> +{
> +	return NULL;
> +}
> +
> +static unsigned long find_nearest_page(unsigned long vaddr)
> +{
> +	struct mm_struct *mm = current->mm;
> +	struct vm_area_struct *vma, *prev;
> +	VMA_ITERATOR(vmi, mm, 0);
> +
> +	prev = vma_next(&vmi);

minor: we are missing an opportunity to add something between
[PAGE_SIZE, <first_vma_start>). Probably fine, but why not?
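To make the suggestion concrete, here is a small userspace model of the gap search extended with that extra check below the first VMA. Everything here is illustrative (toy `struct range` in place of a VMA, reachability filtering left out to keep the model short); it is not the kernel code:

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Toy interval standing in for a VMA's vm_start/vm_end. */
struct range { unsigned long start, end; };

/*
 * Model of find_nearest_page() with the suggested extra case: before
 * walking VMA pairs, consider the gap [PAGE_SIZE, first_vma_start),
 * skipping page zero. Names are illustrative, not kernel code.
 */
static unsigned long find_free_page(const struct range *v, size_t n)
{
	size_t i;

	/* Gap below the first mapping. */
	if (n == 0 || v[0].start >= 2 * PAGE_SIZE)
		return PAGE_SIZE;

	/* First page-sized gap between consecutive mappings. */
	for (i = 1; i < n; i++) {
		if (v[i].start - v[i - 1].end >= PAGE_SIZE)
			return v[i - 1].end;
	}
	return 0;
}
```

The real version would of course still have to filter each candidate through the arch reachability check, as the patch does.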
> +	vma = vma_next(&vmi);
> +	while (vma) {
> +		if (vma->vm_start - prev->vm_end >= PAGE_SIZE) {
> +			if (arch_uprobe_is_callable(prev->vm_end, vaddr))
> +				return prev->vm_end;
> +			if (arch_uprobe_is_callable(vma->vm_start - PAGE_SIZE, vaddr))
> +				return vma->vm_start - PAGE_SIZE;
> +		}
> +
> +		prev = vma;
> +		vma = vma_next(&vmi);
> +	}
> +
> +	return 0;
> +}
> +

[...]

> +struct uprobe_trampoline *uprobe_trampoline_get(unsigned long vaddr)
> +{
> +	struct uprobes_state *state = &current->mm->uprobes_state;
> +	struct uprobe_trampoline *tramp = NULL;
> +
> +	hlist_for_each_entry(tramp, &state->tramp_head, node) {
> +		if (arch_uprobe_is_callable(tramp->vaddr, vaddr)) {
> +			atomic64_inc(&tramp->ref);
> +			return tramp;
> +		}
> +	}
> +
> +	tramp = create_uprobe_trampoline(vaddr);
> +	if (!tramp)
> +		return NULL;
> +
> +	hlist_add_head(&tramp->node, &state->tramp_head);
> +	return tramp;
> +}
> +
> +static void destroy_uprobe_trampoline(struct uprobe_trampoline *tramp)
> +{
> +	hlist_del(&tramp->node);
> +	kfree(tramp);

hmm... shouldn't this be RCU-delayed (RCU Tasks Trace for uprobes),
otherwise we might have some CPU executing code in that trampoline,
no?

> +}
> +

[...]
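For illustration, the RCU-delayed teardown suggested above could look roughly like the sketch below, assuming a `struct rcu_head rcu` member is added to `struct uprobe_trampoline`. The member name and the choice of the RCU Tasks Trace flavor are assumptions for the sketch, not the actual patch:

```c
/*
 * Hedged sketch only: delay freeing the trampoline until RCU Tasks
 * Trace guarantees no task can still be executing in it. Assumes
 * "struct rcu_head rcu" was added to struct uprobe_trampoline.
 */
static void uprobe_trampoline_free_rcu(struct rcu_head *rcu)
{
	struct uprobe_trampoline *tramp =
		container_of(rcu, struct uprobe_trampoline, rcu);

	kfree(tramp);
}

static void destroy_uprobe_trampoline(struct uprobe_trampoline *tramp)
{
	hlist_del(&tramp->node);
	/* Free is deferred past a tasks-trace grace period. */
	call_rcu_tasks_trace(&tramp->rcu, uprobe_trampoline_free_rcu);
}
```

Whether a tasks-trace grace period is actually sufficient here (versus needing to ensure no task's IP sits inside the trampoline page at unmap time) is the open question the comment raises.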