On Wed, Dec 11, 2024 at 5:35 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
>
> Putting together all the previously added pieces to support optimized
> uprobes on top of a 5-byte nop instruction.
>
> The current uprobe execution goes through the following:
> - installs breakpoint instruction over original instruction
> - exception handler is hit and calls related uprobe consumers
> - and either simulates original instruction or does out of line single step
>   execution of it
> - returns to user space
>
> The optimized uprobe path:
>
> - checks the original instruction is 5-byte nop (plus other checks)
> - adds (or uses existing) user space trampoline and overwrites original
>   instruction (5-byte nop) with call to user space trampoline
> - the user space trampoline executes uprobe syscall that calls related uprobe
>   consumers
> - trampoline returns back to next instruction
>
> This approach won't speed up all uprobes as it's limited to using nop5 as
> the original instruction, but we could use nop5 as the USDT probe instruction
> (which uses a single-byte nop ATM) and speed up the USDT probes.
>
> This patch overloads the related arch functions in uprobe_write_opcode and
> set_orig_insn so they can install a call instruction if needed.
>
> The arch_uprobe_optimize function triggers the uprobe optimization and is
> called after the first uprobe hit. I originally had it called on uprobe
> installation but then it clashed with the elf loader, because the user space
> trampoline was added in a place where the loader might need to put elf
> segments, so I decided to do it after the first uprobe hit when loading is
> done.
>
> We do not unmap and release the uprobe trampoline when it's no longer needed,
> because there's no easy way to make sure none of the threads is still
> inside the trampoline. But we do not waste memory, because there's just
> a single page for all the uprobe trampoline mappings.
>
> We do waste a page frame on the mapping for every 4GB by keeping the uprobe
> trampoline page mapped, but that seems ok.
>
> Attaching the speed up from the benchs/run_bench_uprobes.sh script:
>
> current:
>
>     uprobe-nop      :    3.281 ± 0.003M/s
>     uprobe-push     :    3.085 ± 0.003M/s
>     uprobe-ret      :    1.130 ± 0.000M/s
> --> uprobe-nop5     :    3.276 ± 0.007M/s
>     uretprobe-nop   :    1.716 ± 0.016M/s
>     uretprobe-push  :    1.651 ± 0.017M/s
>     uretprobe-ret   :    0.846 ± 0.006M/s
> --> uretprobe-nop5  :    3.279 ± 0.002M/s
>
> after the change:
>
>     uprobe-nop      :    3.246 ± 0.004M/s
>     uprobe-push     :    3.057 ± 0.000M/s
>     uprobe-ret      :    1.113 ± 0.003M/s
> --> uprobe-nop5     :    6.751 ± 0.037M/s
>     uretprobe-nop   :    1.740 ± 0.015M/s
>     uretprobe-push  :    1.677 ± 0.018M/s
>     uretprobe-ret   :    0.852 ± 0.005M/s
> --> uretprobe-nop5  :    6.769 ± 0.040M/s
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/uprobes.h |   7 ++
>  arch/x86/kernel/uprobes.c      | 168 ++++++++++++++++++++++++++++++++-
>  include/linux/uprobes.h        |   1 +
>  kernel/events/uprobes.c        |   8 ++
>  4 files changed, 181 insertions(+), 3 deletions(-)
>
[...]
> +
> +int arch_uprobe_verify_opcode(struct arch_uprobe *auprobe, struct page *page,
> +			      unsigned long vaddr, uprobe_opcode_t *new_opcode,
> +			      int nbytes)
> +{
> +	uprobe_opcode_t old_opcode[5];
> +	bool is_call, is_swbp, is_nop5;
> +
> +	if (!test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags))
> +		return uprobe_verify_opcode(page, vaddr, new_opcode);
> +
> +	/*
> +	 * The ARCH_UPROBE_FLAG_CAN_OPTIMIZE flag guarantees the following
> +	 * 5 bytes read won't cross the page boundary.
> +	 */
> +	uprobe_copy_from_page(page, vaddr, (uprobe_opcode_t *) &old_opcode, 5);
> +	is_call = is_call_insn((uprobe_opcode_t *) &old_opcode);
> +	is_swbp = is_swbp_insn((uprobe_opcode_t *) &old_opcode);
> +	is_nop5 = is_nop5_insn((uprobe_opcode_t *) &old_opcode);
> +
> +	/*
> +	 * We allow the following transitions for optimized uprobes:
> +	 *
> +	 * nop5 -> swbp -> call
> +	 * ||      |       |
> +	 * |'--<---'       |
> +	 * '---<-----------'
> +	 *
> +	 * We return 1 to ack the write, 0 to do nothing, -1 to fail the write.
> +	 *
> +	 * If the current opcode (old_opcode) already has the desired value,
> +	 * we do nothing, because we are racing with another thread doing
> +	 * the update.
> +	 */
> +	switch (nbytes) {
> +	case 5:
> +		if (is_call_insn(new_opcode)) {
> +			if (is_swbp)
> +				return 1;
> +			if (is_call && !memcmp(new_opcode, &old_opcode, 5))
> +				return 0;
> +		} else {
> +			if (is_call || is_swbp)
> +				return 1;
> +			if (is_nop5)
> +				return 0;
> +		}
> +		break;
> +	case 1:
> +		if (is_swbp_insn(new_opcode)) {
> +			if (is_nop5)
> +				return 1;
> +			if (is_swbp || is_call)
> +				return 0;
> +		} else {
> +			if (is_swbp || is_call)
> +				return 1;
> +			if (is_nop5)
> +				return 0;
> +		}
> +	}
> +	return -1;

nit: -EINVAL?

> +}
> +
> +bool arch_uprobe_is_register(uprobe_opcode_t *insn, int nbytes)
> +{
> +	return nbytes == 5 ? is_call_insn(insn) : is_swbp_insn(insn);
> +}
> +
[...]
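
Also, for anyone following along, here is a minimal, hypothetical user-space
sketch of the kind of nop5 probe site this optimization targets. The function
name and the asm placement are made up for illustration; the 0x0f 0x1f 0x44
0x00 0x00 bytes are the standard x86-64 5-byte nop (nopl 0x0(%rax,%rax,1)):

/*
 * Hypothetical nop5 probe site. A uprobe attached at the address of the
 * 5-byte nop is first hit through the usual breakpoint path and, per the
 * description above, the nop is then rewritten into a call to the user
 * space uprobe trampoline.
 */
#include <stdio.h>

static void traced_function(int x)
{
	/* 5-byte nop: nopl 0x0(%rax,%rax,1) */
	asm volatile (".byte 0x0f, 0x1f, 0x44, 0x00, 0x00");
	printf("x = %d\n", x);
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		traced_function(i);
	return 0;
}

With a consumer attached at the nop5 address this corresponds to the
uprobe-nop5 row in the benchmark above, which roughly doubles with the change.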