On Thu, 2024-10-24 at 14:11 +0100, Vadim Fedorenko wrote:

[...]

> > > @@ -20396,6 +20398,15 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> > >  		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
> > >  		insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
> > >  		*cnt = 1;
> > > +	} else if (IS_ENABLED(CONFIG_X86) &&
> > 
> > It's better to introduce bpf_jit_inlines_kfunc_call()
> > similar to bpf_jit_inlines_helper_call().
> 
> Yep, I thought about introducing it while adding more architectures, but
> can do it from the beginning.

After thinking a bit more, I think I messed up in a private discussion
with Vadim. It is necessary to introduce bpf_jit_inlines_kfunc_call()
and use it in mark_fastcall_pattern_for_call(), otherwise the following
situation is possible:
- the program is executed on an arch where inlining for
  bpf_get_hw_counter() is not implemented;
- there is a pattern in the code:

    *(u64 *)(r10 - 256) = r1;
    call bpf_get_hw_fastcall
    r1 = *(u64 *)(r10 - 256);
    ... r10 - 256 is not used ...
    ... r1 is read ...

- mark_fastcall_pattern_for_call() would mark the spill and fill as
  members of the pattern;
- the fastcall contract is not violated, because the reserved stack
  slots are used as expected;
- remove_fastcall_spills_fills() would remove the spill and fill:

    call bpf_get_hw_fastcall
    ... r1 is read ...

- since the call is not transformed to inlined instructions by the
  arch-specific JIT, the value of r1 would be clobbered, making the
  program invalid.

Vadim, sorry I did not point this out earlier, I thought that the
fastcall contract ensuring logic would handle everything.

[...]