On Tue, 11 Jun 2024 at 10:48, Mark Rutland <mark.rutland@xxxxxxx> wrote:
>
> Fair enough if that's a pain on x86, but we already have them on arm64, and
> hence using them is a smaller change there. We already have a couple of cases
> which use a MOVZ;MOVK;MOVK;MOVK sequence, e.g.
>
>	// in __invalidate_icache_max_range()
>	asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
>				    "movk %0, #0, lsl #16\n"
>				    "movk %0, #0, lsl #32\n"
>				    "movk %0, #0, lsl #48\n",
>				    ARM64_ALWAYS_SYSTEM,
>				    kvm_compute_final_ctr_el0)
>		     : "=r" (ctr));
>
> ... which is patched via the callback:
>
>	void kvm_compute_final_ctr_el0(struct alt_instr *alt,
>				       __le32 *origptr, __le32 *updptr,
>				       int nr_inst)
>	{
>		generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
>			       origptr, updptr, nr_inst);
>	}
>
> ... where the generate_mov_q() helper does the actual instruction generation.
>
> So if we only care about a few specific constants, we could give them their own
> callbacks, like kvm_compute_final_ctr_el0() above.

I'll probably only have another day until my mailbox starts getting more
pull requests (Mon-Tue outside the merge window is typically my quiet
time, when I can go through old emails and work on private projects).

So I'll look at doing this for x86 and see how it works. I do suspect
that even then it's possibly more code, with a site-specific callback
for each case, but maybe it would be worth it just for the flexibility.

            Linus
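
For context on the helper mentioned above: generate_mov_q() lives in
arch/arm64/kvm/va_layout.c. Below is a minimal sketch of what such a
helper can look like, assuming the aarch64_insn_gen_movewide() and
aarch64_insn_decode_register() interfaces from
arch/arm64/include/asm/insn.h; it illustrates the technique and is not
necessarily the exact upstream code.

	/*
	 * Rewrite a 4-instruction MOVZ;MOVK;MOVK;MOVK slot so that it
	 * loads the 64-bit constant 'val' into the register targeted
	 * by the original MOVZ. Illustrative sketch only.
	 */
	static void generate_mov_q(u64 val, __le32 *origptr,
				   __le32 *updptr, int nr_inst)
	{
		u32 insn, rd;
		int i;

		BUG_ON(nr_inst != 4);

		/* Reuse the destination register of the original MOVZ. */
		rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD,
						  le32_to_cpu(origptr[0]));

		for (i = 0; i < nr_inst; i++) {
			/* MOVZ zeroes the register; the MOVKs keep it. */
			insn = aarch64_insn_gen_movewide(rd,
					(u16)(val >> (i * 16)), i * 16,
					AARCH64_INSN_VARIANT_64BIT,
					i == 0 ? AARCH64_INSN_MOVEWIDE_ZERO
					       : AARCH64_INSN_MOVEWIDE_KEEP);
			updptr[i] = cpu_to_le32(insn);
		}
	}

The per-site callback pattern Mark describes amounts to exactly this:
each interesting constant gets its own callback that computes the final
value and emits the matching move-wide sequence over the placeholder
instructions at patch time.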