This saves one insn byte per instance, adding up to a savings of over 4k
in my (stripped down) configuration.

No variant of to-be-patched-in replacement code relies on the one byte
larger size.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 arch/x86/include/asm/paravirt_types.h | 6 ++++++
 1 file changed, 6 insertions(+)

--- 4.18-rc2/arch/x86/include/asm/paravirt_types.h
+++ 4.18-rc2-x86_64-pvops-call-RIPrel/arch/x86/include/asm/paravirt_types.h
@@ -393,9 +393,15 @@ int paravirt_disable_iospace(void);
  * offset into the paravirt_patch_template structure, and can therefore be
  * freely converted back into a structure offset.
  */
+#ifdef CONFIG_X86_32
 #define PARAVIRT_CALL						\
 	ANNOTATE_RETPOLINE_SAFE					\
 	"call *%c[paravirt_opptr];"
+#else
+#define PARAVIRT_CALL						\
+	ANNOTATE_RETPOLINE_SAFE					\
+	"call *%c[paravirt_opptr](%%rip);"
+#endif
 
 /*
  * These macros are intended to wrap calls through one of the paravirt
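
Not part of the patch, just for illustration: below is a minimal user-space
sketch (my own code, with made-up names pv_func/target, nothing from the
kernel tree) of where the one byte comes from. On x86-64 the RIP-relative
indirect call encodes as ff 15 <disp32> (6 bytes), while the absolute-address
form needs a SIB byte: ff 14 25 <disp32> (7 bytes). On 32-bit the plain
disp32 form is already the shortest and there is no RIP-relative addressing,
hence the #ifdef. The sketch assumes gcc/GAS and a non-PIE link (-no-pie),
since the absolute form needs a 32-bit absolute relocation.

/*
 * Illustration only (not kernel code): measure the encoded length of an
 * indirect call through a global pointer, once RIP-relative and once with
 * absolute addressing.
 *
 * Build with: gcc -O2 -no-pie call-size.c
 */
#include <stdio.h>

static void target(void) { }

/* Stand-in for a pv_ops member; global so the asm can name the symbol. */
void (*pv_func)(void) = target;

int main(void)
{
	unsigned int riprel_len, abs_len;

	/* The call itself is jumped over; only its encoded length matters. */
	asm("jmp 3f\n"
	    "1: call *pv_func(%%rip)\n"	/* ff 15 <disp32> */
	    "2:\n"
	    "3: movl $(2b - 1b), %0"
	    : "=r" (riprel_len));

	asm("jmp 3f\n"
	    "1: call *pv_func\n"	/* ff 14 25 <disp32> */
	    "2:\n"
	    "3: movl $(2b - 1b), %0"
	    : "=r" (abs_len));

	printf("RIP-relative: %u bytes, absolute: %u bytes\n",
	       riprel_len, abs_len);
	return 0;
}

The extra SIB byte (0x25) in the absolute form is the byte saved per call
site by the patch above.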