On Wed, Jun 22, 2022, Jim Mattson wrote:
> RSB-stuffing after VM-exit is only needed for legacy CPUs without
> eIBRS. Move the RSB-stuffing code out of line.

I assume the primary justification is purely to avoid the JMP on modern
CPUs?

> Preserve the non-sensical correlation of RSB-stuffing with retpoline.

Either omit the blurb about retpoline, or better yet expand on why it's
nonsensical and speculate a bit on why it got tied to retpoline?

> Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmenter.S | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index 435c187927c4..39009a4c86bd 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -76,7 +76,12 @@ SYM_FUNC_END(vmx_vmenter)
>   */
>  SYM_FUNC_START(vmx_vmexit)
>  #ifdef CONFIG_RETPOLINE
> -	ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE
> +	ALTERNATIVE "", "jmp .Lvmexit_stuff_rsb", X86_FEATURE_RETPOLINE
> +#endif
> +.Lvmexit_return:
> +	RET
> +#ifdef CONFIG_RETPOLINE
> +.Lvmexit_stuff_rsb:
>  	/* Preserve guest's RAX, it's used to stuff the RSB. */
>  	push %_ASM_AX

There's a comment in the code here about stuffing before RET, I think it
makes sense to keep that before the RET, i.e. hoist it out of the actual
stuffing sequence so that it looks like:

#ifdef CONFIG_RETPOLINE
	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
	ALTERNATIVE "", "jmp .Lvmexit_stuff_rsb", X86_FEATURE_RETPOLINE
#endif
.Lvmexit_return:
	RET

Ha!  Better idea.  Rather than have a bunch of nops to eat through before
the !RETPOLINE case gets to the RET, encode the RET as the default.  That
allows using a single #ifdef and avoids both the JMP over the RET as well
as the JMP back to the RET, and saves 4 nops to boot.

SYM_FUNC_START(vmx_vmexit)
#ifdef CONFIG_RETPOLINE
	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
	ALTERNATIVE "RET", "", X86_FEATURE_RETPOLINE

	/* Preserve guest's RAX, it's used to stuff the RSB. */
	push %_ASM_AX

	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE

	/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
	or $1, %_ASM_AX
	pop %_ASM_AX
#endif
	RET
SYM_FUNC_END(vmx_vmexit)
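
For anyone not steeped in the kernel's ALTERNATIVE machinery, here is a rough
sketch of what the two cases would execute at runtime with the RET-as-default
approach. It assumes the usual ALTERNATIVE semantics (the first argument is the
build-time default and is overwritten, with NOP padding, at boot when the named
feature bit is set) and that X86_FEATURE_RETPOLINE is left clear when eIBRS is
the selected spectre_v2 mitigation; it is an illustration, not the proposed
patch itself:

	/*
	 * Sketch: CPU without X86_FEATURE_RETPOLINE (e.g. eIBRS parts).
	 * The default bytes are left untouched, so vmx_vmexit returns
	 * immediately and no RSB stuffing happens.
	 */
vmx_vmexit:
	RET

	/*
	 * Sketch: CPU running with X86_FEATURE_RETPOLINE. The 1-byte RET
	 * (the default in ALTERNATIVE "RET", "", X86_FEATURE_RETPOLINE) is
	 * NOPped out at boot, so execution falls through into the stuffing
	 * sequence and returns via the trailing RET instead.
	 */
vmx_vmexit:
	nop					/* was: RET */
	push	%_ASM_AX			/* RAX is clobbered as the loop counter */
	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
	or	$1, %_ASM_AX			/* clear CF/ZF, i.e. report VM-Exit, not VM-Fail */
	pop	%_ASM_AX
	RET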