Re: [PATCH 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation

On Fri, Oct 20, 2023 at 03:55:07PM -0700, Sean Christopherson wrote:
> On Fri, Oct 20, 2023, Pawan Gupta wrote:
> > During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
> > access, such as a register push onto the stack, may put host data into
> > MDS-affected CPU buffers. A guest can then use MDS to sample host data.
> > 
> > Although the likelihood of secrets surviving in registers at the current
> > VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
> > moving VERW later in the VMentry path.
> > 
> > Note that VERW for the MMIO Stale Data mitigation is unchanged, because
> > per-guest conditional VERW is not easy to handle that late in asm with no
> > GPRs available. If the CPU is also affected by MDS, VERW is executed
> > unconditionally late in asm, regardless of the guest having MMIO access.
> > 
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@xxxxxxxxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx/vmenter.S |  9 +++++++++
> >  arch/x86/kvm/vmx/vmx.c     | 10 +++++++---
> >  2 files changed, 16 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index be275a0410a8..efa716cf4727 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -1,6 +1,7 @@
> >  /* SPDX-License-Identifier: GPL-2.0 */
> >  #include <linux/linkage.h>
> >  #include <asm/asm.h>
> > +#include <asm/segment.h>
> >  #include <asm/bitsperlong.h>
> >  #include <asm/kvm_vcpu_regs.h>
> >  #include <asm/nospec-branch.h>
> > @@ -31,6 +32,8 @@
> >  #define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
> >  #endif
> >  
> > +#define GUEST_CLEAR_CPU_BUFFERS		USER_CLEAR_CPU_BUFFERS
> > +
> >  .macro VMX_DO_EVENT_IRQOFF call_insn call_target
> >  	/*
> >  	 * Unconditionally create a stack frame, getting the correct RSP on the
> > @@ -177,10 +180,16 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >   * the 'vmx_vmexit' label below.
> >   */
> >  .Lvmresume:
> > +	/* Mitigate CPU data sampling attacks, e.g. MDS */
> > +	GUEST_CLEAR_CPU_BUFFERS
> 
> I have a very hard time believing that it's worth duplicating the mitigation
> for VMRESUME vs. VMLAUNCH just to land it after a Jcc.

VERW modifies the flags, so it either needs to be after the Jcc, or we have
to push/pop the flags, which adds two extra memory operations. Please let me
know if there is a better option.
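
For reference, the single-callsite alternative would look roughly like the
below (sketch only, not proposed code; it relies on ZF still holding the
result of the earlier "testb $VMX_RUN_VMRESUME, %bl"):

	pushf					/* extra store: save ZF across VERW */
	GUEST_CLEAR_CPU_BUFFERS			/* VERW clobbers ZF */
	popf					/* extra load: restore ZF for the Jcc */
	jz	.Lvmlaunch

Those two stack accesses right before VMentry are what duplicating the VERW
after the Jcc avoids.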

> Also, would it'd be better to put the NOP first?  Or even better, out of line?
> It'd be quite hilarious if the CPU pulled a stupid and speculated on the operand
> of the NOP, i.e. if the user/guest controlled RAX allowed for pulling in data
> after the VERW.

I did confirm with the CPU architects that the NOP operand won't be
dereferenced, even speculatively. But yes, even if it were, moving the
NOP first would take care of it.
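
Putting the data-carrying NOP first would look something like the below
(sketch only, not the actual macro from this series; "nopl 0x18(%rax)" is
encoded with a disp32 so its displacement bytes double as a __KERNEL_DS
selector word for VERW to read):

551:	.byte 0x0f, 0x1f, 0x80, 0x18, 0x00, 0x00, 0x00	/* nopl 0x18(%rax) */
	verw	_ASM_RIP(551b+3)	/* reads the embedded __KERNEL_DS word */

That way, even if the NOP's RAX-based operand were ever dereferenced
speculatively, the fill would happen before the buffers are cleared rather
than after.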



