----- Original Message -----
> From: "Wanpeng Li" <kernellwp@xxxxxxxxx>
> To: linux-kernel@xxxxxxxxxxxxxxx, kvm@xxxxxxxxxxxxxxx
> Cc: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>, "Radim Krčmář" <rkrcmar@xxxxxxxxxx>, "Wanpeng Li" <wanpeng.li@xxxxxxxxxxx>
> Sent: Tuesday, April 11, 2017 5:49:21 PM
> Subject: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers
>
> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>
> The virt_xxx memory barriers are implemented trivially in terms of the
> low-level __smp_xxx macros, and __smp_xxx is just a compiler barrier on
> the strong x86 TSO memory model. Mandatory barriers, by contrast,
> unconditionally emit hardware memory barriers. This patch replaces the
> rmb() calls in kvm_steal_clock() with virt_rmb().
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
> ---
>  arch/x86/kernel/kvm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 14f65a5..da5c097 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
>  	src = &per_cpu(steal_time, cpu);
>  	do {
>  		version = src->version;
> -		rmb();
> +		virt_rmb();
>  		steal = src->steal;
> -		rmb();
> +		virt_rmb();
>  	} while ((version & 1) || (version != src->version));
>
>  	return steal;
> --
> 2.7.4

Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>