On 02/06/19 21:11, Uros Bizjak wrote:
> __vmcs_writel uses volatile asm, so there is no need to insert another
> one between the first and the second call to __vmcs_writel in order
> to prevent unwanted code moves for 32bit targets.
>
> Signed-off-by: Uros Bizjak <ubizjak@xxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/ops.h | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> index b8e50f76fefc..2200fb698dd0 100644
> --- a/arch/x86/kvm/vmx/ops.h
> +++ b/arch/x86/kvm/vmx/ops.h
> @@ -146,7 +146,6 @@ static __always_inline void vmcs_write64(unsigned long field, u64 value)
>
>  	__vmcs_writel(field, value);
>  #ifndef CONFIG_X86_64
> -	asm volatile ("");
>  	__vmcs_writel(field+1, value >> 32);
>  #endif
>  }

Queued, thanks.

Paolo