On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> On Thu, Aug 26, 2021 at 05:57:10PM -0700, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > index 5cedc0e8a5d5..4c5ba4128b38 100644
> > --- a/arch/x86/kvm/x86.h
> > +++ b/arch/x86/kvm/x86.h
> > @@ -395,9 +395,10 @@ static inline void kvm_unregister_perf_callbacks(void)
> >
> >  DECLARE_PER_CPU(struct kvm_vcpu *, current_vcpu);
> >
> > -static inline void kvm_before_interrupt(struct kvm_vcpu *vcpu)
> > +static inline void kvm_before_interrupt(struct kvm_vcpu *vcpu, bool is_nmi)
> >  {
> >  	__this_cpu_write(current_vcpu, vcpu);
> > +	WRITE_ONCE(vcpu->arch.handling_nmi_from_guest, is_nmi);
> >
> >  	kvm_register_perf_callbacks();
> >  }
> > @@ -406,6 +407,7 @@ static inline void kvm_after_interrupt(struct kvm_vcpu *vcpu)
> >  {
> >  	kvm_unregister_perf_callbacks();
> >
> > +	WRITE_ONCE(vcpu->arch.handling_nmi_from_guest, false);
> >  	__this_cpu_write(current_vcpu, NULL);
> >  }
>
> Does this rely on kvm_{,un}register_perf_callback() being a function
> call and thus implying a sequence point to order the stores?

No, I'm just terrible at remembering which macros provide what ordering
guarantees, i.e. I was thinking WRITE_ONCE provided guarantees against
compiler reordering.
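
For completeness, a minimal sketch of what explicit ordering would look
like if it couldn't be left to the function call acting as a compiler
barrier; this is purely illustrative, reuses the handling_nmi_from_guest
field from the patch above, and is not part of the series:

  static inline void kvm_before_interrupt(struct kvm_vcpu *vcpu, bool is_nmi)
  {
  	__this_cpu_write(current_vcpu, vcpu);

  	/*
  	 * Illustrative only: WRITE_ONCE() keeps the store from being torn
  	 * or elided and orders it against other *_ONCE() accesses, but it
  	 * does not order it against surrounding plain accesses.  A
  	 * barrier() (or smp_wmb() if CPU-level ordering mattered) makes
  	 * the intended ordering explicit.
  	 */
  	WRITE_ONCE(vcpu->arch.handling_nmi_from_guest, is_nmi);
  	barrier();

  	kvm_register_perf_callbacks();
  }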