On Tue, Mar 16, 2021, Vitaly Kuznetsov wrote:
> Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> writes:
>
> > Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> >
> >> On Mon, Mar 15, 2021, Vitaly Kuznetsov wrote:
> >>> @@ -1193,8 +1199,13 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
> >>>  	}
> >>>  	case HV_X64_MSR_REFERENCE_TSC:
> >>>  		hv->hv_tsc_page = data;
> >>> -		if (hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE)
> >>> +		if (hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE) {
> >>> +			if (!host)
> >>> +				hv->hv_tsc_page_status = HV_TSC_PAGE_GUEST_CHANGED;
> >>> +			else
> >>> +				hv->hv_tsc_page_status = HV_TSC_PAGE_HOST_CHANGED;
> >>
> >> Writing the status without taking hv->hv_lock could cause the update to be lost,
> >> e.g. if a different vCPU fails kvm_hv_setup_tsc_page() at the same time, its
> >> write to set status to HV_TSC_PAGE_BROKEN would race with this write.
> >>
> >
> > Oh, right you are, the lock was somewhere in my brain :-) Will do in
> > v2.
>
> Actually no, kvm_hv_set_msr_pw() is only called from
> kvm_hv_set_msr_common() with hv->hv_lock held, so we're already
> synchronized.
>
> ... and of course I figured that out by putting another
> mutex_lock()/mutex_unlock() here and then wondering why everything hangs
> :-) Doh, sorry :-/
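
[Editor's note: for readers less familiar with this code path, below is a minimal, self-contained userspace sketch of the "caller holds the lock" pattern Vitaly describes. It is not the actual KVM code; the struct layout and the set_msr_common()/set_msr_pw() names are illustrative only. The point it shows is that the single caller takes the lock once around the callee, so the callee's plain store is already serialized, and re-taking the same non-recursive mutex inside the callee would simply hang, which is the self-deadlock Vitaly ran into. Build with -lpthread.]

/* Stand-ins for hv->hv_lock and hv->hv_tsc_page_status. */
#include <pthread.h>
#include <stdio.h>

struct hv_state {
	pthread_mutex_t lock;
	unsigned int tsc_page_status;
};

/*
 * Callee: assumes the caller already holds hv->lock, so the store to
 * tsc_page_status cannot race with another writer.  Taking hv->lock
 * again here would self-deadlock, because the default pthread mutex
 * (like the kernel mutex) is not recursive.
 */
static void set_msr_pw(struct hv_state *hv, unsigned int status)
{
	hv->tsc_page_status = status;
}

/*
 * Caller: the only path into set_msr_pw() and the single place the lock
 * is taken, mirroring how kvm_hv_set_msr_common() wraps kvm_hv_set_msr_pw().
 */
static void set_msr_common(struct hv_state *hv, unsigned int status)
{
	pthread_mutex_lock(&hv->lock);
	set_msr_pw(hv, status);
	pthread_mutex_unlock(&hv->lock);
}

int main(void)
{
	struct hv_state hv = { .lock = PTHREAD_MUTEX_INITIALIZER };

	set_msr_common(&hv, 1);
	printf("status = %u\n", hv.tsc_page_status);
	return 0;
}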