On 13.11.2017 01:33, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>
> This patch reuses the preempted field in kvm_steal_time, and will export
> the vcpu running/pre-empted information to the guest from host. This will
> enable guest to intelligently send ipi to running vcpus and set flag for
> pre-empted vcpus. This will prevent waiting for vcpus that are not running.
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
> ---
>  arch/x86/include/uapi/asm/kvm_para.h | 3 +++
>  arch/x86/kernel/kvm.c                | 2 +-
>  arch/x86/kvm/x86.c                   | 4 ++--
>  3 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 554aa8f..bf17b30 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -51,6 +51,9 @@ struct kvm_steal_time {
>  	__u32 pad[11];
>  };
>
> +#define KVM_VCPU_NOT_PREEMPTED	(0 << 0)
> +#define KVM_VCPU_PREEMPTED	(1 << 0)

These should have a prefix that makes it obvious that they are used for
kvm_steal_time/preempted.

What about renaming preempted to "flags" or something like that. Then we
could have

KVM_STEAL_TIME_(FLAG_)PREEMPTED
KVM_STEAL_TIME_(FLAG_)NOT_PREEMPTED

> +
>  #define KVM_CLOCK_PAIRING_WALLCLOCK	0
>  struct kvm_clock_pairing {
>  	__s64 sec;
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 8bb9594..1b1b641 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -608,7 +608,7 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>  {
>  	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
>
> -	return !!src->preempted;
> +	return !!(src->preempted & KVM_VCPU_PREEMPTED);
>  }
>  PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb..5e63033 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2113,7 +2113,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
>  		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
>  		return;
>
> -	vcpu->arch.st.steal.preempted = 0;
> +	vcpu->arch.st.steal.preempted = KVM_VCPU_NOT_PREEMPTED;
>
>  	if (vcpu->arch.st.steal.version & 1)
>  		vcpu->arch.st.steal.version += 1;  /* first time write, random junk */
> @@ -2884,7 +2884,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
>  	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
>  		return;
>
> -	vcpu->arch.st.steal.preempted = 1;
> +	vcpu->arch.st.steal.preempted = KVM_VCPU_PREEMPTED;
>
>  	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
>  				      &vcpu->arch.st.steal.preempted,
--

Thanks,

David / dhildenb