> -----Original Message-----
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Sent: January 13, 2022 5:31
> To: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: Li,Rongqing <lirongqing@xxxxxxxxx>; pbonzini@xxxxxxxxxx;
> vkuznets@xxxxxxxxxx; wanpengli@xxxxxxxxxxx; jmattson@xxxxxxxxxx;
> tglx@xxxxxxxxxxxxx; bp@xxxxxxxxx; x86@xxxxxxxxxx; kvm@xxxxxxxxxxxxxxx;
> joro@xxxxxxxxxx
> Subject: Re: [PATCH] KVM: X86: set vcpu preempted only if it is preempted
>
> On Wed, Jan 12, 2022 at 05:30:47PM +0000, Sean Christopherson wrote:
> > On Wed, Jan 12, 2022, Peter Zijlstra wrote:
> > > On Wed, Jan 12, 2022 at 08:02:01PM +0800, Li RongQing wrote:
> > > > A vcpu can be scheduled out when it runs the halt instruction: it
> > > > sets itself to INTERRUPTIBLE and switches to the idle thread. The
> > > > vcpu should not be marked preempted in this case.
> > >
> > > Uhhmm, why not? Who says the vcpu will run the moment it becomes
> > > runnable again? Another task could be woken up meanwhile occupying
> > > the real cpu.
> >
> > Hrm, but when emulating HLT, e.g. for an idling vCPU, KVM will
> > voluntarily schedule out the vCPU and mark it as preempted from the
> > guest's perspective. The vast majority, probably all, usage of
> > steal_time.preempted expects it to truly mean "preempted" as opposed
> > to "not running".
>
> No, the original use-case was locking and that really cares about running.
>
> If the vCPU isn't running, we must not busy-wait for it etc..
>
> Similar to the scheduler use of it, if the vCPU isn't running, we should
> not consider it so. Getting the vCPU task scheduled back on the CPU can
> take a 'long' time.
>
> If you have pinned vCPU threads and no overcommit, we have other knobs
> to indicate this I think.

If a vcpu is idle but marked as preempted, is the check in
kvm_smp_send_call_func_ipi() still right?

static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
{
	int cpu;

	native_send_call_func_ipi(mask);

	/* Make sure other vCPUs get a chance to run if they need to. */
	for_each_cpu(cpu, mask) {
		if (vcpu_is_preempted(cpu)) {
			kvm_hypercall1(KVM_HC_SCHED_YIELD,
				       per_cpu(x86_cpu_to_apicid, cpu));
			break;
		}
	}
}

-Li