Re: [PATCH] KVM: X86: set vcpu preempted only if it is preempted

> > On Wed, Jan 12, 2022 at 05:30:47PM +0000, Sean Christopherson wrote:
> > > On Wed, Jan 12, 2022, Peter Zijlstra wrote:
> > > > On Wed, Jan 12, 2022 at 08:02:01PM +0800, Li RongQing wrote:
> > > > > vcpu can schedule out when run halt instruction, and set itself
> > > > > to INTERRUPTIBLE and switch to idle thread, vcpu should not be
> > > > > set preempted for this condition
> > > >
> > > > Uhhmm, why not? Who says the vcpu will run the moment it becomes
> > > > runnable again? Another task could be woken up meanwhile occupying
> > > > the real cpu.
> > >
> > > Hrm, but when emulating HLT, e.g. for an idling vCPU, KVM will
> > > voluntarily schedule out the vCPU and mark it as preempted from the
> > > guest's perspective.  The vast majority, probably all, usage of
> > > steal_time.preempted expects it to truly mean "preempted" as opposed
> > > to
> > "not running".
> >
> > No, the original use-case was locking and that really cares about running.
> >
> > If the vCPU isn't running, we must not busy-wait for it etc..
> >
> > Similar to the scheduler use of it, if the vCPU isn't running, we
> > should not consider it so. Getting the vCPU task scheduled back on the CPU can
> take a 'long'
> > time.
> >
> > If you have pinned vCPU threads and no overcommit, we have other knobs
> > to indicate this I think.
> 
> 
> If a vCPU is idle but gets marked as preempted, is the check in
> kvm_smp_send_call_func_ipi() still correct?
> 
> static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
> {
>     int cpu;
> 
>     native_send_call_func_ipi(mask);
> 
>     /* Make sure other vCPUs get a chance to run if they need to. */
>     for_each_cpu(cpu, mask) {
>         if (vcpu_is_preempted(cpu)) {
>             kvm_hypercall1(KVM_HC_SCHED_YIELD, per_cpu(x86_cpu_to_apicid, cpu));
>             break;
>         }
>     }
> }
> 

Should we check whether the vCPU is idle before checking whether it is preempted? Something like:

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index fe0aead..c1ebd69 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -619,7 +619,7 @@ static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)

        /* Make sure other vCPUs get a chance to run if they need to. */
        for_each_cpu(cpu, mask) {
-               if (vcpu_is_preempted(cpu)) {
+               if (!idle_cpu(cpu) && vcpu_is_preempted(cpu)) {
                        kvm_hypercall1(KVM_HC_SCHED_YIELD, per_cpu(x86_cpu_to_apicid, cpu));
                        break;
                }


Should kvm_flush_tlb_multi() get a similar check?

-Li



