On 02/27/2014 10:22 AM, Raghavendra K T wrote:
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
But neither of the VCPUs being kicked here is halted -- they're either
running or runnable (descheduled by the hypervisor).
/me actually looks at Waiman's code...
Right, this is really different from pvticketlocks, where the *unlock*
primitive wakes up a sleeping VCPU. It is more similar to PLE
(pause-loop exiting).
Adding to the discussion: I see two possibilities here, given that in
the undercommit case the queue head should never spin past
HEAD_SPIN_THRESHOLD.
1. The looping vcpu in pv_head_spin_check() should do halt() once it
has spun for more than a typical lock-hold time, since by then we are
probably in overcommit.
2. Multiplex kick_cpu to do a directed yield in the qspinlock case,
though this may result in some ping-ponging. (Rough sketches of both
ideas below.)
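
For (1), something along these lines is what I have in mind. This is
only an illustration, not the actual patch; pv_try_take_lock() and the
threshold value are made-up names:

    #define HEAD_SPIN_THRESHOLD     (1 << 14)       /* illustrative value */

    struct qspinlock;                                /* opaque here */
    static bool pv_try_take_lock(struct qspinlock *lock);  /* hypothetical */

    static void pv_head_spin_check(struct qspinlock *lock)
    {
            unsigned int loop = HEAD_SPIN_THRESHOLD;

            while (!pv_try_take_lock(lock)) {
                    if (--loop == 0) {
                            /*
                             * We have spun longer than a typical
                             * lock-hold time, so we are likely in
                             * overcommit: sleep until kicked instead
                             * of burning cycles.
                             */
                            halt();
                            loop = HEAD_SPIN_THRESHOLD;
                    }
                    cpu_relax();
            }
    }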
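
For (2), kick_cpu could be multiplexed on the state of the target
vcpu; all three helpers below are hypothetical names, just to show the
shape of the idea:

    static void kick_cpu(int cpu)
    {
            if (vcpu_is_halted(cpu))
                    wake_up_vcpu(cpu);      /* pvticketlock-style wakeup */
            else
                    yield_to_vcpu(cpu);     /* PLE-style directed yield */
    }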
In the current code, the lock holder can't easily locate the CPU # of
the queue head in the unlock path. That is why I try to keep the queue
head spinning as long as possible, so that it can take over as soon as
the lock is freed. I am trying out new code that lets the waiting CPUs
other than the first 2 go to halt, to see if that helps the overcommit
case.
-Longman