On 09/21/2012 06:32 PM, Rik van Riel wrote:
> On 09/21/2012 08:00 AM, Raghavendra K T wrote:
>> From: Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>
>>
>> When the total number of VCPUs in the system is less than or equal to
>> the number of physical CPUs, PLE exits become costly, since each VCPU
>> can have a dedicated PCPU and trying to find a target VCPU to yield_to
>> just burns time in the PLE handler.
>>
>> This patch reduces the overhead by simply returning in such scenarios,
>> after checking the length of the current CPU's runqueue.
>
> I am not convinced this is the way to go.
>
> The VCPU that is holding the lock, and is not releasing it,
> probably got scheduled out. That implies that VCPU is on a
> runqueue with at least one other task.
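
(For context, the proposed check boils down to something like the sketch
below. kvm_vcpu_on_spin() is the real PLE handler entry point in
virt/kvm/kvm_main.c, but the runqueue-length helper is hypothetical,
standing in for however the patch actually reads the current CPU's
runqueue length.)

    /* Sketch only: rq_length_of_this_cpu() is a hypothetical helper. */
    void kvm_vcpu_on_spin(struct kvm_vcpu *me)
    {
            /*
             * If nothing else is runnable on this physical CPU, each
             * VCPU likely has a dedicated PCPU, so searching for a
             * yield_to target would only burn time in the PLE handler.
             */
            if (rq_length_of_this_cpu() <= 1)
                    return;

            /* ... existing directed-yield search over the VCPUs ... */
    }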
I see your point here; we have two cases:
case 1)
rq1 : vcpu1->wait(lockA) (spinning)
rq2 : vcpu2->holding(lockA) (running)
Here, ideally, vcpu1 should not enter the PLE handler, since it would
surely get the lock within a ple_window cycle (assuming ple_window is
tuned perfectly for that workload).
Maybe this explains why we are not seeing a benefit with kernbench.

On the other hand, since we cannot have a perfect ple_window tuned for
all types of workloads, we gain for those workloads that need more than
4096 cycles. Is that what we are seeing in the cases that benefited?
case 2)
rq1 : vcpu1->wait(lockA) (spinning)
rq2 : vcpu3 (running) , vcpu2->holding(lockA) [scheduled out]
I agree that checking rq1's length is not correct in this case; as you
rightly pointed out, we are in trouble here.
nr_running()/num_online_cpus() would give a more accurate picture here
(a sketch of that check follows below), but it seemed costly. Maybe the
load balancer saves us a bit here by keeping us out of such cases. (I
agree the load balancer is far too complex.)
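
Roughly, that system-wide check would look like the sketch below;
nr_running() and num_online_cpus() are existing kernel symbols, but
whether walking every runqueue on each PLE exit is cheap enough is
exactly the concern:

    /*
     * Hypothetical system-wide undercommit check.  nr_running() sums
     * the nr_running counter of every online CPU's runqueue, which is
     * why it looks costly next to reading only the local runqueue.
     */
    static bool system_is_undercommitted(void)
    {
            return nr_running() <= num_online_cpus();
    }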