Re: Paravirtualized pause loop handling


 



On 09/13/2012 02:48 AM, Jiannan Ouyang wrote:
Hi Raghu,

I've been working on improving paravirtualized spinlock performance for a
while, and based on my past findings I've come up with a new idea to make
the pause-loop handler more efficient.

Our original idea was to expose VMM scheduling information to the guest,
so that a lock requester can sleep/yield when the lock holder has been
scheduled out, instead of spinning for SPIN_THRESHOLD loops. However, as
I moved forward, I found that the problems with this approach are:
- the saving from SPIN_THRESHOLD is only a few microseconds

We try to set SPIN_THRESHOLD to an optimal value (the typical lock-holding time). If we spin longer than that, it would ideally indicate a lock-holder-preemption (LHP) case.
But I agree that choosing a good SPIN_THRESHOLD is a little tricky.
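
For reference, the spin-then-block pattern we are discussing looks roughly
like the sketch below. It is illustrative only, not the actual pv_lock
code; pv_wait_for_head() stands in for whatever "block this vcpu"
hypercall the real slowpath uses.

#define SPIN_THRESHOLD	(1 << 15)

static void pv_ticket_wait(arch_spinlock_t *lock, __ticket_t want)
{
	unsigned int loops = SPIN_THRESHOLD;

	/* Spin for roughly one typical lock-hold time... */
	while (loops--) {
		if (ACCESS_ONCE(lock->tickets.head) == want)
			return;		/* lock became ours while spinning */
		cpu_relax();
	}

	/*
	 * ...spinning longer than that most likely means the holder was
	 * preempted (LHP), so block the vcpu instead of burning cycles.
	 */
	pv_wait_for_head(lock, want);	/* hypothetical hypercall wrapper */
}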

- yielding to another CPU is not efficient, because the vcpu will only come
back after a few ms, 1000x longer than the normal lock waiting time

No. It is efficient if we are able to refine the candidate vcpus to
yield_to. But it is tricky to find a good candidate too.
Here was one successful attempt:
https://lkml.org/lkml/2012/7/18/247
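
The idea there, roughly sketched (the helper candidate_for_boost() below is
a placeholder for whatever heuristic refines the candidates, e.g. skipping
vcpus that themselves just pause-loop exited; it is not the exact code from
that patch):

static void ple_directed_yield(struct kvm *kvm, struct kvm_vcpu *me)
{
	struct kvm_vcpu *vcpu;
	int i;

	/* Walk the other vcpus of the same VM and boost a plausible
	 * lock holder instead of yielding blindly. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;
		if (waitqueue_active(&vcpu->wq))
			continue;	/* halted vcpus cannot hold a spinlock */
		if (!candidate_for_boost(vcpu))
			continue;
		if (kvm_vcpu_yield_to(vcpu) > 0)
			break;		/* boosted a likely lock holder */
	}
}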

- sleeping upon lock-holder preemption makes sense, but that has already
been done very well by your pv_lock patch

Below is some data I got:
- two 4-core guests on a 4-core host
- guest1: hackbench, average completion time over 10 runs, lower is better
- guest2: four busy-loop ("while true") processes

                          Average(s)   Stdev
Native                       8.6739     0.51965
Stock kernel -ple           84.1841    17.37156
+ ple                       80.6322    27.6574
+ cpu binding               25.6569     1.93028
+ pv_lock                   17.8462     0.74884
+ cpu binding & pv_lock     16.9935     0.772416

Observations:
- the improvement from PLE (~4s) is much smaller than that from pv_lock
and cpu binding (~60s)
- the best performance comes from pv_lock with cpu binding, which binds
the 4 vcpus to four physical cores; idea from (1)


The results are interesting. I am trying out V9 with all the improvements that took place after V8.

Then I came up with the "paravirtualized pause-loop exit" idea.
The current vcpu boosting strategy upon a PLE exit is not very efficient,
because 1) it may boost the wrong vcpu, and 2) the time for the lock holder
to come back is very likely to be a few ms, much longer than the normal
lock waiting time of a few us.

What we can do is expose guest lock waiting information to the VMM, so
that upon a PLE exit the VMM can make the vcpu sleep on the lock holder's
wait queue. Later we wake them up when the lock holder is scheduled in.
Or, taking it one step further, make a vcpu sleep on the previous ticket
holder's wait queue, thus ensuring the order of wake-up.
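
Roughly, the flow I have in mind looks like the sketch below. All the
structure and helper names here (pv_lock_wait_info, vcpu_is_preempted_out(),
sleep_on_holder_queue(), wake_up_holder_queue()) are illustrative only, not
the actual code in my patches:

struct pv_lock_wait_info {
	int holder_vcpu_id;	/* filled in by the guest lock slowpath */
};

/* Host side: on a pause-loop exit, block on the holder's queue... */
static void pv_ple_exit(struct kvm_vcpu *me, struct pv_lock_wait_info *info)
{
	struct kvm_vcpu *holder = kvm_get_vcpu(me->kvm, info->holder_vcpu_id);

	if (holder && vcpu_is_preempted_out(holder))
		sleep_on_holder_queue(me, holder);	/* block instead of spinning */
}

/* ...and wake the sleepers once the holder is scheduled back in. */
static void pv_ple_sched_in(struct kvm_vcpu *holder)
{
	wake_up_holder_queue(holder);
}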


This is very interesting. Can you share the patches?

I'm almost done with the implementation, except for some testing work. Any
comments or suggestions?

Thanks
--Jiannan

Reference
(1) O. Sukwong and H. S. Kim, "Is co-scheduling too expensive for SMP VMs?", EuroSys 2011


