Hi Raghu,

I have been working on improving paravirtualized spinlock performance for a
while, and based on my past findings I have come up with a new idea to make
the pause-loop exit handler more efficient.

Our original idea was to expose VMM scheduling information to the guest, so
that a lock requester can sleep or yield when the lock holder has been
scheduled out, instead of spinning for SPIN_THRESHOLD loops. However, as I
moved forward, I found the following problems with this approach:
- the saving from cutting the SPIN_THRESHOLD spin short is only a few
  microseconds
- yielding to another CPU is not efficient, because the yielding vcpu only
  comes back after a few ms, about 1000x longer than the normal lock
  waiting time
- sleeping upon lock holder preemption makes sense, but that is already
  done very well by your pv_lock patch

Below is some data I got:
- 4-core guest x2 on a 4-core host
- guest1: hackbench, average completion time over 10 runs, lower is better
- guest2: 4 processes each running a while-true loop

                            Average(s)   Stdev
  Native                      8.6739     0.51965
  Stock kernel -ple          84.1841    17.37156
  + ple                      80.6322    27.6574
  + cpu binding              25.6569     1.93028
  + pv_lock                  17.8462     0.74884
  + cpu binding & pv_lock    16.9935     0.772416

Observations:
- the improvement from ple (~4 s) is much smaller than from pv_lock or cpu
  binding (~60 s)
- the best performance comes from pv_lock with cpu binding, which pins the
  4 vcpus to four physical cores (idea from (1))

Then I came up with the "paravirtualized pause-loop exit" idea. The current
vcpu boosting strategy upon a ple is not very efficient, because 1) it may
boost the wrong vcpu, and 2) the time for the lock holder to come back is
very likely to be a few ms, much longer than the normal lock waiting time
of a few us. What we can do is expose the guest's lock waiting information
to the VMM, so that upon a ple the VMM can put the spinning vcpu to sleep
on the lock holder's wait queue and wake it up when the lock holder is
scheduled in again. Or, going one step further, make a vcpu sleep on the
previous ticket holder's wait queue, so that we also preserve the wake-up
order. A rough sketch of both sides is at the end of this mail, after the
reference.

I'm almost done with the implementation, except for some testing work. Any
comments or suggestions?

Thanks
--Jiannan

Reference
(1) Is co-scheduling too expensive for SMP VMs? O. Sukwong and H. S. Kim,
    EuroSys '11
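Here is the rough sketch mentioned above, to make the idea more concrete.
This is illustrative pseudo-code only, not the actual patch: the struct,
field and helper names (pv_lock_wait_info, pv_read_wait_info,
pv_find_prev_ticket_holder, vcpu_is_running, and the pv_lock_waiters wait
queue in kvm_vcpu) are made up for the example, and the registration of the
shared area with the host (e.g. via an MSR, like the steal-time page) is
omitted.

Guest side: in the ticket spinlock slow path, publish which lock the vcpu
is spinning on and which ticket it is waiting for, in a per-vcpu area
shared with the host:

    /* guest: per-vcpu lock waiting info shared with the host (illustrative) */
    struct pv_lock_wait_info {
            unsigned long lock_addr;       /* lock being spun on, 0 if none   */
            unsigned int  waiting_ticket;  /* ticket this vcpu is waiting for */
    };
    static DEFINE_PER_CPU(struct pv_lock_wait_info, pv_wait_info);

    static void pv_ticket_spin_slowpath(arch_spinlock_t *lock, unsigned int ticket)
    {
            struct pv_lock_wait_info *wi = this_cpu_ptr(&pv_wait_info);

            wi->lock_addr = (unsigned long)lock;
            wi->waiting_ticket = ticket;
            smp_wmb();                      /* publish before we start spinning */

            while (ACCESS_ONCE(lock->tickets.head) != ticket)
                    cpu_relax();            /* long PAUSE runs trigger a ple exit */

            wi->lock_addr = 0;              /* no longer waiting */
    }

Host side: in the pause-loop exit handler, instead of boosting an arbitrary
vcpu, look up the vcpu holding the lock (or the previous ticket, if we want
ordered hand-off) and, if it is preempted, sleep on a wait queue attached
to it; the sched-in path of that vcpu wakes the waiters:

    /* host: ple handler sketch; the pv_* helpers are hypothetical, not existing KVM APIs */
    void pv_handle_pause_loop_exit(struct kvm_vcpu *vcpu)
    {
            struct pv_lock_wait_info wi;
            struct kvm_vcpu *target;

            if (pv_read_wait_info(vcpu, &wi) || !wi.lock_addr)
                    return;         /* no pv info, fall back to directed yield */

            target = pv_find_prev_ticket_holder(vcpu->kvm, wi.lock_addr,
                                                wi.waiting_ticket);
            if (!target || vcpu_is_running(target))
                    return;         /* holder is running, keep spinning */

            /* sleep until the vcpu we depend on is scheduled in again */
            wait_event_interruptible(target->pv_lock_waiters,
                                     vcpu_is_running(target));
    }

    /* called from the preempt notifier when a vcpu is scheduled back in */
    void pv_vcpu_sched_in(struct kvm_vcpu *vcpu)
    {
            wake_up_interruptible(&vcpu->pv_lock_waiters);
    }

Compared with plain ple boosting, a waiter only sleeps when the vcpu it
actually depends on is preempted, and it is woken exactly when that vcpu
runs again, instead of coming back a few ms later.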
On Mon, Sep 10, 2012 at 4:33 AM, Raghavendra K T
<raghavendra.kt@xxxxxxxxxxxxxxxxxx> wrote:
>
> Hi Jiannan,
>
> Happy to see your interest, and it would be nice to have a collaboration
> for further improvements.
>
> I agree to be a co-author in case I am able to provide some value
> additions to your paper too.
>
> FYI, our paper on paravirtualization, which includes both pv-spinlock and
> paravirt TLB flush, is on the way for the IEEE cloud conference.
>
> http://ewh.ieee.org/ieee/ccem/program.html#2
> (It is accepted for publication as of now.)
>
> Regards
> Raghu
>
> On 09/07/2012 08:57 PM, Jiannan Ouyang wrote:
>>
>> Hey Raghavendra,
>>
>> This is Jiannan from the Computer Science Department of the University
>> of Pittsburgh.
>>
>> Recently I have been working on a research project on improving
>> paravirtualized spinlock performance, and this work is targeted at a
>> conference paper this year. I have an idea to improve pv_lock
>> performance further on top of your patch, and I'm writing to ask
>> whether you have any interest in some kind of collaboration and in
>> becoming a coauthor of this paper?
>>
>> If you are interested, that's great, then we can set up a voice
>> conference later.
>>
>> Thanks
>> Jiannan
>> homepage: http://www.cs.pitt.edu/~ouyang/
>>
>>> On 06/20/2012 11:19 PM, Jiannan Ouyang wrote:
>>>
>>> Hi Jiannan,
>>> Yep, getting it from there is a little pain. I am thinking of hosting
>>> it on github soon.
>>>
>>> Anyways, here is the tarball of V8 of the patches (it was for 3.4, but
>>> it will apply easily without much pain to 3.5 also; only jp_full.patch
>>> is enough, and you can also tune SPIN_THRESHOLD to 4k or so for the
>>> experiment).
>>>
>>> hope this helps
>>> Regards,
>>> Raghu
>>>
>>>> Hello Raghavendra,
>>>>
>>>> I found your pv ticket spinlock V8 work on lkml, and I'm wondering
>>>> how I can get a direct patch file, so that I can apply it to my
>>>> source tree and try it out and benchmark it.
>>>>
>>>> Thanks
>>>> Jiannan
>>>>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html