On Mon, Jun 6, 2016 at 5:45 AM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
>
> On 04/06/2016 02:42, Yunhong Jiang wrote:
>> It adds a little bit of latency for each VM-entry because we need to
>> set up the preemption timer each time.
>
> Really it doesn't according to your tests:
>
>> 1. enable_hv_timer=Y.
>>
>> 000004 002174
>> 000005 042961
>> 000006 479383
>> 000007 071123
>> 000008 003720
>>
>> 2. enable_hv_timer=N.
>>
>> # Histogram
>> ......
>> 000005 000042
>> 000006 000772
>> 000007 008262
>> 000008 200759
>> 000009 381126
>> 000010 008056
>
> So perhaps you can replace that paragraph with "The benefits offset the
> small extra work to do on each VM-entry to set up the preemption timer".
>
> I'll play with this patch and kvm-unit-tests in the next few days.

Let me know how this goes, especially vmexit.c with enable_hv_timer=Y.
It's turning out to be non-trivial to get this patchset into a kernel
that works with my test setup, but if you find any regressions I can
spend some more time getting it working.

> David, it would be great if you could also try this on your
> message-passing benchmarks (e.g. TCP_RR). On the one hand they are
> heavy on vmexits; on the other hand they also have many expensive TSC
> deadline WRMSRs. I have requested a few small changes, but I am very
> happy with the logic and the vmentry cost.
>
> Thanks,
>
> Paolo
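
For context, the "small extra work" per VM-entry under discussion is
essentially one subtraction, one shift, and one 32-bit VMCS write. The
VMX preemption timer counts down at the TSC rate divided by 2^X, where
X comes from IA32_VMX_MISC bits 4:0, so arming it from a TSC deadline
is cheap. Below is a minimal, self-contained userspace sketch of that
conversion; it is not the actual patch, and the shift value plus the
read_tsc()/vmcs_write_timer() stand-ins (for rdtsc() and a VMCS write)
are assumptions made for illustration.

#include <stdint.h>
#include <stdio.h>

/* Assumed ratio for illustration; real value is IA32_VMX_MISC[4:0]. */
#define PREEMPTION_TIMER_SHIFT 5

/* Stand-in for reading the host TSC (rdtsc). */
static uint64_t read_tsc(void)
{
	return 1000000ull;
}

/* Stand-in for vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, val). */
static void vmcs_write_timer(uint32_t val)
{
	printf("VMX_PREEMPTION_TIMER_VALUE <- %u\n", val);
}

/* Arm the preemption timer so it fires at deadline_tsc. */
static void arm_hv_timer(uint64_t deadline_tsc)
{
	uint64_t now = read_tsc();
	uint32_t delta = 0;

	/* Convert the remaining TSC delta to preemption-timer ticks. */
	if (deadline_tsc > now)
		delta = (uint32_t)((deadline_tsc - now) >>
				   PREEMPTION_TIMER_SHIFT);

	/*
	 * A value of 0 makes the timer expire immediately after
	 * VM-entry, so a deadline already in the past still produces
	 * a prompt VM-exit.
	 */
	vmcs_write_timer(delta);
}

int main(void)
{
	/* A deadline 50000 timer ticks in the future. */
	arm_hv_timer(1000000ull + (50000ull << PREEMPTION_TIMER_SHIFT));
	return 0;
}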