On Sun, Nov 9, 2014 at 12:52 AM, Gleb Natapov <gleb@xxxxxxxxxx> wrote:
> On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
>> On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>> > On Nov 8, 2014 4:01 AM, "Gleb Natapov" <gleb@xxxxxxxxxx> wrote:
>> >>
>> >> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
>> >> > On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>> >> > >
>> >> > >
>> >> > > On 07/11/2014 07:27, Andy Lutomirski wrote:
>> >> > >> Is there an easy benchmark that's sensitive to the time it takes to
>> >> > >> round-trip from userspace to guest and back to userspace?  I think I
>> >> > >> may have a big speedup.
>> >> > >
>> >> > > The simplest is vmexit.flat from
>> >> > > git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
>> >> > >
>> >> > > Run it with "x86/run x86/vmexit.flat" and look at the inl_from_qemu
>> >> > > benchmark.
>> >> >
>> >> > Thanks!
>> >> >
>> >> > That test case is slower than I expected.  I think my change is likely
>> >> > to save somewhat under 100ns, which is only a couple percent.  I'll
>> >> > look for more impressive improvements.
>> >> >
>> >> > On a barely related note, in the process of poking around with this
>> >> > test, I noticed:
>> >> >
>> >> >     /* On ept, can't emulate nx, and must switch nx atomically */
>> >> >     if (enable_ept && ((vmx->vcpu.arch.efer ^ host_efer) & EFER_NX)) {
>> >> >         guest_efer = vmx->vcpu.arch.efer;
>> >> >         if (!(guest_efer & EFER_LMA))
>> >> >             guest_efer &= ~EFER_LME;
>> >> >         add_atomic_switch_msr(vmx, MSR_EFER, guest_efer, host_efer);
>> >> >         return false;
>> >> >     }
>> >> >
>> >> >     return true;
>> >> >
>> >> > This heuristic seems wrong to me.  wrmsr is serializing and therefore
>> >> > extremely slow, whereas I imagine that, on CPUs that support it,
>> >> > atomically switching EFER ought to be reasonably fast.
>> >> >
>> >> > Indeed, changing vmexit.c to disable NX (thereby forcing atomic EFER
>> >> > switching, and having no other relevant effect that I've thought of)
>> >> > speeds up inl_from_qemu by ~30% on Sandy Bridge.  Would it make sense
>> >> > to always use atomic EFER switching, at least when
>> >> > cpu_has_load_ia32_efer?
>> >> >
>> >> The idea behind the current logic is that we want to avoid writing an MSR
>> >> at all for lightweight exits (those that do not exit to userspace).  So
>> >> if the NX bit is the same for host and guest, we can avoid writing EFER
>> >> on exit and run with the guest's EFER in the kernel.  Only when a
>> >> userspace exit is required do we write the host's MSR back, and only if
>> >> the guest and host MSRs differ, of course.  What bit should be restored
>> >> on userspace exit in the vmexit tests?  Is it SCE?  What if you set it
>> >> instead of unsetting NXE?
>> >
>> > I don't understand.  AFAICT there are really only two cases: EFER
>> > switched atomically using the best available mechanism on the host
>> > CPU, or EFER switched on userspace exit.  I think there's a
>> > theoretical third possibility: if the guest and host EFER match, then
>> > EFER doesn't need to be switched at all, but this doesn't seem to be
>> > implemented.
>>
>> I got this part wrong.  It looks like the user return notifier is
>> smart enough not to set EFER at all if the guest and host values
>> match.  Indeed, with stock KVM, if I modify vmexit.c to have exactly
>> the same EFER as the host (NX and SCE both set), then it runs quickly.
>> But I get almost exactly the same performance if NX is clear, which is
>> the case where the built-in entry/exit switching is used.
>>
> What's the performance difference?

Negative.  That is, switching EFER atomically was faster than not
switching it at all.  But this could just be noise.

Here are the numbers comparing the status quo (SCE cleared in vmexit.c,
so switch on user return) vs. switching atomically at entry/exit.
Sorry about the formatting.
Test                               Before     After    Change
cpuid                                2000      1932    -3.40%
vmcall                               1914      1817    -5.07%
mov_from_cr8                           13        13     0.00%
mov_to_cr8                             19        19     0.00%
inl_from_pmtimer                    19164     10619   -44.59%
inl_from_qemu                       15662     10302   -34.22%
inl_from_kernel                      3916      3802    -2.91%
outl_to_kernel                       2230      2194    -1.61%
mov_dr                                172       176     2.33%
ipi                             (skipped) (skipped)
ipi+halt                        (skipped) (skipped)
ple-round-robin                        13        13     0.00%
wr_tsc_adjust_msr                    1920      1845    -3.91%
rd_tsc_adjust_msr                    1892      1814    -4.12%
mmio-no-eventfd:pci-mem             16394     11165   -31.90%
mmio-wildcard-eventfd:pci-mem        4607      4645     0.82%
mmio-datamatch-eventfd:pci-mem       4601      4610     0.20%
portio-no-eventfd:pci-io            11507      7942   -30.98%
portio-wildcard-eventfd:pci-io       2239      2225    -0.63%
portio-datamatch-eventfd:pci-io      2250      2234    -0.71%

The tiny differences for the non-userspace exits could be just noise,
or CPU temperature at the time, or anything else.

>
>> Admittedly, most guests probably do match the host, so this effect may
>> be rare in practice.  But possibly the code should be changed either
>> the way I patched it (always use the built-in switching if available)
>> or to only do it if the guest and host EFER values differ.  ISTM that,
>> on modern CPUs, switching EFER on return to userspace is always a big
>> loss.
>
> We should be careful not to optimise for the wrong case.  In the common
> case userspace exits are extremely rare.  Try to trace common workloads
> with a Linux guest.  Windows as a guest has its share of userspace
> exits, but this is due to the lack of PV timer support (was it fixed
> already?).  So if switching EFER has measurable overhead, doing it on
> each exit is a net loss.
>
>> If neither change is made, then maybe the test should change to set
>> SCE so that it isn't so misleadingly slow.
>>
> The purpose of the vmexit test is to show us various overheads, so why
> not measure the EFER switch overhead by having two tests, one with
> equal EFER and another with different EFER, instead of hiding it.
>

I'll try this.
We might need three tests, though: NX different, NX same but SCE
different, and all flags the same.

--Andy
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html