Re: Seeking a KVM benchmark

On Mon, Nov 10, 2014 at 2:45 AM, Gleb Natapov <gleb@xxxxxxxxxx> wrote:
> On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 09/11/2014 17:36, Andy Lutomirski wrote:
>> >> The purpose of vmexit test is to show us various overheads, so why not
>> >> measure EFER switch overhead by having two tests one with equal EFER
>> >> another with different EFER, instead of hiding it.
>> >
>> > I'll try this.  We might need three tests, though: NX different, NX
>> > same but SCE different, and all flags the same.
>>
>> The test actually explicitly enables NX in order to put itself in the
>> "common case":
>>
>> commit 82d4ccb9daf67885a0316b1d763ce5ace57cff36
>> Author: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
>> Date:   Tue Jun 8 15:33:29 2010 -0300
>>
>>     test: vmexit: enable NX
>>
>>     Enable NX to disable MSR autoload/save. This is the common case anyway.
>>
>>     Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
>>     Signed-off-by: Avi Kivity <avi@xxxxxxxxxx>
>>
>> (this commit is in qemu-kvm.git), so I guess forgetting to set SCE is
>> just a bug.  The results on my Xeon Sandy Bridge are very interesting:
>>
>> NX different            ~11.5k (load/save EFER path)
>> NX same, SCE different  ~19.5k (urn path)
>> all flags the same      ~10.2k
>>
>> The inl_from_kernel results show essentially no change, usually at most 5
>> cycles of difference.  This could be because I've added the SCE=1 variant
>> directly to vmexit.c, so I'm running the tests back to back.
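
For reference, the guest-side setup for these variants is only a couple of
EFER bit writes; a minimal sketch, assuming the test library's rdmsr/wrmsr
helpers and the usual MSR_EFER/EFER_NX/EFER_SCE definitions:

        /* Put the guest EFER in the "common case": NX and SCE both set,
         * matching a typical 64-bit Linux host, so neither the MSR autoload
         * area nor the user return notifier has any work to do. */
        static void enable_nx_sce(void)
        {
                wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_NX | EFER_SCE);
        }

The "NX different" and "SCE different" variants simply leave the
corresponding bit clear before running the exit loop.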
>>
>> I also tried making the other shared MSRs the same between guest and
>> host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
>> has nothing to do.  That saves about 400-500 cycles on inl_from_qemu.  I
>> do want to dig out my old Core 2 and see how the new test fares, but it
>> really looks like your patch will be in 3.19.
>>
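
The reason equal values leave the notifier with nothing to do is that
kvm_set_shared_msr() only touches hardware when the cached value actually
changes; a simplified sketch of the arch/x86/kvm/x86.c logic (not verbatim):

        void kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
        {
                struct kvm_shared_msrs *smsr = this_cpu_ptr(shared_msrs);

                /* Guest value already matches what the CPU has loaded:
                 * skip the wrmsr and never register the notifier. */
                if (((value ^ smsr->values[slot].curr) & mask) == 0)
                        return;
                smsr->values[slot].curr = value;
                wrmsrl(shared_msrs_global.msrs[slot], value);

                if (!smsr->registered) {
                        smsr->urn.on_user_return = kvm_on_user_return;
                        user_return_notifier_register(&smsr->urn);
                        smsr->registered = true;
                }
        }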
> Please test on a wide variety of HW before the final decision. It would
> also be nice to ask Intel what the expected overhead is. It would be
> awesome if they managed to add EFER switching with no measurable overhead,
> but that is also hard to believe :) Also, Andy had an idea to disable
> switching in case host and guest EFERs are the same, but IIRC his patch
> does not include it yet.

I'll send that patch as a followup in a sec.  It doesn't seem to make
a difference, which reinforces my hypothesis that the microcode is
fiddling with EFER on entry and exit anyway to handle LME and LMA,
so adjusting the other bits doesn't affect performance.
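
For concreteness, the followup just skips EFER switching when the two
images already match; a sketch of the idea (assuming vmx.c's existing
add_atomic_switch_msr/clear_atomic_switch_msr helpers, not the actual patch):

        if (guest_efer == host_efer) {
                /* Identical EFER images: drop the autoload slot so the
                 * entry/exit path does no EFER work at all. */
                clear_atomic_switch_msr(vmx, MSR_EFER);
        } else {
                add_atomic_switch_msr(vmx, MSR_EFER, guest_efer, host_efer);
        }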

--Andy

>
> --
>                         Gleb.



-- 
Andy Lutomirski
AMA Capital Management, LLC



