Re: [RFC PATCH v2 1/1] kvm: Add documentation and ABI/API header for VM introspection

On Tue, 2017-07-18 at 14:51 +0300, Mihai Donțu wrote:
> On Thu, 2017-07-13 at 09:32 +0200, Paolo Bonzini wrote:
> > On 13/07/2017 07:57, Mihai Donțu wrote:
> > > > Actually it makes more sense for SKIP, I think, where the introspector
> > > > is actually doing emulation?
> > > 
> > > I'm afraid I don't understand your question. However, we were
> > > looking at using KVM's x86 emulator rather than putting together
> > > our own, as such software might be fun to write but takes a very
> > > long time to get right. I'd argue that KVM's emulator has already
> > > seen a lot of coverage.
> > 
> > Of course!  But there could be some special cases (e.g. hypercalls)
> > where you do emulation on your own.  In that case, KVMI_SET_REGS + SKIP
> > is the right thing to do.
> 
> I think I finally understand what you're saying: SKIP would tell the
> introspection subsystem to just write back the registers and re-enter
> the guest, with no in-host emulation needed. So, to reiterate, the
> possible actions would be:
> 
>  * SKIP - re-enter the guest (the introspection tool has adjusted all
>    registers)
>  * RETRY - re-enter the guest
>  * ALLOW - use the emulator
>  * CRASH - kill the guest process
> 
> It seems that SKIP requires a variant of KVMI_SET_REGS (_FULL?) that
> sets all registers that might have been affected by the emulation
> (control, MSRs, MMX/SSE/AVX). I guess there can be a use case for
> that. It also looks like it's identical to KVMI_SET_REGS_FULL + RETRY.

I mentioned some statistics in a previous email. The figures below are
a sample of them:

eventCount: 13.2 eventsMemAccess: 12.4 eventsWriteCtrlReg: 0.8 partialContext: 0.2 partialCpu: 0.2 xcGetMemAccess: 116 xcMapPage: 191.4 xcSetMemAccessMulti: 1.6 
eventCount: 4261.8 eventsBreakPoint: 4170.2 eventsMemAccess: 84.4 eventsWriteCtrlReg: 7.2 setRegisters: 4170.2 xcMapPage: 3.4 xcSetMemAccessMulti: 0.2 
eventCount: 8098.6 eventsBreakPoint: 4783.4 eventsMemAccess: 3302.4 eventsWriteCtrlReg: 12.8 setRegisters: 4783.4 xcMapPage: 2.6 
eventCount: 3495 eventsBreakPoint: 2631.6 eventsMemAccess: 846.6 eventsWriteCtrlReg: 16.8 setRegisters: 2631.6 xcMapPage: 2.6 xcSetMemAccessMulti: 0.4 
eventCount: 538.4 eventsBreakPoint: 248 eventsMemAccess: 279.2 eventsWriteCtrlReg: 11.2 setRegisters: 248 xcMapPage: 1.2 
eventCount: 4876.2 eventsBreakPoint: 3683.4 eventsMemAccess: 1172.8 eventsWriteCtrlReg: 20 setRegisters: 3683.4 xcMapPage: 4.8 
eventCount: 5937.4 eventsBreakPoint: 4403.8 eventsMemAccess: 1507.2 eventsWriteCtrlReg: 26.4 setRegisters: 4403.8 xcMapPage: 4.8 
eventCount: 9992.4 eventsBreakPoint: 7948.6 eventsMemAccess: 2019.8 eventsWriteCtrlReg: 24 setRegisters: 7948.6 xcMapPage: 1.4 xcSetMemAccessMulti: 5 
eventCount: 5150.6 eventsBreakPoint: 2175 eventsMemAccess: 2902.8 eventsWriteCtrlReg: 72.8 setRegisters: 2175 xcGetMemAccess: 0.4 xcMapPage: 8 xcSetMemAccessMulti: 1 
eventCount: 5422.2 eventsBreakPoint: 4362.2 eventsMemAccess: 1012.8 eventsWriteCtrlReg: 47.2 setRegisters: 4362.2 xcGetMemAccess: 2.8 xcMapPage: 10.2 xcSetMemAccessMulti: 3.4 
eventCount: 1910.2 eventsBreakPoint: 1665.6 eventsMemAccess: 231.8 eventsWriteCtrlReg: 12.8 setRegisters: 1665.6 xcGetMemAccess: 0.2 xcMapPage: 2.2 xcSetMemAccessMulti: 0.6 
eventCount: 1834.4 eventsBreakPoint: 1357.6 eventsMemAccess: 462.4 eventsWriteCtrlReg: 14.4 setRegisters: 1357.6 xcGetMemAccess: 0.2 xcMapPage: 4.8 xcSetMemAccessMulti: 0.4 
eventCount: 6081.2 eventsBreakPoint: 4855.6 eventsMemAccess: 1208.8 eventsWriteCtrlReg: 16.8 setRegisters: 4855.6 xcMapPage: 4 
eventCount: 1105.4 eventsBreakPoint: 855 eventsMemAccess: 226.4 eventsWriteCtrlReg: 24 setRegisters: 855 xcMapPage: 1.6 
eventCount: 8362.8 eventsBreakPoint: 4409.2 eventsMemAccess: 3917.6 eventsWriteCtrlReg: 36 setRegisters: 4409.2 xcGetMemAccess: 117.4 xcMapPage: 9 xcSetMemAccessMulti: 254.6 
eventCount: 2222.2 eventsBreakPoint: 32.2 eventsMemAccess: 2169.2 eventsWriteCtrlReg: 20.8 setRegisters: 32.2 xcGetMemAccess: 2.8 xcMapPage: 5 xcSetMemAccessMulti: 104.4 
eventCount: 2889.2 eventsBreakPoint: 1447.8 eventsMemAccess: 1419.8 eventsWriteCtrlReg: 21.6 partialContext: 0.8 partialCpu: 0.8 setRegisters: 1447.8 xcGetMemAccess: 1.4 xcMapPage: 16.6 xcSetMemAccessMulti: 2.2 
eventCount: 1698.8 eventsBreakPoint: 1031.8 eventsMemAccess: 655 eventsWriteCtrlReg: 12 setRegisters: 1031.8 xcGetMemAccess: 0.6 xcMapPage: 5 xcSetMemAccessMulti: 0.8 
eventCount: 691.8 eventsBreakPoint: 250.4 eventsMemAccess: 435.8 eventsWriteCtrlReg: 5.6 setRegisters: 250.4 xcGetMemAccess: 0.4 xcMapPage: 4 xcSetMemAccessMulti: 0.4 
eventCount: 756.8 eventsBreakPoint: 293.2 eventsMemAccess: 454.8 eventsWriteCtrlReg: 8.8 setRegisters: 293.2 xcGetMemAccess: 1.4 xcMapPage: 3.4 xcSetMemAccessMulti: 1.4 

These represent the number of events/calls per second during a normal
introspection session (start Windows 10 x64, open Edge). The breakpoint
events correspond to the various API calls invoked and, as can be seen,
for each of them we do a 'setRegisters'. We were hoping to reduce the
overhead a bit by bundling KVMI_SET_REGISTERS with the event response.
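
Something along these lines is what I have in mind (hypothetical names
and layout, not a proposal for the final ABI): the reply can optionally
carry the registers, so a breakpoint event needs a single message
instead of KVMI_SET_REGISTERS followed by the reply:

#include <linux/types.h>
#include <asm/kvm.h>		/* struct kvm_regs */

/*
 * Hypothetical layout only: an event reply that can carry the general
 * purpose registers, so handling a breakpoint event takes one message
 * instead of a KVMI_SET_REGISTERS round-trip plus the reply.
 */
struct kvmi_event_reply_regs {
	__u16 vcpu;
	__u16 action;			/* SKIP/RETRY/ALLOW/CRASH */
	__u16 flags;
#define KVMI_REPLY_SET_REGS	(1 << 0)	/* 'regs' below is valid */
	__u16 padding;
	struct kvm_regs regs;		/* written back only if flagged */
};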

If I haven't managed to convince you, we can go ahead and keep them
separate, do an initial implementation, and look at some actual
performance numbers. Should be no hassle. :-)

> > > In the future we are looking at maybe moving away from it on Intel
> > > CPUs, by way of VMFUNC and #VE.
> > > 
> > > > But why is KVMI_SET_REGS slower than a set regs command followed by an
> > > > action?
> > > 
> > > To be honest, we just looked at the Xen implementation, which gates
> > > writing the registers back to the VMCS on them actually having been
> > > changed.
> > 
> > That would be possible on KVMI too.  Just don't do the KVMI_SET_REGS
> > unless the registers have changed.
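
Agreed. On the tool side that would look roughly like the sketch below;
kvmi_set_regs() is just a placeholder for whatever command the final
API ends up exposing:

#include <string.h>
#include <asm/kvm.h>		/* struct kvm_regs */

/* Placeholder for the real set-registers command. */
int kvmi_set_regs(int fd, unsigned int vcpu, const struct kvm_regs *regs);

/*
 * Only push the registers back if the event handler actually changed
 * them, mirroring what the Xen-based implementation does today.
 */
static int maybe_set_regs(int fd, unsigned int vcpu,
			  const struct kvm_regs *orig,
			  const struct kvm_regs *modified)
{
	if (!memcmp(orig, modified, sizeof(*orig)))
		return 0;	/* unchanged, skip the extra round-trip */

	return kvmi_set_regs(fd, vcpu, modified);
}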

-- 
Mihai Donțu



