On 2014-03-07 12:42, Paolo Bonzini wrote:
> Alex Williamson reported that a Windows game does something weird that
> makes the guest save and restore debug registers on each context switch.
> This causes several hundred thousand vmexits per second, and basically
> cuts performance in half when running under KVM.
> 
> However, when not running in guest-debug mode, the guest controls the
> debug registers and having to take an exit for each DR access is a waste
> of time. We just need one vmexit to load any stale values of DR0-DR6,
> and then we can let the guest run freely. On the next vmexit (whatever
> the reason) we will read out whatever changes the guest made to the
> debug registers.
> 
> Tested with x86/debug.flat on both Intel and AMD, both direct and
> nested virtualization.
> 
> Changes from RFC: changed get_dr7 callback to sync_dirty_debug_regs,
> new patches 5-7.

This looks good to me now from the KVM perspective.

I was just wondering how the case is handled where the host uses debug
registers on the thread that runs a VCPU. What if I set a hw breakpoint
on its userspace path, e.g.? What if I debug the kernel side with kgdb?

Jan

> 
> Paolo Bonzini (7):
>   KVM: vmx: we do rely on loading DR7 on entry
>   KVM: x86: change vcpu->arch.switch_db_regs to a bit mask
>   KVM: x86: Allow the guest to run with dirty debug registers
>   KVM: vmx: Allow the guest to run with dirty debug registers
>   KVM: nVMX: Allow nested guests to run with dirty debug registers
>   KVM: svm: set/clear all DR intercepts in one swoop
>   KVM: svm: Allow the guest to run with dirty debug registers
> 
>  arch/x86/include/asm/kvm_host.h |  8 ++++-
>  arch/x86/kvm/svm.c              | 68 ++++++++++++++++++++++++++++-------------
>  arch/x86/kvm/vmx.c              | 43 ++++++++++++++++++++++++--
>  arch/x86/kvm/x86.c              | 20 +++++++++++-
>  4 files changed, 114 insertions(+), 25 deletions(-)
> 
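Below is a minimal, self-contained C sketch of the lazy scheme the cover
letter describes: one exit to hand DR0-DR6 to the guest, then a sync-back on
the next exit for any reason. It only models the idea and is not the patch
code; all names (vcpu_dbg, DR_DIRTY_WONT_EXIT, handle_dr_access_exit,
on_any_vmexit) are hypothetical.

    /*
     * Illustrative sketch only, not the actual patch code: it models the lazy
     * debug-register handling described in the cover letter.  All names here
     * are hypothetical.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define DR_DIRTY_WONT_EXIT  (1u << 0)  /* guest owns DR0-DR6, intercepts off */

    struct vcpu_dbg {
            unsigned int dirty;     /* bit mask, in the spirit of switch_db_regs */
            bool intercept_dr;      /* do DR accesses currently cause a vmexit? */
    };

    /* First DR access by the guest: take one vmexit, push any stale shadow
     * values of DR0-DR6 into hardware, then drop the intercepts so further
     * accesses run at native speed. */
    static void handle_dr_access_exit(struct vcpu_dbg *v)
    {
            printf("load DR0-DR3 and DR6 from shadow, clear DR intercepts\n");
            v->intercept_dr = false;
            v->dirty |= DR_DIRTY_WONT_EXIT;
    }

    /* Any later vmexit, whatever the reason: if the guest ran with dirty debug
     * registers, read back whatever it changed and re-arm the intercepts. */
    static void on_any_vmexit(struct vcpu_dbg *v)
    {
            if (v->dirty & DR_DIRTY_WONT_EXIT) {
                    printf("read DR0-DR3 and DR6 back into shadow, set DR intercepts\n");
                    v->intercept_dr = true;
                    v->dirty &= ~DR_DIRTY_WONT_EXIT;
            }
    }

    int main(void)
    {
            struct vcpu_dbg v = { .dirty = 0, .intercept_dr = true };

            handle_dr_access_exit(&v);   /* the single exit per burst of DR writes */
            on_any_vmexit(&v);           /* next exit syncs the state back */
            return 0;
    }

The point of the dirty bit is that the cost is one vmexit per burst of DR
activity rather than one per DR access; everything in between runs without
intercepts.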