On 24/10/2017 23:50, geoff@xxxxxxxxxxxxxxx wrote:
> In svm.c, by just changing the line in `init_vmcb` that reads:
>
>     save->g_pat = svm->vcpu.arch.pat;
>
> to:
>
>     save->g_pat = 0x0606060606060606;
>
> the problem is resolved. From what I understand, this is setting an
> MTRR value that enables Write Back (WB).

That's cool, you certainly are onto something. Currently, SVM is
disregarding the guest PAT setting, leaving it at the default
(PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-, PA3=PA7=UC). The guest might be
using a different setting, so you're getting slow accesses (UC- or UC,
i.e. uncacheable) instead of fast accesses (WB or WC, respectively
write-back and write-combining).

It would be great if you could proceed with the following tests:

1) See if this patch has any effect:

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index af256b786a70..b2e4b912f053 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3626,6 +3626,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	u32 ecx = msr->index;
 	u64 data = msr->data;
 
 	switch (ecx) {
+	case MSR_IA32_CR_PAT:
+		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+			return 1;
+		vcpu->arch.pat = data;
+		svm->vmcb->save.g_pat = data;
+		break;
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr);
 		break;

2) If it doesn't, add a printk("%#016llx\n", data); to the new case and
get the last value written by the guest. Hard-code that value in the
"save->g_pat = ..." line where you've been using 0x0606060606060606
successfully, and test that things work (though they should still be
slow).

3) Starting from the rightmost (least significant) byte, change one
byte at a time to 0x06 and test whether things get fast. For each byte
you change, take a note of the full value and whether things are slow
or fast.

Thank you very much!

Paolo
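
For reference, each of the eight PAT entries occupies one byte of the
MSR, with PA0 in the least significant byte; the architectural
encodings are 0x00=UC, 0x01=WC, 0x04=WT, 0x05=WP, 0x06=WB, 0x07=UC-.
Below is a minimal user-space sketch (not kernel code, and not part of
the patch above) that decodes a PAT value and shows why
0x0606060606060606 means "every entry is WB":

#include <stdint.h>
#include <stdio.h>

/* Architectural PAT memory-type encodings. */
static const char *pat_type(uint8_t v)
{
	switch (v) {
	case 0x00: return "UC";
	case 0x01: return "WC";
	case 0x04: return "WT";
	case 0x05: return "WP";
	case 0x06: return "WB";
	case 0x07: return "UC-";
	default:   return "reserved";
	}
}

static void decode_pat(uint64_t pat)
{
	/* Entry PAi lives in byte i of the MSR. */
	for (int i = 0; i < 8; i++)
		printf("PA%d = %s\n", i, pat_type((pat >> (8 * i)) & 0xff));
}

int main(void)
{
	decode_pat(0x0007040600070406ULL);	/* power-on default PAT */
	decode_pat(0x0606060606060606ULL);	/* the all-WB workaround */
	return 0;
}

Decoding the default value prints WB/WT/UC-/UC twice over, matching
the PA0..PA7 layout described above.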
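
In the same spirit, here is a sketch of the byte-by-byte walk from
step 3; the starting value 0x0007040600070406 is only a stand-in for
whatever the printk from step 2 actually reports:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Stand-in for the guest-written value captured in step 2. */
	uint64_t pat = 0x0007040600070406ULL;

	/*
	 * Force one more byte to 0x06 (WB) per iteration, starting at
	 * the least significant byte. Each printed value is one
	 * candidate to hard-code into the "save->g_pat = ..." line.
	 */
	for (int i = 0; i < 8; i++) {
		pat &= ~(UINT64_C(0xff) << (8 * i));
		pat |= UINT64_C(0x06) << (8 * i);
		printf("test %d: 0x%016" PRIx64 "\n", i + 1, pat);
	}
	return 0;
}

The last iteration produces 0x0606060606060606 itself, so if none of
the intermediate values turns out fast, the walk ends at the known
working workaround.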