Re: [PATCH] kvm: Nested KVM MMUs need PAE root too

Hello

On 26. 08. 19 14:16, Vitaly Kuznetsov wrote:
> Jiří Paleček <jpalecek@xxxxxx> writes:


>> The reason for this is that when setting up nested KVM instance,
>> arch.mmu is set to &arch.guest_mmu (while normally, it would be
>> &arch.root_mmu).
> MMU switch to guest_mmu happens when we're about to start an L2 guest,
> and we switch back to root_mmu when we want to execute L1 again (e.g.
> after a vmexit), so this is not a one-time thing ('when setting up
> nested KVM instance' makes me think so).
OK, I'll rephrase it.

>> However, the initialization and allocation of
>> pae_root only creates it in root_mmu. KVM code (ie. in
>> mmu_alloc_shadow_roots) then accesses arch.mmu->pae_root, which is the
>> unallocated arch.guest_mmu->pae_root.
> Fixes: 14c07ad89f4d ("x86/kvm/mmu: introduce guest_mmu")
Thanks
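
To make the failure mode a bit more concrete, here is a rough sketch of the
path that trips over the missing allocation. It is paraphrased from what
mmu_alloc_shadow_roots() does for 32-bit PAE shadow roots, not the exact
mmu.c code, and the helper name is made up purely for illustration:

	/* Sketch only: simplified shadow-root setup for PT32E_ROOT_LEVEL. */
	static void pae_root_walk_sketch(struct kvm_vcpu *vcpu)
	{
		struct kvm_mmu *mmu = vcpu->arch.mmu; /* &arch.guest_mmu while L2 runs */
		int i;

		if (mmu->shadow_root_level != PT32E_ROOT_LEVEL)
			return;

		for (i = 0; i < 4; ++i) {
			/*
			 * pae_root was only ever allocated for root_mmu; for
			 * guest_mmu it is still NULL, so this dereference oopses.
			 */
			hpa_t root = mmu->pae_root[i];

			if (VALID_PAGE(root))
				continue;
			/* ...allocate a shadow page and store it in pae_root[i]... */
		}

		mmu->root_hpa = __pa(mmu->pae_root);
	}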
>> This fix just allocates (and frees) pae_root in both guest_mmu and
>> root_mmu (and also lm_root if it was allocated). The allocation is
>> subject to previous restrictions ie. it won't allocate anything on
>> 64-bit and AFAIK not on Intel.
> Right, it is only NPT 32 bit which uses that (and it's definitely
> undertested).
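
For context before the hunk below: its call sites imply that alloc_mmu_pages()
and free_mmu_pages() now take the struct kvm_mmu they operate on instead of
implicitly touching root_mmu. A hedged sketch of what those helpers would look
like (the actual patch may differ in detail, and the pre-existing restriction
on when anything gets allocated at all is elided here):

	static int alloc_mmu_pages(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
	{
		struct page *page;
		int i;

		/*
		 * The PAE PDP table has to live below 4GB because CR3 is only
		 * 32 bits wide in this mode, hence __GFP_DMA32.
		 */
		page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32);
		if (!page)
			return -ENOMEM;

		mmu->pae_root = page_address(page);
		for (i = 0; i < 4; ++i)
			mmu->pae_root[i] = INVALID_PAGE;

		return 0;
	}

	static void free_mmu_pages(struct kvm_mmu *mmu)
	{
		/* free_page() copes with NULL, so never-allocated roots are fine. */
		free_page((unsigned long)mmu->pae_root);
		free_page((unsigned long)mmu->lm_root);
	}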

>> See bug 203923 for details.
> Personally, I'd prefer this to be a full link,
> https://bugzilla.kernel.org/show_bug.cgi?id=203923
> as there are multiple bugzillas out there.
Will do
>> Signed-off-by: Jiri Palecek <jpalecek@xxxxxx>
>> Tested-by: Jiri Palecek <jpalecek@xxxxxx>
> This is weird. I always thought "Signed-off-by" implies some form of
> testing (unless stated otherwise) :-)
Well, I thought it was quite common that someone authors a patch but
doesn't have the means to test it. Anyway, after reading Kernel Newbies, I
added that to indicate that I did test it, and if there's a need to test
anything reasonably sized on this particular configuration, I'm open to it.


>> ---
>>  arch/x86/kvm/mmu.c | 30 ++++++++++++++++++++++--------
>>  1 file changed, 22 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 24843cf49579..efa8285bb56d 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -5646,7 +5647,19 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
>>  		vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
>>
>>  	vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;
>> -	return alloc_mmu_pages(vcpu);
>> +
>> +	ret = alloc_mmu_pages(vcpu, &vcpu->arch.guest_mmu);
>> +	if (ret)
>> +		return ret;
>> +
>> +	ret = alloc_mmu_pages(vcpu, &vcpu->arch.root_mmu);
>> +	if (ret)
>> +		goto fail_allocate_root;
> (personal preference) Here you're just jumping over 'return', so I'd
> prefer this to be written as:
>
>         ret = alloc_mmu_pages(vcpu, &vcpu->arch.root_mmu);
>         if (!ret)
>             return 0;
>
>         free_mmu_pages(&vcpu->arch.guest_mmu);
>         return ret;
>
> and no label/goto required.
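
For reference, the tail of kvm_mmu_create() with that suggestion folded in
would read roughly as follows (just the hunk above combined with the
early-return style, so no new labels are needed):

	vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;

	ret = alloc_mmu_pages(vcpu, &vcpu->arch.guest_mmu);
	if (ret)
		return ret;

	ret = alloc_mmu_pages(vcpu, &vcpu->arch.root_mmu);
	if (!ret)
		return 0;

	free_mmu_pages(&vcpu->arch.guest_mmu);
	return ret;
	}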

OK, I'll change it. However, I like Sean Christopherson's solution more.


Regards

    Jiri Palecek






