Re: [PATCH 01/16] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures

On Tue, 07 May 2024 07:17:13 +0100,
Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> 
> Hey Marc,
> 
> On Tue, Apr 09, 2024 at 06:54:33PM +0100, Marc Zyngier wrote:
> > +static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
> > +{
> > +	return !(mmu->tlb_vttbr & 1);
> > +}
> 
> More readable if you use VTTBR_CNP_BIT here.

Yes, well spotted.
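
Will turn it into something like this (assuming the VTTBR_CNP_BIT
definition from asm/kvm_arm.h):

	static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
	{
		return !(mmu->tlb_vttbr & VTTBR_CNP_BIT);
	}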

[...]

> > +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
> > +{
> > +	int cpu, err;
> > +	struct kvm_pgtable *pgt;
> > +
> > +	/*
> > +	 * If we already have our page tables in place, and the
> > +	 * MMU context is the canonical one, we have a bug somewhere,
> > +	 * as this is only supposed to ever happen once per VM.
> > +	 *
> > +	 * Otherwise, we're building nested page tables, and that's
> > +	 * probably because userspace called KVM_ARM_VCPU_INIT more
> > +	 * than once on the same vcpu. Since that's actually legal,
> > +	 * don't kick up a fuss and leave gracefully.
> > +	 */
> >  	if (mmu->pgt != NULL) {
> > +		if (&kvm->arch.mmu != mmu)
> 
> A helper might be a good idea, I see this repeated several times:
> 
> static inline bool kvm_is_nested_s2_mmu(struct kvm_s2_mmu *mmu)
> {
> 	return &mmu->arch->mmu != mmu;
> }

Yeah, I can probably fit something like this in a number of spots.
Just need to be careful as mmu is not initialised at all in some
contexts.
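
Probably something like this, taking the kvm pointer explicitly so
that it doesn't depend on mmu->arch being populated yet (sketch only,
keeping your suggested name):

	static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm,
						struct kvm_s2_mmu *mmu)
	{
		return &kvm->arch.mmu != mmu;
	}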

> 
> > +			return 0;
> > +
> >  		kvm_err("kvm_arch already initialized?\n");
> >  		return -EINVAL;
> >  	}
> >  
> > +	/*
> > +	 * We only initialise the IPA range on the canonical MMU, so
> > +	 * the type is meaningless in all other situations.
> > +	 */
> > +	if (&kvm->arch.mmu != mmu)
> > +		type = kvm_get_pa_bits(kvm);
> 
> I'm not sure I follow this comment, because kvm_init_ipa_range() still
> gets called on nested MMUs. Is this suggesting that the configured IPA
> limit of the shadow MMUs doesn't matter as they can only ever map things
> in the canonical IPA space?

Yes, that's exactly what I meant. Just because we limit the IPA space
to some number of bits doesn't mean we can limit the guest's own S2 to
the same thing, because they mean different things:

- the canonical IPA space (aka type) is a contract between KVM and
  userspace on which ranges the MMIO exits are valid

- the nested IPA space is whatever the virtual HW exposes as PARange,
  and the only constraint is that the *output* of the nested IPA space
  must be contained in the canonical IPA space

Does this make sense? Happy to rework the comment to clarify this.
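
Maybe something along these lines for the comment (just a sketch):

	/*
	 * The canonical IPA space (the VM type) is a contract between
	 * KVM and userspace about which ranges can generate MMIO
	 * exits. A nested S2 instead spans whatever the virtual HW
	 * exposes as PARange, and only its *output* must fit in the
	 * canonical IPA space, hence forcing the type to the PA range
	 * for anything that isn't the canonical MMU.
	 */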

> 
> > +	err = kvm_init_ipa_range(mmu, type);
> > +	if (err)
> > +		return err;
> > +
> >  	pgt = kzalloc(sizeof(*pgt), GFP_KERNEL_ACCOUNT);
> >  	if (!pgt)
> >  		return -ENOMEM;
> > @@ -925,6 +960,10 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
> >  
> >  	mmu->pgt = pgt;
> >  	mmu->pgd_phys = __pa(pgt->pgd);
> > +
> > +	if (&kvm->arch.mmu != mmu)
> > +		kvm_init_nested_s2_mmu(mmu);
> > +
> >  	return 0;
> >  
> >  out_destroy_pgtable:
> > @@ -976,7 +1015,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
> >  
> >  		if (!(vma->vm_flags & VM_PFNMAP)) {
> >  			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
> > -			unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> > +			kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
> >  		}
> >  		hva = vm_end;
> >  	} while (hva < reg_end);
> > @@ -2054,11 +2093,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
> >  {
> >  }
> >  
> > -void kvm_arch_flush_shadow_all(struct kvm *kvm)
> > -{
> > -	kvm_uninit_stage2_mmu(kvm);
> > -}
> > -
> >  void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> >  				   struct kvm_memory_slot *slot)
> >  {
> > @@ -2066,7 +2100,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> >  	phys_addr_t size = slot->npages << PAGE_SHIFT;
> >  
> >  	write_lock(&kvm->mmu_lock);
> > -	unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> > +	kvm_unmap_stage2_range(&kvm->arch.mmu, gpa, size);
> >  	write_unlock(&kvm->mmu_lock);
> >  }
> >  
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index ced30c90521a..1f4f80a8c011 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -7,7 +7,9 @@
> >  #include <linux/kvm.h>
> >  #include <linux/kvm_host.h>
> >  
> > +#include <asm/kvm_arm.h>
> >  #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_mmu.h>
> >  #include <asm/kvm_nested.h>
> >  #include <asm/sysreg.h>
> >  
> > @@ -16,6 +18,209 @@
> >  /* Protection against the sysreg repainting madness... */
> >  #define NV_FTR(r, f)		ID_AA64##r##_EL1_##f
> >  
> > +void kvm_init_nested(struct kvm *kvm)
> > +{
> > +	kvm->arch.nested_mmus = NULL;
> > +	kvm->arch.nested_mmus_size = 0;
> > +}
> > +
> > +int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm *kvm = vcpu->kvm;
> > +	struct kvm_s2_mmu *tmp;
> > +	int num_mmus;
> > +	int ret = -ENOMEM;
> > +
> > +	if (!test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->kvm->arch.vcpu_features))
> > +		return 0;
> > +
> > +	if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
> > +		return -EINVAL;
> 
> nitpick: maybe guard the call to kvm_vcpu_init_nested() with
> vcpu_has_nv() and collapse these into
> 
> 	if (!vcpu_has_nv(vcpu))
> 		return -EINVAL;

Indeed, this is definitely old cruft we can get rid of. We don't even
need to error out, as there is a single call site.
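
i.e. something like:

	if (!vcpu_has_nv(vcpu))
		return 0;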

> 
> > +	/*
> > +	 * Let's treat memory allocation failures as benign: If we fail to
> > +	 * allocate anything, return an error and keep the allocated array
> > +	 * alive. Userspace may try to recover by initializing the vcpu
> > +	 * again, and there is no reason to affect the whole VM for this.
> > +	 */
> 
> This code feels a bit tricky, and I'm not sure much will be done to
> recover the VM in practice should this allocation / ioctl fail.

I think this is a question of consistency. We don't break the VM when
VCPU_INIT fails in any other case. But yeah, I agree that the whole
fixup code is tricky.

> Is it possible to do this late in kvm_arch_vcpu_run_pid_change() and
> only have the first vCPU to reach the call do the initialization for the
> whole VM? We could then dispose of the reallocation / fixup scheme
> below.

We could, but then the error becomes pretty non-recoverable.

Another thing is that I really should move this over to be vmalloc'd
rather than kmalloc'd -- there is no benefit in having this physically
contiguous.
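
Roughly this, assuming kvrealloc()'s (ptr, old size, new size, flags)
form (untested):

	tmp = kvrealloc(kvm->arch.nested_mmus,
			kvm->arch.nested_mmus_size * sizeof(*tmp),
			num_mmus * sizeof(*tmp),
			GFP_KERNEL_ACCOUNT | __GFP_ZERO);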

> 
> If we keep this code...
> 
> > +	num_mmus = atomic_read(&kvm->online_vcpus) * 2;
> > +	tmp = krealloc(kvm->arch.nested_mmus,
> > +		       num_mmus * sizeof(*kvm->arch.nested_mmus),
> > +		       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
> 
> Just do an early 'return -ENOMEM' here to cut a level of indentation for
> the rest that follows.
>
> > +	if (tmp) {
> > +		/*
> > +		 * If we went through a reallocation, adjust the MMU
> > +		 * back-pointers in the previously initialised
> > +		 * pg_table structures.
> 
> nitpick: pgtable or kvm_pgtable structures
>
> > +		 */
> > +		if (kvm->arch.nested_mmus != tmp) {
> > +			int i;
> > +
> > +			for (i = 0; i < num_mmus - 2; i++)
> > +				tmp[i].pgt->mmu = &tmp[i];
> > +		}
> > +
> > +		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1], 0) ||
> > +		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
> > +			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> > +			kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
> > +		} else {
> > +			kvm->arch.nested_mmus_size = num_mmus;
> > +			ret = 0;
> > +		}
> > +
> > +		kvm->arch.nested_mmus = tmp;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
> > +{
> > +	bool nested_stage2_enabled;
> > +	u64 vttbr, vtcr, hcr;
> > +	struct kvm *kvm;
> > +	int i;
> > +
> > +	kvm = vcpu->kvm;
> 
> nit: just do this when declaring the local.
> 
> > +	lockdep_assert_held_write(&kvm->mmu_lock);
> > +
> > +	vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
> > +	vtcr = vcpu_read_sys_reg(vcpu, VTCR_EL2);
> > +	hcr = vcpu_read_sys_reg(vcpu, HCR_EL2);
> > +
> > +	nested_stage2_enabled = hcr & HCR_VM;
> > +
> > +	/* Don't consider the CnP bit for the vttbr match */
> > +	vttbr = vttbr & ~VTTBR_CNP_BIT;
> 
> nit: &=
> 
> > +	/*
> > +	 * Two possibilities when looking up a S2 MMU context:
> > +	 *
> > +	 * - either S2 is enabled in the guest, and we need a context that is
> > +         *   S2-enabled and matches the full VTTBR (VMID+BADDR) and VTCR,
> > +         *   which makes it safe from a TLB conflict perspective (a broken
> > +         *   guest won't be able to generate them),
> > +	 *
> > +	 * - or S2 is disabled, and we need a context that is S2-disabled
> > +         *   and matches the VMID only, as all TLBs are tagged by VMID even
> > +         *   if S2 translation is disabled.
> > +	 */
> 
> Looks like some spaces snuck in and got the indentation weird.

Ack on all the above.
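
With the early return, the tail of kvm_vcpu_init_nested() would
flatten out to something like this (untested sketch of the same
logic):

	tmp = krealloc(kvm->arch.nested_mmus,
		       num_mmus * sizeof(*kvm->arch.nested_mmus),
		       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!tmp)
		return -ENOMEM;

	/*
	 * If we went through a reallocation, adjust the MMU
	 * back-pointers in the previously initialised
	 * kvm_pgtable structures.
	 */
	if (kvm->arch.nested_mmus != tmp) {
		int i;

		for (i = 0; i < num_mmus - 2; i++)
			tmp[i].pgt->mmu = &tmp[i];
	}

	if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1], 0) ||
	    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2], 0)) {
		kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
		kvm_free_stage2_pgd(&tmp[num_mmus - 2]);
	} else {
		kvm->arch.nested_mmus_size = num_mmus;
		ret = 0;
	}

	kvm->arch.nested_mmus = tmp;
	return ret;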

Thanks for having looked into it!

	M.

-- 
Without deviation from the norm, progress is not possible.



