Re: [PATCH 02/18] KVM: arm64: Use the S2 MMU context to iterate over S2 table

On Thu,  9 Feb 2023 17:58:04 +0000
Marc Zyngier <maz@xxxxxxxxxx> wrote:

> Most of our S2 helpers take a kvm_s2_mmu pointer, but quickly
> revert to using the kvm structure. By doing so, we lose
> track of which S2 MMU context we were initially using, and fall back
> to the "canonical" context.
> 
> If we were trying to unmap an S2 context managed by a guest hypervisor,
> we end up parsing the wrong set of page tables, and bad stuff happens
> (as this often occurs on the back of a trapped TLBI from the
> guest hypervisor).
> 
> Instead, make sure we always use the provided MMU context all the way.
> This has no impact on non-NV, as we always pass the canonical MMU
> context.

Indeed this just changes stage2_apply_range() and all of its callers,
in a way that shouldn't change the current behaviour, but now
preserves the S2 MMU context passed in.
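
To spell out the pattern being fixed (a hypothetical reduction, not
code lifted from the tree): a helper accepts a kvm_s2_mmu, but the
actual walk silently dereferences the canonical context anyway:

	static struct kvm_pgtable *pick_pgt(struct kvm_s2_mmu *mmu)
	{
		struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);

		/* buggy: always walks the canonical S2, whatever @mmu was */
		return kvm->arch.mmu.pgt;
	}

After the patch the equivalent is a plain "return mmu->pgt;", so a
shadow S2 handed in by the NV code is the one that actually gets
walked.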

> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>

Reviewed-by: Andre Przywara <andre.przywara@xxxxxxx>

Cheers,
Andre

> ---
>  arch/arm64/kvm/mmu.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index a3ee3b605c9b..892d6a5fb2f5 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -46,16 +46,17 @@ static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
>   * long will also starve other vCPUs. We have to also make sure that the page
>   * tables are not freed while we released the lock.
>   */
> -static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
> +static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
>  			      phys_addr_t end,
>  			      int (*fn)(struct kvm_pgtable *, u64, u64),
>  			      bool resched)
>  {
> +	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
>  	int ret;
>  	u64 next;
>  
>  	do {
> -		struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
> +		struct kvm_pgtable *pgt = mmu->pgt;
>  		if (!pgt)
>  			return -EINVAL;
>  
> @@ -71,8 +72,8 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
>  	return ret;
>  }
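
For reference, kvm_s2_mmu_to_kvm() is (from memory, so worth
double-checking) just the container_of() round trip through mmu->arch,
so the kvm pointer grabbed above is only needed for things like the
mmu_lock in the resched path, not for picking the page table:

	/* as in arch/arm64/include/asm/kvm_mmu.h, from memory: */
	static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
	{
		return container_of(mmu->arch, struct kvm, arch);
	}
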
>  
> -#define stage2_apply_range_resched(kvm, addr, end, fn)			\
> -	stage2_apply_range(kvm, addr, end, fn, true)
> +#define stage2_apply_range_resched(mmu, addr, end, fn)			\
> +	stage2_apply_range(mmu, addr, end, fn, true)
>  
>  static bool memslot_is_logging(struct kvm_memory_slot *memslot)
>  {
> @@ -235,7 +236,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
>  
>  	lockdep_assert_held_write(&kvm->mmu_lock);
>  	WARN_ON(size & ~PAGE_MASK);
> -	WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
> +	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
>  				   may_block));
>  }
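
This is the hunk that matters for the NV case: the unmap path already
receives the right mmu from its callers (typically on the back of a
trapped TLBI), it just wasn't forwarding it. Roughly, as a sketch
assuming the usual wrapper is unchanged:

	/* caller sketch -- the wrapper just forwards the S2 context */
	static void kvm_unmap_stage2_range(struct kvm_s2_mmu *mmu,
					   phys_addr_t start, u64 size)
	{
		__unmap_stage2_range(mmu, start, size, true);
	}
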
>  
> @@ -250,7 +251,7 @@ static void stage2_flush_memslot(struct kvm *kvm,
>  	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
>  	phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
>  
> -	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_flush);
> +	stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush);
>  }
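
stage2_flush_memslot() only ever operates on the canonical S2, so
spelling out &kvm->arch.mmu here keeps the behaviour exactly as
before.
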
>  
>  /**
> @@ -934,8 +935,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>   */
>  static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
>  {
> -	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
> -	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_wrprotect);
> +	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
>  }
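
And the local kvm variable can simply go here, since nothing else in
stage2_wp_range() needs the kvm structure once the range helper takes
the mmu directly.
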
>  
>  /**



