Re: [PATCH] KVM: x86/mmu: add lockdep check before lookup_address_in_mm()

Please ignore this one; I will provide an updated patch shortly.

On Sun, Mar 27, 2022 at 1:35 PM Mingwei Zhang <mizhang@xxxxxxxxxx> wrote:
>
> Add a lockdep check before invoking lookup_address_in_mm().
> lookup_address_in_mm() walks all levels of the host page table without
> acquiring any lock. That is usually only safe when walking kernel
> addresses (see the other callers of lookup_address_in_mm() and
> lookup_address_in_pgd()).
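>
> For reference, the helper is roughly the following (simplified from
> arch/x86/mm/pat/set_memory.c; it descends pgd -> p4d -> pud -> pmd -> pte,
> dereferencing each level without taking any lock):
>
>     pte_t *lookup_address_in_mm(struct mm_struct *mm, unsigned long address,
>                                 unsigned int *level)
>     {
>             /* lockless walk: nothing prevents the tables from being freed */
>             return lookup_address_in_pgd(pgd_offset(mm, address), address,
>                                          level);
>     }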
>
> Walking the host page table, especially for guest addresses, usually
> requires holding two types of locks: 1) the mmap_lock of the mm, or the
> lock that protects the reverse maps of the host memory in that range;
> 2) the lock for the leaf paging structures.
>
> One exception is when we hold the mmu_lock of the secondary MMU. Holding
> KVM's mmu_lock in either read or write mode prevents host-level entities
> from modifying the host page table concurrently, because all of them must
> invoke the KVM mmu_notifier before doing the actual work. Since KVM's
> mmu_notifier invalidation operations always take the mmu_lock for write,
> we are safe as long as we hold the mmu_lock here.
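>
> As a rough sketch of that ordering (heavily simplified from
> virt/kvm/kvm_main.c; details differ across kernel versions), the
> invalidation path looks like:
>
>     static int kvm_mmu_notifier_invalidate_range_start(...)
>     {
>             ...
>             write_lock(&kvm->mmu_lock);   /* always taken for write */
>             /* zap the sptes covering the invalidated hva range */
>             ...
>             write_unlock(&kvm->mmu_lock);
>             ...
>     }
>
> so any code that holds kvm->mmu_lock, for read or for write, cannot
> overlap with the host-side teardown of the mappings it is inspecting.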
>
> Note: this means KVM cannot allow multiple concurrent mmu_notifier
> invalidation callbacks by taking the KVM mmu_lock for read, since
> otherwise any host-level entity could race with this walk. Walking the
> host page table here could then return stale information or trigger a
> NULL pointer dereference that is hard to reproduce.
>
> Having a lockdep check here prevents, or at least warns about, future
> changes that walk the host page table directly from a KVM ioctl path. It
> also serves as documentation for any future work on the KVM mmu_notifier.
>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: Ben Gardon <bgardon@xxxxxxxxxx>
> Cc: David Matlack <dmatlack@xxxxxxxxxx>
>
> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 1361eb4599b4..342aa184c0a2 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2820,6 +2820,24 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
>          */
>         hva = __gfn_to_hva_memslot(slot, gfn);
>
> +       /*
> +        * lookup_address_in_mm() walks all levels of host page table without
> +        * acquiring any lock. This is not safe when KVM does not take the
> +        * mmu_lock. Holding mmu_lock in either read mode or write mode prevents
> +        * host level entities from modifying the host page table. This is
> +        * because all of them will have to invoke KVM mmu_notifier first before
> +        * doing the actual work. Since KVM mmu_notifier invalidation operations
> +        * always take the mmu write lock, we are safe if we hold the mmu lock
> +        * here.
> +        *
> +        * Note: this means that KVM cannot allow concurrent multiple
> +        * mmu_notifier invalidation callbacks by using KVM mmu read lock.
> +        * Otherwise, any host level entity can cause race conditions with this
> +        * one. Walking host page table here may get us stale information or may
> +        * trigger NULL ptr dereference that is hard to reproduce.
> +        */
> +       lockdep_assert_held(&kvm->mmu_lock);
> +
>         pte = lookup_address_in_mm(kvm->mm, hva, &level);
>         if (unlikely(!pte))
>                 return PG_LEVEL_4K;
> --
> 2.35.1.1021.g381101b075-goog
>


