On 07/03/2017 10:03 AM, Christoffer Dall wrote:
> Hi Alex,
>
> On Fri, Jun 23, 2017 at 05:21:59PM +0200, Alexander Graf wrote:
>> If we want to age an HVA while the VM is getting destroyed, we have a
>> tiny race window during which we may end up dereferencing an invalid
>> kvm->arch.pgd value.
>>
>>   CPU0                          CPU1
>>
>>   kvm_age_hva()
>>                                 kvm_mmu_notifier_release()
>>                                 kvm_arch_flush_shadow_all()
>>                                 kvm_free_stage2_pgd()
>>                                 <grab mmu_lock>
>>   stage2_get_pmd()
>>   <wait for mmu_lock>
>>                                 set kvm->arch.pgd = 0
>>                                 <free mmu_lock>
>>   <grab mmu_lock>
>>   stage2_get_pud()
>>   <access kvm->arch.pgd>
>>   <use incorrect value>
> I don't think this sequence can happen, but I think kvm_age_hva() can
> be called with the mmu_lock held and kvm->arch.pgd already being NULL.
>
> Is it possible for the mmu notifiers to call clear(_flush)_young
> while also calling notifier_release?
I *think* the aging happens completely orthogonally to release. But
let's ask Andrea - I'm sure he knows :).
Alex
> If so, the patch below looks good to me.
>
> Thanks,
> -Christoffer
>> This patch adds a check for that case.
>>
>> Signed-off-by: Alexander Graf <agraf@xxxxxxx>
>> ---
>>  virt/kvm/arm/mmu.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index f2d5b6c..227931f 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -861,6 +861,10 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
>>  	pgd_t *pgd;
>>  	pud_t *pud;
>>
>> +	/* Do we clash with kvm_free_stage2_pgd()? */
>> +	if (!kvm->arch.pgd)
>> +		return NULL;
>> +
>>  	pgd = kvm->arch.pgd + stage2_pgd_index(addr);
>>  	if (WARN_ON(stage2_pgd_none(*pgd))) {
>>  		if (!cache)
>> --
>> 1.8.5.6
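
For what it's worth, the pattern the patch relies on - re-checking the
shared pointer only after taking mmu_lock - is easy to model in plain C.
A minimal standalone sketch with hypothetical names (this is not KVM
code); build with "gcc -pthread race-demo.c -o race-demo":

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long *pgd;      /* stands in for kvm->arch.pgd */

    /* "CPU1": tear the table down under the lock, then clear the pointer. */
    static void *destroy_path(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&mmu_lock);
            free(pgd);
            pgd = NULL;             /* like "set kvm->arch.pgd = 0" above */
            pthread_mutex_unlock(&mmu_lock);
            return NULL;
    }

    /* "CPU0": the lookup re-checks pgd *after* acquiring the lock. */
    static int lookup_path(void)
    {
            int found = 0;

            pthread_mutex_lock(&mmu_lock);
            if (pgd) {              /* the check the patch adds */
                    /* Safe deref: destroy_path() cannot run concurrently. */
                    found = (pgd[0] == 0);
            }
            pthread_mutex_unlock(&mmu_lock);
            return found;
    }

    int main(void)
    {
            pthread_t t;

            pgd = calloc(512, sizeof(*pgd));
            if (!pgd)
                    return 1;

            pthread_create(&t, NULL, destroy_path, NULL);

            if (lookup_path())
                    puts("lookup won the race; deref was safe under the lock");
            else
                    puts("pgd already torn down; lookup bailed out cleanly");

            pthread_join(t, NULL);
            return 0;
    }

The NULL check is only meaningful because both sides take the same lock;
testing kvm->arch.pgd before grabbing mmu_lock would leave the original
window open.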
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm