On Wed, Sep 21, 2022, Michal Luczaj wrote:
> There's a race between kvm_xen_set_evtchn_fast() and kvm_gpc_activate()
> resulting in a near-NULL pointer write.
>
> 1. Deactivate shinfo cache:
>
> kvm_xen_hvm_set_attr
> case KVM_XEN_ATTR_TYPE_SHARED_INFO
>  kvm_gpc_deactivate
>   kvm_gpc_unmap
>    gpc->valid = false
>    gpc->khva = NULL
>   gpc->active = false
>
> Result: active = false, valid = false
>
> 2. Cause cache refresh:
>
> kvm_arch_vm_ioctl
> case KVM_XEN_HVM_EVTCHN_SEND
>  kvm_xen_hvm_evtchn_send
>   kvm_xen_set_evtchn
>    kvm_xen_set_evtchn_fast
>     kvm_gpc_check
>      return -EWOULDBLOCK because !gpc->valid
>    kvm_xen_set_evtchn_fast
>     return -EWOULDBLOCK
>   kvm_gpc_refresh
>    hva_to_pfn_retry
>     gpc->valid = true
>     gpc->khva = not NULL
>
> Result: active = false, valid = true

This is the real bug. KVM should not successfully refresh an inactive
cache. It's not just the potential for a NULL pointer deref; the cache
also isn't on the list of active caches, i.e. won't get mmu_notifier
events, and so KVM could get a use-after-free of userspace memory.

KVM_XEN_HVM_EVTCHN_SEND does check that the per-vCPU cache is active,
but does so outside of the gpc->lock.

Minus your race condition analysis, which I'll insert into the
changelog (assuming this works), I believe the proper fix is to check
"active" during check and refresh. Oof, and there are ordering bugs
too.

Compile-tested patch below. If this fixes things on your end (I'll
properly test tomorrow too), I'll post a v2 of the entire series.
There are some cleanups that can be done on top, e.g. I think we should
drop kvm_gpc_unmap() entirely until there's actually a user, because
it's not at all obvious that it's (a) necessary and (b) has desirable
behavior.

Note, the below patch applies after patch 1 of this series. I don't
know if anyone will actually want to backport the fix, but it's not too
hard to keep the backport dependency to just patch 1.
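To make the invariant concrete, here's a throwaway userspace model of
the pattern, not KVM code: a pthread rwlock stands in for gpc->lock,
and all of the fake_gpc names are invented for illustration.

/*
 * Standalone userspace model of the fix, NOT kernel code: fake_gpc
 * and friends are made-up names, and a pthread rwlock models gpc->lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_gpc {
	pthread_rwlock_t lock;		/* models gpc->lock */
	bool active;			/* models gpc->active */
	bool valid;			/* models gpc->valid */
};

static struct fake_gpc gpc = {
	.lock = PTHREAD_RWLOCK_INITIALIZER,
	.active = true,
};

/* Models kvm_gpc_refresh(): bail, without mapping anything, if inactive. */
static int fake_gpc_refresh(struct fake_gpc *gpc)
{
	int ret = 0;

	pthread_rwlock_wrlock(&gpc->lock);
	if (!gpc->active) {
		ret = -1;
		goto out_unlock;
	}
	gpc->valid = true;		/* models establishing the mapping */
out_unlock:
	pthread_rwlock_unlock(&gpc->lock);
	return ret;
}

/* Models kvm_gpc_deactivate(): clear "active" under gpc->lock. */
static void fake_gpc_deactivate(struct fake_gpc *gpc)
{
	pthread_rwlock_wrlock(&gpc->lock);
	gpc->active = false;
	gpc->valid = false;
	pthread_rwlock_unlock(&gpc->lock);
}

int main(void)
{
	fake_gpc_deactivate(&gpc);

	/* A racing refresh now loses cleanly: it fails and maps nothing. */
	if (fake_gpc_refresh(&gpc))
		printf("refresh rejected: active=%d valid=%d\n",
		       gpc.active, gpc.valid);
	return 0;
}

I.e. "valid && !active" becomes unreachable, because deactivation and
refresh both write the flags under the same lock.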
--
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Mon, 10 Oct 2022 13:06:13 -0700
Subject: [PATCH] KVM: Reject attempts to consume or refresh inactive gfn_to_pfn_cache

Reject kvm_gpc_check() and kvm_gpc_refresh() if the cache is inactive.
Not checking the active flag during refresh is particularly egregious,
as KVM can end up with a valid, inactive cache, which can lead to a
variety of use-after-free bugs, e.g. consuming a NULL kernel pointer or
missing an mmu_notifier invalidation due to the cache not being on the
list of gfns to invalidate.

Note, "active" needs to be set if and only if the cache is on the list
of caches, i.e. is reachable via mmu_notifier events. If a relevant
mmu_notifier event occurs while the cache is "active" but not on the
list, KVM will not acquire the cache's lock and so will not serialize
the mmu_notifier event with active users and/or kvm_gpc_refresh().

A race between KVM_XEN_ATTR_TYPE_SHARED_INFO and KVM_XEN_HVM_EVTCHN_SEND
can be exploited to trigger the bug.

<will insert your awesome race analysis>

Reported-by: Michal Luczaj <mhal@xxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 virt/kvm/pfncache.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index b32ed4a7c900..dfc72aa88d71 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -81,6 +81,9 @@ bool kvm_gfn_to_pfn_cache_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
+	if (!gpc->active)
+		return false;
+
 	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
 		return false;
 
@@ -240,8 +243,9 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
-	kvm_pfn_t old_pfn, new_pfn;
+	bool unmap_old = false;
 	unsigned long old_uhva;
+	kvm_pfn_t old_pfn;
 	void *old_khva;
 	int ret = 0;
 
@@ -261,6 +265,9 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 
 	write_lock_irq(&gpc->lock);
 
+	if (!gpc->active)
+		goto out_unlock;
+
 	old_pfn = gpc->pfn;
 	old_khva = gpc->khva - offset_in_page(gpc->khva);
 	old_uhva = gpc->uhva;
@@ -305,14 +312,15 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->khva = NULL;
 	}
 
-	/* Snapshot the new pfn before dropping the lock! */
-	new_pfn = gpc->pfn;
+	/* Detect a pfn change before dropping the lock! */
+	unmap_old = (old_pfn != gpc->pfn);
 
+out_unlock:
 	write_unlock_irq(&gpc->lock);
 
 	mutex_unlock(&gpc->refresh_lock);
 
-	if (old_pfn != new_pfn)
+	if (unmap_old)
 		gpc_unmap_khva(kvm, old_pfn, old_khva);
 
 	return ret;
@@ -368,11 +376,19 @@ int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->vcpu = vcpu;
 		gpc->usage = usage;
 		gpc->valid = false;
-		gpc->active = true;
 
 		spin_lock(&kvm->gpc_lock);
 		list_add(&gpc->list, &kvm->gpc_list);
 		spin_unlock(&kvm->gpc_lock);
+
+		/*
+		 * Activate the cache after adding it to the list; a concurrent
+		 * refresh must not establish a mapping until the cache is
+		 * reachable by mmu_notifier events.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = true;
+		write_unlock_irq(&gpc->lock);
 	}
 	return kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpa, len);
 }
@@ -381,12 +397,20 @@ EXPORT_SYMBOL_GPL(kvm_gpc_activate);
 void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 {
 	if (gpc->active) {
+		/*
+		 * Deactivate the cache before removing it from the list; KVM
+		 * must stall mmu_notifier events until all users go away, i.e.
+		 * until gpc->lock is dropped and refresh is guaranteed to fail.
+		 */
+		write_lock_irq(&gpc->lock);
+		gpc->active = false;
+		write_unlock_irq(&gpc->lock);
+
 		spin_lock(&kvm->gpc_lock);
 		list_del(&gpc->list);
 		spin_unlock(&kvm->gpc_lock);
 
 		kvm_gfn_to_pfn_cache_unmap(kvm, gpc);
-		gpc->active = false;
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);

base-commit: 09e5b3d617d28e3011253370f827151cc6cba6ad
--