The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.4.y
git checkout FETCH_HEAD
git cherry-pick -x 5ef1d8c1ddbf696e47b226e11888eaf8d9e8e807
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@xxxxxxxxxxxxxxx>' --in-reply-to '2024032702-emphasis-favorite-5e62@gregkh' --subject-prefix 'PATCH 5.4.y' HEAD^..

Possible dependencies:

5ef1d8c1ddbf ("KVM: SVM: Flush pages under kvm->lock to fix UAF in svm_register_enc_region()")
19a23da53932 ("Fix unsynchronized access to sev members through svm_register_enc_region")
a8d908b5873c ("KVM: x86: report sev_pin_memory errors with PTR_ERR")
dc42c8ae0a77 ("KVM: SVM: convert get_user_pages() --> pin_user_pages()")
78824fabc72e ("KVM: SVM: fix svn_pin_memory()'s use of get_user_pages_fast()")
996ed22c7a52 ("arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()")
eaf78265a4ab ("KVM: SVM: Move SEV code to separate file")
ef0f64960d01 ("KVM: SVM: Move AVIC code to separate file")
883b0a91f41a ("KVM: SVM: Move Nested SVM Implementation to nested.c")
46a010dd6896 ("kVM SVM: Move SVM related files to own sub-directory")
8c1b724ddb21 ("Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From 5ef1d8c1ddbf696e47b226e11888eaf8d9e8e807 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Fri, 16 Feb 2024 17:34:30 -0800
Subject: [PATCH] KVM: SVM: Flush pages under kvm->lock to fix UAF in
 svm_register_enc_region()

Do the cache flush of converted pages in svm_register_enc_region() before
dropping kvm->lock to fix use-after-free issues where region and/or its
array of pages could be freed by a different task, e.g. if userspace has
__unregister_enc_region_locked() already queued up for the region.

Note, the "obvious" alternative of using local variables doesn't fully
resolve the bug, as region->pages is also dynamically allocated.  I.e. the
region structure itself would be fine, but region->pages could be freed.

Flushing multiple pages under kvm->lock is unfortunate, but the entire
flow is a rare slow path, and the manual flush is only needed on CPUs
that lack coherency for encrypted memory.

Fixes: 19a23da53932 ("Fix unsynchronized access to sev members through svm_register_enc_region")
Reported-by: Gabe Kirkpatrick <gkirkpatrick@xxxxxxxxxx>
Cc: Josh Eads <josheads@xxxxxxxxxx>
Cc: Peter Gonda <pgonda@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Message-Id: <20240217013430.2079561-1-seanjc@xxxxxxxxxx>
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f760106c31f8..a132547fcfb5 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1975,20 +1975,22 @@ int sev_mem_enc_register_region(struct kvm *kvm,
 		goto e_free;
 	}
 
+	/*
+	 * The guest may change the memory encryption attribute from C=0 -> C=1
+	 * or vice versa for this memory range. Lets make sure caches are
+	 * flushed to ensure that guest data gets written into memory with
+	 * correct C-bit.  Note, this must be done before dropping kvm->lock,
+	 * as region and its array of pages can be freed by a different task
+	 * once kvm->lock is released.
+	 */
+	sev_clflush_pages(region->pages, region->npages);
+
 	region->uaddr = range->addr;
 	region->size = range->size;
 
 	list_add_tail(&region->list, &sev->regions_list);
 	mutex_unlock(&kvm->lock);
 
-	/*
-	 * The guest may change the memory encryption attribute from C=0 -> C=1
-	 * or vice versa for this memory range. Lets make sure caches are
-	 * flushed to ensure that guest data gets written into memory with
-	 * correct C-bit.
-	 */
-	sev_clflush_pages(region->pages, region->npages);
-
 	return ret;
 
 e_free:
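
------------------ illustrative sketch (not part of the patch) ------------------

The fix boils down to a lock-ordering rule: anything that dereferences
region->pages must run before kvm->lock is dropped, because once the region
is on sev->regions_list a different task can unregister it and free both the
region and its pages array. Below is a minimal, self-contained userspace C
sketch of that ordering, not kernel code; the names (region, regions_list,
flush_pages, register_region, unregister_regions) are invented for the
illustration and only stand in for the objects in sev.c.

/*
 * Build with: cc -pthread sketch.c
 *
 * flush_pages() stands in for sev_clflush_pages(): it touches every entry
 * of region->pages, so it must not run after the lock is dropped, when a
 * concurrent unregister_regions() may already have freed the array.
 */
#include <pthread.h>
#include <stdlib.h>

struct region {
	struct region *next;
	unsigned long *pages;
	size_t npages;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct region *regions_list;

static void flush_pages(unsigned long *pages, size_t npages)
{
	for (size_t i = 0; i < npages; i++)
		pages[i] ^= 0;		/* dummy per-page access */
}

static int register_region(size_t npages)
{
	struct region *region = calloc(1, sizeof(*region));

	if (!region)
		return -1;
	region->pages = calloc(npages, sizeof(*region->pages));
	if (!region->pages) {
		free(region);
		return -1;
	}
	region->npages = npages;

	pthread_mutex_lock(&lock);

	/*
	 * Correct ordering, mirroring the patch: flush while the lock is
	 * still held.  Moving this below pthread_mutex_unlock() would read
	 * region->pages after another thread may have freed it.
	 */
	flush_pages(region->pages, region->npages);

	region->next = regions_list;
	regions_list = region;
	pthread_mutex_unlock(&lock);
	return 0;
}

/* A concurrent task may pop and free any registered region at any time. */
static void unregister_regions(void)
{
	pthread_mutex_lock(&lock);
	while (regions_list) {
		struct region *region = regions_list;

		regions_list = region->next;
		free(region->pages);
		free(region);
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	if (register_region(16))
		return 1;
	unregister_regions();
	return 0;
}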