Excerpts from David Stevens's message of June 24, 2021 1:57 pm:
> KVM supports mapping VM_IO and VM_PFNMAP memory into the guest by using
> follow_pte in gfn_to_pfn. However, the resolved pfns may not have
> associated struct pages, so they should not be passed to pfn_to_page.
> This series removes such calls from the x86 and arm64 secondary MMU. To
> do this, this series modifies gfn_to_pfn to return a struct page in
> addition to a pfn, if the hva was resolved by gup. This allows the
> caller to call put_page only when necessitated by gup.
>
> This series provides a helper function that unwraps the new return type
> of gfn_to_pfn to provide behavior identical to the old behavior. As I
> have no hardware to test powerpc/mips changes, the function is used
> there for minimally invasive changes. Additionally, as gfn_to_page and
> gfn_to_pfn_cache are not integrated with mmu notifier, they cannot be
> easily changed over to only use pfns.
>
> This addresses CVE-2021-22543 on x86 and arm64.

Does this fix the problem? (Untested; I don't have a PoC setup at hand,
but at least in concept.)

I have no problem with improving the API, and the direction of your
series is probably good. But there seems to be a lot of unfixed arch
code and broken APIs left to deal with after your series too. This
patch might be the most suitable one to backport, and could serve as a
base for your series, which can take more time to convert things over
to the new APIs.

Thanks,
Nick

---

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6a6bc7af0e28..e208c279d903 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2104,13 +2104,21 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
	 * Whoever called remap_pfn_range is also going to call e.g.
	 * unmap_mapping_range before the underlying pages are freed,
	 * causing a call to our MMU notifier.
+	 *
+	 * Certain IO or PFNMAP mappings can be backed with valid
+	 * struct pages, but be allocated without refcounting e.g.,
+	 * tail pages of non-compound higher order allocations, which
+	 * would then underflow the refcount when the caller does the
+	 * required put_page. Don't allow those pages here.
	 */
-	kvm_get_pfn(pfn);
+	if (!kvm_try_get_pfn(pfn))
+		r = -EFAULT;
 
 out:
	pte_unmap_unlock(ptep, ptl);
	*p_pfn = pfn;
-	return 0;
+
+	return r;
 }
 
 /*
@@ -2487,6 +2495,13 @@ void kvm_set_pfn_accessed(kvm_pfn_t pfn)
 }
 EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);
 
+static int kvm_try_get_pfn(kvm_pfn_t pfn)
+{
+	if (kvm_is_reserved_pfn(pfn))
+		return 1;
+	return get_page_unless_zero(pfn_to_page(pfn));
+}
+
 void kvm_get_pfn(kvm_pfn_t pfn)
 {
 	if (!kvm_is_reserved_pfn(pfn))
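
One note on the draft as posted: kvm_try_get_pfn() is added below its
first use in hva_to_pfn_remapped() (around line 2104 vs. line 2495), so
the file would also need a forward declaration, or the definition moved
above the caller, to compile. A minimal sketch of the forward
declaration, assuming nothing else in kvm_main.c moves:

/* Forward declaration: hva_to_pfn_remapped() calls this before the
 * definition added next to kvm_get_pfn(). Alternatively, place the
 * definition itself above hva_to_pfn_remapped(). */
static int kvm_try_get_pfn(kvm_pfn_t pfn);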
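
To make the hazard in the new comment concrete, here is a rough,
hypothetical illustration (not part of the patch; the allocation side
is invented driver code) of why get_page_unless_zero() is the right
guard. A non-compound higher-order allocation only refcounts its head
page, so its tail pages sit at refcount zero and must not be get/put
like ordinary pages:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/printk.h>

static void refcount_hazard_demo(void)
{
	/* Hypothetical driver allocation, for illustration only:
	 * an order-2 block without __GFP_COMP, so the tail pages are
	 * not refcounted (page_count(tail) == 0). */
	struct page *head = alloc_pages(GFP_KERNEL, 2);
	struct page *tail;

	if (!head)
		return;
	tail = head + 1;

	/* The old kvm_get_pfn() would blindly get_page(tail), and the
	 * caller's required put_page() would then drop the count back
	 * to zero, freeing (or underflowing the refcount of) memory
	 * the driver still owns. get_page_unless_zero() refuses such
	 * pages instead, which is what lets kvm_try_get_pfn() fail
	 * cleanly with -EFAULT in hva_to_pfn_remapped(). */
	if (!get_page_unless_zero(tail))
		pr_warn("tail page is not refcounted; refusing\n");
	else
		put_page(tail);

	__free_pages(head, 2);
}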