On Tue, 23 May 2023 13:48:22 +0800
Yan Zhao <yan.y.zhao@xxxxxxxxx> wrote:

> On Mon, May 22, 2023 at 01:00:30PM -0600, Alex Williamson wrote:
> > On Fri, 19 May 2023 14:58:43 +0800
> > Yan Zhao <yan.y.zhao@xxxxxxxxx> wrote:
> > 
> > > Check physical PFN is valid before converting the PFN to a struct page
> > > pointer to be returned to caller of vfio_pin_pages().
> > > 
> > > vfio_pin_pages() pins user pages with contiguous IOVA.
> > > If the IOVA of a user page to be pinned belongs to vma of vm_flags
> > > VM_PFNMAP, pin_user_pages_remote() will return -EFAULT without returning
> > > struct page address for this PFN. This is because usually this kind of PFN
> > > (e.g. MMIO PFN) has no valid struct page address associated.
> > > Upon this error, vaddr_get_pfns() will obtain the physical PFN directly.
> > > 
> > > While previously vfio_pin_pages() returns to caller PFN arrays directly,
> > > after commit
> > > 34a255e67615 ("vfio: Replace phys_pfn with pages for vfio_pin_pages()"),
> > > PFNs will be converted to "struct page *" unconditionally and therefore
> > > the returned "struct page *" array may contain invalid struct page
> > > addresses.
> > > 
> > > Given current in-tree users of vfio_pin_pages() only expect "struct page *"
> > > returned, check PFN validity and return -EINVAL to let the caller be
> > > aware of IOVAs to be pinned containing PFN not able to be returned in
> > > "struct page *" array. So that, the caller will not consume the returned
> > > pointer (e.g. test PageReserved()) and avoid error like "supervisor read
> > > access in kernel mode".
> > > 
> > > Fixes: 34a255e67615 ("vfio: Replace phys_pfn with pages for vfio_pin_pages()")
> > > Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> > > Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
> > > 
> > > ---
> > > v2: update commit message to explain background/problem clearly. (Sean)
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 5 +++++
> > >  1 file changed, 5 insertions(+)
> > > 
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index 493c31de0edb..0620dbe5cca0 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -860,6 +860,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > >  		if (ret)
> > >  			goto pin_unwind;
> > >  
> > > +		if (!pfn_valid(phys_pfn)) {
> > 
> > Why wouldn't we use our is_invalid_reserved_pfn() test here?  Doing
> > so would also make it more consistent why we don't need to call
> > put_pfn() or rewind accounting for this page.  Thanks,
> > 
> I actually struggled in choosing is_invalid_reserved_pfn() or
> pfn_valid() when writing this patch.
> 
> Choosing pfn_valid() is because invalid PFN obviously cannot have
> struct page address and it's a bug fix.
> 
> While declining reserved pages will have the IOVA range supported by
> vfio_pin_pages() even more reduced. So I don't know if there's enough
> justification to do so, given that
> (1) device zone memory usually has PG_reserved set.
> (2) vm_normal_page() also contains reserved page.

Based on the exclusion we have in vaddr_get_pfn() where we unpin
zero-page pfns because they hit on the is_invalid_reserved_pfn() test
and break our accounting otherwise, this does seem like the correct
choice.  I can imagine a scenario where the device wants to do a DMA
read from VM memory backed by the zero page.  Ok.  Thanks,

Alex
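
For readers following the thread, below is a rough sketch of the two
variants being discussed, at the point where the quoted hunk adds its
check in vfio_iommu_type1_pin_pages(). The error-path body is an
assumption (the quoted hunk is trimmed after its first added line, and
the commit message only says -EINVAL is returned), and the
is_invalid_reserved_pfn() form is Alex's suggestion rather than what
the v2 patch posted; treat it as illustrative, not authoritative.

	/* v2 as posted: refuse PFNs that have no struct page backing */
	if (!pfn_valid(phys_pfn)) {
		ret = -EINVAL;		/* assumed body; hunk is trimmed here */
		goto pin_unwind;
	}

	/*
	 * Suggested alternative: is_invalid_reserved_pfn() also refuses
	 * reserved PFNs such as the zero page, which type1 does not hold
	 * a reference on, so no put_pfn() or accounting rewind is needed
	 * when this test fails.
	 */
	if (is_invalid_reserved_pfn(phys_pfn)) {
		ret = -EINVAL;
		goto pin_unwind;
	}

Only the condition differs between the two; the unwind path for the
already-pinned pages in the request is the same either way.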