On Tue, 9 Mar 2021 14:48:24 -0500
Peter Xu <peterx@xxxxxxxxxx> wrote:

> On Tue, Mar 09, 2021 at 12:26:07PM -0700, Alex Williamson wrote:
> > On Tue, 9 Mar 2021 13:47:39 -0500
> > Peter Xu <peterx@xxxxxxxxxx> wrote:
> > 
> > > On Tue, Mar 09, 2021 at 12:40:04PM -0400, Jason Gunthorpe wrote:
> > > > On Tue, Mar 09, 2021 at 08:29:51AM -0700, Alex Williamson wrote:
> > > > > On Tue, 9 Mar 2021 08:46:09 -0400
> > > > > Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > > > > 
> > > > > > On Tue, Mar 09, 2021 at 03:49:09AM +0000, Zengtao (B) wrote:
> > > > > > > Hi guys:
> > > > > > > 
> > > > > > > Thanks for the helpful comments, after rethinking the issue, I
> > > > > > > have proposed the following change:
> > > > > > > 1. follow_pte instead of follow_pfn.
> > > > > > 
> > > > > > Still no on follow_pfn, you don't need it once you use vmf_insert_pfn
> > > > > 
> > > > > vmf_insert_pfn() only solves the BUG_ON, follow_pte() is being used
> > > > > here to determine whether the translation is already present to avoid
> > > > > both duplicate work in inserting the translation and allocating a
> > > > > duplicate vma tracking structure.
> > > > 
> > > > Oh.. Doing something stateful in fault is not nice at all
> > > > 
> > > > I would rather see __vfio_pci_add_vma() search the vma_list for dups
> > > > than call follow_pfn/pte..
> > > 
> > > It seems to me that searching the vma list is still the simplest way to
> > > fix the problem for the current code base. I see io_remap_pfn_range()
> > > is also used in the new series - maybe that'll need to be moved to
> > > where PCI_COMMAND_MEMORY gets turned on/off in the new series (I just
> > > noticed remap_pfn_range modifies vma flags..), as you suggested in the
> > > other email.
> > 
> > In the new series, I think the fault handler becomes (untested):
> > 
> > static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
> > {
> > 	struct vm_area_struct *vma = vmf->vma;
> > 	struct vfio_pci_device *vdev = vma->vm_private_data;
> > 	unsigned long base_pfn, pgoff;
> > 	vm_fault_t ret = VM_FAULT_SIGBUS;
> > 
> > 	if (vfio_pci_bar_vma_to_pfn(vma, &base_pfn))
> > 		return ret;
> > 
> > 	pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
> > 
> > 	down_read(&vdev->memory_lock);
> > 
> > 	if (__vfio_pci_memory_enabled(vdev))
> > 		ret = vmf_insert_pfn(vma, vmf->address, pgoff + base_pfn);
> > 
> > 	up_read(&vdev->memory_lock);
> > 
> > 	return ret;
> > }
> 
> It's just that the initial MMIO access delay would be spread to the first
> access of each mmio page rather than using the previous pre-fault scheme.
> I think a userspace that cares enough about the delay should pre-fault
> all pages anyway, but just raising this up. Otherwise looks sane.

Yep, this is a concern.  Is it safe to have concurrent loops fully
populating the same vma with vmf_insert_pfn()?  If it is, then we could
just ignore that we're doing duplicate work when we hit this race
condition.  Otherwise we'd need to serialize again, perhaps via a lock
and flag stored in a struct linked from vm_private_data, along with
tracking to free that object :-\  Thanks,

Alex
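
[For illustration, a minimal sketch of Jason's suggestion above - having
__vfio_pci_add_vma() search vma_list for a duplicate instead of probing the
page tables with follow_pfn()/follow_pte(). The struct and field names
(vfio_pci_mmap_vma, vma_list, vma_next) follow the vfio-pci code of that
era; the sketch itself is untested and not from the thread:]

static int __vfio_pci_add_vma(struct vfio_pci_device *vdev,
			      struct vm_area_struct *vma)
{
	struct vfio_pci_mmap_vma *mmap_vma;

	/*
	 * A racing fault may have tracked this vma already; a duplicate
	 * search here replaces the follow_pfn()/follow_pte() probe.
	 */
	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
		if (mmap_vma->vma == vma)
			return 0;
	}

	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL);
	if (!mmap_vma)
		return -ENOMEM;

	mmap_vma->vma = vma;
	list_add(&mmap_vma->vma_next, &vdev->vma_list);

	return 0;
}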
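
[And a sketch of the serialization Alex raises at the end - a lock and a
"populated" flag kept in a per-vma struct reachable from vm_private_data.
struct vfio_pci_vma and its fields are invented for illustration;
vfio_pci_bar_vma_to_pfn() and __vfio_pci_memory_enabled() are used as in
the quoted fault handler. Untested:]

struct vfio_pci_vma {
	struct vfio_pci_device *vdev;
	struct mutex lock;	/* serializes populating the vma */
	bool populated;		/* true once every page is inserted */
};

static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vfio_pci_vma *pvma = vma->vm_private_data;
	struct vfio_pci_device *vdev = pvma->vdev;
	unsigned long base_pfn, addr, pgoff;
	vm_fault_t ret = VM_FAULT_SIGBUS;

	if (vfio_pci_bar_vma_to_pfn(vma, &base_pfn))
		return ret;

	mutex_lock(&pvma->lock);
	down_read(&vdev->memory_lock);

	if (!__vfio_pci_memory_enabled(vdev))
		goto out;

	if (pvma->populated) {
		/* A racing fault fully populated the vma already. */
		ret = VM_FAULT_NOPAGE;
		goto out;
	}

	/* Populate the entire vma once; later faults take the early exit. */
	for (addr = vma->vm_start, pgoff = 0; addr < vma->vm_end;
	     addr += PAGE_SIZE, pgoff++) {
		ret = vmf_insert_pfn(vma, addr, base_pfn + pgoff);
		if (ret != VM_FAULT_NOPAGE)
			goto out;
	}
	pvma->populated = true;
out:
	up_read(&vdev->memory_lock);
	mutex_unlock(&pvma->lock);
	return ret;
}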