> From: Alex Williamson [mailto:alex.williamson@xxxxxxxxxx]
> Sent: Wednesday, May 04, 2016 6:43 AM
>
> > +			   int prot, unsigned long *pfn_base)
> >  {
> > +	struct vfio_domain *domain = domain_data;
> >  	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> >  	bool lock_cap = capable(CAP_IPC_LOCK);
> >  	long ret, i;
> >  	bool rsvd;
> > +	struct mm_struct *mm;
> >
> > -	if (!current->mm)
> > +	if (!domain)
> >  		return -ENODEV;
> >
> > -	ret = vaddr_get_pfn(vaddr, prot, pfn_base);
> > +	if (domain->vfio_iommu_api_only)
> > +		mm = domain->vmm_mm;
> > +	else
> > +		mm = current->mm;
> > +
> > +	if (!mm)
> > +		return -ENODEV;
> > +
> > +	ret = vaddr_get_pfn(mm, vaddr, prot, pfn_base);
>
> We could pass domain->mm unconditionally to vaddr_get_pfn(), let it be
> NULL in the !api_only case, and use it as a cue to vaddr_get_pfn() for
> which gup variant to use.  Of course we need to deal with mmap_sem
> somewhere too, without turning the code into swiss cheese.
>
> Correct me if I'm wrong, but I assume the main benefit of interweaving
> this into type1, vs. pulling out common code and making a new vfio
> iommu backend, is the page accounting, i.e. not over-accounting locked
> pages.  TBH, I don't know if it's worth it.  Any idea what the high
> water mark of pinned pages for a vGPU might be?

The baseline is the same as for today's PCI device passthrough, i.e. we
need to pin all memory pages allocated to the VM, at least for the
current KVMGT.  Ideally we may reduce the pinned set based on
fine-grained resource tracking within the vGPU device model (then it
might be in the 100MB range, based on the active graphics memory
working set).

Thanks
Kevin
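
For reference, the variant-selection idea suggested above might look
roughly like the sketch below: vaddr_get_pfn() takes an mm, with NULL
meaning "pin against the current task", and picks the gup variant from
that.  This is an illustration only, not the actual patch, and it
assumes the gup interfaces of this kernel generation
(get_user_pages_remote() still taking tsk/mm plus write/force
arguments); the VM_PFNMAP fallback of the real function is omitted.

	/*
	 * Sketch only: select the gup variant from the mm argument.
	 * NULL mm means "pin on behalf of the current task".
	 */
	static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
				 int prot, unsigned long *pfn)
	{
		struct page *page[1];
		int write = !!(prot & IOMMU_WRITE);
		long ret;

		if (!mm) {
			/* Only current->mm can use the fast, mmap_sem-less path. */
			if (get_user_pages_fast(vaddr, 1, write, page) == 1) {
				*pfn = page_to_pfn(page[0]);
				return 0;
			}
			mm = current->mm;
		}

		/* Slow path: take mmap_sem here so callers need no changes. */
		down_read(&mm->mmap_sem);
		if (mm == current->mm)
			ret = get_user_pages(vaddr, 1, write, 0, page, NULL);
		else
			ret = get_user_pages_remote(NULL, mm, vaddr, 1, write, 0,
						    page, NULL);
		up_read(&mm->mmap_sem);

		if (ret == 1) {
			*pfn = page_to_pfn(page[0]);
			return 0;
		}

		return -EFAULT;
	}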
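
On the accounting question, what type1 protects is roughly the check
sketched below: each newly pinned page is charged against the pinning
task's locked_vm, and the pin is refused beyond RLIMIT_MEMLOCK unless
the task has CAP_IPC_LOCK.  A second, independent backend pinning the
same pages would charge them twice.  Sketch only; the helper name is
made up:

	/* Would pinning npage more pages exceed the task's memlock limit? */
	static bool lock_acct_exceeds_limit(struct mm_struct *mm, long npage)
	{
		unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

		if (capable(CAP_IPC_LOCK))
			return false;	/* privileged tasks may exceed the rlimit */

		return mm->locked_vm + npage > limit;
	}

For scale, pinning a whole guest as today's KVMGT does means one pinned
4KB page per 4KB of guest RAM, so a 4GB guest is roughly a million
pinned pages; trimming to a ~100MB working set as described above would
cut that to roughly 25K pages.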