> From: Neo Jia [mailto:cjia@xxxxxxxxxx]
> Sent: Friday, May 13, 2016 3:49 AM
>
> > > Perhaps one possibility would be to allow the vgpu driver to register
> > > map and unmap callbacks. The unmap callback might provide the
> > > invalidation interface that we're so far missing. The combination of
> > > map and unmap callbacks might simplify the Intel approach of pinning the
> > > entire VM memory space, ie. for each map callback do a translation
> > > (pin) and dma_map_page, for each unmap do a dma_unmap_page and release
> > > the translation.
> >
> > Yes, adding map/unmap ops in the pGPU driver (I assume you are referring
> > to gpu_device_ops as implemented in Kirti's patch) sounds like a good
> > idea, satisfying both: 1) keeping vGPU purely virtual; 2) dealing with
> > the Linux DMA API to achieve hardware IOMMU compatibility.
> >
> > PS, this has very little to do with pinning wholly or partially. Intel
> > KVMGT once had the whole guest memory pinned, only because we used a
> > spinlock, which can't sleep at runtime. We have removed that spinlock in
> > another upstreaming effort of ours, not here but for the i915 driver, so
> > it's probably no biggie.
>
> OK, then you guys don't need to pin everything. The next question is
> whether you can send the pinning request from your mediated driver backend
> to request memory pinning, as we have demonstrated in the v3 patch with the
> functions vfio_pin_pages and vfio_unpin_pages?
>

Jike, can you confirm this statement? My feeling is that we don't have such
logic in our device model to figure out which pages need to be pinned on
demand. So currently pin-everything is the same requirement on both the KVM
and Xen sides...

Thanks
Kevin

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html