On Wed, Nov 23, 2016 at 02:14:40PM -0500, Serguei Sagalovitch wrote:

> On 2016-11-23 02:05 PM, Jason Gunthorpe wrote:
> > As Bart says, it would be best to be combined with something like
> > Mellanox's ODP MRs, which allows a page to be evicted and then
> > trigger a CPU interrupt if a DMA is attempted so it can be brought
> > back.
>
> Please note that in the general case (including the MR one) we could
> have a "page fault" from a different PCIe device, so all PCIe devices
> must be synchronized.

Standard RDMA MRs require pinned pages: the DMA address cannot change
while the MR exists (there is no hardware support for this at all), so
page faulting from any other device is out of the question while they
exist. This is the same requirement as typical simple driver DMA,
which requires pages to stay pinned until the device completes its DMA
(the classic pin/map/unmap/unpin contract is sketched at the end of
this mail).

ODP RDMA MRs do not require that; they simply page fault, like the CPU
or really anything else, and the kernel has to make sense of
concurrent page faults from multiple sources. (A small userspace
example contrasting the two registration modes is also appended.)

The upshot is that GPU scenarios relying on highly dynamic
virtual->physical translation cannot sanely be combined with standard
long-lived RDMA MRs.

Certainly, any solution for GPUs must follow the typical page pinning
semantics: changing the DMA address of a page must be blocked while
any DMA is in progress.

> > Does HMM solve the peer-peer problem? Does it do it generically or
> > only for drivers that are mirroring translation tables?
>
> In its current form HMM doesn't solve the peer-peer problem.
> Currently it allows "mirroring" of "malloc" memory on the GPU, which
> is not always what is needed. Additionally, there needs to be a way
> to share VRAM allocations between different processes.

Humm, so it can be removed from Alexander's list then :\

As Dan suggested, maybe we need to do both: some kind of fix for
get_user_pages() for smaller mappings (e.g. ZONE_DEVICE), and a
mandatory API conversion to get_user_dma_sg() for other cases? (A
rough sketch of what such an interface could look like is appended as
well.)

Jason
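
For reference, the pin -> map -> DMA -> unmap -> unpin contract
described above, as a minimal kernel-style sketch. Note this uses
today's helper names (pin_user_pages_fast() and dma_map_sgtable()
postdate this thread; in 2016 the same pattern was spelled with
get_user_pages() and dma_map_sg()), and the struct device is assumed
to come from the caller:

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Pin a user buffer and DMA-map it for @dev.  From the moment the
 * pages are pinned until dma_unmap_sgtable()/unpin_user_pages(), their
 * DMA addresses must not change -- which is exactly what a faultable
 * mapping cannot promise. */
static int map_user_buf_for_dma(struct device *dev, unsigned long uaddr,
				int npages, struct page **pages,
				struct sg_table *sgt)
{
	int pinned, ret;

	/* FOLL_LONGTERM: the pin may be held indefinitely, as an MR's is */
	pinned = pin_user_pages_fast(uaddr, npages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;
	if (pinned != npages) {
		ret = -EFAULT;
		goto err_unpin;
	}

	ret = sg_alloc_table_from_pages(sgt, pages, npages, 0,
					(size_t)npages << PAGE_SHIFT,
					GFP_KERNEL);
	if (ret)
		goto err_unpin;

	/* After this, the addresses in sgt are frozen until unmapped */
	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		goto err_free_table;

	return 0;

err_free_table:
	sg_free_table(sgt);
err_unpin:
	unpin_user_pages(pages, pinned);
	return ret;
}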
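
And the userspace view of the two MR flavours, assuming a libibverbs
device is present (build with cc -libverbs; whether the ODP
registration succeeds depends on the HCA actually advertising ODP
support):

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **devs = ibv_get_device_list(NULL);
	struct ibv_context *ctx =
		(devs && devs[0]) ? ibv_open_device(devs[0]) : NULL;
	struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
	static char buf[1 << 20];
	struct ibv_mr *pinned_mr, *odp_mr;

	if (!pd) {
		fprintf(stderr, "no usable RDMA device\n");
		return 1;
	}

	/* Standard MR: every page is pinned at registration time and
	 * its DMA address is frozen until ibv_dereg_mr() */
	pinned_mr = ibv_reg_mr(pd, buf, sizeof(buf),
			       IBV_ACCESS_LOCAL_WRITE |
			       IBV_ACCESS_REMOTE_READ);

	/* ODP MR: nothing is pinned; the HCA faults non-present pages
	 * in through the kernel, so pages may be evicted underneath
	 * it.  Fails unless the device reports ODP support in
	 * ibv_query_device_ex() / struct ibv_device_attr_ex.odp_caps */
	odp_mr = ibv_reg_mr(pd, buf, sizeof(buf),
			    IBV_ACCESS_ON_DEMAND |
			    IBV_ACCESS_LOCAL_WRITE |
			    IBV_ACCESS_REMOTE_READ);

	printf("pinned MR: %s, ODP MR: %s\n",
	       pinned_mr ? "ok" : "failed",
	       odp_mr ? "ok" : "failed (no ODP support?)");

	if (odp_mr)
		ibv_dereg_mr(odp_mr);
	if (pinned_mr)
		ibv_dereg_mr(pinned_mr);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}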
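
Finally, purely as a strawman -- get_user_dma_sg() does not exist,
this is just one shape the conversion floated above could take.  The
point would be to return a scatterlist already DMA-mapped for the
requesting device instead of struct page pointers, so ranges that have
no usable struct page (peer device BARs, for instance) could still be
expressed:

/* HYPOTHETICAL, not an existing kernel interface: resolve a user VA
 * range straight to a scatterlist DMA-mapped for @dma_dev, letting
 * the core pick whatever mapping the fabric requires for P2P ranges. */
struct sg_table *get_user_dma_sg(struct device *dma_dev,
				 unsigned long start,
				 unsigned long nr_pages,
				 unsigned int gup_flags,
				 enum dma_data_direction dir);

/* Release the mapping and any pins taken by get_user_dma_sg() */
void put_user_dma_sg(struct device *dma_dev, struct sg_table *sgt,
		     enum dma_data_direction dir);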